doi: string (length 0-570)
pub_date: string (355 classes)
sections: list (length 1-245)
abstract: string (length 0-5.25k)
title: string (length 0-228)
figures: list (length 0-130)
authors: string (length 0-11.9k)
references: list (length 0-835)
formulas: list (length 0-679)
2024-02-19
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b37", "b49", "b28", "b29", "b31", "b9", "b23", "b28", "b28", "b37", "b45", "b48", "b23", "b29", "b47", "b45", "b28", "b44", "b7", "b48", "b7" ], "table_ref": [], "text": "Generative models have recently achieved significant advancements in producing 2D images [34,35,37] and videos [23,39,51]. The creation of 3D content [30,31,33], a critical representation of the real world, has garnered increasing attention and has shown rapid development. Empowered by robust priors from both 2D and 3D diffusion models, the generation of 3D scenes [11,19,25,30,41] from text prompts or single images is now feasible and has witnessed remarkable progress. However, most existing research concentrates on static scenes and often overlooks the dynamic nature of the real world.\nIn contrast, dynamic 3D scenes (or 3D videos) more effectively represent the richly informative 3D world, offering significant applications in video games, augmented reality, and virtual reality. Despite its importance, 3D video generation remains relatively unexplored partly due to the underdevelopment in video generation models. MAV3D [41] represents a pioneering effort in generating 3D videos from text prompts. It employs temporal Score Distillation Sampling (SDS) [30] to transfer knowledge from a text-tovideo diffusion model [39] into a dynamic NeRF representation [4]. However, content generation solely from text often lacks control despite its diversity. This limitation has inspired the integration of various conditioning signals in 2D [13, 47,50] and image-based conditioning in 3D [25,31,49]. In the realm of controllable video generation, the technique of conditioning on an initial image to guide subsequent motion [7,47] has become both effective and popular. For example, generating a scene based on the prompt: \"A panda is dancing\", users may wish to define the appearance and starting pose of the panda by providing a reference image.\nIn this paper, we explore image-text-based 3D video generation having inspired by text-to-3D-video and controllable video generation. Guided by the motion outlined in a text prompt, our approach aims to lift and animate a single input image into a 3D video. To achieve this goal with limited input, we utilize Score Distillation Sampling (SDS) [30] to infuse knowledge from diffusion priors into a dynamic NeRF model. This implies that an efficient and effective NeRF model is essential for representation. MAV3D employs HexPlane [4] to map the X, Y, Z, and time axes onto six 2D planes and subsequently fusing these features to determine density and color. Nonetheless, we risk falling into the Janus problem with a naive adoption of MAV3D since we only have a single reference image as both a condition and a source of supervision. Specifically, the Janus problem happens when the camera direction of the reference image view is perpendicular to one of the 2D planes in HexPlane. As a consequence, the plane features overfit to the reference image, where both the front and back views resemble the reference image without a proper 3D geometry. Moreover, we empirically observe some traces of the Janus problem even when the reference image is off-perpendicular the 2D planes in HexPlane. Refer to Appendix. B for a detailed discussion and visualization of the results of Hex-Plane. We address these challenges by adopting a robust 4D grid feature encoding model that is capable of representing spatio-temporal information. 
This model predicts color and density using features derived from the 4D grid.\nTo animate 3D videos from a single image, we propose a static-to-dynamic and coarse-to-fine strategy that structures the optimization of the 4D representation into three distinct stages. Initially, we develop a robust static 3D model from the reference image with 2D image diffusion prior [35] and 3D diffusion prior [19]. This static model serves as the initialization of the dynamic model, from which the 3D video animation emerges. In the second stage, we employ a video diffusion prior [46] to generate the motion across various timesteps and camera perspectives. During this stage, the reference image is instrumental in aligning the first frame with the source image. However, a challenge arises as the object in the 3D video tends to drift away from the reference image over time. This drift is largely attributed to the reference image influencing only the initial frame, while subsequent frames primarily rely on the knowledge gained from the video diffusion model.\nPersonalized modeling techniques [9,36] are effective for aligning reference images with diffusion priors. However, these methods cannot be applied to video diffusion model when only a single image instead of a video is provided. The semantic drift of video diffusion therefore seems to be inevitable. Nevertheless, personalized modeling based on a single image is feasible for image diffusion priors, and a video can be conceptualized as a sequence of consecutive images. Consequently, in the third stage of our approach, we employ frame-level processing with personalized modeling to counteract semantic drift. Specifically, this stage focuses on refining the details and appearance of the 3D video, while preserving its structure and motion. We utilize ControlNet-Tile [50] diffusion prior with the second stage 3D video as a condition. Textual Inversion [9] is employed for personalized modeling. Additionally, the ControlNet-Tile diffusion prior not only compensates for reference information but also enhances the video's resolution, as it can be applied effectively to resized low-resolution images. Through the three-stage optimization process that is powered by robust 2D and 3D diffusion priors, our Animate124 is capable of generating realistic and diverse 3D videos. Our contributions are summarized as follows:\n• We introduce Animate124, a novel framework for animating a single image into a 3D video, utilizing a 4D grid dynamic NeRF representation. • We propose a static-to-dynamic and coarse-to-fine strategy to optimize the 4D representation, integrating 2D, 3D, and personalized modeling diffusion priors. • We conduct extensive qualitative and quantitative experiments to compare Animate124 with baselines and stateof-the-art text-to-4D method (MAV3D [41]), demonstrating the superiority of our method." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b24", "b24", "b0", "b24", "b25", "b40", "b6", "b36", "b27", "b30", "b10", "b14", "b30", "b28", "b16", "b18", "b23", "b29", "b3", "b45" ], "table_ref": [], "text": "Dynamic Neural Rendering. In our framework, we employ Neural Radiance Fields (NeRFs) [26] to represent the 4D spatio-temporal scenes. NeRFs [26] enable the rendering of images from diverse target viewpoints by modeling a 3D scene through a neural network that interprets spatial coordinates. The architecture of neural network varies, extending from basic MLPs (Multilayer Perceptrons) [1,26] to more complex voxel grid features [27,42]. 
Regarding dynamic NeRFs for 4D spatio-temporal scenes, one popular approach involves separately learning a canonical field and a deformation field across different network layers. However, this technique faces challenges when dealing with changes in scene topology. Another prevailing method decomposes the spatio-temporal dimension into multiple planes [4,8,38], but this plane-based approach tends to result in the Janus problem when generating 3D video from a single reference image. To overcome these limitations, inspired by Park et al. [29], we leverage a 4D grid model to effectively represent dynamic 3D scenes, thereby facilitating the animation of a single image into a 3D video.
Text-to-3D Generation. The evolution of multimodal foundation models [32,35,37] has led to significant advancements in in-the-wild text-to-3D generation. Initial efforts [12,14,16] focused on aligning text prompts with rendered images using CLIP [32]. Recognizing the detailed semantic capabilities of diffusion models, DreamFusion [30] distills knowledge from a 2D text-to-image diffusion model into a 3D representation via Score Distillation Sampling (SDS). Another line of work [18][19][20] involves fine-tuning 2D image generation models to produce multi-view images directly. Alternatively, some methods [25,31,44] combine 2D diffusion with the above fine-tuned multi-view 3D diffusion to optimize 3D representation by distilling diffusion knowledge. However, these techniques primarily focus on static scenes and do not incorporate aspects of animation.
Animating Single Image to Video. Image-to-video generation has gained considerable popularity in both academic circles [5,23,47] and commercial applications * † . These methods typically utilize the reference image either as the initial frame or to extract semantic information as a condition. We recognize the potential of animating the well-established static 3D scene with text, especially for applications in the metaverse and video game industries. This motivates us to pioneer the exploration of in-the-wild image-to-3D-video generation.
* https://www.pika.art/
† https://research.runwayml.com/gen2" }, { "figure_ref": [ "fig_0" ], "heading": "Our Method", "publication_ref": [], "table_ref": [], "text": "Overview. Our framework leverages a static-to-dynamic and coarse-to-fine strategy to animate a single image into a 3D video, structured in three distinct stages. Specifically, we develop a static NeRF using the reference image in the first stage (static-stage) under the guidance of both 2D and 3D diffusion priors. Subsequently, in the second stage (coarse-stage), the 4D grid dynamic NeRF is initialized from the static NeRF and further refined with the assistance of a video diffusion prior and a 3D diffusion prior. In the final stage (fine-stage), we employ a personalized image diffusion prior to mitigate the semantic drift introduced by the video diffusion model. The image diffusion prior is specifically fine-tuned with the reference image to provide additional supervision. The comprehensive framework encompassing both the coarse and fine stages is illustrated in Fig. 2, with the static optimization stage excluded due to space constraints." }, { "figure_ref": [], "heading": "4D Grid Encoding", "publication_ref": [ "b25", "b6", "b28", "b26" ], "table_ref": [], "text": "Given a camera position and a timestep t, a ray is cast from the camera center through each pixel on the image, penetrating into the scene. We sample 3D points along this ray and determine the color and density to render the image through volume rendering.
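To make the compositing step above concrete, the following is a minimal sketch of standard volume rendering along a single ray (PyTorch); the function and variable names are illustrative and not taken from the paper's implementation, which additionally conditions the sampled features on time as described next.

```python
import torch

def composite_along_ray(densities, colors, deltas):
    """Alpha-composite sampled points along one ray into a single RGB value.
    densities: (N,) non-negative density values at the sampled 3D points
    colors:    (N, 3) RGB predicted at the sampled points
    deltas:    (N,) distances between consecutive samples along the ray
    """
    alphas = 1.0 - torch.exp(-densities * deltas)            # per-sample opacity
    # accumulated transmittance: probability the ray reaches each sample unoccluded
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:1]), 1.0 - alphas + 1e-10])[:-1], dim=0
    )
    weights = alphas * trans                                  # contribution of each sample
    rgb = (weights[:, None] * colors).sum(dim=0)              # composited pixel colour
    return rgb, weights
```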
Multi-scale grid encoding [27] is an efficient and effective method for storing and representing 3D scenes. The features extracted from this grid are instrumental in calculating both density and color. In the temporal domain, Fridovich-Keil et al. [8] have demonstrated that multi-scale grid encoding is not essential. Consequently, we construct our 4D grid in the following manner: we divide the time dimension evenly into T grids, and for each time grid, we establish a 3D multi-scale grid V (excluding hash encoding). Spatio-temporal features F_{x,y,z,t} are then linearly interpolated from the two nearest time grids:
F_{x,y,z,t} = \frac{t_{+} - t}{\Delta_t} V_{x,y,z,t_{-}} + \frac{t - t_{-}}{\Delta_t} V_{x,y,z,t_{+}}, (1)
where x, y and z represent the spatial coordinates, and t denotes the normalized time within the range of [0, 1]. t_{+}, t_{-} and \Delta_t refer to the nearest upper time grid, the nearest lower time grid, and the time interval between two consecutive grids. Utilizing these spatio-temporal features, we are able to generate color c and density τ via projection MLPs. In line with DreamFusion [30], we generate albedo and simulate random light sources to accurately represent color.
Temporal Total Variation Loss. To effectively transmit the information from the first frame to subsequent time grids while promoting temporal smoothness, we apply a total variation (TV) loss [28] to the 3D grid V across the adjacent time dimensions:
L_{TV} = \sum_{t=0}^{T-1} \sum_{x,y,z} (V_{x,y,z,t} - V_{x,y,z,t+1})^2. (2)" }, { "figure_ref": [], "heading": "Static Scene Optimization", "publication_ref": [ "b29" ], "table_ref": [], "text": "Following Magic123 [31], we employ Score Distillation Sampling (SDS) losses from both 2D and 3D diffusion priors to guide the optimization of the static scene. This strategy effectively enhances both texture quality and 3D geometry. Specifically, Stable Diffusion [35], conditioned on the text prompt e, is adopted as the 2D diffusion prior, and Zero-1-to-3-XL [19], conditioned on the reference image \tilde{I}^{r} and the relative pose \Delta p, is adopted as the 3D diffusion prior. The SDS loss is formulated as:
L_{SDS} = \mathbb{E}_{\sigma,p,\epsilon} \left[ \omega(\sigma) \left( \epsilon^{2D}_{\phi}(I^{p}; \sigma, e) - \epsilon \right) \frac{\partial I^{p}}{\partial \theta_{s}} \right] + \lambda_{3D} \, \mathbb{E}_{\sigma,p,\epsilon} \left[ \omega(\sigma) \left( \epsilon^{3D}_{\phi}(I^{p}; \sigma, \tilde{I}^{r}, \Delta p) - \epsilon \right) \frac{\partial I^{p}}{\partial \theta_{s}} \right], (3)
where \theta_{s} represents the parameters of the static NeRF model and I^{p} is the rendered RGB image from the camera position p. \epsilon^{2D}_{\phi}(•) and \epsilon^{3D}_{\phi}(•) denote the noise predicted by the 2D and 3D diffusion priors, respectively. \omega(\sigma) refers to a weighting function corresponding to the noise timestep \sigma.
Additionally, we leverage the RGB \tilde{I}^{r}, foreground mask \tilde{M}^{r} and depth \tilde{d}^{r} from the reference image to further refine the model from the reference view:
L_{rec} = \lambda_{rgb} \, \| \tilde{M}^{r} \odot (\tilde{I}^{r} - I^{r}) \| + \lambda_{mask} \, \| \tilde{M}^{r} - M^{r} \| + \lambda_{d} \left( 1 - \frac{\mathrm{cov}(\tilde{M}^{r} \odot \tilde{d}^{r}, \, \tilde{M}^{r} \odot d^{r})}{\mathrm{std}(\tilde{M}^{r} \odot \tilde{d}^{r}) \, \mathrm{std}(\tilde{M}^{r} \odot d^{r})} \right), (4)
where \lambda_{rgb}, \lambda_{mask} and \lambda_{d} denote the weights of the RGB, mask and depth losses, and \odot denotes the Hadamard product.
cov and std denote covariance and standard deviation, respectively. The static scene is optimized by the combination of the above two losses:
L_{static} = L_{SDS} + L_{rec}. (5)" }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Coarse Dynamic Scene Optimization", "publication_ref": [ "b44", "b28", "b38", "b44", "b14" ], "table_ref": [], "text": "The dynamic NeRF is initialized from the static NeRF. Specifically, each 3D multi-scale grid V within the T time grids is initialized with the parameters of the static model. All the time grids share the same projection layers that were pre-trained in the static stage. The dynamic model is thus initialized as a model that can generate a 3D video where each frame is the same static scene. We then optimize it to align with the motion described by the text prompt e. We distill a latent video diffusion model [46] with the SDS loss [30,41] to achieve this goal. Specifically, given the camera trajectory r(t) of a video with a fixed number of frames N_f, we can cast rays and sample timesteps based on the frame rate. Subsequently, a video V^{r(t)} is rendered from the dynamic NeRF and fed into the video diffusion prior for the SDS loss:
L_{SDS-T} = \mathbb{E}_{\sigma,r(t),\epsilon} \left[ \omega(\sigma) \left( \epsilon^{Vid}_{\phi}(V^{r(t)}; \sigma, e) - \epsilon \right) \frac{\partial V^{r(t)}}{\partial \theta_{d}} \right], (6)
where \theta_{d} denotes the model parameters and \epsilon^{Vid}_{\phi}(•) denotes the noise predicted by the video diffusion prior.
Temporal Balanced Sampling. Unlike static 3D scenes, dynamic scenes necessitate sampling frames over a temporal range of t ∈ [0, 1]. MAV3D utilizes Make-A-Video [40] as its diffusion prior, which can condition on the video frame rate. However, this model is not publicly available. Consequently, we adopt ModelScope [46] as our video diffusion prior. ModelScope is not trained for extremely high frame rates and also lacks the capability to condition on the frame rate. To mitigate issues related to extreme frame rates, we limit the FPS (frames per second) to a range of [16,256]. At each iteration, we sample N_f timesteps from a randomly chosen starting timestep and FPS. The distribution of sampled timesteps is depicted in Fig. 3a. However, random sampling tends to result in the beginning and ending timesteps being less frequently sampled compared to the middle timesteps, leading to suboptimal optimization of the first and last time grids (examples are provided in Appendix D). Moreover, as the reference image constitutes the first frame of the generated video, it is beneficial to sample more from the first frame to retain the reference information. As a result, we allocate a higher probability of α specifically for sampling timesteps that begin at time 0 (the first frame), and similarly, a probability of α for timesteps concluding at time 1. The distribution of this temporal balanced sampling method is illustrated in Fig. 3b.
First Frame Supervision. Since we animate the reference image to a 3D video, the reference image, which serves as the first frame, offers additional supervision for the video generation process. Specifically, we enhance the supervision of the dynamic NeRF by incorporating both the reconstruction loss L_{rec} (as detailed in Eq. 4) and the 3D diffusion prior SDS loss L_{SDS-3D}:
L_{SDS-3D} = \mathbb{E}_{\sigma,p,\epsilon} \left[ \omega(\sigma) \left( \epsilon^{3D}_{\phi}(V^{r(t)}_{0}; \sigma, \tilde{I}^{r}, \Delta p) - \epsilon \right) \frac{\partial V^{r(t)}_{0}}{\partial \theta_{d}} \right], (7)
applied only on the first frames of the sampled videos V^{r(t)}_{0}, which occurs with an approximate probability of α. The final loss is thus formulated as:
L_{dynamic} = L_{SDS-T} + \lambda_{TV} L_{TV} + \mathbb{1}_{t_0=0} (L_{rec} + \lambda_{3D} L_{SDS-3D}), (8)
where L_{TV} denotes the total variation loss (Eq. 2) and \lambda_{TV} is the weight for this loss." }, { "figure_ref": [], "heading": "Semantic Refinement", "publication_ref": [ "b7", "b23", "b7", "b48", "b28", "b28", "b46", "b48", "b8" ], "table_ref": [], "text": "During the coarse dynamic scene optimization stage, information from the reference image is exclusively applied to the first frame. Details of the objects in subsequent frames are derived solely from the video diffusion prior and guided by the text prompt e. For example, in a scenario where a \"panda\" is animated to dance, frames beyond the first are likely to distill a \"panda\" representation from the video diffusion prior that is potentially different from the reference image. Consequently, semantic drift becomes inevitable in the coarse stage.
In image-to-3D generation, personalized modeling [9,36] is commonly used to represent the reference image in the text-to-image model. RealFusion [25] utilizes a unique token learned through textual inversion [9] to represent the reference image in the text-to-image diffusion prior. DreamCraft3D [43] optimizes the text-to-image diffusion prior using augmented renderings of the reference image, as facilitated by DreamBooth [36]. However, a single image is insufficient to learn a personalized model or token for a text-to-video diffusion prior. As a solution, we approach each frame independently and optimize it using the personalized text-to-image diffusion prior. Utilizing a naive text-to-image (T2I) personalized model can introduce unexpected motion changes since the T2I model lacks awareness of the other frames. Our objective is to have the T2I model concentrate solely on refining textures and details. To this end, ControlNet-Tile [50] is an ideal diffusion prior as it conditions on a low-resolution image while refining details and enhancing resolution. Accordingly, we learn a token to represent the reference image for the base model of ControlNet (Stable Diffusion v1.5) and optimize individual frames I^{r(t)}_{t} with this token. To prevent error accumulation, the conditioning image \hat{I}^{r(t)}_{t} is generated from the fixed dynamic NeRF model established in the coarse stage. This diffusion prior also guides the dynamic NeRF through an SDS loss, which is formulated as follows:
L_{SDS-R} = \mathbb{E}_{\sigma,r(t),\epsilon} \left[ \omega(\sigma) \left( \epsilon^{CN}_{\phi}(I^{r(t)}_{t}; \sigma, \hat{I}^{r(t)}_{t}, e) - \epsilon \right) \frac{\partial I^{r(t)}_{t}}{\partial \theta_{d}} \right], (9)
where \theta_{d} denotes the parameters of the dynamic NeRF model and \epsilon^{CN}_{\phi}(•) denotes the noise predicted by the personalized ControlNet diffusion prior. The video and 3D diffusion priors are also leveraged in this stage, so the final loss is:
L_{refine} = L_{SDS-T} + \lambda_{R} L_{SDS-R} + \lambda_{TV} L_{TV} + \mathbb{1}_{t_0=0} (L_{rec} + \lambda_{3D} L_{SDS-3D}), (10)
where L_{SDS-T}, L_{TV}, L_{rec} and L_{SDS-3D} denote the video diffusion SDS loss (Eq. 6), the total variation loss (Eq. 2), the first-frame reconstruction loss (Eq. 4) and the first-frame 3D diffusion SDS loss (Eq. 7), respectively. \lambda_{R}, \lambda_{TV} and \lambda_{3D} are the weights for the ControlNet SDS loss, the TV loss and the 3D prior SDS loss, respectively.
Over-Saturation and Over-Smoothing. Score Distillation Sampling (SDS) [30] requires a large classifier-free guidance (CFG) scale to effectively distill knowledge from text-to-image diffusion models. This requirement arises because a large CFG scale can diminish the diversity of the T2I model, focusing more on fidelity to the given text. However, as indicated by qualitative results from DreamFusion [30] and ProlificDreamer [48], SDS often encounters issues of over-saturation and over-smoothing due to the heightened CFG scale.
Our coarse stage model faces similar challenges. ControlNet-Tile [50] conditions on a lowresolution coarse image, mirroring the effect of ancestral sampling [10,22]. This effect allows us to use a CFG scale comparable to that in standard image generation tasks (e.g., a scale of 7.5) in this diffusion prior, which can mitigate the over-saturation and over-smoothing problems observed in our coarse model." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b13", "b29", "b30" ], "table_ref": [], "text": "Implementation Details. In the static stage, static NeRF model is optimized for 5,000 iterations. The model contains 16 levels of grid encoding, with resolutions ranging from 16 to 2,048, and a dimension of 2 for each level. Two layers of MLPs with 64 hidden dimensions are used to calculate density and albedo. For the reference view reconstruction loss, λ rgb , λ mask and λ d are set to 5, 0.5 and 0.001, respectively. Stable Diffusion v1. 5 [35] and Zero-1-to-3-XL [19] serve as our 2D and 3D diffusion priors for this stage with λ 3D set to 40. In the coarse dynamic scene optimization stage, dynamic NeRF is optimized for 10,000 iterations with the same loss weight as in the static stage. The time grid size of dynamic NeRF is set to 64, and λ T V is set to 0.1 for regularization purposes. In the semantic refinement stage, the model is further trained for 5,000 iterations and the loss weight for ControlNet SDS loss, λ R is adjusted to 1. Adam [15] with a learning rate 0.001 is adopted across all stages, and the rendering resolution is 128×128. Camera Setting. As for the reference view, following Magic123 [31], we assume the reference image is shot from the front view (polar angle 90 • and azimuth angle 0 • ) with the radius 1.8 meters, and the field of view (FOV) of the camera is set to 40 • . During dynamic training stages, we adopt dynamic camera [41] to simulate the camera motion in the text-to-video diffusion model. Benchmark and Evaluation Metrics. As the first endeavor to animate a single image into a 3D video, we build a benchmark comprising 24 image-text pairs for evaluation. Our methodology is assessed across three dimensions: text-video alignment, image-video alignment, and overall video quality. For each 3D video, we render views from 10 different views around the scene. In terms of text-video alignment, we measure the retrieval accuracy of text prompts (CLIP-R [14]) and compute the image-text cosine similarity (CLIP-T) for every frame of the rendered videos. For image-video alignment, we render a video from the reference camera pose and then calculate the cosine similarity between the CLIP visual features of each frame and the reference image. Regarding video quality, we evaluate the consistency between frames by calculating the cosine similarity between CLIP visual features of every two consecutive frames in each rendered video. For these assessments, we utilize the CLIP [32] ViT-B/32 variant. In addition, following MAV3D [41], we conduct user studies on five qualitative metrics: (1) similarity to the reference image; (2) faithfulness to the textual prompt; (3) video quality; (4) realism of motion; and (5) amount of motion." 
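As a concrete illustration of the CLIP-based metrics described above, the sketch below computes CLIP-T (per-frame image-text similarity) and CLIP-F (consecutive-frame consistency) for one rendered video. It assumes a CLIP model exposing encode_image and encode_text, as in the official CLIP ViT-B/32 release; the preprocessing and variable names are illustrative, and CLIP-I is computed analogously by replacing the text features with the reference-image features.

```python
import torch
import torch.nn.functional as F

def clip_t_and_f(clip_model, frames, text_tokens):
    """frames: (N, 3, H, W) preprocessed rendered frames; text_tokens: tokenized prompt."""
    with torch.no_grad():
        img_feat = F.normalize(clip_model.encode_image(frames).float(), dim=-1)      # (N, D)
        txt_feat = F.normalize(clip_model.encode_text(text_tokens).float(), dim=-1)  # (1, D)
    clip_t = (img_feat @ txt_feat.T).mean().item()                                   # text-video alignment
    clip_f = F.cosine_similarity(img_feat[:-1], img_feat[1:], dim=-1).mean().item()  # frame consistency
    return clip_t, clip_f
```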
}, { "figure_ref": [ "fig_5", "fig_5", "fig_5", "fig_5" ], "heading": "Comparison with Other Methods", "publication_ref": [ "b23", "b23", "b23" ], "table_ref": [], "text": "As the pioneering work on image-text-to-4D generation, we have developed two baselines for comparative analysis. These baselines employ distinct image-to-3D static generation methods to establish a static NeRF, which is subsequently optimized using SDS loss derived from a video diffusion prior. Specifically, our baselines: Zero-1-to-3-V and RealFusion-V utilize Zero-1-to-3 [19] and RealFusion [25] in their static stages, respectively. Furthermore, we conduct a qualitative comparison of Animate124 with MAV3D to provide a comprehensive evaluation. This comparison focuses on the prompts and examples featured on the MAV3D website ‡ , thus allowing for a detailed analysis of our approach in relation to existing methodologies. Comparison with Baselines. In our comparative analysis, Animate124 is evaluated against two baselines, Zero-1-to-3-V [19] and RealFusion-V [25], as presented in Tab. 1 and Fig. 4. As reported in Tab. 1, Animate124 outperforms both baselines quantitatively in terms of CLIP and human evaluations. Fig. 4 illustrates that Zero-1-to-3-V [19] fails to preserve the original appearance of the reference image. For example, the first example exhibits a color shift to red, and the \"accordion\" in the second example is absent. RealFusion-V [25], on the other hand, exhibits inconsistencies in 3D geometry. As shown in the second example, \"View 1\" should represent the reference view, but the \"kangaroo\" is inaccurately rotated. In contrast, Animate124 successfully maintains both consistent 3D geometry and the fidelity of the reference image appearance. Comparison with MAV3D. We compare Animate124 with MAV3D [41] on two distinct settings, as illustrated in Fig. 5. The text-to-4D generation of MAV3D relies solely on text prompts, while its image-to-4D generation exclusively uti- Table 1. Comparison with other methods. Across all metrics, higher scores indicate better results. Human evaluation is shown as a percentage of majority votes favoring the baseline compared to our model in the specific setting. lizes the reference image as a prompt. In contrast, Ani-mate124 leverages both the reference image and the text prompt. To facilitate a more direct comparison with the image-to-4D approach of MAV3D, we refrain from specifying motion through text in Fig. 5b and use the textual name of the object as the prompt instead. Fig. 5a demonstrates that Animate124 is capable of generating dynamic motion that aligns the protagonist with the reference image. In comparison, MAV3D (Fig. 5a) struggles to control the protagonist in the 3D video. Regarding image-to-4D generation, MAV3D fails to preserve the original appearance of the reference image, as evidenced by the flamingo's body turning black. Conversely, Animate124 successfully produces more realistic videos and consistently maintaining the appearance of the reference image. These outcomes highlight the efficacy of our method." }, { "figure_ref": [], "heading": "Model CLIP Evaluation Human Evaluation CLIP-R CLIP-T CLIP-I CLIP-F Text", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Ablation Studies and Analysis", "publication_ref": [ "b48" ], "table_ref": [], "text": "Effectiveness of Semantic Refinement. Semantic refinement aims at alleviating semantic drift of the video generation model with the personalized ControlNet [50]. 
As shown in the 3rd and last row of Tab. 2, semantic refinement can improve the performance on text alignment, image alignment, and video consistency. The improvement in the image alignment is the most significant, further demonstrating the effectiveness of addressing semantic drift. Qualitative evaluation and the comparison with super-resolution prior are illustrated in Appendix. C. Effectiveness of 3D Diffusion Prior. 3D diffusion prior serves as a strong supervision for the first frame, which helps to learn both geometry and texture in all three stages.\nIn the first and last row of Tab. 2, removing the 3D prior impairs the overall geometry and thus influences all three aspects of the generated video. Effectiveness of Temporal Balanced Sampling. Removing temporal balanced sampling leads to the first frame supervision being almost ignored in the dynamic stages. The model degenerates to text-to-4D generation with image-to-3D static initialization and semantic refinement. Therefore, this model has good text alignment but the similarity to the reference image is quite poor. In addition, this model is not consistent in 3D geometry with severe Janus problem due to the lack of guidance from the 3D diffusion prior." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We introduce Animate124, the first work to animate a single in-the-wild image into a 3D video through textual motion descriptions. With the guidance of three diffusion priors, we optimize an advanced 4D grid dynamic NeRF in a static-to-dynamic and coarse-to-fine manner. Specifically, we leverage 2D image and 3D diffusion priors to develop a static 3D scene, which is then animated with video diffusion prior. To address the semantic drift inherent in the video diffusion prior, we further proposed semantic refinement, incorporating a personalized diffusion prior as additional supervision. With the innovative three-stage framework, Animate124 is capable of producing high-quality 4D dynamic scenes from both the reference image and textual descriptions." }, { "figure_ref": [], "heading": "A. Outline", "publication_ref": [ "b48" ], "table_ref": [], "text": "• A comparative analysis of our 4D grid encoding and the HexPlane [4] dynamic NeRF backbone is presented in Sec. B; • A comparison between our semantic refinement using ControlNet prior [50] and the refinement based on superresolution prior is detailed in Sec. C; • The qualitative demonstration of the effectiveness of our proposed temporal balanced sampling is provided in Sec. D." }, { "figure_ref": [ "fig_6" ], "heading": "B. Backbone", "publication_ref": [], "table_ref": [], "text": "In this paper, we leverage dynamic NeRF with 4D grid encoding to represent the spatio-temporal scene. Specifically, we divide the time dimension evenly into T grids, and for each time grid, we establish a 3D multi-scale grid V . The spatio-temporal features are obtained by interpolating two nearest time grids. In contrast, MAV3D [41] employs Hex-Plane [4] dynamic NeRF for 3D video representation. This method maps the X, Y, Z, and time axes onto six 2D planes, fusing these features to calculate density and color. Our approach is directly compared with HexPlane in Fig. 6. Note that for HexPlane, we set the azimuth angle to 45 • to prevent the reference camera pose from being perpendicular to one of the planes, which could result in the significant Janus problem. 
The fish fin in the first example illustrates how 4D grid encoding typically exhibits more motion than HexPlane. Furthermore, the back view of a 4D scene in the second example demonstrates that, despite careful adjustment of the reference camera pose, HexPlane is more prone to the Janus problem compared to 4D grid encoding. Consequently, we choose 4D grid encoding for dynamic scene representation." }, { "figure_ref": [], "heading": "C. Refinement", "publication_ref": [ "b48" ], "table_ref": [], "text": "In this paper, we introduce semantic refinement to mitigate the semantic drift associated with video diffusion models and to enhance the resolution of videos generated in the dynamic fine stage. This refinement is achieved through personalized modeling using the ControlNet [50] diffusion prior. MAV3D [41] also employs a coarse-to-fine approach, utilizing a super-resolution diffusion prior in their refinement stage to enhance results.\nTo assess the effectiveness of our approach in addressing semantic drift, we conduct a comparison between our per-" }, { "figure_ref": [ "fig_7", "fig_7" ], "heading": "Reference Image", "publication_ref": [], "table_ref": [], "text": "Prompt: A panda is dancing sonalized ControlNet prior and an image super-resolution prior § in Fig. 7. In each example presented in Fig. 7, the first row depicts the 3D video as generated in the coarse stage, while the second and third rows show the outcomes following semantic and super-resolution refinement, respectively. It is evident that while super-resolution can improve resolution (as seen in the fourth frame of the first example), it can also amplify errors (such as the back of the fox in the second example) present in the coarse stage, due to its lack of reference image context. In contrast, our semantic refinement not only enhances video quality but also rectifies semantic inaccuracies from the coarse stage. § https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler" }, { "figure_ref": [ "fig_9" ], "heading": "D. Temporal Balanced Sampling", "publication_ref": [], "table_ref": [], "text": "To enhance the supervision of the reference image and improve the learning of the initial and final time grids, we introduce temporal balanced sampling. In Fig. 8, we compare this technique with random sampling on two examples of the early timesteps. Since temporal balanced sampling can gather more information from the first frame, our method can present a more accurate appearance in the early stages. " } ]
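For reference, a minimal sketch of the temporal balanced sampling compared above is given below. The probability α and the window-length bounds are illustrative assumptions, since the text only states that sampled windows are pinned to start at t = 0 or end at t = 1 with probability α each.

```python
import random

def sample_video_timesteps(n_frames, alpha=0.3, min_span=0.1, max_span=1.0):
    """Sample n_frames evenly spaced normalized timesteps in [0, 1] for one iteration."""
    span = random.uniform(min_span, max_span)   # window length; stands in for a random FPS choice
    u = random.random()
    if u < alpha:                               # pin the window to start at the first frame (t = 0)
        start = 0.0
    elif u < 2.0 * alpha:                       # pin the window to end at the last time grid (t = 1)
        start = 1.0 - span
    else:                                       # otherwise, an ordinary random window
        start = random.uniform(0.0, 1.0 - span)
    step = span / (n_frames - 1)
    return [start + i * step for i in range(n_frames)]
```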
A blue flag with Chelsea Football Club logo on it, attached to a flagpole, waving with a smooth, gentle curve Figure 1. To the best of our knowledge,
Animate124: Animating One Image to 4D Dynamic Scene
[ { "figure_caption": "Figure 2 .2Figure 2. The overall framework of our Animate124. After learning the static scene (the first stage, not shown in the figure), the dynamic scene is optimized with a coarse-to-fine strategy in two stages. In the coarse stage, we optimize the dynamic NeRF with the combination of video diffusion and 3D diffusion priors. Subsequently, in the fine stage, additional ControlNet prior is introduced to refine the details and correct semantic drift. The condition of ControlNet derives from the frozen coarse stage model to reduce error accumulation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "(a) Random Sampling. (b) Temporal Balanced Sampling.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Distribution of the sampled timestep.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "sitting, playing an accordion with its front paws Time Time Figure 4. Qualitative comparison with the baseline methods on image-text-to-4D generation. Results in two views are shown. View 1 is the reference view and view 2 is another view. We use square in view 1 to better illustrate the motion and difference among methods.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Comparison with MAV3D text-to-4D generation. Reference image is not used in MAV3D. Animate124 can generate dynamic motion with the reference image as the protagonist while MAV3D cannot precisely control the subject. Comparison with MAV3D image-to-4D generation. Text prompt is not used in MAV3D. Animate124 can better preserve the appearance and pose of the reference image.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure5. Qualitative comparison with MAV3D on 4D generation with the samples on its website. Note that we extract frames from the videos on MAV3D website, thus the images in two views may not be perfectly matched.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. The comparison between 4D grid encoding and HexPlane. In each example, six consecutive frames are displayed. HexPlane is observed to struggle with limited motion and a conspicuous Janus problem.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure7. The comparison between semantic refinement and super-resolution refinement in the dynamic fine stage. In each example, the first row depicts the 3D video as generated in the coarse stage, while the second and third rows show the outcomes following semantic and super-resolution refinement, respectively. Semantic refinement yields superior results by incorporating more semantic information of the reference image.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "An astronaut, helmet in hand, rides a white horse.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. The comparison between the temporal balanced sampling and random sampling. In each example, we display six consecutive frames from the early timesteps. 
Temporal balanced sampling notably enhances the visual quality of these early frames in the videos.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Align. Image Align. Video Quality Realistic Motion More Motion Ablation studies on the proposed components.", "figure_data": "Zero-1-to-3-V [19]0.87460.30040.79250.96630.16670.18330.18330.17500.2667RealFusion-V [25]0.89390.30750.80260.96910.25830.30420.27500.26250.4083Animate1240.93110.31700.85440.9781-----ModelCLIP-R CLIP-T CLIP-I CLIP-Fw.o. 3D Prior0.90420.30200.81410.9704w.o. BalancedSampl.0.93770.32050.80830.9728w.o. SemRefine.0.92210.31290.83310.9715Animate1240.93110.31700.85440.9781", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Yuyang Zhao; Zhiwen Yan; Enze Xie; Lanqing Hong; Zhenguo Li; Gim Hee Lee
[ { "authors": "Jonathan T Barron; Ben Mildenhall; Matthew Tancik; Peter Hedman; Ricardo Martin-Brualla; Pratul P Srinivasan", "journal": "", "ref_id": "b0", "title": "Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields", "year": "2021" }, { "authors": "James Booth; Anastasios Roussos; Stefanos Zafeiriou; Allan Ponniah; David Dunaway", "journal": "", "ref_id": "b1", "title": "A 3d morphable model learnt from 10,000 faces", "year": "2016" }, { "authors": "Ang Cao; Justin Johnson", "journal": "", "ref_id": "b2", "title": "Hexplane: A fast representation for dynamic scenes", "year": "2023" }, { "authors": "Haoxin Chen; Menghan Xia; Yingqing He; Yong Zhang; Xiaodong Cun; Shaoshu Yang; Jinbo Xing; Yaofang Liu; Qifeng Chen; Xintao Wang; Chao Weng; Ying Shan", "journal": "", "ref_id": "b3", "title": "Videocrafter1: Open diffusion models for high-quality video generation", "year": "2023" }, { "authors": "Rui Chen; Yongwei Chen; Ningxin Jiao; Kui Jia", "journal": "", "ref_id": "b4", "title": "Fantasia3d: Disentangling geometry and appearance for high-quality text-to-3d content creation", "year": "2023" }, { "authors": "Patrick Esser; Johnathan Chiu; Parmida Atighehchian; Jonathan Granskog; Anastasis Germanidis", "journal": "", "ref_id": "b5", "title": "Structure and content-guided video synthesis with diffusion models", "year": "2023" }, { "authors": "Sara Fridovich-Keil; Giacomo Meanti; Frederik Rahbaek Warburg; Benjamin Recht; Angjoo Kanazawa", "journal": "", "ref_id": "b6", "title": "K-planes: Explicit radiance fields in space, time, and appearance", "year": "2023" }, { "authors": "Rinon Gal; Yuval Alaluf; Yuval Atzmon; Or Patashnik; H Amit; Gal Bermano; Daniel Chechik; Cohen-Or", "journal": "ICLR", "ref_id": "b7", "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion", "year": "2023" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "NeurIPS", "ref_id": "b8", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Lukas Höllein; Ang Cao; Andrew Owens; Justin Johnson; Matthias Nießner", "journal": "", "ref_id": "b9", "title": "Text2room: Extracting textured 3d meshes from 2d text-to-image models", "year": "2023" }, { "authors": "Fangzhou Hong; Mingyuan Zhang; Liang Pan; Zhongang Cai; Lei Yang; Ziwei Liu", "journal": "", "ref_id": "b10", "title": "Avatarclip: Zero-shot textdriven generation and animation of 3d avatars", "year": "2022" }, { "authors": "Lianghua Huang; Di Chen; Yu Liu; Yujun Shen; Deli Zhao; Jingren Zhou", "journal": "", "ref_id": "b11", "title": "Composer: Creative and controllable image synthesis with composable conditions", "year": "2023" }, { "authors": "Ajay Jain; Ben Mildenhall; Jonathan T Barron; Pieter Abbeel; Ben Poole", "journal": "", "ref_id": "b12", "title": "Zero-shot text-guided object generation with dream fields", "year": "2022" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b13", "title": "Adam: A method for stochastic optimization", "year": "" }, { "authors": "Han-Hung Lee; Angel X Chang", "journal": "", "ref_id": "b14", "title": "Understanding pure clip guidance for voxel grid nerf models", "year": "2022" }, { "authors": "Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin", "journal": "", "ref_id": "b15", "title": "Magic3d: High-resolution text-to-3d content creation", "year": "2023" }, { "authors": "Minghua Liu; Chao Xu; Haian Jin; 
Linghao Chen; Zexiang Xu; Hao Su", "journal": "", "ref_id": "b16", "title": "One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization", "year": "2023" }, { "authors": "Ruoshi Liu; Rundi Wu; Basile Van Hoorick; Pavel Tokmakov; Sergey Zakharov; Carl Vondrick", "journal": "", "ref_id": "b17", "title": "Zero-1-to-3: Zero-shot one image to 3d object", "year": "2007" }, { "authors": "Yuan Liu; Cheng Lin; Zijiao Zeng; Xiaoxiao Long; Lingjie Liu; Taku Komura; Wenping Wang", "journal": "", "ref_id": "b18", "title": "Syncdreamer: Generating multiview-consistent images from a single-view image", "year": "2023" }, { "authors": "Matthew Loper; Naureen Mahmood; Javier Romero; Gerard Pons-Moll; Michael J Black", "journal": "ACM TOG", "ref_id": "b19", "title": "Smpl: A skinned multiperson linear model", "year": "2015" }, { "authors": "Cheng Lu; Yuhao Zhou; Fan Bao; Jianfei Chen; Chongxuan Li; Jun Zhu", "journal": "NeurIPS", "ref_id": "b20", "title": "Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps", "year": "2022" }, { "authors": "Zhengxiong Luo; Dayou Chen; Yingya Zhang; Yan Huang; Liang Wang; Yujun Shen; Deli Zhao; Jingren Zhou; Tieniu Tan", "journal": "", "ref_id": "b21", "title": "Videofusion: Decomposed diffusion models for high-quality video generation", "year": "2023" }, { "authors": "Julieta Martinez; Rayat Hossain; Javier Romero; James J Little", "journal": "", "ref_id": "b22", "title": "A simple yet effective baseline for 3d human pose estimation", "year": "2017" }, { "authors": "Luke Melas-Kyriazi; Christian Rupprecht; Iro Laina; Andrea Vedaldi", "journal": "", "ref_id": "b23", "title": "Realfusion: 360{\\deg} reconstruction of any object from a single image", "year": "2023" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Springer", "ref_id": "b24", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Thomas Müller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "", "ref_id": "b25", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2022" }, { "authors": "Michael Niemeyer; Jonathan T Barron; Ben Mildenhall; S M Mehdi; Andreas Sajjadi; Noha Geiger; Radwan", "journal": "", "ref_id": "b26", "title": "Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs", "year": "2022" }, { "authors": "Sungheon Park; Minjung Son; Seokhwan Jang; Young Chun Ahn; Ji-Yeon Kim; Nahyup Kang", "journal": "", "ref_id": "b27", "title": "Temporal interpolation is all you need for dynamic neural radiance fields", "year": "2023" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "ICLR", "ref_id": "b28", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2022" }, { "authors": "Guocheng Qian; Jinjie Mai; Abdullah Hamdi; Jian Ren; Aliaksandr Siarohin; Bing Li; Hsin-Ying Lee; Ivan Skorokhodov; Peter Wonka; Sergey Tulyakov", "journal": "", "ref_id": "b29", "title": "Magic123: One image to high-quality 3d object generation using both 2d and 3d diffusion priors", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b30", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": 
"Amit Raj; Srinivas Kaza; Ben Poole; Michael Niemeyer; Nataniel Ruiz; Ben Mildenhall; Shiran Zada; Kfir Aberman; Michael Rubinstein; Jonathan Barron", "journal": "", "ref_id": "b31", "title": "Dreambooth3d: Subject-driven text-to-3d generation", "year": "2023" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b32", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b33", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b34", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "NeurIPS", "ref_id": "b35", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Ruizhi Shao; Zerong Zheng; Hanzhang Tu; Boning Liu; Hongwen Zhang; Yebin Liu", "journal": "", "ref_id": "b36", "title": "Tensor4d: Efficient neural 4d decomposition for high-fidelity dynamic reconstruction and rendering", "year": "2023" }, { "authors": "Uriel Singer; Adam Polyak; Thomas Hayes; Xi Yin; Jie An; Songyang Zhang; Qiyuan Hu; Harry Yang; Oron Ashual; Oran Gafni", "journal": "", "ref_id": "b37", "title": "Make-a-video: Text-to-video generation without text-video data", "year": "2022" }, { "authors": "Uriel Singer; Adam Polyak; Thomas Hayes; Xi Yin; Jie An; Songyang Zhang; Qiyuan Hu; Harry Yang; Oron Ashual; Oran Gafni", "journal": "", "ref_id": "b38", "title": "Make-a-video: Text-to-video generation without text-video data", "year": "2022" }, { "authors": "Uriel Singer; Shelly Sheynin; Adam Polyak; Oron Ashual; Iurii Makarov; Filippos Kokkinos; Naman Goyal; Andrea Vedaldi; Devi Parikh; Justin Johnson", "journal": "", "ref_id": "b39", "title": "Text-to-4d dynamic scene generation", "year": "2023" }, { "authors": "Cheng Sun; Min Sun; Hwann-Tzong Chen", "journal": "", "ref_id": "b40", "title": "Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction", "year": "2022" }, { "authors": "Jingxiang Sun; Bo Zhang; Ruizhi Shao; Lizhen Wang; Wen Liu; Zhenda Xie; Yebin Liu", "journal": "", "ref_id": "b41", "title": "Dreamcraft3d: Hierarchical 3d generation with bootstrapped diffusion prior", "year": "2023" }, { "authors": "Jiaxiang Tang; Jiawei Ren; Hang Zhou; Ziwei Liu; Gang Zeng", "journal": "", "ref_id": "b42", "title": "Dreamgaussian: Generative gaussian splatting for efficient 3d content creation", "year": "2023" }, { "authors": "Haochen Wang; Xiaodan Du; Jiahao Li; Raymond A Yeh; Greg Shakhnarovich", "journal": "", "ref_id": "b43", "title": "Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation", "year": "2023" }, { "authors": "Jiuniu Wang; Hangjie Yuan; Dayou Chen; Yingya Zhang; Xiang Wang; Shiwei Zhang", "journal": "", "ref_id": "b44", "title": "Modelscope text-to-video technical report", "year": "2023" }, { "authors": "Xiang Wang; Hangjie Yuan; Shiwei Zhang; Dayou Chen; Jiuniu Wang; Yingya Zhang; Yujun Shen; Deli Zhao; Jingren Zhou", "journal": "NeurIPS", "ref_id": 
"b45", "title": "Videocomposer: Compositional video synthesis with motion controllability", "year": "2023" }, { "authors": "Zhengyi Wang; Cheng Lu; Yikai Wang; Fan Bao; Chongxuan Li; Hang Su; Jun Zhu", "journal": "NeurIPS", "ref_id": "b46", "title": "Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation", "year": "2023" }, { "authors": "Dejia Xu; Yifan Jiang; Peihao Wang; Zhiwen Fan; Yi Wang; Zhangyang Wang", "journal": "", "ref_id": "b47", "title": "Neurallift-360: Lifting an in-the-wild 2d photo to a 3d object with 360{\\deg} views", "year": "2023" }, { "authors": "Lvmin Zhang; Maneesh Agrawala", "journal": "", "ref_id": "b48", "title": "Adding conditional control to text-to-image diffusion models", "year": "2009" }, { "authors": "Daquan Zhou; Weimin Wang; Hanshu Yan; Weiwei Lv; Yizhe Zhu; Jiashi Feng", "journal": "", "ref_id": "b49", "title": "Magicvideo: Efficient video generation with latent diffusion models", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 334.41, 479.47, 210.7, 23.23 ], "formula_id": "formula_0", "formula_text": "F x,y,z,t = t + -t ∆ t V x,y,z,t-+ t -t - ∆ t V x,y,z,t+ ,(1)" }, { "formula_coordinates": [ 3, 344.98, 684.96, 196.26, 30.2 ], "formula_id": "formula_1", "formula_text": "L T V = T -1 t=0 x,y,z (V x,y,z,t -V x,y,z,t+1 ) 2 . (2" }, { "formula_coordinates": [ 3, 541.24, 695.69, 3.87, 8.64 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 4, 70.08, 148.41, 451.13, 122.61 ], "formula_id": "formula_3", "formula_text": "3D Diffusion Dynamic Cameras First Frame Novel View ℒ !\"# Coarse Stage ℒ $%$&' ℒ $%$&(% Ref View 2D Video Diffusion Prompt: A panda is dancing 3D Diffusion Dynamic Cameras First Frame Novel View" }, { "formula_coordinates": [ 4, 283.76, 119.83, 249.23, 117.91 ], "formula_id": "formula_4", "formula_text": "Novel View Reference View Copy ℒ $%$&' ℒ $%$&(% ℒ $%$&) ℒ !\"#" }, { "formula_coordinates": [ 4, 57.45, 473.46, 228.91, 56.72 ], "formula_id": "formula_5", "formula_text": "LSDS = Eσ,p,ϵ ω(σ) ϵ 2D ϕ (I p ; σ, e) -ϵ ∂I p ∂θs + λ3DEσ,p,ϵ ω(σ) ϵ 3D ϕ I p ; σ, Ĩr , ∆p -ϵ ∂I p ∂θs ,(3)" }, { "formula_coordinates": [ 4, 60.15, 635.72, 222.34, 65.48 ], "formula_id": "formula_6", "formula_text": "L rec = λ rgb || M r ⊙ ( Ĩr -I r )|| + λ mask || M r -M r || + λ d   1 - cov M r ⊙ dr , M r ⊙ d r std M r ⊙ dr std( M r ⊙ d r )   ,(4" }, { "formula_coordinates": [ 4, 379.75, 399.02, 94.48, 9.65 ], "formula_id": "formula_7", "formula_text": "L static = L SDS + L rec ." }, { "formula_coordinates": [ 4, 312.8, 609.99, 232.32, 31.22 ], "formula_id": "formula_8", "formula_text": "LSDS-T = E σ,r(t),ϵ ω(σ) ϵ V id ϕ V r(t) ; σ, e -ϵ V r(t) ∂θ d ,(6)" }, { "formula_coordinates": [ 5, 50.11, 561.48, 247.97, 34.73 ], "formula_id": "formula_9", "formula_text": "LSDS-3D = Eσ,p,ϵ ω(σ) ϵ 3D ϕ V r(t) 0 ; σ, Ĩr , ∆p -ϵ ∂V r(t) 0 ∂θ d ,(7)" }, { "formula_coordinates": [ 5, 84.25, 649.82, 202.11, 24.75 ], "formula_id": "formula_10", "formula_text": "L dynamic = L SDS-T + λ T V L TV + 1 t0=0 (L rec + λ 3D L SDS-3D ),(8)" }, { "formula_coordinates": [ 5, 308.86, 535.53, 241.75, 34.68 ], "formula_id": "formula_11", "formula_text": "LSDS-R = E σ,r(t),ϵ ω(σ) ϵ CN ϕ I r(t) t ; σ, Îr(t) t , e -ϵ I r(t) t ∂θ d ,(9)" }, { "formula_coordinates": [ 5, 327.55, 631.4, 217.56, 24.75 ], "formula_id": "formula_12", "formula_text": "L refine = L SDS-T + λ R L SDS-R + λ T V L TV + 1 t0=0 (L rec + λ 3D L SDS-3D ),(10)" } ]
2023-11-24
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10" ], "table_ref": [], "text": "Neural Style Transfer (NST) refers to the process of changing the appearance of an input image based on a reference style image, whilst preserving the input image's underlying content. For example, a photograph of a landscape can be made to take on the style of a Van Gogh painting. More recently, NST has been extended to work for three-dimensional data, such as 3D meshes, point clouds, and radiance fields. Given the inherently creative and artistic nature of NST, another domain where its application holds immense potential is within the realm of 3D computer games. By integrating style transfer techniques into computer games, developers could dynamically alter the visual aesthetics of a game in real-time, and players could be given the ability to choose from an array of artistic styles, influencing the appearance of the game's world and characters according to their preferences. However, there is limited work applying NST to 3D computer games. Whilst NST approaches have not been specifically tailored for 3D computer games, image and video NST methods can be applied at the end of the 3D computer graphics pipeline, as a post-processing effect [1]. This essentially treats the data as a sequence of images. Here, temporal consistency across consecutive frames is the prominent challenge. Some video NST approaches utilise optical flow information and introduce a temporal consistency loss to achieve temporal stability [2,3,4,5,6], whilst other approaches rely on improving the stability of the transferred content and style features [7,8,9]. Nevertheless, employing such models at the post-process stage of the 3D computer graphics pipeline, results in undesired flickering effects and inconsistent stylisations.\nPrevious work has demonstrated that the utilisation of G-buffer data can lead to improved quality of generated stylised game scenes [10,11]. Our work takes advantage of the intermediate data that is generated by a 3D computer graphics rendering pipeline, and proposes an approach for integrating an NST model at an earlier stage of the rendering process (Figure 1), resulting in improved, more stable artistic stylisations of game worlds. Our method retrieves data from the camera colour buffer, generates consistent stylised game frames, and writes back to the colour buffer, before post-processing. We believe this is the first work to stylise in this way. The primary contributions of our work can be summarised as follows: 1) We train a fast Stylisation network on both a real-world and a synthetic image dataset, capable of producing fast high-quality artistic stylisations; 2) We present an approach that integrates a trained stylisation network at an early stage of the rendering pipeline, avoiding the visual artefacts and inconsistencies that occur when employing stylisation as a post-effect; 3) We evaluate the results of our system qualitatively and quantitatively, demonstrating how the games community can benefit from the NST field.\n2 Related Work" }, { "figure_ref": [], "heading": "Image & Video NST", "publication_ref": [ "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b29", "b30", "b31", "b6", "b32", "b33", "b34", "b2", "b1", "b3", "b35", "b36", "b5", "b37", "b7", "b8", "b38", "b39", "b6", "b35" ], "table_ref": [], "text": "Gatys et al. 
[12] proposed a model that minimises a content loss and a style loss, based on features extracted from pre-trained CNNs on object classification. Since this seminal work, multiple NST approaches have emerged, proposing end-to-end systems trained on singular styles that manage to improve upon the efficiency and time required to generate one stylisation. These are capable of synthesising a stylised output with a single forward pass through the network [13,14,15,16]. While some models trained to capture the style of a particular artist or a specific art genre [17,18], and more efficient multiple-style-per-model approaches were also developed [19,20,21], recently, the research has shifted to developing arbitrary style transfer systems. The method of Huang and Belongie [22] suggested the use of an Adaptive Instance Normalisation layer (AdaIN) that allows transferring the channel-wise mean and variance feature statistics between the content and style feature activations, thus achieving arbitrary style transfer. Other arbitrary-style-per-model methods were also developed that improve upon the performance [23,24,25,26,27,28,29] or offer solutions tailored to particular challenges [30,31]. Meta networks were also employed [32], as well as systems making use of the recently developed transformer architecture [7,33,34].\nTo alleviate the issue of temporal inconsistency across subsequent frames when video is considered, Ruder et al. [35,3] employed a temporal constraint based on optical flow information. Typically, the optical flow map (calculated between two frames of the original video) is used to warp the previous stylised frame to give an estimation of the next frame. This gives a temporal loss function that can be minimised during training. Other work has subsequently improved the computation speed [2,4] or demonstrated structure and depth-preserving qualities [36,37]. Gao et al. [6] developed a fast model that incorporates multiple styles, while arbitrary video style transfer models have also been proposed [38,8,9,39], some of which are extensions from image NST approaches with additional temporal considerations [40,7,36]." }, { "figure_ref": [], "heading": "NST in 3D Computer Games", "publication_ref": [ "b6", "b7", "b8", "b5", "b40", "b22", "b9", "b10", "b9", "b40" ], "table_ref": [], "text": "Although methods exist for stylising three-dimensional data and can offer 3D artists diverse options for generating or improving a game's assets, no substantial efforts have been noted for real-time in-game artistic stylisation. The image and video NST models [7,8,9,6] can potentially be integrated at the end of a computer game's rendering pipeline, intercepting each rendered frame and producing a stylised version of it. An example of this has been exhibited by Unity's implementation [41] which is based on the method of Ghiasi et al. [23] that produces a stylised image from an input image in a single forward pass from the neural network. Multi-style in-game style transfer is achieved allowing the viewer to change the stylisation of the scene in real-time. Nonetheless, the implementation does not consider any G-buffer or 3D information. Instead, it intercepts the final rendered 2D image (using an off-screen buffer), which means it can be applied as a final 'filter' for any game. This also results in unstable stylisations and causes the intended post-process effects being diminished.\nThe recent approach by Richter et al. 
[10] to enhancing the photorealism of computer-generated imagery might be the first to take into account information from intermediate buffers (G-buffers) that becomes available through a game engine's rendering process. This method, although explicitly focused on photorealistic enhancement, can be significantly impactful to style transfer algorithms that consider the stylisation of game environments. Their technique also works at the end of the rendering pipeline -the image enhancement network outputs an enhanced image given an input rendered image. However, the image enhancement network is fed with information about the geometry, materials, and lighting, extracted from intermediate rendering buffers during training.\nSimilarly, the image-to-image translation method proposed by Mittermueller et al. [11] trains a network to learn the mapping between low-poly game scenes to a synthetic dataset compiled using the Red Dead Redemption 2 (RDR2) game. The mapping takes into account intermediate data such as depth, normals, and albedo generated by conventional game rendering pipelines, for improved image domain transfer. Although the developed EST-GAN validates the effectiveness of G-buffer data for the generation of stylistic game scenes, it does not utilise this information in real-time and does not demonstrate any impact on the stability of sequential stylised game frames.\nIntegrating NST at the end of the 3D rendering pipeline is the only approach that has been suggested for the synthesis of real-time photorealistic [10] or artistic [41] game worlds. Nevertheless, the amount of post-processing that is executed, and the unpredictable camera movement and scene shifts, do not allow for coherent and robust post-process stylisations.\nHere we propose an approach for producing stable and aesthetically pleasing visual effects in computer games by integrating a style transfer model before the post-process stage of the 3D computer game rendering pipeline.\n3 Injecting NST into the 3D rendering pipeline" }, { "figure_ref": [ "fig_0" ], "heading": "Style Transfer Network", "publication_ref": [ "b12", "b41", "b42" ], "table_ref": [], "text": "The network architecture is shown in Figure 2. Similarly to state-of-the-art methods [13,42,43], we utilise a Transformation network f W , that intercepts an input image x and transforms it into an output image ŷ via the mapping ŷ = f W (x). To improve upon the efficiency and inference time required to generate a stylised frame given an input image, we reduce the number of residual layers and remove the ReLU activation function from the first three convolutional layers. The final configuration of our network consists of three convolutional layers followed by instance normalisation, two residual layers (composed of convolutions, instance normalisation, and ReLU), and three deconvolutional layers that upsample the input and then perform convolution. The first two deconvolutional layers are followed by instance normalisation and ReLU activation. " }, { "figure_ref": [], "heading": "Content & Style Losses", "publication_ref": [ "b12", "b43" ], "table_ref": [], "text": "We use the perceptual loss functions introduced in the work of Johnson et al. [13] and employ a pre-trained image recognition network (VGG-16 [44]) to produce feature representations of the original and transformed images. 
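To make the loss definitions that follow concrete, the VGG-16 feature extraction can be sketched as below. This is an illustrative PyTorch sketch rather than the authors' code: the torchvision layer indices assumed for relu1_2, relu2_2, relu3_3 and relu4_3 follow the standard VGG-16 layout, and a recent torchvision (with the weights enum) is assumed.

```python
import torch
import torchvision.models as models

# Assumed positions of the ReLU outputs inside torchvision's VGG-16 `features`.
VGG16_LAYERS = {"relu1_2": 3, "relu2_2": 8, "relu3_3": 15, "relu4_3": 22}

class VGG16Features(torch.nn.Module):
    """Fixed loss network returning the activations used by the content and style losses."""

    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)  # the loss network stays frozen during training
        self.vgg = vgg

    def forward(self, x):
        feats, out = {}, x
        for idx, layer in enumerate(self.vgg):
            out = layer(out)
            for name, pos in VGG16_LAYERS.items():
                if idx == pos:
                    feats[name] = out
            if idx >= max(VGG16_LAYERS.values()):
                break
        return feats
```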
Content loss is defined as the Euclidean distance between the feature representations of the input image and the corresponding transformed image, as extracted from the relu2_2 layer:\n\ell_{content}^{\phi_0}(\hat{y}, x) = \frac{1}{C_j H_j W_j} \| \phi_0^{j}(\hat{y}) - \phi_0^{j}(x) \|_2^2   (1)\nwhere ϕ_0 is the image classification network, ϕ_0^j represents the activations of the j th layer of ϕ_0, and H × W × C is the shape of the processed image.\nThe style is represented by features extracted from multiple layers of VGG-16 (J = {relu1_2, relu2_2, relu3_3, relu4_3}). The Gram matrix G is then computed to give feature correlations that can be utilised to define the style loss function. This is then defined as the squared Frobenius norm of the difference between the calculated Gram-based style representations:\nL_{style}^{\phi_0, j}(\hat{y}, y) = \| G_j^{\phi_0}(\hat{y}) - G_j^{\phi_0}(y) \|_F^2   (2)\nand it is summed up for all the layers j in J. Here, y and ŷ refer to the original style image and the transformed image, respectively." }, { "figure_ref": [], "heading": "Depth Loss", "publication_ref": [ "b41", "b42", "b36", "b44", "b42", "b36" ], "table_ref": [], "text": "Previous approaches that consider depth information during training [42,43,37] have shown improvements to the synthesised results in terms of structure retainment and depth preservation performance. As the trained stylisation network is required to be used in a game setting -and it is highly desired to sustain the depth of the stylised game frames -we utilise a depth reconstruction network (MiDaS) [45] to define a depth reconstruction loss [43,37]:\nL_{depth}^{MiDaS}(\hat{y}, x) = \| MiDaS_1(\hat{y}) - MiDaS_1(x) \|_2^2   (3)" }, { "figure_ref": [], "heading": "Difference of Gaussians Loss", "publication_ref": [ "b45", "b46", "b47" ], "table_ref": [], "text": "A particular issue that occurs in stylisation approaches is an undesired halo effect around distinct parts of an image. This effect is compounded by the significance that is placed on edges in human vision [46,47] meaning that edge inconsistencies stand out. We use the Difference-of-Gaussians (DoG) operator in order to improve upon the global and local structure preservation of stylised image frames, and thus attempt to alleviate the issue of the undesired halo effect.\nInspired by the neural processing in the retina of the human eye, the DoG response is equivalent to a band-pass filter that discards most of the spatial frequencies that are present in an image. The DoG operator is derived from the concept of convolving an image with two Gaussian kernels of different standard deviations and then taking the difference between the two convolved images. This feature enhancement algorithm has been shown to produce aesthetic edge lines and has been previously utilised for image stylisation [48]. We, therefore, define a DoG loss that is based on the difference between the DoG responses (DoGR) of the original image x and the corresponding stylised image ŷ:\nL_{DoG}(\hat{y}, x) = \| DoGR(\hat{y}) - DoGR(x) \|_2^2   (4)" }, { "figure_ref": [], "heading": "Training Details", "publication_ref": [ "b48", "b42", "b12", "b49", "b50", "b51", "b0" ], "table_ref": [], "text": "The Stylisation Network is trained for 2 epochs with a batch size of 2 and a learning rate of 1 × 10⁻³. The content and style weights are set to 1 × 10⁵ and 1 × 10¹⁰, respectively. The weight for the depth loss and the DoG loss is set to 1 × 10³. The Adam optimizer [49] is employed with a learning rate of 1 × 10⁻³. 
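Putting the four objectives above together with the weights just listed, a per-batch training loss could be assembled roughly as in the following sketch. It is illustrative rather than the reference implementation: vgg_feats is a feature extractor such as the one sketched earlier, depth_net stands in for MiDaS, style_grams holds the precomputed Gram matrices of the style image, and the DoG kernel sizes and sigmas are assumed values.

```python
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur

def gram_matrix(feat):
    # (B, C, H, W) -> (B, C, C) channel correlations, normalised by C*H*W
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def dog_response(img, k1=5, k2=9, s1=1.0, s2=2.0):
    # Difference-of-Gaussians band-pass response; kernel sizes and sigmas are illustrative
    return gaussian_blur(img, [k1, k1], [s1]) - gaussian_blur(img, [k2, k2], [s2])

def total_loss(y_hat, x, style_grams, vgg_feats, depth_net,
               w_c=1e5, w_s=1e10, w_d=1e3, w_dog=1e3):
    f_hat, f_x = vgg_feats(y_hat), vgg_feats(x)
    l_content = F.mse_loss(f_hat["relu2_2"], f_x["relu2_2"])                 # Eq. (1)
    l_style = sum(torch.sum((gram_matrix(f_hat[j]) - style_grams[j]) ** 2)
                  for j in style_grams)                                      # Eq. (2)
    l_depth = F.mse_loss(depth_net(y_hat), depth_net(x))                     # Eq. (3)
    l_dog = F.mse_loss(dog_response(y_hat), dog_response(x))                 # Eq. (4)
    return w_c * l_content + w_s * l_style + w_d * l_depth + w_dog * l_dog
```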
The setting of the hyperparameters is adopted from [43] -this maintains the optimal content-style ratio as in the implementation of Johnson et al. [13]. To accommodate robust stylisation of game environments and synthetic scenes, both real-world images and frames from computer-generated sources are used to train the stylisation network. The MS COCO dataset [50] is used, mixed with frames from the MPI Sintel training set [51]. The data is shuffled and all the images are resized to 360 × 360 during training. In order for the trained Stylisation network to be suitable for in-game stylisation, we export the trained model to the ONNX [52] format. This is supported by Unity and the Barracuda package [1].\nFigure 3: Overview of the modified 3D rendering pipeline. The NST module is added before the Post-process stage whilst the colour buffer information is available for read and write." }, { "figure_ref": [ "fig_2" ], "heading": "In-game Stylisation", "publication_ref": [ "b52", "b0" ], "table_ref": [], "text": "To accommodate real-time in-game stylisation we use the Unity game engine and the High Definition Rendering Pipeline (HDRP) [53]. Custom Passes can be configured within Unity's rendering pipeline and can be executed at certain points during the HDRP render loop. Six injection points for a Custom Pass are offered, with a selection of buffers being available at each. The injection points are: Before Rendering, After Opaque, Depth and Normal, Before Pre-Refraction, Before Transparent, Before Post-Process, and After Post Process. To generate a stylised image, it is necessary to read and write to the colour buffer that is available after the Opaque, Depth and Normal stage. In order for the stylisation to affect the transparent objects in the scene, we opt to inject the custom pass before the Post-Process stage.\nThe overall modified Unity HDRP rendering pipeline is depicted in Figure 3. During rendering, HDRP writes colour data from visible objects (renderers) in the scene to the colour buffer. During a custom pass, a depth pyramid and a colour pyramid are created (as shown in Figure 3). The colour pyramid constitutes an iterative series of mipmaps, crafted by the HDRP, extracted from the colour buffer at a specific juncture within the rendering pipeline. The NST Module is inserted after the Distortion stage and before the Post-process stage, intercepting the colour buffer mipmap and producing an artistic stylisation for each frame (this is supported by the Barracuda package [1] that allows for neural network inference). The synthesised texture is then passed to a custom compute shader that writes the colour to the camera colour buffer. This allows for the Post-process stage to utilise the stylised frames, before the final render. Our proposed system is capable of producing stable real-time stylised frames free from undesired artefacts and flickering effects. Embedding the NST module earlier in the rendering pipeline also allows for the post-process effects (such as depth of field, bloom, or motion blur) to effectively be visible, adding to the look and feel of the game. Such effects would be diminished if the stylisation effect was applied at the final render -examples of this are shown in Figure 5." 
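For the export step mentioned in the Training Details above, a trained PyTorch Transformation network could be written out to ONNX roughly as follows. This is a sketch under assumptions: the tensor names, the 360 × 360 dummy input and the opset version are illustrative and should be checked against the version of Barracuda being targeted.

```python
import torch

def export_stylisation_network(net, path="stylisation_net.onnx", size=360):
    """Trace the trained Transformation network and write an ONNX file that
    Unity can import through the Barracuda package."""
    net.eval()
    dummy_frame = torch.randn(1, 3, size, size)  # NCHW colour frame
    torch.onnx.export(
        net,
        dummy_frame,
        path,
        input_names=["frame"],
        output_names=["stylised"],
        opset_version=11,  # assumed; pick an opset supported by the target runtime
    )
```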
}, { "figure_ref": [ "fig_1" ], "heading": "Results & Discussion", "publication_ref": [ "b53", "b54", "b55", "b56" ], "table_ref": [], "text": "Our NST system is embedded in the rendering pipeline, intercepting each G-buffer colour frame and producing a stylised version that is then passed through the Post-process stage. Figure 4 demonstrates some final rendered frames for different reference style images and for different game scenes. Our method is capable of producing robust stylisations even for complicated scenes with difficult lighting. The final renders do not suffer from undesired artefacts or flickering effects, while the halo effect around the objects in the scene is significantly reduced. The following subsections further demonstrate temporally consistent stylisations of frames from various open-source games [54,55,56,57] and compare the results against state-of-the-art methods in image and video style transfer. Videos of our results and comparisons to state-of-the-art methods are included on the project's website. " }, { "figure_ref": [ "fig_2", "fig_2", "fig_2", "fig_2" ], "heading": "Qualitative Results", "publication_ref": [ "b6", "b8", "b7", "b5", "b6", "b8", "b7", "b5", "b6", "b8", "b7", "b5" ], "table_ref": [], "text": "Qualitative comparisons against four state-of-the-art methods -AdaAttN [7], CSBNet [9], MCCNet [8], and FVMST [6] -are shown in Figure 5. This includes stylisations for two consecutive frames for two different game scenes and two different style images.\nFigure 5 shows that AdaAttN [7] preserves much of the content information, however, the stylisation effect is not very visible -the yellow colour that is eminent in the style image is absent from the stylised frames (Figure 5(b)). The video NST methods, CSBNet [9] and MCCNet [8], reproduce the style image more faithfully than AdaAttN, but they create undesired artefacts such as the yellow halo around the trees that are visually distinguished from the background. FVMST [6] also captures the style quite well, but generates a white artefact encircling the mountain's background edge and it produces a sudden shift of the sky colour (from light, it turns to dark blue/black) that is not visible in the original frames. Our approach reproduces the style image faithfully and eliminates the undesired effects that are visible in the results of the state-of-the-art methods. The structure of the original frames is preserved comparably to the stylisations of AdaAttN but with higher stylisation intensity and better preservation of the luminance and lighting of the scene.\nTo demonstrate the effectiveness of our approach, we also include close-ups of consecutive frames of a game scene that includes a moving 3D object in a complex background. When looking at the zoomed-in cut-outs in Figure 5(a), the halo effect around the object's edge is more noticeable in the generated frames of AdaAttN [7], CSBNet [9], and MCCNet [8], while the stylisations of FVMST [6] create a disturbing white blob. In addition, the close-ups provide strong evidence of the capability of our approach to retaining the luminance in the scene and the game's post-effects.\nThe prominent depth-of-field effect in the original frames is completely ignored when the stylisation is performed at the final render using state-of-the-art methods that enhance the details on the background. Our system makes the 3D object stand out and preserves the lighting and the game's overall look and feel." 
}, { "figure_ref": [ "fig_3" ], "heading": "Quantitative Results", "publication_ref": [ "b57", "b58", "b59", "b11", "b60", "b11" ], "table_ref": [ "tab_0" ], "text": "For quantitative comparisons, frames are extracted from 4 different games and 12 different gameplays, including indoor and outdoor scenes, featuring moving objects and complicated lighting. This results in an evaluation dataset of 2100 frames. The average results are reported in Table 1.\nTo quantitatively gauge the performance of our method in video stability and temporal coherence, we utilise the warping error that is calculated as the difference between a warped next frame (using optic flow) and the original next frame.\nFlowNetS [58] is used to compute the optical flow of the original videos. In addition, we employ the LPIPS (Learned Perceptual Image Patch Similarity) metric [59] to measure the average perceptual distances between the adjacent frames in order to verify the smoothness of the stylised game sequences. The results show that our approach is superior to the state-of-the-art methods in generating temporally consistent in-game stylisations.\nPerceptual metrics are employed to quantitatively assess the stylisation quality. SSIM [60] and Content error (L_c) [12] are used to evaluate the effectiveness of the methods in retaining content information; SIFID [61] and Style error [12] are used to evaluate style performance. Our system manages to preserve content adequately. Whilst our algorithm's effectiveness in reproducing the style image is sufficient, some stylisation qualities are lost when the post-process effects are performed on top of the stylisations. In order to retain the intended post-effects applied to a game, certain aspects of the style likeness to the original image are traded off. Arguably, this compromise can be deemed desirable in a game setting and, as has been demonstrated, this trade-off leads to more consistent and temporally stable stylisations. Figure 6 demonstrates example results of our approach under different configurations. In-game stylisation has a significant impact on temporal coherence, in comparison to stylising each rendered frame as a post-effect (Section 4.2)." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "Here, we show that the latter also produces less appealing stylisations, with much of the content information being discarded. In addition, training without the DoG Loss results in more visible halos around the objects in the scene, whereas training with DoG Loss leads to generated frames with enhanced object stylisation and reduced boundary artefacts. The same applies to Depth Loss, as our method synthesises visibly improved results when depth is considered. The inclusion of the MPI Sintel dataset also has an impact on the performance -the stylisation network trained only on the MS COCO dataset neglects the synthetic nature of the game and struggles to generate frames that retain the content adequately, producing undesired effects." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b21", "b6", "b7", "b21" ], "table_ref": [], "text": "To demonstrate the effectiveness of applying NST as part of the rendering pipeline of a computer game, we have trained a single-style-per-network model. Future work could experiment with arbitrary-style-per-model networks [22,7,8] which would provide the user with the option to upload and use their own reference style image. 
Another important consideration in applying NST in a game setting is running time. We reduced the number of residual layers and removed activation from the initial convolution layers to improve upon the inference time of the trained network which requires approximately 0.9 seconds to stylise an image of size 512 × 512. When injecting stylisation in the rendering pipeline the frame rate of a game running in Unity at Full HD resolution drops to ∼10fps. Utilising a more lightweight network architecture (arbitrary style transfer networks report better inference time, e.g., AdaIN [22]: 0.065 seconds) could result in stylised game environments running at higher frame rates." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We have proposed a novel approach for injecting NST into a computer graphics rendering pipeline. Our NST framework is capable of producing coherent and temporally stable stylised frames in computer games. Our NST module intercepts frames from the colour buffer and synthesises artistic stylisations that are then written back to the camera colour buffer. Robust stylisations are achieved without interfering with the applied post-process effects. We demonstrate qualitative and quantitative results that reveal a promising new avenue for integrating NST within game development processes." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research was funded by the EPSRC." } ]
Neural Style Transfer (NST) research has been applied to images, videos, 3D meshes and radiance fields, but its application to 3D computer games remains relatively unexplored. Whilst image and video NST systems can be used as a post-processing effect for a computer game, this results in undesired artefacts and diminished post-processing effects. Here, we present an approach for injecting depth-aware NST as part of the 3D rendering pipeline. Qualitative and quantitative experiments are used to validate our in-game stylisation framework. We demonstrate temporally consistent results of artistically stylised game scenes, outperforming state-of-the-art image and video NST methods.
NEURAL STYLE TRANSFER FOR COMPUTER GAMES
[ { "figure_caption": "Figure 2 :2Figure 2: The Stylisation Network consists of three convolutional layers (Conv), two residual layers (Res) and three deconvolutional layers (Deconv). Instance normalisation layers (IN) and the ReLU activation function are included at the first two deconvolution layers.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Our approach for different style images and different game scenes. Original content images are above stylised frames. Adjacent frames show temporal stability.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Comparison against state-of-the-art approaches. Top row: original frames, with the style image top left; two adjacent frames from two different game scenes are shown.We provide zoomed-in cut-outs on the right of each two-frame sequence, for better comparisons. Our method produces robust stylisations that capture the style image more efficiently and preserve content and luminance information of the scene more effectively in comparison with the state-of-the-art approaches.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Ablation study on the impact of the different components of our system. Each column shows a zoomed-in comparison between our method (in-game) trained with all components (green) and our stylisation network (a) applied as a post-effect, (b) trained without DoG Loss (c) trained without Depth Loss, and (d) trained without the MPI Sintel data (red).", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Quantitative results. Warping Error and LPIPS error (both in the form ×10) capture the smoothness of the generated video. SSIM and L c relate to content preservation, and SIFID and L s quantify the style performance. Results are given for our NST system injected in the game's rendering pipeline (game) and for the NST network applied as a post-effect (image). Bold values in Warping Error and LPIPS Error indicate our (in-game) approach is best at preserving temporal consistency.", "figure_data": "MethodWarping Error ↓ LPIPS Error ↓ SSIM ↑ SIFID ↓L c ↓L s ↓AdaAttN [7]1.64770.32170.78201.6115 0.4945 1.0391CSBNet [9]1.74580.39080.63702.2468 0.8674 1.0053MCCNet [8]1.65190.35470.66371.5555 0.8065 1.0042FVMST [6]1.85240.32150.58552.2529 0.7834 1.0077Ours (image)1.67640.36020.67401.2063 0.6532 0.9808Ours (in-game)1.57980.29300.60571.8679 0.7830 1.0612", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Eleftherios Ioannou; Steve Maddock
[ { "authors": "", "journal": "Introduction to barracuda: Barracuda", "ref_id": "b0", "title": "Unity Technologies", "year": "2023" }, { "authors": "Haozhi Huang; Hao Wang; Wenhan Luo; Lin Ma; Wenhao Jiang; Xiaolong Zhu; Zhifeng Li; Wei Liu", "journal": "", "ref_id": "b1", "title": "Real-time neural style transfer for videos", "year": "2017" }, { "authors": "Manuel Ruder; Alexey Dosovitskiy; Thomas Brox", "journal": "CoRR", "ref_id": "b2", "title": "Artistic style transfer for videos and spherical images", "year": "2017" }, { "authors": "Chang Gao; Derun Gu; Fangjun Zhang; Yizhou Yu", "journal": "", "ref_id": "b3", "title": "Reconet: Real-time coherent video style transfer network", "year": "2018" }, { "authors": "Tian Qi; Chen ; Mark Schmidt", "journal": "", "ref_id": "b4", "title": "Fast patch-based style transfer of arbitrary style", "year": "2016" }, { "authors": "Wei Gao; Yijun Li; Yihang Yin; Ming-Hsuan Yang", "journal": "", "ref_id": "b5", "title": "Fast video multi-style transfer", "year": "2020" }, { "authors": "Songhua Liu; Tianwei Lin; Dongliang He; Fu Li; Meiling Wang; Xin Li; Zhengxing Sun; Qian Li; Errui Ding", "journal": "", "ref_id": "b6", "title": "Adaattn: Revisit attention mechanism in arbitrary neural style transfer", "year": "2021" }, { "authors": "Yingying Deng; Fan Tang; Weiming Dong; Haibin Huang; Chongyang Ma; Changsheng Xu", "journal": "", "ref_id": "b7", "title": "Arbitrary video style transfer via multi-channel correlation", "year": "2021-05" }, { "authors": "Haofei Lu; Zhizhong Wang", "journal": "", "ref_id": "b8", "title": "Universal video style transfer via crystallization, separation, and blending", "year": "2022" }, { "authors": "Hassan Stephan R Richter; Vladlen Abu Alhaija; Koltun", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b9", "title": "Enhancing photorealism enhancement", "year": "2022" }, { "authors": "Martina Mittermueller; Zhanxiang Ye; Helmut Hlavacs", "journal": "", "ref_id": "b10", "title": "EST-GAN: Enhancing style transfer gans with intermediate game render passes", "year": "2022" }, { "authors": "Leon A Gatys; Alexander S Ecker; Matthias Bethge", "journal": "", "ref_id": "b11", "title": "Image style transfer using convolutional neural networks", "year": "2016" }, { "authors": "Justin Johnson; Alexandre Alahi; Li Fei-Fei", "journal": "Springer", "ref_id": "b12", "title": "Perceptual losses for real-time style transfer and super-resolution", "year": "2016" }, { "authors": "Dmitry Ulyanov; Vadim Lebedev; Andrea Vedaldi; Victor S Lempitsky", "journal": "", "ref_id": "b13", "title": "Texture networks: Feed-forward synthesis of textures and stylized images", "year": "2016" }, { "authors": "Dmitry Ulyanov; Andrea Vedaldi; Victor Lempitsky", "journal": "", "ref_id": "b14", "title": "Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis", "year": "2017" }, { "authors": "Chuan Li; Michael Wand", "journal": "Springer", "ref_id": "b15", "title": "Precomputed real-time texture synthesis with markovian generative adversarial networks", "year": "2016" }, { "authors": "Artsiom Sanakoyeu; Dmytro Kotovenko; Sabine Lang; Bjorn Ommer", "journal": "", "ref_id": "b16", "title": "A style-aware content loss for real-time hd style transfer", "year": "2018" }, { "authors": "Dmytro Kotovenko; Artsiom Sanakoyeu; Sabine Lang; Bjorn Ommer", "journal": "", "ref_id": "b17", "title": "Content and style disentanglement for artistic style transfer", "year": "2019" }, { 
"authors": "Jonathon Vincent Dumoulin; Manjunath Shlens; Kudlur", "journal": "", "ref_id": "b18", "title": "A learned representation for artistic style", "year": "2017" }, { "authors": "Dongdong Chen; Lu Yuan; Jing Liao; Nenghai Yu; Gang Hua", "journal": "", "ref_id": "b19", "title": "Stylebank: An explicit representation for neural image style transfer", "year": "2017" }, { "authors": "Hang Zhang; Kristin Dana", "journal": "", "ref_id": "b20", "title": "Multi-style generative network for real-time transfer", "year": "2018" }, { "authors": "Xun Huang; Serge Belongie", "journal": "", "ref_id": "b21", "title": "Arbitrary style transfer in real-time with adaptive instance normalization", "year": "2017" }, { "authors": "Golnaz Ghiasi; Honglak Lee; Manjunath Kudlur; Jonathon Vincent Dumoulin; Shlens", "journal": "", "ref_id": "b22", "title": "Exploring the structure of a real-time, arbitrary neural artistic stylization network", "year": "2017" }, { "authors": "Yijun Li; Chen Fang; Jimei Yang; Zhaowen Wang; Xin Lu; Ming-Hsuan Yang", "journal": "Advances in neural information processing systems", "ref_id": "b23", "title": "Universal style transfer via feature transforms", "year": "2017" }, { "authors": "Zheng Xu; Michael Wilber; Chen Fang; Aaron Hertzmann; Hailin Jin", "journal": "", "ref_id": "b24", "title": "Learning from multi-domain artistic images for arbitrary style transfer", "year": "2018" }, { "authors": "Jing Huo; Shiyin Jin; Wenbin Li; Jing Wu; Yu-Kun Lai; Yinghuan Shi; Yang Gao", "journal": "", "ref_id": "b25", "title": "Manifold alignment for semantically aligned style transfer", "year": "2021" }, { "authors": "Young Dae; Kwang Park; Lee Hee", "journal": "", "ref_id": "b26", "title": "Arbitrary style transfer with style-attentional networks", "year": "2019" }, { "authors": "Jan Svoboda; Asha Anoosheh; Christian Osendorfer; Jonathan Masci", "journal": "", "ref_id": "b27", "title": "Two-stage peer-regularized feature recombination for arbitrary image style transfer", "year": "2020" }, { "authors": "Jie An; Haoyi Xiong; Jun Huan; Jiebo Luo", "journal": "", "ref_id": "b28", "title": "Ultrafast photorealistic style transfer via neural architecture search", "year": "2020-04" }, { "authors": "Zhiyuan Hu; Jia Jia; Bei Liu; Yaohua Bu; Jianlong Fu", "journal": "", "ref_id": "b29", "title": "Aesthetic-aware image style transfer", "year": "2020" }, { "authors": "Xiao-Chang Liu; Yong-Liang Yang; Peter Hall", "journal": "", "ref_id": "b30", "title": "Learning to warp for style transfer", "year": "2021" }, { "authors": "Falong Shen; Shuicheng Yan; Gang Zeng", "journal": "", "ref_id": "b31", "title": "Neural style transfer via meta networks", "year": "2018" }, { "authors": "Yingying Deng; Fan Tang; Weiming Dong; Chongyang Ma; Xingjia Pan; Lei Wang; Changsheng Xu", "journal": "", "ref_id": "b32", "title": "Stytr2: Image style transfer with transformers", "year": "2022" }, { "authors": "Xuan Luo; Zhen Han; Lingkang Yang; Lingling Zhang", "journal": "", "ref_id": "b33", "title": "Consistent style transfer", "year": "2022" }, { "authors": "Manuel Ruder; Alexey Dosovitskiy; Thomas Brox", "journal": "Springer International Publishing", "ref_id": "b34", "title": "Artistic style transfer for videos", "year": "2016" }, { "authors": "Shiguang Liu; Ting Zhu", "journal": "IEEE Transactions on Multimedia", "ref_id": "b35", "title": "Structure-guided arbitrary style transfer for artistic image and video", "year": "2021" }, { "authors": "Eleftherios Ioannou; Steve Maddock", "journal": "Computers", "ref_id": "b36", 
"title": "Depth-aware neural style transfer for videos", "year": "2023" }, { "authors": "Wenjing Wang; Shuai Yang; Jizheng Xu; Jiaying Liu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b37", "title": "Consistent video style transfer via relaxation and regularization", "year": "2020" }, { "authors": "Zijie Wu; Zhen Zhu; Junping Du; Xiang Bai", "journal": "Springer", "ref_id": "b38", "title": "Ccpl: Contrastive coherence preserving loss for versatile style transfer", "year": "2022" }, { "authors": "Xueting Li; Sifei Liu; Jan Kautz; Ming-Hsuan Yang", "journal": "", "ref_id": "b39", "title": "Learning linear transformations for fast image and video style transfer", "year": "2019" }, { "authors": "Thomas Deliot; Florent Guinier; Kenneth Vanhoey", "journal": "", "ref_id": "b40", "title": "Real-time style transfer in unity using deep neural networks", "year": "2020" }, { "authors": "Xiao-Chang Liu; Ming-Ming Cheng; Yu-Kun Lai; Paul L Rosin", "journal": "", "ref_id": "b41", "title": "Depth-aware neural style transfer", "year": "2017" }, { "authors": "Eleftherios Ioannou; Steve Maddock", "journal": "Eurographics Digital Library", "ref_id": "b42", "title": "Depth-aware neural style transfer using instance normalization", "year": "2022" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "", "ref_id": "b43", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2015" }, { "authors": "René Ranftl; Katrin Lasinger; David Hafner; Konrad Schindler; Vladlen Koltun", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)", "ref_id": "b44", "title": "Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer", "year": "2020" }, { "authors": "E Stephen; Palmer", "journal": "MIT press", "ref_id": "b45", "title": "Vision science: Photons to phenomenology", "year": "1999" }, { "authors": "David Marr; Ellen Hildreth", "journal": "Proceedings of the Royal Society of London. Series B. 
Biological Sciences", "ref_id": "b46", "title": "Theory of edge detection", "year": "1167" }, { "authors": "Holger Winnemöller; Jan Eric Kyprianidis; Sven C Olsen", "journal": "Computers & Graphics", "ref_id": "b47", "title": "Xdog: An extended difference-of-gaussians compendium including advanced image stylization", "year": "2012" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b48", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b49", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "D J Butler; J Wulff; G B Stanley; M J Black", "journal": "Springer-Verlag", "ref_id": "b50", "title": "A naturalistic open source movie for optical flow evaluation", "year": "2012-10" }, { "authors": " Onnx", "journal": "", "ref_id": "b51", "title": "Open neural network exchange", "year": "2019" }, { "authors": "", "journal": "Unity Technologies", "ref_id": "b52", "title": "High definition render pipeline overview: High definition rp", "year": "2021" }, { "authors": "", "journal": "Unity Technologies", "ref_id": "b53", "title": "Unity terrain -hdrp demo scene", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b54", "title": "Unity-technologies/fontainebleaudemo: Fontainebleau demo", "year": "2022" }, { "authors": " Polygonautic", "journal": "", "ref_id": "b55", "title": "Seed hunter", "year": "2020" }, { "authors": "", "journal": "Unity Technologies", "ref_id": "b56", "title": "Book of the dead: Environment: Hdrp: Tutorial projects", "year": "2023" }, { "authors": "Eddy Ilg; Nikolaus Mayer; Tonmoy Saikia; Margret Keuper; Alexey Dosovitskiy; Thomas Brox", "journal": "", "ref_id": "b57", "title": "Flownet 2.0: Evolution of optical flow estimation with deep networks", "year": "2017" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b58", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "Zhou Wang; Alan C Bovik; Hamid R Sheikh; Eero P Simoncelli", "journal": "IEEE transactions on image processing", "ref_id": "b59", "title": "Image quality assessment: from error visibility to structural similarity", "year": "2004" }, { "authors": "Tamar Rott Shaham; Tali Dekel; Tomer Michaeli", "journal": "", "ref_id": "b60", "title": "SinGAN: Learning a generative model from a single natural image", "year": "2019" } ]
[ { "formula_coordinates": [ 4, 215.67, 153.76, 325, 23.22 ], "formula_id": "formula_0", "formula_text": "l ϕ0 content (ŷ, x) = 1 C j H j W j ∥ϕ j 0 (ŷ) -ϕ j 0 (x)∥ 2 2 (1)" }, { "formula_coordinates": [ 4, 231.26, 263.52, 305.53, 13.91 ], "formula_id": "formula_1", "formula_text": "L ϕ0,j style (ŷ, y) = ∥G ϕ0 j (ŷ) -G ϕ0 j (y)∥ 2 F (2" }, { "formula_coordinates": [ 4, 536.8, 266.59, 3.87, 8.64 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 4, 205.3, 401.2, 200.9, 12.69 ], "formula_id": "formula_3", "formula_text": "L M iDaS depth (ŷ, x) = ∥M iDaS 1 (ŷ) -M iDaS 1 (x)∥ 2 2" }, { "formula_coordinates": [ 4, 220.34, 579.33, 320.33, 12.69 ], "formula_id": "formula_4", "formula_text": "L DoG (ŷ, x) = ∥DoGR(ŷ) -DoGR(x)∥ 2 2 (4)" } ]
2023-11-30
[ { "figure_ref": [], "heading": "Abstract", "publication_ref": [], "table_ref": [], "text": "We propose CatVersion, an inversion-based method that learns the personalized concept through a handful of examples. Subsequently, users can utilize text prompts to generate images that embody the personalized concept, thereby achieving text-to-image personalization. In contrast to existing approaches that emphasize word embedding learning or parameter fine-tuning for the diffusion model, which potentially causes concept dilution or overfitting, our method concatenates embeddings on the feature-dense space of the text encoder in the diffusion model to learn the gap between the personalized concept and its base class, aiming to maximize the preservation of prior knowledge in diffusion models while restoring the personalized concepts. To this end, we first dissect the text encoder's integration in the image generation process to identify the feature-dense space of the encoder. Afterward, we concatenate embeddings on the" }, { "figure_ref": [ "fig_2" ], "heading": "Introduction", "publication_ref": [ "b17", "b18", "b21", "b3", "b8", "b19", "b3", "b7", "b29", "b6", "b19", "b8" ], "table_ref": [], "text": "Recently, text-guided diffusion models [13,18,19,22] have garnered significant attention due to their remarkable highfidelity image synthesis capabilities. These models utilize natural language descriptions to synthesize high-quality images aligned with these texts. However, they still encounter challenges when dealing with personalized concepts that are difficult to describe accurately.\nText-to-image (T2I) personalization [4,9,20] offers un-precedented opportunities for users to describe personalized concepts. With a handful of examples representing one concept, users can employ free-text descriptions to synthesize images depicting the concept. Based on the extensive prior knowledge derived from text-guided diffusion models, recent T2I personalization schemes [4,8] invert the concept by optimizing word embeddings corresponding to a pseudoword, and then combine the pseudo-word with free text to create a new scene of the concept. However, we observe that during pseudo-word guided synthesis, especially when combining with free-text, the personalized concept is prone to becoming diluted or lost, as shown in Figure 2. This is because optimizing word embeddings with few examples to represent the personalized concept accurately is highly challenging. Voynov et al. [30] and Zhang et al. [37] optimize multiple word embedding instances for the multi-scale blocks of the U-Net backbone or different time steps of the denoising process, providing more precise control. However, the challenge of optimizing word embeddings has not yet been alleviated. To avoid this challenge, Ruiz et al. [20] and Kumari et al. [9] resort to fine-tuning all or part of the parameters of the network for aligning rare word embeddings with the target concept. However, this will undermine the prior knowledge of the diffusion model and lead to overfitting in personalized concept generation.\nWe introduce CatVersion, a prompt inversion method that concatenates embeddings into a highly integrated feature space, which helps to restore the personalized concept more faithfully and enabling more robust editing. 
In contrast to directly optimizing word embeddings corresponding to personalized concepts, CatVersion learns the gap between the concept and its base class by concatenating learnable embeddings to the Keys and Values in the highly integrated feature space of the CLIP text encoder, facilitating a more effective inversion process.\nSpecifically, we show that various attention layers of the CLIP text encoder emphasize different aspects in the textguided diffusion generation process. Shallow layers mainly focus on the construction of subject information, while as the layers deepen, more other complex and abstract information are gradually integrated. Based on this integration, we locate the highly integrated feature space within the last few attention layers of the CLIP text encoder where learning personalized concepts is more effective. Then, we concatenate personalized embeddings to the Keys and Values in these layers and optimize them. Unlike word embeddings that learn personalized concepts directly, these personalized embeddings are ultimately represented as a residual on the original attention output to learn the gap between the personalized concept and its base class. We refer to the personalized embeddings learned in this way as \"residual embeddings\". Like word embeddings, residual embeddings are plug-and-play and can be combined with free text to ap- ply personalized concepts to different scenarios. In T2I personalization evaluation, it is crucial to carry out objective quantitative research. We analyze the CLIP image alignment score and find that it does not adapt well to personalized generation tasks. So we improve it to make an accurate and unbiased evaluation.\nIn summary, our contributions are threefold: • We analyze the integration of the CLIP text encoder in T2I diffusion models and introduce a tightly integrated feature space that facilitates the concept inversion. • We propose CatVersion, a straightforward yet effective inversion method. It concatenates embeddings to the Keys and Values within the tightly integrated feature space of the text encoder to learn the gap between the concept and its base class as residuals. • To quantify the results more accurately and unbiased, we adjust the CLIP image alignment score to make it more rational. Extensive experiments, both qualitatively and quantitatively, demonstrate the effectiveness of our method in faithfully restoring the target concepts and enabling more robust editing." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Text-to-Image Diffusion Models", "publication_ref": [ "b14", "b24", "b33", "b34", "b37", "b1", "b10", "b16", "b15", "b26", "b0", "b13", "b23", "b17", "b21", "b18" ], "table_ref": [], "text": "Text-to-image generation has been extensively studied in the past few years. Prior efforts mainly focus on GANbased [15,25,[33][34][35][36]38] and autoregressive network-based [2,3,11,17] methods. The former utilizes multimodal visual-language learning such as CLIP [16] to achieve se- mantic alignment between text description and generated images, demonstrating satisfactory results in text-guided image generation. However, its training process is relatively unstable and prone to mode collapse. 
Based on the transformer architecture, the latter adopts a tokenizer like VQ-VAE [27], converting the input text description into continuous vector representations and then generating high-fidelity images based on these representations as conditions. Although it exhibits excellent performance in text-to-image generation quality, its training process requires significant computing resources and memory usage.\nIn recent years, diffusion models [1,7,14,23,24], as score-based generative models, have garnered significant acclaim owing to their stunning image-generation capabilities. Therefore, they have quickly become a new frontier in image synthesis. Text-guided diffusion models use text as an extra condition to guide image generation. By training on massive text-image data pairs, they can associate text embeddings with image features, thereby guiding the image generation process. Recent works such as GLIDE [13], DALL-E 2 [18], Imagen [22], and Stable Diffusion [19] have demonstrated the remarkable performance of text-guided diffusion models in generating diverse and high-fidelity images. We extensively utilize the prior knowledge of these models for T2I personalization." }, { "figure_ref": [], "heading": "Diffusion-based T2I personalization", "publication_ref": [ "b3" ], "table_ref": [], "text": "The personalized concept is often abstract and difficult to express accurately using text descriptions. When using T2I diffusion models to synthesize images with these concepts, obstacles are often encountered. T2I personalization aims to learn abstract concepts from a handful of sample images and apply these concepts to new scenarios.\nGal et al. [4] learn the personalized target by optimiz-" }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Latent Diffusion Models", "publication_ref": [ "b13", "b23", "b0", "b17", "b20", "b18" ], "table_ref": [], "text": "Diffusion models [7, 14,23,24] are a class of generative models. They convert the input Gaussian noise into sample images that match the target distribution through iterative denoising. Diffusion models allow for conditional guided generation based on category labels [1], text [13,18], or images [21]. The simplified optimization objective for training diffusion models is as follows:\nL_{DM}(\theta) := \mathbb{E}_{t, x_0, \epsilon} \| \epsilon - \epsilon_{\theta}(x_t, t) \|^2 ,   (1)\nHere, x_t denotes the noisy image at time step t. It is constructed by adding noise ϵ ∼ N(0, I) to the natural image x_0. ϵ_θ(·) denotes the noise predicted by the neural network. Latent diffusion model (LDM) [19] is a text-guided diffusion model. It uses an encoder E(·) to map images to the latent space, where the iterative denoising process is performed. Afterward, the predicted images are mapped back to pixel space through the decoder D(·). The simplified optimization objective of LDM is as follows:\nL_{LDM}(\theta) := \mathbb{E}_{t, x_0, \epsilon} \| \epsilon - \epsilon_{\theta}(x_t, t, \tau_{\theta}(c)) \|^2 .   (2)\nIn this process, the text description c is first tokenized into textual embeddings by a Tokenizer. 
These textual embeddings are then passed through the CLIP text encoder τ θ (•)\nto obtain text conditions. The resulting text conditions are used to guide the diffusion denoising process." }, { "figure_ref": [], "heading": "A Comparison of CatVersion and Word Embedding Inversion", "publication_ref": [], "table_ref": [], "text": "As mentioned in Section 1, existing inversion methods focus on learning personalized concepts in word embedding space, which can lead to the fading or even loss of concepts because the word embedding space is a feature-sparse space. Optimizing word embeddings to represent personalized concepts accurately is challenging. Instead, we optimize the embeddings in a feature-dense space within the CLIP text encoder. We use the last three attention layers as our inversion space, as they are more feature-dense than the earlier layers due to the cumulative spatial integration in CLIP. We will demonstrate this property in Section 4. Moreover, while more expressive, directly learning the token embeddings of the target concept in feature-dense space is also challenging because these embeddings tend to capture correlations with other text token embeddings. Since the Keys and Values are essential in attention blocks for learning the inter-word correlation, we concatenate the learnable embeddings to the Keys and Values, which are ultimately represented as a residual on the original attention output to learn the gap between the personalized concept and its base class. This process utilizes the existing contextual knowledge in the base class and proceeds with soft optimization from the base class to personalized concepts. In practice, our CatVersion has demonstrated a significant enhancement in reconstruction accuracy and image editability compared to optimization in the word embedding space." }, { "figure_ref": [ "fig_3" ], "heading": "Method", "publication_ref": [ "b18" ], "table_ref": [], "text": "This work uses Stable Diffusion [19] as the generative backbone. As shown in Figure 3, CatVersion concatenates residual embeddings within the tightly integrated feature space of the CLIP text encoder and subsequently optimizes these embeddings. Precisely, we first locate the highly integrated feature-dense space of the text encoder to facilitate personalized concept inversion (Section 4.1). Then, we concatenate the residual embeddings to the Keys and Values in this space and optimize them to learn the gap between the personalized concept and its base class (Section 4.2). Finally, we improve the CLIP image alignment score to evaluate T2I personalization more objectively (Section 4.3)." }, { "figure_ref": [ "fig_5" ], "heading": "The Feature-dense Space of CLIP", "publication_ref": [ "b27" ], "table_ref": [], "text": "The word embedding space is feature-sparse. The errors in word embedding learning are easily amplified when considering the correlation with free text. We aim to identify a feature-dense space in the text encoder of the diffusion model, facilitating the learning of personalized concepts. The text condition module of stable Diffusion consists of a Tokenizer and a CLIP text encoder. The text encoder consists of 12 layers of Transformer blocks, each composed of a self-attention layer, a feed-forward layer, and normalization layers. For each text input, the Tokenizer splits it into a token sequence and then send it into the CLIP text encoder for feature encoding. The final output is used as a conditional input for the cross-attention layer of the U-Net. 
In this process, the self-attention mechanism in CLIP text encoder is crucial for learning the correlation between tokens, which is essential for understanding the semantics of the sentence. In addition, the textual features are gradually abstracted from shallow to deep layers. These claims are based on the following consensus and observation. Consensus: The role of the attention mechanism in Transformer. The attention mechanism [28] is the foundation of Transformer's powerful capabilities. It enables the model to focus on the most relevant parts of the input, allowing it to gather relevant information efficiently and accurately. The self-attention mechanism can establish connections between various parts of the input sequence, enhancing contextual understanding. Observation: Integration across various layers in the text encoder. We examine the integration of different selfattention layers within the text encoder of the T2I diffusion model in concept learning. More precisely, we employ the proposed CatVersion to concatenate embeddings into the self-attention layers of every two transformer blocks within the text encoder and then optimize these embeddings. We observed the same phenomenon when testing the generated results on different datasets. As depicted in Figure 4, optimizing embeddings across all self-attention layers will overfit the target concept into the limited scene and lose its editability. Optimizing embeddings in shallow layers tends to emphasize the mastery of the simple concept, such as \"dog\", while potentially sacrificing integration with other semantic information. When only optimizing the embeddings of intermediate modules, the results also integrate some other relatively complex concepts, such as \"Times Square\". When optimizing embeddings in the last few modules, in addition to considering the above concepts, the result also integrates more abstract concepts, such as the action of \"running\". Therefore, as the attention layer becomes deeper, the feature integration increases. Moreover, there is a trend in information integration towards increasing abstraction, where abstraction levels increase with deeper layer positions.\nBased on the above analysis and additional experimental results, we define the feature-dense space of the CLIP text encoder as the last three self-attention layers. These layers not only play an essential role in learning the features of the target concept at different levels of abstraction but also facilitate the contextual understanding between the target concept and other semantics." }, { "figure_ref": [], "heading": "Concatenated Embeddings Learning", "publication_ref": [ "b9", "b7", "b29", "b6" ], "table_ref": [], "text": "Given the text feature f ∈ R l×d , a single-head self-attention compares the query vector Q = W q f with the key vector K = W k f in each Transformer block of the CLIP text encoder. Attention maps are then determined based on the similarities to indicate the importance of each input token. These attention weights are used to compute a weighted average of the value vector V = W v f , resulting in an output representation. 
This process can be expressed as follows:\n\mathrm{Attn}(Q, K, V) = \mathrm{Softmax}\left(\frac{Q K^{\top}}{\sqrt{d'}}\right) V .   (3)\nHere, W_q, W_k and W_v are the projection matrices of the Query, Key and Value features. A = \mathrm{Softmax}(Q K^{\top} / \sqrt{d'}) is the attention map used to aggregate the values.\nSince we locate the feature-dense space on the last few self-attention layers of the CLIP text encoder, our goal is to implement the concept inversion in this space. Inspired by Li et al. [10], who use prefix embeddings to indicate prompt instructions, we concatenate residual embeddings ∆_k, ∆_v ∈ R^{n×d} to the Key and Value embeddings K, V ∈ R^{l×d} for the feature-dense self-attention layers. Then, we use the new Key and Value embeddings to calculate the self-attention. This process is expressed as follows:\n\mathrm{Attn}(Q, K', V') = \mathrm{Softmax}\left(\frac{Q K'^{\top}}{\sqrt{d'}}\right) V' , \text{ where } K' = W_k f \oplus \Delta_k \text{ and } V' = W_v f \oplus \Delta_v .   (4)\nHere, K', V' ∈ R^{(l+n)×d} are the new Key and Value embeddings obtained by concatenating the residual embeddings along the token dimension. The new attention map is A' = \mathrm{Softmax}(Q K'^{\top} / \sqrt{d'}) ∈ R^{l×(l+n)}. The attention output Attn(Q, K', V') ∈ R^{l×d} retains its dimensionality intact. To optimize the residual embeddings, our overall optimization objective is derived from the simplified least squares error in Eq. 2:\n\Delta^{*} = \arg\min \mathbb{E}_{t, x_0, \epsilon} \| \epsilon - \epsilon_{\theta}(x_t, t, \tau_{\theta}(c)) \|^2 ,   (5)\nwhere ∆^{*} = {∆^{l}_{k,v}}_{l=i}^{j} represents a set of learnable residual embeddings concatenated on the Keys and Values in all feature-dense self-attention layers, from layer i to j.\nDuring training, we use the base class of the personalized concept as text input. For instance, \"dog\" can be the base class for a specific-looking dog. Then, we concatenate residual embeddings to the Keys and Values of self-attention. When calculating the attention score, the residual embeddings are weighted and averaged together with the other token embeddings, aiding the computation of correlations in the input sequence and obtaining a better feature representation. These residual embeddings ultimately manifest as a residual on the original attention output to learn the internal gap between the base class and the target concept.\nSimilar to word embedding inversion methods, the residual embeddings of CatVersion are plug-and-play. They can be deleted and replaced according to different personalized tasks while maintaining the integrity of the T2I generative network. Additionally, CatVersion is compatible with various improvements of word embedding inversion [8,30,37], greatly accelerating its application." },
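The concatenation in Eq. 4 can be illustrated with the simplified single-head sketch below (PyTorch-style, not the released implementation). In practice the trainable ∆_k and ∆_v would be attached to the last self-attention layers of the frozen CLIP text encoder; the projection layers shown here merely stand in for the encoder's own frozen projections, and the number of residual tokens is an assumption.

```python
import torch
import torch.nn as nn

class ConcatKVAttention(nn.Module):
    """Single-head self-attention with learnable embeddings concatenated to the Keys and Values."""

    def __init__(self, d_model: int, n_residual: int = 4):
        super().__init__()
        self.wq = nn.Linear(d_model, d_model, bias=False)
        self.wk = nn.Linear(d_model, d_model, bias=False)
        self.wv = nn.Linear(d_model, d_model, bias=False)
        # The only parameters optimized during inversion: delta_k, delta_v in R^{n x d}.
        self.delta_k = nn.Parameter(torch.zeros(n_residual, d_model))
        self.delta_v = nn.Parameter(torch.zeros(n_residual, d_model))

    def forward(self, f: torch.Tensor) -> torch.Tensor:  # f: (B, l, d) token features
        b = f.shape[0]
        q = self.wq(f)                                          # (B, l, d)
        dk = self.delta_k.unsqueeze(0).expand(b, -1, -1)
        dv = self.delta_v.unsqueeze(0).expand(b, -1, -1)
        k = torch.cat([self.wk(f), dk], dim=1)                  # (B, l+n, d)
        v = torch.cat([self.wv(f), dv], dim=1)                  # (B, l+n, d)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        return attn @ v                                         # (B, l, d): same shape as the input tokens
```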
{ "figure_ref": [], "heading": "Evaluation Metric", "publication_ref": [ "b3", "b8", "b30", "b6" ], "table_ref": [], "text": "Accurate and objective quantitative evaluation is necessary for T2I personalization tasks. To achieve unbiased evaluation, both the ability to restore and the ability to edit personalized concepts need to be considered. Recently, some methods [4,9,26,31,37] introduce CLIP text and image alignment scores to independently assess the fidelity of generated images to free-text and the restoration of personalized concepts. However, the CLIP image alignment score is not well-suited for evaluating text-guided personalized results. It measures the similarity of all image features, which is susceptible to differences between the non-object parts of the reference image and the generated image. For example, the CLIP image alignment score calculated for a method that generates images easily overfitted to the training scene will be very high, but this is largely attributable to overfitting of the background. To bridge this gap, we obtain the mask of the personalized concept in both the generated and reference images. Subsequently, we calculate the CLIP image alignment score for the region within the mask:\nCLIP_{img+} = CLIP_{img}(M ⊙ I, M_s ⊙ I_s) .   (6)\nHere, M and M_s denote the masks of the generated and the reference images, while I and I_s denote the generated and the reference images, respectively." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b3", "b19", "b30", "b18" ], "table_ref": [], "text": "In this section, we present qualitative and quantitative results. We demonstrate that our proposed CatVersion outperforms the baseline Textual Inversion and is highly competitive compared to state-of-the-art methods.\nCompared Methods. We compare our CatVersion with the state-of-the-art competitors Textual Inversion [4], DreamBooth [20], Perfusion [26], and ELITE [31]. We use Stable Diffusion v1.5 [19] as the generative backbone, and the generated image resolution is 512 × 512. " }, { "figure_ref": [ "fig_6" ], "heading": "Qualitative Comparisons", "publication_ref": [], "table_ref": [], "text": "Figure 5 shows the generated results of CatVersion and its competitors by applying personalized concepts to different scenarios through free text descriptions. Table 3. Ablation Study. We validate the independent impact of our proposed feature-dense space and residual embeddings on the results, emphasizing the importance of these two configurations." }, { "figure_ref": [], "heading": "Quantitative Comparisons", "publication_ref": [ "b3", "b19" ], "table_ref": [ "tab_0", "tab_1" ], "text": "Quantitative analysis is performed based on paired CLIP alignment scores. The CLIP text alignment score calculates the similarity between the text and the generated image to evaluate the editability of personalized generation. We adjust the CLIP image alignment score to focus on the similarity of the personalized concepts between the generated and the reference image to evaluate the reconstruction fidelity. Additionally, we employ the geometric mean of the two scores to evaluate the overall generation ability. We compare the average paired CLIP alignment scores between the results of CatVersion and those of its competitors on datasets selected from these competitors [4,20,26]. The guided text includes four editing dimensions, which are attribute transfer, scene transformation, action editing, and concept addition, for achieving unbiased evaluation. As shown in Table 1, our CatVersion outperforms the baseline Textual Inversion in both fidelity and editability. Additionally, DreamBooth and ELITE are prone to overfitting to the training images, resulting in high reconstruction scores at the massive expense of editability scores. Although Perfusion demonstrates superiority in editability, it delivers unsatisfactory results in terms of reconstruction fidelity. In comparison, our CatVersion outperforms these competitors in both editability and overall personalized effect.\nUser Study. We assess the mean preference of 50 participants for CatVersion and four other state-of-the-art methods. Each participant is required to complete a survey consisting of 8 random questions. Table 2 indicates users prefer CatVersion in text-guided editability and overall personalized effect, with CatVersion ranking second only to DreamBooth in reconstruction fidelity." },
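For completeness, the masked image alignment score of Eq. 6 used in the comparisons above can be computed along the following lines (a sketch: encode_image is assumed to be a CLIP image encoder returning one embedding per image, and the binary concept masks are assumed to be given, for instance by an off-the-shelf segmentation model).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def clip_img_plus(gen_imgs, gen_masks, ref_img, ref_mask, encode_image):
    """Masked CLIP image alignment score: compare only the regions inside the
    concept masks, so background overfitting is not rewarded."""
    e_gen = F.normalize(encode_image(gen_imgs * gen_masks), dim=-1)  # M ⊙ I
    e_ref = F.normalize(encode_image(ref_img * ref_mask), dim=-1)    # M_s ⊙ I_s
    return (e_gen * e_ref).sum(dim=-1).mean().item()
```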
Table 2 indicates that users prefer CatVersion in terms of text-guided editability and overall personalized effect, with CatVersion ranking second only to DreamBooth in reconstruction fidelity." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "We conduct ablation studies to validate the effectiveness of our proposed feature-dense space inversion and residual embedding learning. As depicted in Table 3, the feature-dense space inversion notably enhances the paired CLIP alignment scores. This suggests that optimizing the target concept in feature-dense spaces is more effective for learning the concept itself and improving contextual understanding. Additionally, concatenating residual embeddings to the Keys and Values of the CLIP text encoder results in a significant enhancement of the CLIP image alignment score, indicating a substantial improvement in the reconstruction of the target concept, as shown in Figure 7." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "CatVersion still needs to be optimized separately for each concept, so its inversion speed is not as fast as that of encoder-based methods. In addition, CatVersion can only learn one concept in a single optimization process, which limits its applicability in certain tasks." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose CatVersion to achieve accurate and efficient personalized text-to-image generation. Differing from existing methods, CatVersion learns the gap between the personalized concept and its base class by concatenating residual embeddings to the Keys and Values of the feature-dense layers in the CLIP text encoder, restoring personalized concepts more faithfully and enabling more robust editing. We also dissect the feature integration within the T2I diffusion model's text encoder to locate feature-dense inversion spaces. We believe that our work provides new insights for concept inversion, and we hope that it will inspire future research on inversion-based generation and editing." } ]
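To make the optimization objective of Eq. 5 concrete, the following condensed sketch shows an inversion loop in which only the residual embeddings receive updates. The helper callables (encode_latents, encode_text, add_noise, predict_noise) stand in for the frozen components of a latent-diffusion pipeline and are assumptions, not the API of any specific library.

```python
# Sketch of the inversion loop for the residual embeddings (Eq. 5).
import torch


def invert_concept(deltas, images, base_prompt, encode_latents, encode_text,
                   add_noise, predict_noise, steps=1000, lr=1e-3):
    """deltas: list of nn.Parameter residual embeddings; everything else is assumed frozen."""
    opt = torch.optim.AdamW(deltas, lr=lr)                   # only Delta is optimized
    for step in range(steps):
        x0 = encode_latents(images[step % len(images)])      # clean latent of a training image
        t = torch.randint(0, 1000, (x0.shape[0],), device=x0.device)
        noise = torch.randn_like(x0)
        xt = add_noise(x0, noise, t)                         # forward diffusion to timestep t
        cond = encode_text(base_prompt)                      # base-class prompt, e.g. "a photo of a dog"
        loss = (predict_noise(xt, t, cond) - noise).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return deltas
```

The sketch mirrors the description in the method section: the base-class word serves as the prompt, and the denoising error is minimized with respect to the concatenated residual embeddings only.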
Keys and Values in this space to learn the gap between the personalized concept and its base class. In this way, the concatenated embeddings ultimately manifest as a residual on the original attention output. To quantify the results of personalized image generation more accurately and with less bias, we improve the CLIP image alignment score based on masks of the personalized concept. Qualitatively and quantitatively, CatVersion helps to restore personalized concepts more faithfully and enables more robust editing.
CatVersion: Concatenating Embeddings for Diffusion-Based Text-to-Image Personalization
[ { "figure_caption": "Ruoyu Zhao 1 Figure 1 .11Figure 1. CatVersion allows users to learn the personalized concept through a handful of examples and then utilize text prompts to generate images that embody the personalized concept. In contrast to existing approaches, CatVersion concatenates embeddings on the featuredense space of the text encoder in the diffusion model to learn the gap between the personalized concept and its base class, aiming to maximize the preservation of prior knowledge in diffusion models while restoring the personalized concepts.", "figure_data": "", "figure_id": "fig_0", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "photo", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. CatVersion versus Textual Inversion. We showcase the results of Textual Inversion [4] with our CatVersion. As shown in (a), Textual Inversion fails to capture the personalized concept, while CatVersion accurately restores it. We contrast the distinctions in the inversion spaces of the two methods in (b) and (c), underscoring the advantages of inversion in feature-dense space.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Overall Pipeline of CatVersion. Firstly, we identify the feature-dense layers in the CLIP text encoder. Then, we concatenate the residual embeddings with Keys and Values. In the optimization process, we use the base class word (e.g. dog) of the personalized concept as text input and optimize these residual embeddings utilizing a handful of images depicting one personalized concept. During inference, residual embeddings of CatVersion can be deleted and replaced to achieve different personalized needs.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "ing textual embeddings corresponding to the pseudo-word. The pseudo-word can be combined with free text to guide the personalized generation during inference. Ruiz et al.[20] use a class noun combined with a unique identifier to represent the target concept. The unique identifier is represented by rare tokens. To invert the target concept into the rare token, they fine-tune the diffusion model with a prior-preservation loss. Based on this method, Kumari et al.[9] only update the weight of cross attention to invert the concept onto the rare token. In addition, they select a regularization set to prevent overfitting.Tewel et al. [26] propose Perfusion, using a gated Rank-one Model Editing[12] to the weights of the Key and Value projection matrices in cross-attention of the U-Net and reducing concepts leakage beyond their scope, making it easier to combine multiple concepts. Gal et al.[5] and Wei et al.[31] focus on using extensive data to build an encoder for concept inversion. Huang et al.[32] focus on learning the relation between objects through embedding optimization and the relationsteering contrastive learning scheme. Wen et al.[32] invert hard prompts by projecting learned embeddings onto adjacent interpretable word embeddings, providing a new solution for image captioning.Han et al. [6] use singular value decomposition to fine-tune the singular value matrix of the diffusion model network, reducing the number of parameters needed for semantically aligning rare tokens with target concepts. 
Voynov et al.[30] optimize multiple word embeddings for modules with different feature dimensions of the denoising network, while Zhang et al.[37] optimize multiple word embeddings for various time steps of denoising. Both methods provide finer control over the generated image, allowing for more precise and accurate output.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Visualizing Inversion across Multiple Layers. We concatenate embeddings and optimize them in each of the two self-attention layers in the CLIP text encoder. Then, we use these embeddings in combination with free text to create new scenarios for personalized concepts. The results indicate that the self-attention layers of different depths focus on integrating different information. Moreover, the focus of information integration has also shifted from concreteness to abstraction.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Qualitative Comparisons with Existing Methods. Our CatVersion more faithfully restores personalized concepts and achieves more powerful editing capabilities in the combination of various concepts and free text.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .Figure 7 .67Figure 6. Visualization Results of Our Method. Our CatVersion achieves a better balance between faithful reconstruction of the target concept and more robust editability.", "figure_data": "", "figure_id": "fig_7", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Quantitative Results. We measured the average CLIP alignment scores of several methods in different personalized scenarios. Our method is significantly superior to the baseline and exhibits the highest text alignment score and balanced score.", "figure_data": "MethodText Alignment ↑ Image Alignment ↑ Overall ↑Textual Inversion [4]0.22130.78310.4163DreamBooth [20]0.22270.88020.4427Perfusion [26]0.24880.76270.4356ELITE [31]0.20400.84570.4154CatVersion0.25140.80480.4498MethodText Alignment ↑ Image Alignment ↑ Overall ↑Textual Inversion [4]10.25%7.00%8.25%DreamBooth [20]23.75%36.00%24.00%Perfusion [26]27.75%20.50%17.75%ELITE [31]7.25%13.00%10.00%CatVersion31.00%23.50%40.00%", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "User Study. We investigate the respondents' preference for the results of five different personalized generation algorithms.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
[ { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "Advances in neural information processing systems", "ref_id": "b0", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Ming Ding; Zhuoyi Yang; Wenyi Hong; Wendi Zheng; Chang Zhou; Da Yin; Junyang Lin; Xu Zou; Zhou Shao; Hongxia Yang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b1", "title": "Cogview: Mastering text-to-image generation via transformers", "year": "2021" }, { "authors": "Oran Gafni; Adam Polyak; Oron Ashual; Shelly Sheynin; Devi Parikh; Yaniv Taigman", "journal": "Springer", "ref_id": "b2", "title": "Make-a-scene: Scenebased text-to-image generation with human priors", "year": "2022" }, { "authors": "Rinon Gal; Yuval Alaluf; Yuval Atzmon; Or Patashnik; H Amit; Gal Bermano; Daniel Chechik; Cohen-Or", "journal": "", "ref_id": "b3", "title": "An image is worth one word: Personalizing text-toimage generation using textual inversion", "year": "2008" }, { "authors": "Rinon Gal; Moab Arar; Yuval Atzmon; H Amit; Gal Bermano; Daniel Chechik; Cohen-Or", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b4", "title": "Encoder-based domain tuning for fast personalization of text-to-image models", "year": "2023" }, { "authors": "Ligong Han; Yinxiao Li; Han Zhang; Peyman Milanfar; Dimitris Metaxas; Feng Yang", "journal": "", "ref_id": "b5", "title": "Svdiff: Compact parameter space for diffusion fine-tuning", "year": "2023" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in neural information processing systems", "ref_id": "b6", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Ziqi Huang; Tianxing Wu; Yuming Jiang; Kelvin Ck Chan; Ziwei Liu", "journal": "", "ref_id": "b7", "title": "Reversion: Diffusion-based relation inversion from images", "year": "2023" }, { "authors": "Nupur Kumari; Bingliang Zhang; Richard Zhang; Eli Shechtman; Jun-Yan Zhu", "journal": "", "ref_id": "b8", "title": "Multi-concept customization of text-to-image diffusion", "year": "2023" }, { "authors": "Lisa Xiang; Percy Li; Liang", "journal": "", "ref_id": "b9", "title": "Prefix-tuning: Optimizing continuous prompts for generation", "year": "2021" }, { "authors": "Junyang Lin; Rui Men; An Yang; Chang Zhou; Ming Ding; Yichang Zhang; Peng Wang; Ang Wang; Le Jiang; Xianyan Jia", "journal": "", "ref_id": "b10", "title": "M6: A chinese multimodal pretrainer", "year": "2021" }, { "authors": "Kevin Meng; David Bau; Alex Andonian; Yonatan Belinkov", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b11", "title": "Locating and editing factual associations in gpt", "year": "2022" }, { "authors": "Alex Nichol; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b12", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2021" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal", "journal": "PMLR", "ref_id": "b13", "title": "Improved denoising diffusion probabilistic models", "year": "2021" }, { "authors": "Tingting Qiao; Jing Zhang; Duanqing Xu; Dacheng Tao", "journal": "", "ref_id": "b14", "title": "Mirrorgan: Learning text-to-image generation by redescription", "year": "2019" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", 
"journal": "PMLR", "ref_id": "b15", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Goh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "PMLR", "ref_id": "b16", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b17", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2004" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b18", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b19", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Chitwan Saharia; William Chan; Huiwen Chang; Chris Lee; Jonathan Ho; Tim Salimans; David Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b20", "title": "Palette: Image-to-image diffusion models", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b22", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": "Yang Song; Jascha Sohl-Dickstein; P Diederik; Abhishek Kingma; Stefano Kumar; Ben Ermon; Poole", "journal": "", "ref_id": "b23", "title": "Score-based generative modeling through stochastic differential equations", "year": "2020" }, { "authors": "Ming Tao; Hao Tang; Fei Wu; Xiao-Yuan Jing; Bing-Kun Bao; Changsheng Xu", "journal": "", "ref_id": "b24", "title": "Df-gan: A simple and effective baseline for text-to-image synthesis", "year": "2022" }, { "authors": "Yoad Tewel; Rinon Gal; Gal Chechik; Yuval Atzmon", "journal": "", "ref_id": "b25", "title": "Key-locked rank one editing for text-to-image personalization", "year": "2023" }, { "authors": "Aaron Van Den; Oriol Oord; Vinyals", "journal": "Advances in neural information processing systems", "ref_id": "b26", "title": "Neural discrete representation learning", "year": "2017" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b27", "title": "Attention is all you need", "year": "2017" }, { "authors": "Suraj Patrick Von Platen; Anton Patil; Pedro Lozhkov; Nathan Cuenca; Kashif Lambert; Mishig Rasul; Thomas Davaadorj; Wolf", "journal": "", "ref_id": "b28", "title": "Diffusers: State-of-the-art diffusion models", "year": "2022" }, { "authors": "Andrey Voynov; Qinghao Chu; Daniel Cohen-Or; Kfir Aberman", "journal": "", "ref_id": "b29", "title": "p+: Extended textual conditioning in text-toimage generation", "year": "2023" }, { "authors": "Yuxiang Wei; Yabo Zhang; Zhilong Ji; Jinfeng Bai; Lei Zhang; Wangmeng Zuo", "journal": "", "ref_id": "b30", "title": "Elite: Encoding visual concepts into textual embeddings 
for customized text-to-image generation", "year": "2023" }, { "authors": "Yuxin Wen; Neel Jain; John Kirchenbauer; Micah Goldblum; Jonas Geiping; Tom Goldstein", "journal": "", "ref_id": "b31", "title": "Hard prompts made easy: Gradient-based discrete optimization for prompt tuning and discovery", "year": "2023" }, { "authors": "Tao Xu; Pengchuan Zhang; Qiuyuan Huang; Han Zhang; Zhe Gan; Xiaolei Huang; Xiaodong He", "journal": "", "ref_id": "b32", "title": "AttnGAN: Finegrained text to image generation with attentional generative adversarial networks", "year": "2018" }, { "authors": "Guojun Yin; Bin Liu; Lu Sheng; Nenghai Yu; Xiaogang Wang; Jing Shao", "journal": "", "ref_id": "b33", "title": "Semantics disentangling for textto-image generation", "year": "2019" }, { "authors": "Han Zhang; Tao Xu; Hongsheng Li; Shaoting Zhang; Xiaogang Wang; Xiaolei Huang; Dimitris N Metaxas", "journal": "", "ref_id": "b34", "title": "Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks", "year": "2017" }, { "authors": "Han Zhang; Tao Xu; Hongsheng Li; Shaoting Zhang; Xiaogang Wang; Xiaolei Huang; Dimitris N Metaxas", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b35", "title": "Stack-gan++: Realistic image synthesis with stacked generative adversarial networks", "year": "2018" }, { "authors": "Yuxin Zhang; Weiming Dong; Fan Tang; Nisha Huang; Haibin Huang; Chongyang Ma; Tong-Yee Lee; Oliver Deussen; Changsheng Xu", "journal": "", "ref_id": "b36", "title": "Prospect: Expanded conditioning for the personalization of attribute-aware image generation", "year": "2023" }, { "authors": "Minfeng Zhu; Pingbo Pan; Wei Chen; Yi Yang", "journal": "", "ref_id": "b37", "title": "Dmgan: Dynamic memory generative adversarial networks for text-to-image synthesis", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 63.4, 109.26, 393.55, 83.49 ], "formula_id": "formula_0", "formula_text": "••• ••• Pre-Trained U-Net ••• Self-Attention 𝐾 𝑉 dog ••• ••• dog Pre-" }, { "formula_coordinates": [ 4, 90.34, 260.47, 196.02, 12.62 ], "formula_id": "formula_1", "formula_text": "L DM (θ) := E t,x0,ϵ ∥ϵ -ϵ θ (x t , t)∥ 2 ,(1)" }, { "formula_coordinates": [ 4, 77.14, 402.04, 205.35, 12.62 ], "formula_id": "formula_2", "formula_text": "L LDM (θ) := E t,x0,ϵ ∥ϵ -ϵ θ (x t , t, τ θ (c)∥ 2 . (2" }, { "formula_coordinates": [ 4, 282.49, 405.33, 3.87, 8.64 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 5, 350.67, 558.96, 194.44, 25.24 ], "formula_id": "formula_4", "formula_text": "Attn(Q, K, V ) = Softmax QK T √ d ′ V.(3)" }, { "formula_coordinates": [ 5, 408.79, 599.85, 110.53, 17.45 ], "formula_id": "formula_5", "formula_text": "A = Softmax(QK T / √ d ′ )" }, { "formula_coordinates": [ 6, 85.22, 105.96, 166.03, 26.65 ], "formula_id": "formula_6", "formula_text": "Attn(Q, K ′ , V ′ ) = Softmax QK ′ T √ d ′ V ′ ," }, { "formula_coordinates": [ 6, 162.18, 130.6, 124.18, 31.89 ], "formula_id": "formula_7", "formula_text": "K ′ = W k f + ∆ k and V ′ = W v f + ∆ v .(4)" }, { "formula_coordinates": [ 6, 50.11, 190.58, 236.25, 29.74 ], "formula_id": "formula_8", "formula_text": "A ′ = Softmax(QK ′T / √ d ′ ) ∈ R l×" }, { "formula_coordinates": [ 6, 66.08, 267.42, 220.29, 12.62 ], "formula_id": "formula_9", "formula_text": "∆ * = arg min E t,x0,ϵ ∥ϵ -ϵ θ (x t , t, τ θ (c))∥ 2 ,(5)" }, { "formula_coordinates": [ 6, 346.56, 412.22, 198.56, 9.81 ], "formula_id": "formula_10", "formula_text": "CLIP img+ = CLIP img (M ⊙ I, M s ⊙ I s ).(6)" } ]
2023-11-24
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b37", "b40", "b46", "b54", "b3", "b40", "b50", "b51", "b53", "b54", "b46", "b20", "b5", "b47", "b13", "b68", "b12", "b35", "b38", "b58", "b65", "b69", "b49", "b41", "b43", "b56", "b63", "b67", "b0", "b2", "b14", "b40", "b46", "b51", "b53", "b57", "b13", "b40", "b46", "b51", "b57", "b9", "b46", "b53", "b18", "b50", "b51", "b70", "b13", "b3", "b40", "b51", "b54", "b56", "b34", "b51", "b53", "b51", "b53" ], "table_ref": [], "text": "Self-supervised visual representation learning has received significant attention over the past years. The goal is to learn strong feature representations from unlabeled images, which are transferable to downstream tasks. Most methods are currently designed for object-centric datasets like ImageNet [42], yet most downstream applications are scene-centric, containing multiple objects of interest.\nWe know that such representations can \"store\" several different concepts even when they appear simultaneously in an image [37]. It is thus reasonable to assume that they can be used to discover multiple objects in an image, without supervision. However, is it also possible to locate each concept spatially in the image? To answer this question, recent work has moved beyond concept discovery at the image level to pixel-level localization and segmentation of unsupervised concepts [2, 40,45,46,54]. Different from fine-tuning models for detection and segmentation tasks, this line of work exploits self-supervised features directly without using any annotations, for example through clustering [24,40,50] for semantic segmentation.\nInstance segmentation, on the other hand, has received less attention. This task is particularly challenging because it requires recognizing and segmenting each individual object in an image, while semantic segmentation treats multiple objects of the same category as one. In the unsupervised setting, instance segmentation relies on acquiring a notion of objectness, spanning diverse scales and appearances, that is more difficult to achieve than single-object discovery, which amounts to the most salient region in the image. As a result, prior work on this task mines a small set of pseudo-masks per image and uses that to bootstrap instance segmentation [51,53,54] or object detection [46] models.\nIn this work, we provide valuable insights with respect to the choice of self-supervised features that can be used to obtain these mask proposals. We find that the training objective in self-supervised models influences their ability to distinguish between objects, specifically different instances of the same semantic category.\nSelf-Supervised Representation Learning (SSL). The goal of SSL is to learn view-invariant representations from unlabeled data, which are then transferred to downstream tasks via task-specific finetuning. Following instance discrimination [21,61], the majority of works focus on contrastive [6,16,18,27,30,33,36,47], negative-free [9, 14,17,23,68], and clustering-based [4,12,13,35,38] learning. Inspired by advances in NLP, masked image modeling [7, 25,58,65,69] has emerged as an alternative approach. Meanwhile, the use of Vision Transformers (ViTs) [20,49] has contributed significantly to the performance of self-supervised methods. Finally, there is a growing interest in learning dense [41,43,56,64] or region-based representations [8,31,32,59,62,63,66,67], since they may be better suited for dense prediction tasks.\nUnsupervised Object Discovery. 
Unsupervised object discovery aims to localize objects in images by predicting bounding boxes. Earlier approaches were based on adversarial learning [1,3,11,15], while, more recently, several works [2, 40,45,46,51,53,57] have explored the use of self-supervised features. In particular, [14] first showed that the self-attention of DINO-ViT could be used for foreground segmentation. Follow-up works [40,46,51,57] showed that it can also be used for salient object discovery, which in turn can be used to train unsupervised object detectors [10,46,53].\nUnsupervised Semantic Segmentation. Recent work in unsupervised semantic segmentation can be split into two categories: (1) methods that utilize pre-trained self-supervised models for initialization [19,50,51,60,70] and (2) methods that directly exploit off-the-shelf self-supervised representations (e.g., DINO [14]) to obtain and cluster pseudo-masks and train a segmentation network [24,40,51]. These works demonstrate that self-supervised ViT features encode well-localized semantics, but do not investigate whether these features can discriminate instances of the same semantic class. We aim to answer this question in Sec. 3, finding a notable difference between models trained with discriminative (e.g., contrastive) and generative (e.g., autoencoding) objectives.\nUnsupervised Instance Segmentation. Unsupervised instance segmentation refers to the task of discovering and segmenting individual object instances in an image. State-of-the-art methods typically bootstrap class-agnostic instance segmentation networks using coarse masks extracted from self-supervised feature extractors. FreeSOLO [54] uses densely-learned features (DenseCL [56]), while Exemplar-FreeSOLO [34] additionally uses a pool of exemplar images to extract objects. MaskDistill [51] and CutLER [53] leverage DINO features; [51] follows a single-object discovery technique, while [53] obtains multiple masks through repeated applications of the NCut algorithm [44]." }, { "figure_ref": [ "fig_0", "fig_2" ], "heading": "Analysis of Instance-ness in Self-Supervised Vision Transformers", "publication_ref": [ "b13", "b67", "b56", "b40", "b39", "b1", "b67", "b56" ], "table_ref": [], "text": "We investigate whether features from self-supervised transformers exhibit a general notion of instance-ness that can be used to discover and discriminate between object instances rather than semantic categories. Among SSL methods, DINO features have been frequently used for dense unsupervised tasks. We note that different pre-training objectives may result in models with different properties and thus focus our investigation on models trained with a variety of objectives: contrastive (MoCo v3 [18]), self-distillation (DINO [14], MSN [5]), image reconstruction (MAE [25]), as well as patch-level (SelfPatch [67]) and dense (DenseCL [56]) pre-training. All models use ViT-B/16 backbones, except for SelfPatch (only available for ViT-S/16) and DenseCL (ResNet-101 [29]).\nFirst, we examine all models with respect to their ability to encode instances in the learned representation, without further task-specific finetuning. Similar to prior work [2,40], to decompose an image into a set of meaningful regions, we compute an affinity matrix using cosine similarity in feature space and apply k-way spectral clustering spatially to generate a set of masks. In particular, we extract features from the keys of the final self-attention block of ViTs. 
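As one concrete way to obtain such key features, the sketch below reads them out of the last attention block of a DINO ViT-B/16 loaded via torch.hub. The hub entry point and attribute names follow the public DINO repository, and the hook-based read-out is our assumption of how the extraction can be implemented; other backbones may expose their projections differently.

```python
# Sketch: extract per-patch key features from the final self-attention block of a ViT.
import torch

model = torch.hub.load("facebookresearch/dino:main", "dino_vitb16").eval()
feats = {}

def qkv_hook(module, inputs, output):
    # output of the fused qkv projection: (B, N, 3*D); split it and keep the keys
    q, k, v = output.chunk(3, dim=-1)
    feats["keys"] = k[:, 1:, :]  # drop the [CLS] token, keep the patch tokens

model.blocks[-1].attn.qkv.register_forward_hook(qkv_hook)

img = torch.randn(1, 3, 224, 224)       # placeholder for a normalized input image
with torch.no_grad():
    model(img)
keys = feats["keys"]                     # (1, 196, 768) for ViT-B/16 at 224x224
```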
To account for the varying number of objects in images and to allow for the possibility of segmenting objects at different granularities (Fig. 1), we apply spectral clustering multiple times for different values of k ∈ K and accumulate all masks. Finally, if the resulting masks have non-connected components, we split these off into separate masks.\nIt is already known from prior work that this process results in regions that likely correspond to semantic entities or objects. To understand the degree to which these regions overlap with real objects in images, we compute the mean average recall of these masks against ground-truth mask annotations in common datasets (MS COCO val2017 [39], PASCAL VOC 2012 [22]) and report our findings in Table 1. We evaluate recall based on the number of instances of the same semantic category that co-occur in an image, e.g. in an image with two objects of the same class and one object of another class, we compute and report recall for the two instances and the remaining single instance separately.\nTable 1: Analysis of SSL methods for instance segmentation. We report the mean average recall (in %) of feature extractors for different numbers of instances of the same semantic class. For each image and feature extractor, we apply spectral clustering to the extracted features with K = {2, 3, 4, 5, 6}.\nInterestingly, we find that MAE is much better at discriminating between multiple object instances (2+) than other models. This finding may be explained by the fact that the pixel-wise reconstruction objective encourages the learning of representations of lower semanticity [25]. As a result, it shows a lower tendency to group objects of the same or similar semantic category together, compared to, e.g., DINO. This holds true even against models trained with patch-level and pixel-level discrimination (e.g., SelfPatch). We demonstrate this also qualitatively in the Appendix (Figure 3), comparing MAE and DINO masks obtained after spectral clustering while varying k. Especially for higher values of k, MAE features tend to spatially decompose an image, whereas DINO tends to divide objects further into semantic parts. The spatial bias in MAE likely explains its superiority at separating instances, though MAE masks remain of generally lower quality than DINO's.\nAt the same time, our results in Table 1 indicate that DINO and other contrastive methods such as MSN and MoCo v3 are superior at locating single objects; this finding matches prior work that uses DINO features for single-object image segmentation." }, { "figure_ref": [ "fig_1" ], "heading": "Self-Training for Instance Segmentation", "publication_ref": [ "b13", "b67", "b13", "b40", "b46", "b57", "b55", "b54" ], "table_ref": [ "tab_3" ], "text": "To evaluate how these findings transfer to the task of unsupervised instance segmentation, we implement a simple method to generate mask proposals and use these to train a segmentation network, using the feature extractors considered above. An overview of our approach is shown in Figure 2.\nWe adopt a two-part approach to generate mask proposals, both parts based on SSL feature extractors. The first part is the generation of pseudo-masks spanning the entire image area. 
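A simplified sketch of this multi-k pseudo-mask generation is given below; the affinity construction (cosine similarity clamped at zero), the clustering settings, and the function names are assumptions rather than the authors' exact configuration.

```python
# Sketch: multi-k spectral clustering of patch features with non-connected-component splitting.
import torch
import torch.nn.functional as F
from sklearn.cluster import SpectralClustering
from scipy.ndimage import label


def masks_from_features(keys, grid_hw, ks=(2, 3, 4, 5)):
    """keys: (N, D) patch features; grid_hw: (H, W) patch grid with H * W == N."""
    f = F.normalize(keys, dim=-1)
    affinity = (f @ f.T).clamp(min=0).cpu().numpy()       # cosine similarity, non-negative
    masks = []
    for k in ks:
        labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                    assign_labels="kmeans", random_state=0
                                    ).fit_predict(affinity)
        labels = labels.reshape(grid_hw)
        for c in range(k):
            cluster = labels == c
            comps, n = label(cluster)                     # split non-connected components
            masks += [comps == i for i in range(1, n + 1)]
    return masks                                          # list of (H, W) boolean masks
```

For example, the key features from the previous sketch, reshaped to (196, 768), would be passed with grid_hw=(14, 14).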
For each of the feature extractors we follow the procedure outlined previously and apply multi-k-way clustering with K = {2, 3, 4, 5} (resulting in 14 overlapping masks per image). Since we cluster features spatially over the entire image area, the resulting clusters (masks) will inevitably also enclose background regions.\nAs we are interested in instance segmentation (e.g., people, cars) rather than stuff segmentation (e.g., grass, sky), we need to eliminate masks that are less likely to correspond to objects. Therefore, the second part is saliency-based masking to filter possibly noisy candidates, such as those corresponding to background regions. Although not all objects in a scene are necessarily salient, salient image regions are more likely to contain object instances than non-salient ones. Given the success of DINO features for this task [14,40,46,57] and our findings in Table 1, we apply spectral clustering with k = 2 on DINO features. We select the mask that shares fewer pixels with the image boundary as the foreground and use it to filter the mask candidates from the first stage such that only masks for salient objects remain.\nFinally, we use the resulting mask proposals (after non-maximum suppression (NMS) for deduplication) to train an instance segmentation network. We train a SOLOv2 [55] architecture using only pseudo-masks, following FreeSOLO [54]. For more training details, please refer to Appendix A.2.1.\nWe train a segmenter for each of the feature extractors using their respective mask proposals. This allows us to assess their performance in unsupervised instance segmentation and examine whether our initial observations align with post-training results. We report average precision and recall on MS COCO val2017 in Table 2.\nTable 2: Unsupervised instance segmentation performance (COCO val2017). We evaluate segmentation models trained with mask proposals from different feature extractors.\nFeature Extractor | AP50 | AP75 | AP | AR1 | AR10 | AR100\nMAE [25] | 12.1 | 3.7 | 5.2 | 3.8 | 10.9 | 18.3\nDINO [14] | 11.8 | 3.6 | 5.0 | 3.7 | 11.4 | 18.0\nSelfPatch [67] | 6.3 | 2.2 | 2.8 | 3.4 | 4.9 | 4.9\nMSN [5] | 9.0 | 2.7 | 3.9 | 3.9\nWe observe that the model trained with masks from MAE remains the best-performing one. Interestingly, the DINO-based model significantly narrows the performance gap after training, whereas other feature extractors perform worse. We hypothesize that this comes from the fact that DINO masks are generally cleaner and capture object boundaries better, which is important for learning. This could also explain why the vast majority of the current state of the art (see Table 3) in unsupervised instance segmentation performs well despite leveraging DINO features." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we study the properties of SSL methods for the task of instance segmentation. We find that different learning signals result in features with varying characteristics. DINO learns highly semantic features and is thus widely used for semantic tasks. We find that image-reconstruction objectives such as MAE are better suited to discriminating instances of the same class inside an image. This is an overlooked property that can potentially be used in many instance-specific downstream tasks." 
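To complement the proposal pipeline described in the self-training section above, here is a minimal sketch of the saliency-based filtering and mask NMS. Interpreting the intersection score as intersection over mask area is our assumption, while the 0.5 and 0.8 thresholds follow the values reported in Appendix A.2.1.

```python
# Sketch: filter candidate masks with a saliency map, then deduplicate with mask NMS.
import numpy as np


def mask_iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / max(union, 1)


def select_proposals(candidates, saliency, keep_thresh=0.5, nms_thresh=0.8):
    """candidates: list of boolean (H, W) masks; saliency: boolean (H, W) foreground map."""
    # 1) keep masks that substantially intersect with the salient foreground
    kept = [m for m in candidates
            if np.logical_and(m, saliency).sum() / max(m.sum(), 1) >= keep_thresh]
    # 2) greedy NMS over masks, processing larger masks first
    kept = sorted(kept, key=lambda m: m.sum(), reverse=True)
    proposals = []
    for m in kept:
        if all(mask_iou(m, p) < nms_thresh for p in proposals):
            proposals.append(m)
    return proposals
```

The saliency map itself is assumed to come from 2-way clustering of DINO features, with the cluster that touches the image boundary less chosen as foreground.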
}, { "figure_ref": [ "fig_2", "fig_2", "fig_2", "fig_2", "fig_2", "fig_2", "fig_2", "fig_2" ], "heading": "A Appendix", "publication_ref": [ "b37" ], "table_ref": [], "text": "A.1 Qualitative Analysis of Instance-ness in Self-Supervised Vision Transformers\nIn Section 3, we measured the mean average recall of masks produced by clustering features from different self-supervised networks for different numbers of instances of the same semantic class on MS COCO val2017 and PASCAL VOC 2012. We used this experiment to better understand the properties of the raw feature extractors, i.e. how they \"see\" objects, as well as to obtain a proxy for final instance segmentation performance. Our analysis indicates that features generated by MAE lead to an overall higher recall when multiple instances exist, i.e. features produced by MAE are better at discriminating multiple instances of the same semantic class, while DINO features are superior at detecting single instances.\nWe illustrate this observation further in Fig. 3 with qualitative examples. We show examples from COCO val2017 and compare the segments produced by MAE and DINO after spectral clustering while varying k.\nNext, we record some qualitative observations from our experience with these methods, which we hope will be of use to researchers working with them in the future:\n• Spectral clustering of MAE features has a tendency to separate instances of the same semantic class in the same image, whereas DINO features are more likely to produce a purely semantic grouping (e.g., Fig. 3 (b)). Yet there is ambiguity as to how semantic classes are \"seen\" by self-supervised models. An example is shown in Fig. 3 (e), in which all objects correspond to the same semantic class orange. The image shows a whole orange, a halved orange, and three individual peeled segments, which represent different states of the orange. Since these states have distinct appearances, DINO is able to partially discriminate between these objects, and its behavior comes closer to MAE in resembling \"instance awareness\" in this example. This finding aligns with the observation of [37] and is likely true for most self-supervised models.\n• For smaller values of k (especially if k is smaller than the number of objects in the image), the decomposition remains shallow. Even if k is sufficiently large to capture all objects, a number of clusters are often used to represent the nuances of the background, and as a result, different foreground objects group together in the remaining clusters.\n• With an increasing number of clusters (over-clustering), MAE features tend to spatially decompose an image, whereas, with DINO, the level of semantic detail seems to increase, e.g. objects get subdivided into parts (Fig. 3 (a), (b), (d)).\n• Applying spectral clustering to DINO features with k = 2 produces higher-quality saliency candidates (Fig. 3 (b), (f)), which matches the results of the saliency-map ablation in Tab. 5.\n• Both feature extractors struggle with very small objects, e.g. the dog (Fig. 3 (c)) or the baseball bat and glove (Fig. 3 (f)). In these examples, small objects become part of larger, more prominent objects or part of the background.\n• Complex scenes without prominent foreground objects, e.g. scenes \"seen\" from a distance (Fig. 3 (g)), yield segmentations that do not align with the valid object categories in COCO: for example, the image is partitioned into trees, buildings, and street." 
}, { "figure_ref": [], "heading": "A.2 Self-Training for Instance Segmentation, Addendum", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.2.1 Datasets and Training Details", "publication_ref": [ "b39", "b54", "b55", "b54", "b54", "b54" ], "table_ref": [], "text": "MS COCO 2017. The Common Objects in COntext dataset comprises images of \"complex everyday scenes containing common objects in their natural context\" [39]. We use the 2017 release in our method, which consists of train2017, unlabeled2017, and val2017 sets. The training set contains 118,287 images with 860,001 polygon annotations for 80 semantic classes sourced from human annotators. The unlabeled image set comprises 123,403 images without any annotations. We use the train2017 and unlabeled2017 sets for generating mask proposals with our method to bootstrap an instance segmentation network (no ground truth annotations are used). We then evaluate it on val2017, which contains 5,000 images with 36,781 polygon annotations, in a class-agnostic (i.e., classes ignored) setting. Training Details. We generate the mask proposals using pre-trained ViT-B/16 vision transformers (except for SelfPatch, which is only available for ViT-S/16, and DenseCL, which is a ResNet-101 model) and a ViT-B/16 DINO model for saliency estimation. These networks are all pre-trained on ImageNet [42] without supervision. When extracting features, we do not apply any data augmentation to the images. We choose K = {2, 3, 4, 5} as the set of values for k to generate candidate masks and further split non-connected components into separate masks. To obtain the final set of mask proposals, we first discard masks with an intersection score of less than 0.5 when compared to the saliency map. Then, we apply NMS with a threshold of 0.8 to remove any duplicates from the masks that remain.\nUnless otherwise specified, we use SOLOv2 [54,55] as the instance segmentation network, initialized with self-supervised weights (DenseCL, ResNet-101). We employ copy & paste augmentation as well as the BoxInst loss [48,54]. We train the network with a frozen backbone for 60k steps with our mask proposals as targets. The stochastic gradient descent optimizer is used with a learning rate of 0.00025 and a batch size of 4. We follow the default parameters and loss setup of [54], and perform only a single round of training (which differs from [54] that employs two rounds with 30k steps each).\nThe trained network returns a confidence score for each detection/mask, which allows us to set a threshold and retain the most confident predictions. Training takes 48 hours on an NVIDIA A40 GPU." }, { "figure_ref": [], "heading": "A.2.2 State-Of-The-Art on Unsupervised Instance Segmentation", "publication_ref": [ "b52", "b54", "b34", "b51", "b53" ], "table_ref": [], "text": "In Tab. 3, we present the performance of recent works for the task of unsupervised class-agnostic instance segmentation and provide a comparison to our simple approach (using MAE features to generate mask proposals).\nApart from COCO val2017 (discussed above), we show results for:\n1. PASCAL VOC, a standard benchmark for objection detection with annotations spanning 20 object categories that overlap with the COCO categories. It contains 5,011 images and 15,662 object annotations.\n2. COCO20k, a subset of the COCO 2014 trainval split, with 19,817 images (143,951 object annotations) and excludes objects flagged as crowd.\n3. 
The Unidentified Video Objects dataset, which provides a benchmark for \"open-world classagnostic object segmentation in videos\" [52]. It includes dense segmentation annotations of video frames from human annotators without any class information to evaluate the degree to which models recognize object categories not seen during training. For evaluation, the validation set of UVO-Sparse 1.0 is used, which comprises a total of 65,588 temporally sparse frame annotations for 8,124 frames of 2,708 videos.\nAs our approach follows the setup of FreeSOLO [54] (architecture, loss formulation, and hyperparameters), it permits direct comparison and shows clear improvements across datasets. Exemplar-FreeSOLO [34], with its addition of a randomly drawn pool of exemplars used in a contrastive learning loss, shows stronger improvements.\nMaskDistill [51] trains separate models for each dataset, as opposed to all previously mentioned methods that are solely trained on COCO train2017 and unlabeled2017. Therefore, comparability is limited.\nCutLER [53] trains on ImageNet, exploiting its object-centric prior, as most images contain a single object in the center of the frame. Due to its strong instance discrimination abilities, CutLER is the current state-of-the-art method for this task." }, { "figure_ref": [ "fig_0" ], "heading": "A.3 Ablations", "publication_ref": [ "b55", "b54", "b54", "b55" ], "table_ref": [], "text": "We perform various ablations to investigate the effectiveness of our design choices surrounding the self-training setup. For simplicity, we focus on annotations produced by the best-performing feature extractor, i.e. MAE features (filtered with saliency maps obtained from DINO features), instead of considering all feature extractors individually. All models are evaluated after training an instance segmentation network for 30k steps on COCO val2017.\nChoice of k. The set K of values for k determines the number of objects that can be discovered but also the number of spurious masks that might appear. For images with a single object, large values of k might over-segment objects into parts, whereas for images with multiple objects, too small values might lead to masks that do not capture individual instances or fail to segment any objects at all. In Tab. 4, we show that for complex datasets such as COCO, where images contain various numbers of objects, no single choice of k emerges as the best option. Generating masks from multiple values of k, however, provides an opportunity to deal with this variance as evidenced by the significantly higher recall.\nGenerating the Saliency Map. The saliency map, which we use to retain masks that are likely to correspond to objects in an image while discarding those that are not, is a central piece in our self-training setup. In Tab. 5, we compare saliency maps generated from MAE and DINO features, showing that DINO features are far superior.\nNon-Connected Component Splitting. Although feature extractors yield mask proposals that segment individual instances, they may still generate mask candidates with multiple objects in a single mask, especially for smaller values of k (Fig. 1). If such masks contain disconnected components, we assume they are separate objects and thus split them apart. This is particularly helpful as we apply NMS as part of our mask generation process: If a larger k does indeed capture these objects separately, the split components can be deduplicated, effectively eliminating the poor original mask candidate. In Tab. 
6, we show that this splitting of non-connected components in masks is beneficial.\nSegmentation Architecture. As the final part of our self-training design, we use an instance segmentation network, which refines masks and helps to increase the number of objects that are detected. We observe that the setup of SOLOv2 [55], adapted to unsupervised segmentation [54], is superior to an off-the-shelf Mask R-CNN for unsupervised instance segmentation, as seen in Tab. 7. [54,55] 10.7 2.9 4.5 3.5 9.0 16.5" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements. P. E., C. R., and I. L. are supported by ERC-UNION-CoG-101001212. P. E. is also supported by Meta Research and a fellowship from the German Academic Exchange Service (DAAD). L. M. K. is supported by the Rhodes Trust. C. R. and I. L. are also supported by VisualAI EP/T028572/1." } ]
Self-supervised learning (SSL) can be used to solve complex visual tasks without human labels. Self-supervised representations encode useful semantic information about images, and as a result, they have already been used for tasks such as unsupervised semantic segmentation. In this paper, we investigate self-supervised representations for instance segmentation without any manual annotations. We find that the features of different SSL methods vary in their level of instance-awareness. In particular, DINO features, which are known to be excellent semantic descriptors, lag behind MAE features in their sensitivity for separating instances.
Understanding Self-Supervised Features for Learning Unsupervised Instance Segmentation
[ { "figure_caption": "Figure 1 :1Figure 1: Varying the value of k. Depending on the number of instances and semantic classes in an image, different values of k capture scene elements at different levels of granularity. Higher values of k separate the different instances. This example demonstrates why we use multiple values of k to generate mask proposals and why it is necessary to filter these using a saliency map.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Overview of our approach to produce mask proposals. (1) Given an image and a self-supervised feature extractor, we generate a set of candidate masks by applying spectral clustering multiple times with different values for k, (2) we obtain a saliency map via spectral clustering with k = 2 on DINO features, and (3) we select the candidate masks that strongly intersect with the saliency map as our final mask proposals (followed by non-maximum suppression for deduplication).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Spectral clustering of self-supervised features from MAE and DINO with k ∈ {2, ..., 6}. We show a sample of images from COCO val2017 with different numbers of ground-truth (GT) annotations. For each feature extractor, we show all masks for individual values of k. Each mask is shown with a different color, where no color represents the absence of annotations. Best viewed on screen.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Unsupervised class-agnostic instance segmentation. We report the performance of state-of-the-art methods on this task on common benchmarks. For FreeSOLO, we report additional results marked with † based on our reproduction. FreeSOLO and Exemplar-FreeSOLO use DenseCL features, while MaskDistill and CutLER use DINO. Our setting is most comparable to FreeSOLO.", "figure_data": "PASCAL VOCCOCO20kCOCO (val2017)UVOMethodAP50 AP75 AP AP50 AP75 AP AP50 AP75 AP AP50 AP75 APFreeSOLO [54]9.8 † 0.2 † 2.3 † 10.2 † 3.5 † 4.6 †9.82.94.0 12.73.04.8Exemplar-FS [34]------13.26.3 8.4 14.27.39.2MaskDistill [51]24.36.99.96.82.12.9------CutLER [53]---19.6 10.0 9.2 18.99.7 9.2 22.88.0 10.1Ours (MAE)18.80.64.612.33.75.3 12.03.7 5.2 12.92.64.7", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation of different values for k. Multi-way clustering (K = {2, 3, 4, 5}) performs best.", "figure_data": "kAP APS APM APLAR ARS ARM ARL20.20.00.00.30.50.00.02.332.70.71.58.84.30.11.3 16.143.70.82.4 12.36.00.12.3 21.753.70.73.0 11.46.50.23.3 22.12-5 4.51.46.0 12.8 16.51.7 19.1 38.6", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation of the saliency map. We also compare different self-supervised models (MAE, DINO) for generating the map. 
Without the saliency filter (None) performance drops significantly.", "figure_data": "Saliency MaskAP50 AP75 AP AR1 AR10 AR100None2.30.2 0.6 0.93.89.4Ours (MAE [26])5.91.3 2.2 2.25.17.3Ours (DINO [14]) 10.72.9 4.5 3.59.016.5", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation of non-connected component splitting (NCCS) during the generation of candidate masks.", "figure_data": "NCCS AP50 AP75 AP AR1 AR10 AR100✗7.22.53.33.46.49.6✓10.72.94.53.59.016.5", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Ablation of the segmentation architecture. Both networks are trained using MAE mask proposals.", "figure_data": "ArchitectureAP50 AP75 AP AR1 AR10 AR100Mask R-CNN [28] 6.81.8 2.7 2.27.311.9SOLOv2", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" } ]
Paul Engstler; Luke Melas-Kyriazi; Christian Rupprecht; Iro Laina
[ { "authors": "Rameen Abdal; Peihao Zhu; Niloy Mitra; Peter Wonka", "journal": "", "ref_id": "b0", "title": "Labels4free: Unsupervised segmentation using stylegan", "year": "2021" }, { "authors": "Shir Amir; Yossi Gandelsman; Shai Bagon; Tali Dekel", "journal": "", "ref_id": "b1", "title": "Deep vit features as dense visual descriptors", "year": "" }, { "authors": "Relja Arandjelović; Andrew Zisserman", "journal": "", "ref_id": "b2", "title": "Object discovery with a copy-pasting gan", "year": "2019" }, { "authors": " Ym Asano; Rupprecht; Vedaldi", "journal": "", "ref_id": "b3", "title": "Self-labelling via simultaneous clustering and representation learning", "year": "2019" }, { "authors": "Mahmoud Assran; Mathilde Caron; Ishan Misra; Piotr Bojanowski; Florian Bordes; Pascal Vincent; Armand Joulin; Mike Rabbat; Nicolas Ballas", "journal": "Springer", "ref_id": "b4", "title": "Masked siamese networks for label-efficient learning", "year": "2022" }, { "authors": "Philip Bachman; Devon Hjelm; William Buchwalter", "journal": "", "ref_id": "b5", "title": "Learning representations by maximizing mutual information across views", "year": "2019" }, { "authors": "Hangbo Bao; Li Dong; Songhao Piao; Furu Wei", "journal": "", "ref_id": "b6", "title": "BEit: BERT pre-training of image transformers", "year": "2022" }, { "authors": "Amir Bar; Xin Wang; Vadim Kantorov; J Colorado; Roei Reed; Gal Herzig; Anna Chechik; Trevor Rohrbach; Amir Darrell; Globerson", "journal": "", "ref_id": "b7", "title": "Detreg: Unsupervised pretraining with region priors for object detection", "year": "2022" }, { "authors": "Adrien Bardes; Jean Ponce; Yann Lecun", "journal": "", "ref_id": "b8", "title": "VICReg: Variance-invariance-covariance regularization for self-supervised learning", "year": "2022" }, { "authors": "Adam Bielski; Paolo Favaro", "journal": "", "ref_id": "b9", "title": "Move: Unsupervised movable object segmentation and detection", "year": "2022" }, { "authors": "Adam Jakub; Bielski ; Paolo Favaro", "journal": "", "ref_id": "b10", "title": "Emergence of object segmentation in perturbed generative models", "year": "2019" }, { "authors": "M Caron; P Bojanowski; A Joulin; M Douze", "journal": "", "ref_id": "b11", "title": "Deep clustering for unsupervised learning of visual features", "year": "2018" }, { "authors": "Mathilde Caron; Ishan Misra; Julien Mairal; Priya Goyal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b12", "title": "Unsupervised learning of visual features by contrasting cluster assignments", "year": "2020" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b13", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Mickaël Chen; Thierry Artières; Ludovic Denoyer", "journal": "", "ref_id": "b14", "title": "Unsupervised Object Segmentation by Redrawing", "year": "2019" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "PMLR", "ref_id": "b15", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Xinlei Chen; Kaiming He", "journal": "", "ref_id": "b16", "title": "Exploring simple siamese representation learning", "year": "2021" }, { "authors": "Xinlei Chen; Saining Xie; Kaiming He", "journal": "", "ref_id": "b17", "title": "An empirical study of training self-supervised vision transformers", "year": "2021" }, { "authors": "Hyun Jang; 
Utkarsh Cho; Kavita Mall; Bharath Bala; Hariharan", "journal": "", "ref_id": "b18", "title": "Picie: Unsupervised semantic segmentation using invariance and equivariance in clustering", "year": "2021" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b19", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Alexey Dosovitskiy; Jost Tobias Springenberg; Martin Riedmiller; Thomas Brox", "journal": "", "ref_id": "b20", "title": "Discriminative unsupervised feature learning with convolutional neural networks", "year": "2014" }, { "authors": "Mark Everingham; Luc Van Gool; K I Christopher; John Williams; Andrew Winn; Zisserman", "journal": "International journal of computer vision", "ref_id": "b21", "title": "The pascal visual object classes (voc) challenge", "year": "2010" }, { "authors": "Jean-Bastien Grill; Florian Strub; Florent Altché; Corentin Tallec; Pierre Richemond; Elena Buchatskaya; Carl Doersch; Bernardo Avila Pires; Zhaohan Guo; Mohammad Gheshlaghi Azar; Bilal Piot; Remi Munos; Michal Valko", "journal": "", "ref_id": "b22", "title": "Bootstrap your own latent -a new approach to selfsupervised learning", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b23", "title": "", "year": "2020" }, { "authors": "Mark Hamilton; Zhoutong Zhang; Bharath Hariharan; Noah Snavely; William T Freeman", "journal": "", "ref_id": "b24", "title": "Unsupervised semantic segmentation by distilling feature correspondences", "year": "2022" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b25", "title": "Masked autoencoders are scalable vision learners", "year": "2022" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b26", "title": "Masked autoencoders are scalable vision learners", "year": "2021" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b27", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2019" }, { "authors": "Kaiming He; Georgia Gkioxari; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b28", "title": "Mask r-cnn", "year": "2017" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b29", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Olivier Henaff", "journal": "PMLR", "ref_id": "b30", "title": "Data-efficient image recognition with contrastive predictive coding", "year": "2020" }, { "authors": "J Olivier; Skanda Hénaff; Jean-Baptiste Koppula; Aaron Alayrac; Oriol Van Den Oord; João Vinyals; Carreira", "journal": "", "ref_id": "b31", "title": "Efficient visual pretraining with contrastive detection", "year": "2021" }, { "authors": "J Olivier; Skanda Hénaff; Evan Koppula; Daniel Shelhamer; Andrew Zoran; Andrew Jaegle; João Zisserman; Relja Carreira; Arandjelović", "journal": "", "ref_id": "b32", "title": "Object discovery and representation networks", "year": "2022" }, { "authors": "Devon Hjelm; Alex Fedorov; Samuel Lavoie-Marchildon; Karan Grewal; Phil Bachman; Adam Trischler; Yoshua Bengio", "journal": "", "ref_id": "b33", "title": "Learning deep representations by mutual information estimation and 
maximization", "year": "2019" }, { "authors": "Taoseef Ishtiak; Qing En; Yuhong Guo", "journal": "", "ref_id": "b34", "title": "Exemplar-freesolo: Enhancing unsupervised instance segmentation with exemplars", "year": "2023-06" }, { "authors": "Xu Ji; Joao F Henriques; Andrea Vedaldi", "journal": "", "ref_id": "b35", "title": "Invariant information clustering for unsupervised image classification and segmentation", "year": "2019" }, { "authors": "Yannis Kalantidis; Bulent Mert; Noe Sariyildiz; Philippe Pion; Diane Weinzaepfel; Larlus", "journal": "Proc. NeurIPS", "ref_id": "b36", "title": "Hard negative mixing for contrastive learning", "year": "2020" }, { "authors": "Iro Laina; Yuki M Asano; Andrea Vedaldi", "journal": "", "ref_id": "b37", "title": "Measuring the interpretability of unsupervised representations via quantized reversed probing", "year": "2022" }, { "authors": "Junnan Li; Pan Zhou; Caiming Xiong; Steven Hoi", "journal": "", "ref_id": "b38", "title": "Prototypical contrastive learning of unsupervised representations", "year": "2021" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b39", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Luke Melas-Kyriazi; Christian Rupprecht; Iro Laina; Andrea Vedaldi", "journal": "", "ref_id": "b40", "title": "Deep spectral methods: A surprisingly strong baseline for unsupervised semantic segmentation and localization", "year": "2022" }, { "authors": "Pedro O Pinheiro; Amjad Almahairi; Ryan Benmalek; Florian Golemo; Aaron C Courville", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b41", "title": "Unsupervised learning of dense visual representations", "year": "2020" }, { "authors": "Olga Russakovsky; Jia Deng; Hao Su; Jonathan Krause; Sanjeev Satheesh; Sean Ma; Zhiheng Huang; Andrej Karpathy; Aditya Khosla; Michael Bernstein; Alexander C Berg; Li Fei-Fei", "journal": "IJCV", "ref_id": "b42", "title": "ImageNet Large Scale Visual Recognition Challenge", "year": "2015" }, { "authors": "Karan Ramprasaath R Selvaraju; Justin Desai; Nikhil Johnson; Naik", "journal": "", "ref_id": "b43", "title": "Casting your model: Learning to localize improves self-supervised representations", "year": "2021" }, { "authors": "Jianbo Shi; Jitendra Malik", "journal": "IEEE Transactions on pattern analysis and machine intelligence", "ref_id": "b44", "title": "Normalized cuts and image segmentation", "year": "2000" }, { "authors": "Gyungin Shin; Samuel Albanie; Weidi Xie", "journal": "", "ref_id": "b45", "title": "Unsupervised salient object detection with spectral cluster voting", "year": "2022" }, { "authors": "Oriane Siméoni; Gilles Puy; V Huy; Simon Vo; Spyros Roburin; Andrei Gidaris; Patrick Bursuc; Renaud Pérez; Jean Marlet; Ponce", "journal": "", "ref_id": "b46", "title": "Localizing objects with self-supervised transformers and no labels", "year": "2021-11" }, { "authors": "Yonglong Tian; Chen Sun; Ben Poole; Dilip Krishnan; Cordelia Schmid; Phillip Isola", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b47", "title": "What makes for good views for contrastive learning?", "year": "2020" }, { "authors": "Chunhua Zhi Tian; Xinlong Shen; Hao Wang; Chen", "journal": "", "ref_id": "b48", "title": "Boxinst: High-performance instance segmentation with box annotations", "year": "2021" }, { "authors": "Hugo Touvron; Matthieu Cord; Matthijs Douze; 
Francisco Massa; Alexandre Sablayrolles; Hervé Jégou", "journal": "PMLR", "ref_id": "b49", "title": "Training data-efficient image transformers & distillation through attention", "year": "2021" }, { "authors": "Wouter Van Gansbeke; Simon Vandenhende; Stamatios Georgoulis; Luc Van Gool", "journal": "", "ref_id": "b50", "title": "Unsupervised semantic segmentation by contrasting object mask proposals", "year": "2021" }, { "authors": "Wouter Van Gansbeke; Simon Vandenhende; Luc Van Gool", "journal": "", "ref_id": "b51", "title": "Discovering object masks with transformers for unsupervised semantic segmentation", "year": "2022" }, { "authors": "Weiyao Wang; Matt Feiszli; Heng Wang; Du Tran", "journal": "", "ref_id": "b52", "title": "Unidentified video objects: A benchmark for dense, open-world segmentation", "year": "2021" }, { "authors": "Xudong Wang; Rohit Girdhar; Stella X Yu; Ishan Misra", "journal": "", "ref_id": "b53", "title": "Cut and learn for unsupervised object detection and instance segmentation", "year": "2023" }, { "authors": "Xinlong Wang; Zhiding Yu; Shalini De Mello; Jan Kautz; Anima Anandkumar; Chunhua Shen; Jose M Alvarez", "journal": "", "ref_id": "b54", "title": "Freesolo: Learning to segment objects without annotations", "year": "2022" }, { "authors": "Xinlong Wang; Rufeng Zhang; Tao Kong; Lei Li; Chunhua Shen", "journal": "Advances in Neural information processing systems", "ref_id": "b55", "title": "Solov2: Dynamic and fast instance segmentation", "year": "2020" }, { "authors": "Xinlong Wang; Rufeng Zhang; Chunhua Shen; Tao Kong; Lei Li", "journal": "", "ref_id": "b56", "title": "Dense contrastive learning for self-supervised visual pre-training", "year": "2021" }, { "authors": "Yangtao Wang; Xi Shen; Shell Xu Hu; Yuan Yuan; James L Crowley; Dominique Vaufreydaz", "journal": "", "ref_id": "b57", "title": "Selfsupervised transformers for unsupervised object discovery using normalized cut", "year": "2022" }, { "authors": "Chen Wei; Haoqi Fan; Saining Xie; Chao-Yuan Wu; Alan Yuille; Christoph Feichtenhofer", "journal": "", "ref_id": "b58", "title": "Masked feature prediction for self-supervised visual pre-training", "year": "2022" }, { "authors": "Fangyun Wei; Yue Gao; Zhirong Wu; Han Hu; Stephen Lin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b59", "title": "Aligning pretraining for detection via object-level contrastive learning", "year": "2021" }, { "authors": "Xin Wen; Bingchen Zhao; Anlin Zheng; Xiangyu Zhang; Xiaojuan Qi", "journal": "", "ref_id": "b60", "title": "Self-supervised visual representation learning with semantic grouping", "year": "2022" }, { "authors": "Zhirong Wu; Yuanjun Xiong; Stella X Yu; Dahua Lin", "journal": "", "ref_id": "b61", "title": "Unsupervised feature learning via non-parametric instance discrimination", "year": "2018" }, { "authors": "Enze Xie; Jian Ding; Wenhai Wang; Xiaohang Zhan; Hang Xu; Peize Sun; Zhenguo Li; Ping Luo", "journal": "", "ref_id": "b62", "title": "Detco: Unsupervised contrastive learning for object detection", "year": "2021" }, { "authors": "Jiahao Xie; Xiaohang Zhan; Ziwei Liu; Yew ; Soon Ong; Chen Change Loy", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b63", "title": "Unsupervised objectlevel representation learning from scene images", "year": "2021" }, { "authors": "Zhenda Xie; Yutong Lin; Zheng Zhang; Yue Cao; Stephen Lin; Han Hu", "journal": "", "ref_id": "b64", "title": "Propagate yourself: Exploring pixel-level consistency for unsupervised visual 
representation learning", "year": "2021" }, { "authors": "Zhenda Xie; Zheng Zhang; Yue Cao; Yutong Lin; Jianmin Bao; Zhuliang Yao; Qi Dai; Han Hu", "journal": "", "ref_id": "b65", "title": "Simmim: A simple framework for masked image modeling", "year": "2022" }, { "authors": "Ceyuan Yang; Zhirong Wu; Bolei Zhou; Stephen Lin", "journal": "", "ref_id": "b66", "title": "Instance localization for self-supervised detection pretraining", "year": "2021" }, { "authors": "Sukmin Yun; Hankook Lee; Jaehyung Kim; Jinwoo Shin", "journal": "", "ref_id": "b67", "title": "Patch-level representation learning for self-supervised vision transformers", "year": "2022" }, { "authors": "Jure Zbontar; Li Jing; Ishan Misra; Yann Lecun; Stéphane Deny", "journal": "PMLR", "ref_id": "b68", "title": "Barlow twins: Self-supervised learning via redundancy reduction", "year": "2021" }, { "authors": "Jinghao Zhou; Chen Wei; Huiyu Wang; Wei Shen; Cihang Xie; Alan Yuille; Tao Kong", "journal": "", "ref_id": "b69", "title": "Image BERT pre-training with online tokenizer", "year": "2022" }, { "authors": "Adrian Ziegler; Yuki M Asano", "journal": "", "ref_id": "b70", "title": "Self-supervised learning of object parts for semantic segmentation", "year": "2022" } ]
[]
2024-03-29
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Abstract. In-context segmentation aims at segmenting novel images using a few labeled example images, termed as \"in-context examples\", exploring content similarities between examples and the target. The resulting models can be generalized seamlessly to novel segmentation tasks, significantly reducing the labeling and training costs compared with conventional pipelines. However, in-context segmentation is more challenging than classic ones requiring the model to learn segmentation rules conditioned on a few samples. Unlike previous work with ad-hoc or non-endto-end designs, we propose SegIC, an end-to-end segment-in-context framework built upon a single vision foundation model (VFM). In particular, SegIC leverages the emergent correspondence within VFM to capture dense relationships between target images and in-context samples. As such, information from in-context samples is then extracted into three types of instructions, i.e. geometric, visual, and meta instructions, serving as explicit conditions for the final mask prediction. SegIC is a straightforward yet effective approach that yields state-of-theart performance on one-shot segmentation benchmarks. Notably, SegIC" }, { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b49", "b52", "b53", "b49", "b5", "b49", "b31", "b72", "b73", "b61", "b62", "b37", "b58", "b69", "b1", "b28", "b64", "b7", "b58", "b7", "b48", "b51", "b55" ], "table_ref": [], "text": "Modern advancements in deep learning have established a routine process for addressing visual perception challenges, typically involving data collection, model training, and deployment. While this pipeline is highly effective, it invariably demands additional effort in data acquisition and model tuning when adapting to new domains. Although researchers have been seeking to learn generic representations with pre-training, the resulting models have to be fine-tuned on the target domain for improved performance.\nIn contrast, the success of large language models (LLMs) [6,50,53,54] in Natural Language Processing (NLP) offers an alternative approach. These models are trained on vast datasets, handling various NLP tasks through next-token prediction guided by prompts [50]. A key strength of LLMs is their ability to learn from a few examples, a process known as in-context learning (ICL). This enables them to adapt to various tasks with a small and varied set of instructions without requiring extensive fine-tuning or retraining [6,50]. The success of ICL in NLP highlights the potential for applying similar strategies in visual perception tasks.\nWhile appealing, ICL in vision is particularly challenging as vision tasks are significantly different regarding inputs (2D/3D), outputs (one-hot labels/bounding boxes), and specialized architectures. Recent advances in vision generalist models [32,73,74] suggest different levels of segmentation tasks, i.e. instance, semantic, and video, can be unified within the same output space. This motivates us to explore ICL using segmentation as a testbed and investigate whether current vision models can be easily generalized. 
While there are a few previous attempts on ICL for segmentation, they either have fallen short in performance due to implicit modeling [62,63] or have employed heavy and non-end-to-end pipelines [38,59,70], which are less effective and efficient.\nAt the heart of ICL for NLP tasks is mining the relationships among different words and then propagating labels from a few task-specific question-answer pairs, namely in-context samples to the target one [2,29,65]. We argue that in vision tasks, the similar entity that facilitates label propagation from in-context samples to novel samples is establishing dense correspondences between images. Although dense visual correspondences are difficult to obtain before the era of foundation models, recent studies [8,59] have shown that high-quality correspondence emerges in visual foundation models (VFMs) [8,49,52,56].\nIn light of this, we introduce SegIC, an end-to-end segment-in-context framework without the need for sophisticated handcrafted prompt design. Specif-ically, our framework is built upon a single frozen vision foundation model followed by a lightweight mask decoder. We leverage the emergent correspondence of the VFM to establish dense correspondences across target images and incontext samples. Based on that, we extract in-context information into three types of instructions: geometric, visual, and meta instructions. By explicitly utilizing these instructions, our model demonstrates remarkable generalization capabilities with low training costs across diverse segmentation tasks, as evidenced by extensive qualitative and quantitative experiments. We summarize our contributions in threefold: \n-We introduce" }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b2", "b7", "b9", "b16", "b17", "b26", "b51", "b2", "b7", "b9", "b16", "b17", "b48", "b23", "b32", "b51", "b55", "b67", "b7", "b9", "b17", "b2", "b17", "b23", "b32", "b51", "b67", "b26", "b8", "b18", "b39", "b54", "b63", "b14", "b42", "b43", "b71", "b22", "b72", "b73", "b22", "b26", "b61", "b62", "b72", "b73", "b41", "b4", "b25", "b29", "b60", "b2", "b7", "b16", "b17", "b48", "b5", "b0", "b3", "b37", "b61", "b62", "b61", "b62", "b16", "b37", "b69", "b26" ], "table_ref": [], "text": "Vision foundation models. Recent years have witnessed great progress in large-scale vision pre-training [3,8,10,17,18,27,52], serving as the cornerstone for high-capacity foundation models. These pre-training approaches can be broadly categorized into two directions: vision-only pretraining [3,8,10,17,18,49] and vision-language pre-training [24,33,52,56,68]. For vision-only pre-training, models aim to distinguish image/patch-level entities from different views [8,10,18] or reconstruct masked images [3,18] from raw images. In vision-language pre-training, models strive to align cross-modal features into a unified visualsemantic space [24,33,52,68], showcasing great open-set performance due to the transferable capabilities of language. Unlike these approaches that perform pre-training in an unsupervised or weakly supervised manner, SAM [27] is pretrained on a huge amount of labeled segmentation data and can segment various entities with precise prompts about locations. In this paper, we conduct extensive experiments using three types of pre-trained models as backbones to explore their potential for in-context segmentation. 
Interestingly, we observe that models with higher zero-shot semantic and geometric correspondence performance are more likely to be effectively utilized in our SegIC framework for in-context segmentation.\nUnified vision downstream models. Unified vision downstream models have recently drawn significant attention due to their generalization capabilities and flexibilities. Unlike previous specialized vision models designed for specific datasets [9,19,40,55,64], vision generalists are tailored to handle multiple datasets [15,43,44,72] and a wide range of tasks [23,73,74] within a single yet unified model. Recently, many studies [23,27,62,63,73,74] have focused on developing techniques that unify segmentation tasks. In a similar spirit, our work also builds upon a unified output space for segmentation tasks. However, our goal is differentwe aim to perform in-context segmentation that allows a model to effectively segment novel entities conditioned on a few samples.\nVisual correspondence. Establishing visual correspondences between different images is vital for various computer vision tasks. Traditionally, computing correspondences is based on hand-designed features like SIFT [42] and SURF [5].\nDeep models [26,30,61] can also learn correspondence in a supervised manner. More recently, it has been shown features from large foundation models encode dense visual correspondence clues [3,8,17,18,49]. In this study, we discover a profound connection between correspondence and in-context segmentationcorrespondence acts as explicit guidance, linking the target image with in-context images, thus facilitating label propagation for in-context segmentation.\nIn-context learning. For the first time, GPT-3 [6] introduces a new learning paradigm known as in-context learning, which unifies various NLP tasks as text completion or question-answering tasks using provided prompts and examples. This approach enables language models to handle various tasks, including novel ones, by leveraging task examples, without requiring re-training or finetuning. Recent studies [1,4,38,62,63] explore this mechanism in vision tasks.\nPainter [62] and SegGPT [63] aim to achieve in-context segmentation through in-painting. They build upon the Mask Image Modeling (MIM) framework [17] to concatenate images and predictions into a 2×2 mosaic and make predictions by recovering the masked areas. In their pipelines, the vision backbone serves as both an image encoder and a mask decoder, which incurs significant computational costs. Moreover, they struggle to effectively leverage pre-trained models due to input shifts, leading to increased convergence challenges. Other approaches [38,70] attempt using in-context segmentation via prompting SAM [27]. They build upon cross-image correspondences between in-context examples and target images by additional pre-trained models to generate prompts for SAM. However, these methods employ a two-stage pipeline, introducing redundancy and repeated computations. Consequently, if the model encounters limitations in one stage, it negatively impacts the final performance. In this work, we build an end-to-end in-context segmentation framework, leveraging the emergent correspondence using a single vision foundation model." 
}, { "figure_ref": [], "heading": "Approach", "publication_ref": [ "b5", "b49" ], "table_ref": [], "text": "In-context learning equips a model with the ability to learn from example images, namely \"in-context examples\", as humans, which has demonstrated great potential in NLP tasks [6,50]. This process is akin to how humans intuitively grasp and replicate complex patterns from just a few guides, rapidly generalizing to new examples. In this paper, our goal is to establish an end-to-end in-context l m e a 0 m 0 q K 4 5 D T T j i + z f 3 O E 5 W K J e J B T 1 I a x H g o W M Q I 1 l b y / R j r U R h l 0 a O c 9 q s 1 t + 7 O g J a J V 5 A a F G j 2 q 1 / + I C E m p k I T j p X q e W 6 q g w x L z Q i n 0 \nV / / G m 3 / j J N m D J h Y 0 F F X d d H c F i R Q G X f f b W V p e W V\n4 p v F E 0 x G e M h 7 V k q c E x V k M 0 y T 9 G J V Q Y o S q R 9 Q q O Z +\nn R s = \" > A A A B 7 H i c b V B N S w M x E J 3 1 s 9 a v q k c v w S J U k L I r f h 2 L X n q s 4 L a F d i n Z N N u G J t k l y Q p l 6 W / w 4 k E R r / 4 g b / 4 b 0 3 Y P 2 v p g 4 P H e D D P z w o Q z b V z 3 2 1 l Z X V v f 2 C x s F b d 3 d v f 2 S w e H T R 2 n i l C f x D x W 7 R B r y p m k v m G G 0 3 a i K B Y h p 6 1 w d D / 1 W 0 9 U a R b L R z N O a C D w Q L K I E W y s 5 F f q r f O z X q n s V t 0 Z 0 D L x c l K G H I 1 e 6 a v b j 0 k q q D S E Y 6 0 7 n p u Y I M P K M M L p p N h N N U 0 w G e E B 7 V g q s a A 6 y G b H T t C p V f o o i p U t a d B M / T 2 R Y a H 1 W I S 2 U 2 A z 1 I v e V\nF C B 1 q D 6 1 R 8 q m s Z M A h X E G N 9 z E w g y o o F T w W a V f m p Y Q u i E j J h v q S Q x M 0 E 2 j z z D Z 1 Y Z 4 k h p + y T g u f p 7 I y O x M d M 4 t J N 5 R L P s 5 e J / n p 9 C d B t k X C Y p M E k X H 0 W p w K B w f j 8 e c s 0 o i K k l h G p u s 2 I 6 J p p Q s C 1 V b A n e 8 s m r p H N R 9 6 7 r V w + X t c Z d U U c Z n a B T d I 4 8 d I M a 6 B 6 1 U B t R p N A z e k V v D j g v z r v z s R g t O c X O M f o D 5 / M H d p i R Y w = = < / l a t e x i t > C < l a t e x i t s h a 1 _ b a s e 6 4 = \" i m T P j / Q n N U m W z 1 B C M P y + S G m P F 7 M = \" > A A A B 8 X i c b V D L S g M x F L 1 T X 7 W + q i 7 d B I v g q s y I r 2 X R j c s K 9 o F t K Z n 0 T h u a y Q x J R i h D / 8 K N C 0 X c + j f u / B s z 7 S y 0 9 U D g c M 6 9 5 N z j x 4 J r 4 7 r f T m F l d W 1 9 o 7 h Z 2 t r e 2 d 0 r 7 x 8 0 d Z Q o h g 0 W i U i 1 f a p R c I k N w 4 3 A d q y Q h r 7 A l j + + z f z W E y r N I / l g J j H 2 Q j q U P O C M G i s 9 d k N q R n 6 Q 0 m m / X H G r 7 g x k m X g 5 q U C O e r / 8 1 R 1 E L A l R G i a o 1 h 3 P j U 0 v p c p w J n B a 6 i Y a Y 8 r G d I g d S y U N U f f S W e I p O b H K g A S R s k 8 a M l N / b 6 Q 0 1 H o S + n Y y S 6 g X v U z 8 z + s k J r j u p V z G i U H J 5 h 8 F i S A m I t n 5 Z M A V M i M m l l C m u M 1 K 2 I g q y o w t q W R L 8\ni V t E h p m n v q l B W z C G 0 e R P 9 4 = \" > A A A B 8 X i c b V D L S g M x F L 1 T X 7 W + q i 7 d B I v g q s y I r 2 X R j c s K 9 o F t K Z n 0 T h u a y Q x J R i h D / 8 K N C 0 X c + j f u / B s z 7 S y 0 9 U D g c M 6 9 5 N z j x 4 J r 4 7 r f T m F l d W 1 9 o 7 h Z 2 t r e 2 d 0 r 7 x 8 0 d Z Q o h g 0 W i U i 1 f a p R c I k N w 4 3 A d q y Q h r 7 A l j + + z f z W E y r N I / l g J j H 2 Q j q U P O C M G i s 9 d k N q R n 6 Q x t N + u e J W 3 R n I M v F y U o E c 9 X 7 5 q z u I W B K i N E x Q r T u e G 5 t e S p X h T O C 0 1 E 0 0 x p S N 6 R A 7 l k o a o u 6 l s 8 R T c m K V A Q k i Z Z 8 0 Z K b + 3 k h p q P U k 9 O 1 k l l A v e p n 4 n 9 d J T H D d S 7 m M E 4 O S z T 
8 K E k F M R L L z y Y A r Z E Z M L K F M c Z u V s B F V l B l b U s m W 4 C 2 e v E y a Z 1 X v s n p x f 1 6 p 3 e R 1 F O E I j u E U P L i C G t x B H R r A Q M I z v M K b o 5 0 X 5 9 3 5 m I 8 W n H z n E P 7 A + f w B 9 F G R H g = = < / l a t e x i t > p < l a t e x i t s h a 1 _ b a s e 6 4 = \" x 1 E Z T K k g t B x N A + r x a 6 l E 2 + h x s I Q = \" > A A A B 8 3 i c b V D L S s N A F L 2 p r 1 p f V Z d u B o v g q i T i a 1 l 0 4 7 K C f U A T y 2 Q 6 a Y d O J m F m I o S Q 3 3 D j Q h G 3 / o w 7 / 8 Z J m 4 W 2 H h g 4 n H M v 9 8 z x Y 8 6 U t u 1 v q 7 K y u r a + U d 2 s b W 3 v 7 O 7 V 9 w + 6 K k o k o R 0 S 8 U j 2 f a w o Z 4 J 2 N N O c 9 m N J c e h z 2 v O n t 4 X f e 6 J S s U g 8 6 D S m X o j H g g W M Y G 0 k 1 w 2 x n v h B l j 7 K f F h v 2 E 1 7 B r R M n J I 0 o E R 7 W P 9 y R x F J Q i o 0 4 V i p g W P H 2 s u w 1 I x w m t f c R N E Y k y k e 0 4 G h A o d U e d k s c 4 5 O j D J C Q S T N E x r N 1 N 8 b G Q 6 V S k P f T B Y Z 1 a J X i P 9 5 g 0 Q H 1 1 7 G R J x o K s j 8 U J B w p C N U F I B G T F K i e W o I J p K Z r I h M s M R E m 5 p q p g R n 8 c v L p H v W d C 6 b F / f n j d Z N W U c V j u A Y T s G B K 2 j B H b S h A w R i e I Z X\nN F s Q g 4 7 Q S T 2 9 z v P F G l W S Q f z D S m v s A j y U J G s L H S Y 1 9 g M w 7 C V G S D a s 2 t u z O g Z e I V p A Y F m o P q V 3 8 Y k U R Q a Q j H W v c 8 N z Z + i p V h h N O s 0 k 8 0 j T G Z 4 B H t W S q x o N p P Z 4 k z d G K V I Q o j Z Z 8 0 a K b + 3 k i x 0 H o q A j u Z J 9 S L X i 7 + 5 / U S E 1 7 7 K Z N x Y q g k 8 4 / C h C M T o f x 8 N G S K E s O n l m C i m M 2 K y B g\nj U o 1 Z S 2 q h N L d k B g m u G Q t 4 C B Y N 9 G M x K F g n X B 8 m / u d J 6 Y N V / I R J g k L Y j K U P O K U g J X 8 X k x g R I n I 7 q b 9 a s 2 t u z P g Z e I V p I Y K N P v V r 9 5 A 0 T R m E q g g x v i e m 0 C Q E Q 2 c C j a t 9 F L D E k L H Z M h 8 S y W J m Q m y W e Q p P r H K A E d K 2 y c B z 9 T f G x m J j Z n E o Z 3 M I 5 p F L x f / 8 / w U o u s g 4 z J J g U k 6 / y h K B Q a F 8 / v x g G t G Q U w s I V R z m x X T E d G E g m 2 p Y k v w F k\nV / / G m 3 / j J N m D J h Y 0 F F X d d H c F i R Q G X f f b W V p e W V\ni P F Q s I g R r K 3 k + z H W o z D K b h / k t F + t u X V 3 B v S X e A W p Q Y F m v / r p D x J i Y i o 0 4 V i p n u e m O s i w 1 I x w O q 3 4 R t E U k z E e 0 p 6 l A s d U B d k s 8 x Q d W W W A o k T a J z S a q T 8 3 M h w r N Y l D O 5 l n V I t e L v 7 n 9 Y y O L o O M i d R o K s j 8 U G Q 4 0 g n K C 0 A D J i n R f G I J J p L Z r I i M s M R E 2 5 o q t g R v" }, { "figure_ref": [ "fig_10" ], "heading": "Dense Correspondence Discovery", "publication_ref": [], "table_ref": [], "text": "To establish the relations between the target image I and the reference image I r (considering one reference example for brevity), we extract dense cross-image correspondences at the pixel level. For this purpose, we leverage pre-trained VFMs due to their powerful generalization capability and emergent correspondence properties. Specifically, we first extract the visual features of I and I r with a vision foundation model, then apply the cosine distance function to compute the patch-level distance map between the two images as shown in Figure 2. 
We further obtain the pixel-level correspondences by interpolating the patch-level distance map to the size of the original image as follows:\nf , f r = F(I), F(I r ) C = Upsample (Dist(f , f r )) ,(1)\nwhere F indicates the vision foundation model we use and f , f r ∈ R c×h×w are the extracted features of the target and the reference images, respectively; c, h, w indicate the dimensions of the feature maps. Dist denotes the distance function (for which we use cosine distance) and Upsample is the interpolation function.\nC ∈ R HW ×HW denotes the calculated dense correspondences between the target image and the in-context image." }, { "figure_ref": [ "fig_10" ], "heading": "In-context Instruction Extraction", "publication_ref": [ "b28", "b49", "b57", "b59", "b22", "b26", "b36", "b51" ], "table_ref": [], "text": "After obtaining dense correspondences, the question becomes how to utilize the in-context information, including in-context examples and dense correspondences, as instructions to guide the segmentation process. Here we extract incontext instructions based on ideas from NLP tasks [29,50]. Ideally, the representation for in-context information should clearly articulate how segmentation should be executed on the target image while being concise and efficient for effective segmentation. To this end, we decouple and encode the in-context information into three individual in-context instructions as shown in Figure 2: 1) geometric instructions; 2) visual instructions; and 3) meta instructions, each of which will be elaborated below:\nGeometric instructions aim to provide a coarse location estimation of the target mask. Although we only have the mask annotation for the reference image, we can propagate the label from the reference to the target to obtain a propagated mask by exploring the dense correspondences C:\na = y r /∥y r ∥ • C ∈ R H×W p = PE(Topk(a)),(2)\nwhere a represents the propagated coarse mask for the target image. As shown in Equation ( 2), by performing matrix multiplication, we gather and average the dense correspondences based on the positive points in the reference mask. This process is analogous to propagating labels from the reference image to the target image through dense correspondences, making a a form of dense geometric instruction. Additionally, based on a, we employ Topk (as referred to in Equation (2)) to select the top-k points with the highest values in a, indicating the locations most likely relevant to the target mask. We further encode the 2D coordinates into high-dimensional vectors using cosine positional encoding PE as in [58,60], resulting in a type of sparse geometric instruction. Overall, a and p together provide geometric information extracted from in-context examples.\nVisual instructions indicate the visual clues of the target entity. We use the mask label from the reference image to extract visual salient clues, v, from reference features:\nv = y r /∥y r ∥ • f r .(3)\nBy doing so, only relevant information in reference features as indicated in the mask, including low-level (texture, appearance, etc.) and high-level (semantics) clues, is used.\nMeta instructions indicate other clues implicitly provided by in-context examples, such as task descriptions, class names, etc. We uniformly treat them as languages and encode them with a pre-trained language model, following [23,27,37].\nm = F t (meta)(4)\nwhere F t is the pre-trained CLIP text encoder [52], meta indicates the meta information and m is the meta feature. 
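To make the correspondence discovery and instruction extraction concrete, a minimal PyTorch-style sketch is given below. It is an illustration rather than the released implementation: the function name extract_in_context_instructions and the assumption that the frozen backbone returns a (c, h, w) feature map are ours, the propagation is carried out at feature resolution and then upsampled, and the cosine positional encoding PE of the selected points as well as the CLIP text encoding of the meta information are omitted.

import torch
import torch.nn.functional as F

@torch.no_grad()
def extract_in_context_instructions(backbone, img, img_ref, mask_ref, topk=16):
    # Frozen VFM features of the target and reference images (Eq. 1).
    feat, feat_ref = backbone(img), backbone(img_ref)            # each (c, h, w)
    c, h, w = feat.shape
    f = F.normalize(feat.reshape(c, -1), dim=0)                  # (c, h*w)
    f_r = F.normalize(feat_ref.reshape(c, -1), dim=0)            # (c, h*w)
    corr = f.t() @ f_r                                           # (h*w, h*w) cosine similarities

    # Dense geometric instruction: propagate the reference mask through corr (Eq. 2).
    H, W = mask_ref.shape
    y_r = F.interpolate(mask_ref[None, None].float(), size=(h, w), mode="nearest").reshape(-1)
    y_r = y_r / y_r.norm().clamp(min=1e-6)                       # y_r / ||y_r||
    a = (corr @ y_r).reshape(1, 1, h, w)                         # coarse mask at feature resolution
    a = F.interpolate(a, size=(H, W), mode="bilinear", align_corners=False)[0, 0]

    # Sparse geometric instruction: top-k highest-scoring locations as point prompts (Eq. 2).
    idx = a.flatten().topk(topk).indices
    points = torch.stack((torch.div(idx, W, rounding_mode="floor"), idx % W), dim=-1)

    # Visual instruction: mask-pooled reference features (Eq. 3).
    v = feat_ref.reshape(c, -1) @ y_r                            # (c,)
    return a, points, v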
Finally, we use c = {a, p, v, m} to denote the set of in-context instructions derived from reference samples, which can be further used for producing the segmentation mask, as will be introduced in section § 3.3." }, { "figure_ref": [ "fig_10" ], "heading": "Mask Decoding", "publication_ref": [ "b6", "b10", "b11", "b6", "b10", "b11", "b6" ], "table_ref": [], "text": "In this section, we discuss how to predict the segmentation masks in target images following the aforementioned in-context instructions. In particular, we use a query-based mask decoder D due to its high performance in segmentation tasks and flexibility [7,11,12]. Formally, the decoder network takes the target image feature f and in-context instructions c as input, as well as a learnable query q, and outputs a mask o as follows:\no = D(f , c; q). (5\n)\nUnlike previous designs that use a set of object queries [7,11,12] for prediction, we only initialize one query since we just need to predict one mask in-context conditioned on the in-context instructions as shown in Figure 2.\nTo prepare these instructions for mask decoding, we first project them into the latent space used by the decoder. We categorize the in-context instructions into two types according to their spatial property: instructions with spatial shapes (i.e. the propagated coarse mask a) and without spatial shapes (i.e. p, v, m). For the spatial instructions, we employ a series of convolutional layers M to encode it into the image feature space; for the non-spatial instructions, we use projection layers P to project them into the query feature space:\nq p , q v , q m = P p (p), P v (v), P m (m) a ′ = M(a)(6)\nwhere P p , P v , P m indicate the projection layers for p, v, m, respectively. a ′ , q p , q v and q m are the projected in-context instructions.\nFurthermore, we inject the projected in-context instructions into the decoding stage to guide the decoder to segment in-context. Similarly, for the spatial features, we add them to image features, such that they are aware of the coarse mask produced by reference samples. For the non-spatial features, we concatenate the initial query with them, which allows a deeper interaction via a self-attention mechanism in the decoder:\nf ′ = f + a ′ q ′ = Concat(q, q l , q s , q m ) o = D(f ′ ; q ′ ) (7)\nFinally, the mask prediction o is produced based on image features f ′ conditioned on instructions, and query features q ′ as shown in Equation (7). For more details, please refer to our supplementary." }, { "figure_ref": [ "fig_0" ], "heading": "Training Pipeline", "publication_ref": [ "b56", "b10", "b27" ], "table_ref": [], "text": "During training, we freeze all the parameters of the VFM and only leave the newly introduced mask decoder trainable. We employ a linear combination of a dice loss [57] and a binary cross-entropy loss for our mask loss: L mask = λ ce L ce + λ dice L dice . It is worth noting that we calculate the segmentation loss on K selected points using importance sampling following [11,28] instead of the whole image to save memory cost. To further improve the robustness toward noisy in-context samples, we introduce two strategies into our training recipe, namely \"context reversion\" and \"negative entity augmentation\". Negative entity augmentation. In tasks like video object segmentation (VOS), in-context examples for different entities in the same image are mutually exclusive. Taking the case of video object segmentation in Figure 1 as an example, the person and the skateboard are exclusive in one image. 
Thus, entities that are not of interest can serve as negative samples, indicating that they are not relevant to the target. We augment the in-context instructions with these negative entities for a better result." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Training Data", "publication_ref": [ "b62", "b37", "b15", "b46", "b50", "b65", "b50", "b65", "b61", "b73", "b69", "b37", "b62", "b12", "b34", "b70", "b55", "b47", "b72" ], "table_ref": [ "tab_1", "tab_2", "tab_2", "tab_1", "tab_1", "tab_1" ], "text": "We train SegIC on semantic and instance segmentation datasets. Unlike previous sophisticated task-specific designs for multiple datasets/tasks, our method offers a simple approach that distinguishes tasks by in-context examples. To ensure a fair comparison, we report in-domain performance of specialist models for COCO-20 i reported in [63]. Additionally, for FSS-1000, we highlight the performance trained on FSS-1000 in gray and zero-shot results in black. Notably, SegIC outperforms previous generalist models by a significant margin (more than 20 of mean mIoU) on COCO-20 i and achieves competitive results that are very close to specialist models on FSS-1000, even without ever being trained on it. Furthermore, we conduct experiments on LVIS-92 i [38], a more challenging one-shot benchmark built upon LVIS [16]. On LVIS-92 i , SegIC surpasses Matcher, the previous SoTA, by a large margin (from 33.0 to 47.8). Since we include the evaluation categories in the training process for COCO-20 i in Table 1, for a rigorous comparison, we follow its training setting that trains and tests our model on 4 splits [47] separately, avoiding seeing categories for evaluation. As shown in Table 2, our method still achieves state-of-the-art performance on COCO-20 i . Furthermore, as shown in the last row of Table 2, by joint training on COCO-excluded datasets (including ADE20k, LVIS, and FSS-1000), there is a significant performance gain across all splits. This further demonstrates the effectiveness of our in-context generalization capabilities.\nOverall, our best model surpasses all previous segmentation generalist models across all one-shot segmentation benchmarks, demonstrating its effectiveness.\nZero-shot video object segmentation. Video object segmentation (VOS) aims to segment specific objects in video frames. In this work, we focus on the semi-supervised VOS setting [51,66], where the masks that appeared first time are given as references. We evaluate SegIC without any fine-tuning on video datasets to demonstrate our generalization capabilities. We choose two commonly used VOS datasets: DAVIS-17 [51] and YouTube-VOS-18 [66]. We export two metrics commonly used in VOS for evaluation: the J &F score for DAVIS-17 and G score for YouTube-VOS-18, with their official evaluation servers or toolkits. As shown in Table 1, when compared to VOS specialist models, SegIC achieves competitive performance on VOS benchmarks, even without seeing any training videos. Furthermore, in comparison to segmentation generalist models, SegIC surpasses Painter [62], SEEM [74], and PerSAM [70] by a significant margin, and competes favorably with recent generalist models [38,63]. Additionally, the VOS task pipeline in SegIC is relatively simple. We do not use any test time augmentation tricks (TTA) used in [13]. 
Moreover, it does not involve dense feature interaction at the patch level, as in SegGPT, and does not require the use of a pre-trained SAM for segmentation.\nGeneric semantic segmentation. We also evaluate SegIC on generic semantic segmentation benchmarks, which need to segmenting dataset-dependent pre-defined categories within each image. To meet our in-context learning framework, we initially gather in-context examples from the training set along with the classes within each image and then perform segmentation in an in-context manner. We use two widely-used semantic segmentation datasets, COCO [35] and ADE20k [71], for evaluation. As illustrated in Table 1, compared to specialist and generalist models for semantic segmentation, SegIC demonstrates strong performance under the settings that the classes are known in advance (marked with #). As shown in Table 1, our best model achieves 74.0 and 59.0 mIoU on COCO and ADE20k, respectively, surpassing the previous specialist/generalist models.\nOpen-vocabulary semantic segmentation. Similar to generic semantic segmentation, we can also extend SegIC to open-vocabulary semantic segmentation. Furthermore, we could also utilize StableDiffusion [56] to synthesize images with coarse masks for those categories lacking in-context examples like [48]. Compared to X-decoder [73], our method obtains competitive and even better results on PC-459 and ADE-847." }, { "figure_ref": [ "fig_12", "fig_12" ], "heading": "Rethinking Vision Foundation Model Pre-training", "publication_ref": [ "b7", "b16", "b20", "b26", "b38", "b48", "b51", "b55", "b45", "b13", "b48" ], "table_ref": [], "text": "Existing works improve VFMs by scaling up data and model sizes, modifying the model architectures, and utilizing different pre-text tasks. It is unknown how much those factors affect the performance when the encoder is frozen. We delve deeper into the potential of various frozen pre-trained VFMs [8,17,21,27,39,49,52,56]. For each VFM, we freeze it to extract image features and dense correspondences then map them into the hidden space for mask decoding, equally. Additionally, we assess the zero-shot semantic correspondence score to underscore the emerging downstream capabilities from different perspectives directly. More specifically, we evaluate a percent of correct keypoints score (PCK) on SPair-71k [46] following to [14] with cosine similarity between the dense feature maps of the two images for semantic correspondence. We observe that (1) high/multi-resolution is essential; (2) pre-text tasks are important; (3) model and data scales do not necessarily help for ICL segmentation; (4) zero-shot correspondence can imply the ICL segmentation capacity.\nHigh/multi-resolution is essential. It is evident in Table 3 that the models (CLIP-ViT-B and MAE-B), lacking support for high-resolution or multiresolution only exhibit trivial performance within our frozen-backbone framework. Despite the remarkable success of vision transformers, they encounter limitations in directly adapting to various input resolutions without fine-tuning due to fixed-length visual tokens and position embeddings. Unlike vision transformers, CNNs, with their stacked convolution design, demonstrate seamless adaptation of parameters to images of different resolutions, even only pre-trained on low-resolution images. To alleviate this limitation, DINO-v1/v2 enhance vanilla ViTs with multi-resolution inputs and position embedding interpolation during pre-training. 
Consequently, they achieve superior segmentation performance.\nOverall, the observations indicate that the high/multi-resolution capabilities of the pre-trained VFMs play a key role in in-context segmentation.\nPre-text tasks are important. Once the conditions for supporting highresolution inputs are met, pre-text tasks employed during pre-training and the scale of data also impact the in-context segmentation capacities. Illustrated in Figure 3, with the same model size and pre-training data, DINOv1 outperforms those classification-pre-trained models in both zero-shot semantic correspondence and in-context segmentation tasks by a large margin. This emphasizes the substantial influence of the pre-text task on downstream capabilities.\nModel and data scales do not necessarily help for ICL segmentation.\nEquipping with larger-scale data and advanced self-distillation techniques, DI-NOv2 outperforms DINOv1 by a considerable margin. However, As shown in Table 3, scaling up the model size from base (B) to giant (G), is not a clear performance boost. Furthermore, compared to those pre-trained on than 10 times data, e.g. OpenCLIP-ConvNext-B and SD-2.1, DINOv2 still exhibits significant advantages. Those demonstrate that model and data scales do not necessarily help the requirements.\nZero-shot correspondence implies the segmentation capacity. We further study the relationship between correspondence and segmentation performance with different backbones. As shown in Figure 3, the performance on segmentation is proportional to correspondence, demonstrating a strong consistency between these two tasks. It further confirms our motivation to leverage the emergent correspondence for in-context segmentation. This insight further inspires us, suggesting that pre-training emphasizing inter/intra-image correspondence may lead to better in-context segmentation potentials, e.g. the image/patch-level discriminative self-distillation [49]." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_3", "tab_4" ], "text": "In the ablation study, we report the performance on COCO-20 i (in-domain one-shot segmentation), FSS-1000 (out-of-domain one-shot segmentation) and DAVIS-17 (zero-shot video object segmentation) to investigate the in-domain convergence and out-of-domain generalization capability. Ablations on in-context instructions. We study the importance of each component of in-context instructions (i.e. geometric, visual, and meta instructions). We conduct ablations on different combinations of components. As shown in Table 4, we find that the geometric and visual instructions tend to help the performance for out-of-domain segmentation and meta instructions (class name and task description in the ablation) benefit in-domain performance: when each component is used individually, the geometric and visual instructions obtain the best results on FSS-1000 and DAVIS-17, respectively. Meanwhile, meta instructions achieve the best performance on COCO-20 i , but with poor results on the other two datasets. Encouragingly, our method obtains the best performance among three tasks when using all three prompts, further demonstrating that our model can effectively transfer knowledge from in-context samples with the proposed in-context instructions.\nAblations on training strategies. We investigate how the proposed training strategies affect performance. As shown in Table 5, since the two strategies, i.e. 
context reversion and negative entity augmentation, act as a form of \"incontext augmentation\", they bring performance gains on COCO-20 i and FSS-1000 i with a slight drop on DAVIS-17. Furthermore, after combining them, our model achieves a consistent gain across all datasets, highlighting its efficacy. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We introduced SegIC, an end-to-end in-context segmentation framework that leverages the emergent correspondence of a single frozen vision foundation model. By training on standard segmentation datasets, SegIC achieved state-of-theart performance on one-shot segmentation benchmarks. Impressively, SegIC demonstrated competitive performance on novel tasks, providing a cost-effective training approach for universal segmentation. In this work, our primary focus is on utilizing one in-context example per entity. In the future, we plan to explore utilizing multiple in-context examples to enhance contextual information. Additionally, we aim to investigate our model's potential in instance-level segmentation, such as open-world instance segmentation. We do not anticipate any undesirable ethical or social impacts.\nsubsequent mask decoding process. This observation further demonstrates the emerging potential of pre-trained vision foundation models in the realm of segmentation tasks. More qualitative results on VOS. We further provide more qualitative results on video object segmentation tasks (VOS). Note that SegIC is never trained on video datasets and just treats VOS as in-context segmentation using the first frame as in-context examples. As shown in Figure 5, SegIC well handles challenging scenarios, including (a) occlusions, (b) interwoven objects, and (c) small objects.\nAlgorithm 1: Pseudo code for SegIC Mask Decoding.\n# Inputs: Image Embedding f ; In-context Instructions c = {a, p, v, m} # Variables: Learnable Object Queries q # Functions: Conv4ImgFeature(), Conv4ProgatedLabel(); Proj4Pos(), Proj4Vis(), Porj4Meta(); QuerySelfAttn(), Query2ImgAttn(), Image2QueryAttn(), output() 1 def InContext_Enhancement(f , a, p, v, m):\n# Project image feature and in-context propagated mask into the hidden space for mask decoding. q p , q v , q m =Proj4Pos(p),Proj4Vis(v),Proj4Meta(m) # Enhance the object query by contacting with the hidden features of in-context instructions." }, { "figure_ref": [], "heading": "5", "publication_ref": [], "table_ref": [], "text": "q ′ =Concat(q, q p , q v , q m ) 6 def Mask_Decoder(F , Q): " }, { "figure_ref": [], "heading": "", "publication_ref": [ "b34", "b70", "b15" ], "table_ref": [], "text": "COCO [35] is a widely used instance/semantic segmentation with 83K training samples and 41K samples of 80 categories. We use the 2014 split version to be consistent with the COCO-20 i one-shot semantic segmentation benchmark.\nADE20k [71] is a widely used semantic segmentation dataset for 150 categories, with 20K training images.\nLVIS [16] is a large instance segmentation dataset containing 1000+ categories with ∼100K training images." }, { "figure_ref": [], "heading": "Training Details", "publication_ref": [ "b10", "b48", "b40" ], "table_ref": [], "text": "We implement SegIC in PyTorch and use 8 V100 GPUs for most of our experiments. We use a batch size of 32 (4 per GPU) in total, 1 point for point instruction, and 12544 points per mask to calculate mask loss following [11]. We merge the COCO instances for its semantic segmentation. 
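The point-sampled mask loss mentioned above (12544 points per mask, dice plus binary cross-entropy) can be sketched as follows. This is a simplified illustration rather than the exact training code: the function name point_sampled_mask_loss is ours, points are drawn uniformly at random instead of with the importance sampling of [11,28], and the loss weights default to 1.

import torch
import torch.nn.functional as F

def point_sampled_mask_loss(mask_logits, mask_gt, num_points=12544, w_ce=1.0, w_dice=1.0):
    # mask_logits, mask_gt: (B, H, W) predicted logits and binary ground-truth masks.
    B = mask_logits.shape[0]
    # Sample K random point coordinates per mask in the normalized [-1, 1] grid space.
    coords = torch.rand(B, num_points, 1, 2, device=mask_logits.device) * 2 - 1
    logits = F.grid_sample(mask_logits[:, None], coords, align_corners=False).view(B, -1)
    target = F.grid_sample(mask_gt[:, None].float(), coords, align_corners=False).view(B, -1)

    # Binary cross-entropy term on the sampled points.
    loss_ce = F.binary_cross_entropy_with_logits(logits, target)

    # Dice term on the sampled points.
    prob = logits.sigmoid()
    inter = (prob * target).sum(dim=1)
    loss_dice = 1.0 - (2 * inter + 1) / (prob.sum(dim=1) + target.sum(dim=1) + 1)

    return w_ce * loss_ce + w_dice * loss_dice.mean()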
Please refer to our supplementary material for the detailed training pipeline. Thanks to the frozen backbone, our method is extremely memory-efficient (with less than 10G memory cost in most of our experiments). SegIC is a single unified model that is jointly trained on mixed datasets, while evaluated on various datasets, separately. We utilize DINOv2 [49] as the default vision foundation model, with a ViT-B for all ablations, and ViT-L/G for the main experiments. We employ an AdamW [41] optimizer with a weight decay of 1e-4. We set an initial learning rate as 1e-4 and multiply 0.1 at the 10 epoch during training. Each dataset is sampled uniformly with 160K samples per epoch in the main experiments and 80K samples for the ablations. We perform data augmentations on target images and reference images respectively. We use large-scale jittering augmentation for semantic segmentation datasets, and normal data augmentations, including random resizing cropping, color jittering, and random horizontal flipping, for instance segmentation datasets. We random sample 1 mask per image for semantic segmentation datasets during training, while up to 10 masks for instance segmentation. The size of a single image is cropped/padded to 896×896." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b48", "b62", "b46", "b33" ], "table_ref": [], "text": "For the main experiments, we compare SegIC with other specialist/generalist methods on several benchmarks of different segmentation tasks. We use SegIC of DINOv2 [49] of large and giant versions as the backbone for the main experiments.\nOne-shot semantic segmentation. To demonstrate the generalization capability from known categories to unknown ones, we evaluate SegIC on one-shot semantic segmentation benchmarks. Following SegGPT [63], we evaluate SegIC in two one-shot semantic segmentation settings: in-domain using COCO-20 i [47] (the training set of COCO-20 i is a subset of ours) and out-of-domain with FSS-1000 [34] (without seeing any training samples of FSS-1000). As depicted in Table 1, SegIC has achieved state-of-the-art performance in both of these settings. " }, { "figure_ref": [], "heading": "A Additional Implementation Details", "publication_ref": [ "b6", "b10", "b11", "b26" ], "table_ref": [], "text": "In this section, we provide additional details of the mask decoding and training pipeline.\nMask decoding. Inspired by recent query-based segmentation methods [7,11,12,27], we utilize a lightweight query-based mask decoder to effectively map the in-context enhanced image features and object query to an output mask. We employ a learnable object query that will be used for the decoder's output.\nThe workflow of SegIC is concisely summarized in Pytorch-style pseudocode, as presented in Algorithm 1. Initially, the image feature and in-context instructions are projected into the same feature space for mask decoding. Subsequently, the projected in-context features are leveraged to enhance the image feature and object feature. The mask decoder then executes a multi-layered decoding process, structured as a four-step procedure within each layer, including (1) self-attention for the concatenated object query; (2) cross-attention from the image feature to the object query; (3) cross-attention from the query back to the image; and (4) calculating the mask. 
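A minimal PyTorch sketch of one such decoder layer is shown below. It is a simplified stand-in rather than the actual decoder: the class name InContextDecoderLayer and the hidden size of 256 are assumptions, and layer normalization, feed-forward blocks, positional encodings, and the multi-layer iteration are omitted.

import torch
import torch.nn as nn

class InContextDecoderLayer(nn.Module):
    # One layer of the four-step decoding procedure described above.
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.query_self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img_to_query_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.query_to_img_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, img_feat, queries):
        # img_feat: (B, HW, dim) in-context-enhanced image tokens.
        # queries:  (B, N, dim) object query concatenated with the instruction tokens.
        q, _ = self.query_self_attn(queries, queries, queries)      # (1) query self-attention
        q, _ = self.img_to_query_attn(q, img_feat, img_feat)        # (2) queries read the image
        f, _ = self.query_to_img_attn(img_feat, q, q)               # (3) image reads the queries
        mask_logits = torch.einsum("bld,bd->bl", f, q[:, 0])        # (4) mask from image tokens
        return f, q, mask_logits                                    #     and the first (object) query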
The mask calculation is performed using the image feature and the first element of the concatenated object feature, which corresponds to the position of the initial object query.\nTraining pipeline. In our main experiments, we adopt a mixed training scheme using both semantic segmentation datasets (COCO and ADE20k) and instance segmentation datasets (COCO and LVIS), as presented in Algorithm 2. We do not focus intensively on adjusting the dataset ratios, instead opting for uniform dataset-level sampling. For the segmentation datasets, we employ large-scale jittering (ranging from 0.1 to 2.0) for both the target image and in-context examples. These in-context examples are constructed based on the semantic class label of the target image, sampling one class per image during training. In the case of instance segmentation datasets, in-context examples are generated by applying two separate data augmentations to the same image. The instances from these differently augmented views then serve as mutual in-context examples. Our standard data augmentation techniques for this task include random resizing cropping (ranging from 0.3 to 1.0), random color jittering (with a 0.2 probability), and random horizontal flipping (with a 0.1 probability)." }, { "figure_ref": [], "heading": "B Additional Visualization", "publication_ref": [], "table_ref": [], "text": "In this section, we provide more visualizations of the middle output and the predictions of SegIC.\nPropagated mask. As outlined in Section 3.2, the propagated mask a is derived from a weighted mean of dense correspondences according to the groundtruth mask of in-context samples. To facilitate visualization, we first apply the sigmoid function to map a into (0, 1). Subsequently, this range is transformed into RGB space using the JET color map. As depicted in Figure 4, this process demonstrates that the propagated masks predominantly concentrate on the objects referenced in the in-context examples, providing strong guidance for the" } ]
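For reference, the visualization procedure described in the last paragraph (sigmoid followed by the JET color map) can be reproduced with a few lines. The helper name visualize_propagated_mask and the use of matplotlib's jet colormap are assumptions for illustration only.

import numpy as np
import torch
from matplotlib import cm

def visualize_propagated_mask(a):
    # a: (H, W) propagated coarse mask (raw propagation scores).
    scores = torch.sigmoid(a).detach().cpu().numpy()   # map the scores into (0, 1)
    rgb = cm.jet(scores)[..., :3]                      # JET color map, drop the alpha channel
    return (rgb * 255).astype(np.uint8)                # uint8 RGB heatmap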
can be easily generalized to diverse tasks, including video object segmentation
SegIC: Unleashing the Emergent Correspondence for In-Context Segmentation
[ { "figure_caption": "Fig. 1 :1Fig. 1: Qualitative results of SegIC. SegIC segments target images (the bottom row) according to a few labeled example images (top row, linked by in the figure), termed as \"in-context segmentation\". SegIC unifies various segmentation tasks via different types of in-context samples, including those annotated with one mask per sample (one-shot segmentation), annotated with a few masks per sample (video object segmentation), and the combination of annotated samples (semantic segmentation)", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "t e x i t s h a 1 _ b a s e 6 4 = \" U 5 N g B V z I f p R X f + + z d b c O G R j e 1 V E = \" > A A A B 8 n i c b V D L S g M x F M 3 U V 6 2 v q k s 3 w S K 4 K j P i a 1 k U x G U F + 4 D p U D J p p g 3 N J E N y R y h D P 8 O N C 0 X c + j X u / B s z 7 S y 0 9 U D g c M 6 9 5 N w T J o I b c N 1 v p 7 S y u r a + U d 6 s bG 3 v 7 O 5 V 9 w / a R q W a s h Z V Q u l u S A w T X L I W c B C s m 2 h G 4 l C w T j i + z f 3 O E 9 O G K / k I k 4 Q F M R l K H n F K w E p + L y Y w o k R k d 9 N + t e b W 3 R n w M v E K U k M F m v 3 q V 2 + g a B o z C V Q Q Y 3 z P T S D I i A Z O B Z t W e q l h C a F j M m S + p Z L E z A T Z L P I U n 1 h l g C O l 7 Z O A Z + r v j Y z E x k z i 0 E 7 m E c 2 i l 4 v / e X 4 K 0 X W Q c Z m k w C S d f x S l A o P C + f 1 4 w D W j I C a W E K q 5 z Y r p i G h C w b Z U s S V 4 i y c v k / Z Z 3 b u s X z y c 1 x o 3 R R 1 l d I S O 0 S n y 0 B V q o H v U R C 1 E k U L P6 B W 9 O e C 8 O O / O x 3 y 0 5 B Q 7 h + g P n M 8 f e y e R Z g = = < / l a t e x i t > F < l a t e x i t s h a 1 _ b a s e 6 4 = \" M L q o Z 9 F g b q l p S O T X V m b Q Z 2 S V l 8 I = \" > A A A B 8 X i c b V D L S g N B E J z 1 G e M r 6 t H L Y B A 8 h V 3 x d Q x 6 8 R j B P D B Z w u y k N x k y O 7 v M 9 I p h y V 9 4 8 a C I", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "1 b L 2 w U N 7 e 2 d 3 Z L e / s N E 6 e a Q 5 3 H M t a t g B m Q Q k E d B U p o J R p Y F E h o B s O b i d 9 8 B G 1 E r O 5 x l I A f s b 4 S o e A M r f T Q Q X j C I M z C c b d U d i v u F H S R e D k p k x y 1 b u m r 0 4 t 5 G o F C L p k x b c 9 N 0 M + Y R s E l j I u d 1 E D C + J D 1 o W 2 p Y h E Y P 5 t e P K b H V u n R M N a 2 F N K p + n s i Y 5 E x o y i w n R H D g Z n 3 J u J / X j v F 8 M r P h E p S B M V n i 8 J U U o z p 5 H 3 a E x o 4 y p E l j G t h b 6 V 8 w D T j a E M q 2 h C 8 + Z c X S e O 0 4 l 1 U z u / O y t X r P I 4 C O S R H 5 I R 4 5 J J U y S 2 p k T r h R J F n 8 k r e H O O 8 O O / O x 6 x 1 y c l n D s g f O J 8 / D s K R L w = = < / l a t e x i t > f < l a t e x i t s h a 1 _ b a s e 6 4 = \" M u j k w b 4 2 F Q G G A K O P 6 q H J 8 L 0 8 R z M = \" > A A A B 8 3 i c b V D L S s N A F L 2 p r 1 p f V Z d u B o v g q i T i a 1 l 0 4 7 K C f U A T y 2 Q 6 a Y d O J m E e Q g n 9 D T c u F H H r z 7 j z b 5 y 0 W W j r g Y H D O f d y z 5 w w 5 U x p 1 / 1 2 S i u r a + s b 5 c 3 K 1 v b O 7 l 5 1 / 6 C t E i M J b Z G E J 7 I b Y k U 5 E 7 S", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "n s j w 7 F7S k z i 0 k 3 l G t e j l 4 n 9 e z + j o O s i Y S I 2 m g s w P R Y Y j n a C 8 A D R g k h L N J 5 Z g I p n N i s g I S 0 y 0 r a l i S / A W v 7 x M 2 m d 1 7 7 J + c X 9 e a 9 w U d Z T h C I 7 h F D y 4 g g b c Q R N a Q C C F Z 3 i F N 8 c 4 L 8 6 7 8 z E f L T n F z i H 8 g f P 5 A 3 L 3 k f g = < / 
l a t e x i t > f r < l a t e x i t s h a 1 _ b a s e 6 4 = \" F C n c 1 l Y + d z a t 0 l B s 7 O m S s l 7 m", "figure_data": "", "figure_id": "fig_3", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "P z P 6 6 Q m u g 0 y J p P U U E n m i 6 K U I x O j 6 e e o z x Q l h o 8 t w U Q x e y s i Q 6 w w M T a f o g 3 B W 3 x 5 m T Q v q t 5 1 9 e r h s l y 7 y + M o w D G c Q A U 8 u I E a 1 K E B P h B g 8 A y v 8 O Z I 5 8 V 5 d z 7 m r S t O P n M E f + B 8 / g B 4 F Y 3 S < / l a t e x i t > (HW, ) < l a t e x i t s h a 1 _ b a s e 6 4 = \" c X C Z 3 N H 5 w 6 9 + F k o 9 R c P k H 3 m R J F 0 = \" > A A A B 8 n i c b V D L S g M x F M 3 U V 6 2 v q k s 3 w S K 4 K j P i a 1 n s x m U F + 4 D p U D J p p g 3 N J E N y R y h D P 8 O N C 0 X c + j X u / B s z 7 S y 0 9 U D g c M 6 9 5 N w T J o I b c N 1 v p 7 S 2 v r G 5 V d 6 u 7 O z u 7 R 9 U D 4 8 6 R q W a s j Z V Q u l e S A w T X L I 2 c B C s l 2 h G 4 l C w b j h p 5 n 7 3 i W n D l X y E a c K C m I w k j z g l Y C W / H x M Y U y K y 5 m x Q r b l 1 d w 6 8 S r y C 1", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "B Z P X i b N s 6 p 3 W b 2 4 P 6 / U b v I 6 i n A E x 3 A K H l x B D e 6 g D g 1 g I O E Z X u H N 0 c 6 L 8 + 5 8 z E c L T r 5 z C H / g f P 4 A 3 Y a R D w = = < / l a t e x i t > a < l a t e x i t s h a 1 _ b a s e 6 4 = \" o F c b H", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "e L M S 6 8 V 6 t z 7 m o x W r 3 D m E P 7 A + f w C P / J I L < / l a t e x i t > y r < l a t e x i t s h a 1 _ b a s e 6 4 = \" p q P 4 s v + w g j 8 j U 1 I C d V R I l g B I s h w = \" > A A A B 8 X i c b V D L S g M x F L 1 T X 7 W + q i 7 d B I v g q s y I r 2 X R j c s K 9 o F t K Z n 0 T h u a y Q x J p l C G / o U b F 4 q 4 9 W / c + T d m 2 l l o 9 U D g c M 6 9 5 N z j x 4 J r 4 7 p f T m F l d W 1 9 o 7 h Z 2 t r e 2 d 0 r 7 x 8 0 d Z Q o h g 0 W i U i 1 f a p R c I k N w 4 3 A d q y Q h r 7 A l j + + z f z W B J X m k X w w 0 x h 7 I R 1 K H n B G j Z U e u y E 1 I z 9 I J 7 N + u e J W 3 T n I X + L l p A I 5 6 v 3 y Z 3 c Q s S R E a Z i g W n c 8 N z a 9 l C r D m c B Z q Z t o j C k b 0 y F 2 L J U 0 R N 1 L 5 4 l n 5 M Q q A x J E y j 5 p y F z 9 u Z H S U O t p 6 N v J L K F e 9 j L x P 6 + T m O C 6 l 3 I Z J w Y l W 3 w U J I K Y i G T n k w F X y I y Y W k K Z 4 j Y r Y S O q K D O 2 p J I t w V s + + S 9 p n l W 9 y + r F / X m l d p P X U Y Q j O I Z T 8 O A K a n A H d W g A A w l P 8 A K v j n a e n T f n f T F a c P K d Q / g F 5 + M b / W + R J A = = < / l a t e x i t > v < l a t e x i t s h a 1 _ b a s e 6 4 = \" S V W f / N T B 4 M H i + b n V E 0 e T R P 1 m H r c = \" > A A A B 8 X i c b V D L S g M x F L 1 T X 7 W + q i 7 d B I v g q s y I r 2 X R j c s K 9 o H t U D J p p g 1 N M k O S E c o w f + H G h S J u / R t 3 / o 2 Z d h b a e i B w O O d e c u 4 J Y s 6 0 c d 1 v p 7 S y u r a + U d 6 s b G 3 v 7 O 5 V 9 w / a O k o U o S 0 S 8 U h 1 A 6 w p Z 5 K 2 D D O c d m", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "r T I w t q W J L 8 B Z P X i b t s 7 p 3 W b + 4 P 6 8 1 b o o 6 y n A E x 3 A K H l x B A + 6 g C S 0 g I O E Z X u H N 0 c 6 L 8 + 5 8 z E d L T r F z C H / g f P 4 A 7 8 K R G w = = < / l a t e x i t > m < l a t e x i t s h a 1 _ b a s e 6 4 = \" M p + f y y 1 n s K B b B T n r E Z o u W n T 6 Y h E = \" > A A A B 8 n i c b V D L S g M x F M 3 U V 6 2 v q k s 3 w S K 4 K j P i 
a 1 n U h c s K 9 g H T o W T S T B u a S Y b k j l C G f o Y b F 4 q 4 9 W v c + T d m 2 l l o 6 4 H A 4 Z x 7 y b k n T A Q 3 4 L r f T m l l d W 1 9 o 7 x Z 2 d r e 2 d 2 r 7 h + 0", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "9 e J u 2 z u n d Z v 3 g 4 r z V u i j r K 6 A g d o 1 P k o S v U Q P e o i V q I I o W e 0 S t 6 c 8 B 5 c d 6 d j / l o y S l 2 D t E f O J 8 / e B 2 R Z A = = < / l a t e x i t > D < l a t e x i t s h a 1 _ b a s e 6 4 = \" M L q o Z 9 F g b q l p S O T X V m b Q Z 2 S V l 8 I = \" > A A A B 8 X i c b V D L S g N B E J z 1 G e M r 6 t H L Y B A 8 h V 3 x d Q x 6 8 R j B P D B Z w u y k N x k y O 7 v M 9 I p h y V 9 4 8 a C I", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "1 b L 2 w U N 7 e 2 d 3 Z L e / s N E 6 e a Q 5 3 H M t a t g B m Q Q k E d B U p o J R p Y F E h o B s O b i d 9 8 B G 1 E r O 5 x l I A f s b 4 S o e A M r f T Q Q X j C I M z C c b d U d i v u F H S R e D k p k x y 1 b u m r 0 4 t 5 G o F C L p k x b c 9 N 0 M + Y R s E l j I u d 1 E D C + J D 1 o W 2 p Y h E Y P 5 t e P K b H V u n R M N a 2 F N K p + n s i Y 5 E x o y i w n R H D g Z n 3 J u J / X j v F 8 M r P h E p S B M V n i 8 J U U o z p 5 H 3 a E x o 4 y p E l j G t h b 6 V 8 w D T j a E M q 2 h C 8 + Z c X S e O 0 4 l 1 U z u / O y t X r P I 4 C O S R H 5 I R 4 5 J J U y S 2 p k T r h R J F n 8 k r e H O O 8 O O / O x 6 x 1 y c l n D s g f O J 8 / D s K R L w = = < / l a t e x i t > f < l a t e x i t s h a 1 _ b a s e 6 4 = \" F C n c 1 l Y + d z a t 0 l B s 7 O m S s l 7 m n R s = \" > A A A B 7 H i c b V B N S w M x E J 3 1 s 9 a v q k c v w S J U k L I r f h 2 L X n q s 4 L a F d i n Z N N u G J t k l y Q p l 6 W / w 4 k E R r / 4 g b / 4 b 0 3 Y P 2 v p g 4 P H e D D P z w o Q zb V z 3 2 1 l Z X V v f 2 C x s F b d 3 d v f 2 S w e H T R 2 n i l C f x D x W 7 R B r y p m k v m G G 0 3 a i K B Y h p 6 1 w d D / 1 W 0 9 U a R b L R z N O a C D w Q L K I E W y s 5 F f q r f O z X q n s V t 0 Z 0 D L x c l K G H I 1 e 6 a v b j 0 k q q D S E Y 6 0 7 n p u Y I M P K M M L p p N h N N U 0 w G e E B 7 V g q s a A 6 y G b H T t C p V f o o i p U t a d B M / T 2 R Y a H 1 W I S 2 U 2 Az 1 I v e V P z P 6 6 Q m u g 0 y J p P U U E n m i 6 K U I x O j 6 e e o z x Q l h o 8 t w U Q x e y s i Q 6 w w M T a f o g 3 B W 3 x 5 m T Q v q t 5 1 9 e r h s l y 7 y + M o w D G c Q A U 8 u I E a 1 K E B P h B g 8 A y v 8 O Z I 5 8 V 5 d z 7 m r S t O P n M E f + B 8 / g B 4 F Y 3 S < / l a t e x i t > (HW, ) < l a t e x i t s h a 1 _ b a s e 6 4 = \" 8 U Z e A Q b 6 H C t 8 N 4 m M u / Q t z J U B 3 n E = \" > A A A B 8 X i c b V D L S g M x F L 1 T X 7 W + q i 7 d B I v g q s y I r 2 X R j c s K 9 o H t U D J p p g 3 N J E O S E c o w f + H G h S J u / R t 3 / o 2 Z d h b a e i B w O O d e c u 4 J Y s 6 0 c d 1 v p 7 S y u r a + U d 6 s b G 3 v 7 O 5 V 9 w / a W i a K 0 B a R X K p u g D X l T N C W Y Y b T b q w o j g J O O 8 H k N v c 7 T 1 R p J s W D m c b U j / B I s J A R b K z 0 2 I + w G Q d h K r N B t e b W 3 R n Q M v E K U o M C z U H 1 q z + U J I m o M I R j r X u e G x s / x c o w w m l W 6 S e a x p h M 8 I j 2 L B U 4 o t p P Z 4 k z d G K V I Q q l s k 8 Y N F N / b 6 Q 4 0 n o a B X Y y T 6 g X v V z 8 z + s l J r z 2 U y b i x F B B 5 h + F C U d G o v x 8 N G S K E s O n l m C i m M 2 K y B g r T I w t q W J L 8 B Z P X i b t s 7 p 3 W b + 4 P 6 8 1 b o o 6 y n A E x 3 A K H l x B A + 6 g C S 0 g I O A Z X u H N 0 c 6 L 8 + 5 8 z E d L T r F z C H / g 
f P 4 A 8 s y R H Q = = < / l a t e x i t > o < l a t e x i t s h a 1 _ b a s e 6 4 = \" 0 Q a E m N C s + q e b 0 1 m B m D q e 3 T i O O Z w = \" > A A A B 8 X i c b V D L S g M x F L 1 T X 7 W + q i 7 d B I v g q s y I r 2 X R j e 4 q 2 A e 2 p W T S T B u a y Q z J H a E M / Q s 3 L h R x 6 9 + 4 8 2 / M t L P Q 6 o H A 4 Z x 7 y b n H j 6 U w 6 L p f T m F p e W V 1 r b h e 2 t j c 2 t 4 p 7 + 4 1 T Z R o x h s s k p F u + 9 R w K R R v o E D J 2 7 H m N P Q l b / n j 6 8 x v P X J t R K T u c R L z X k i H S g S C U b T S Q z e k O P K D 9 H b a L 1 f c q j s D + U u 8 n F Q g R 7 1 f / u w O I p a E X C G T 1 J i O 5 8 b Y S 6 l G w S S f l r q J 4 T F l Y z r k H U s V D b n p p b P E U 3 J k l Q E J I m 2 f Q j J T f 2 6 k N D R m E v p 2 M k t o F r 1 M / M / r J B h c 9 l K h 4 g S 5 Y v O P g k Q S j E h 2 P h k I z R n K i S W U a W G z E j a i m j K 0 J Z V s C d 7 i y X 9 J 8 6 T q n V f P 7 k 4 r t a u 8 j i I c w C E c g w c X U I M b q E M D G C h 4 g h d 4 d Y z z 7 L w 5 7 / P R g p P v 7 M M v O B / f u Q 6 Q 9 w = = < / l a t e x i t > I < l a t e x i t s h a 1 _ b a s e 6 4 = \" j G n / H B R A Y g 1 z D O K Q Z t 8 i d K f D R + I = \" > A A A B 8 3 i c b V D L S s N A F L 2 p r 1 p f V Z d u B o v g q i T i a 1 l 0 o 7 s K 9 g F N L J P p p B 0 6 m Y R 5 C C X 0 N 9 y 4 U M S t P + P O v 3 H S Z q H V A w O H c + 7 l n j l h y p n S r v v l l J a W V 1 b X y u u V j c 2 t 7 Z 3 q 7 l 5 b J U Y S 2 i I J T 2 Q 3 x I p y J m h L M 8 1 p N 5 U U x y G n n X B 8 n f u d R y o V S 8 S 9 n q Q 0", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "$Fig. 2 :2Fig. 2: Architecture overview. SegIC is built upon a frozen vision foundation model, consisting of four stages: (1) feature extraction; (2) correspondence discovery (Section § 3.1); (3) in-context instruction extraction (Section § 3.2); (4) mask decoding (Section § 3.3).", "figure_data": "", "figure_id": "fig_10", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Context reversion. We artificially introduce noisy context during training to improve the robustness. To simulate situations where in-context examples are inaccurate, we propose \"context reversion\": swapping the target and reference images-we use the prediction of the target image as an in-context example. The noisy context introduces randomness during training and hence can improve robustness.", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig.3: Performance on zero-shot semantic correspondence and in-context segmentation. We use the dense feature similarity for correspondence estimation. The diameter of each bubble represents the number of parameters of each model.", "figure_data": "", "figure_id": "fig_12", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig.4: Visualization of propagated masks. We propagate labels from the incontext examples to the targets to obtain propagated masks by exploring the dense correspondences. 
We employ DINO-v2-large [49] for the visualization.", "figure_data": "", "figure_id": "fig_13", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "f′, a′ = Conv4ImgFeature(f), Conv4ProgatedLabel(a)  # Enhance image feature with in-context propagated mask.", "figure_data": "", "figure_id": "fig_14", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "f′ = f′ + a′  # Project other in-context instructions into the hidden space for mask decoding.", "figure_data": "", "figure_id": "fig_15", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Pseudo code for mask decoding and the training pipeline (QuerySelfAttn, Img2QueryAttn, Query2ImgAttn, InContext_Enhancement, Mask_Decoder; training set: mixed dataset D = D_inst ∪ D_sem).", "figure_data": "", "figure_id": "fig_17", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 5: Qualitative results on VOS. SegIC performs well on challenging scenarios in video object segmentation, including (a) occlusions, (b) interwoven objects, and (c) small objects.", "figure_data": "", "figure_id": "fig_18", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "We introduce SegIC, a simple yet effective in-context segmentation framework, exploring the strong emergent correspondence encoded in VFMs. We design geometric, visual, and meta instructions that explicitly transfer knowledge from in-context samples to the target to facilitate in-context segmentation without tuning the parameters of vision foundation models. SegIC demonstrates state-of-the-art performance on COCO-20i, FSS-1000 and the recent LVIS-92i.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Main results on several segmentation benchmarks. Param_t indicates the number of trainable parameters. † indicates reliance on SAM; * indicates that additional fine-tuning is needed; # indicates that classes within images are known during inference; n/a indicates that the model cannot perform the task, and - indicates that no number is reported.
For FSS-1000 and VOS datasets, we highlight the performance trained on corresponding datasets in gray and zero-shot results in black.", "figure_data": "one-shot segmentationvideo object segmentation semantic seg open-vocab segMethodParam lCOCO-20 i FSS-1000 LVIS-92 i DAVIS-17YVOS-18COCO ADE20k PC-459 A-847mean mIoU mIoU mean mIoUJ &FGmIoU mIoUmIoU mIoUfew-shot seg specialistHSNet (RN50) [45]28M41.786.517.4n/an/an/an/an/an/aVAT (RN50) [20]52M42.990.318.5n/an/an/an/an/an/aFPTrans (B) [69]101M56.5--n/an/an/an/an/an/aVOS specialistAGAME (RN101) [25]-n/an/an/a70.066.0n/an/an/an/aSWEM (RN50) [36]58Mn/an/an/a84.382.8n/an/an/an/aXMem (RN50) [13]62Mn/an/an/a87.786.1n/an/an/an/asemantic seg specialistMaskFormer (L) [12]212Mn/an/an/an/an/a64.854.1n/an/aMask2Former (L) [11]216Mn/an/an/an/an/a67.456.1n/an/aMaskDINO (L) [31]225Mn/an/an/an/an/a-56.6n/an/asegmentation generalistOneFormer (L) [23]237Mn/an/an/an/an/a67.457.7--UNINEXT (L) [67]340Mn/an/an/a77.278.1----X-decoder (L) [73]341Mn/an/an/an/an/a67.558.129.69.2SEEM (L) [74]341M---58.950.067.6---in-context generalistPainter (L) [62]354M33.161.710.534.624.1-49.9--SegGPT (L) [63]354M56.185.618.675.674.7-*39.6--PerSAM † (H) [70]023.071.211.560.3-n/an/an/an/aPerSAM-F † (H) [27]223.575.612.371.9-n/an/an/an/aMatcher † (H+G) [38]052.787.033.079.5-n/an/an/an/aSegIC (L)5M76.186.844.671.462.7#72.9 #55.5 #33.5 #18.9SegIC (G)5M74.588.447.873.765.4#74.0 #59.0 #34.9 #20.1", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparisons on one-shot COCO-20 i . ‡ indicates that the model is jointly trained on the COCO-excluded datasets.", "figure_data": "MethodF0F1F2F3meanHSNet [45]37.2 44.1 42.4 41.341.2VAT [20]39.0 43.8 42.6 39.741.3FPTrans [69] 44.4 48.9 50.6 44.047.0MSANet [22] 47.8 57.4 48.7 50.551.1SegIC55.8 54.7 52.4 51.453.6SegIC ‡62.3 62.5 63.3 60.9 62.3", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Component ablation on incontext instructions. ■ is the default setting for in-context instructions.", "figure_data": "geometric visual meta COCO-20 i FSS-1000 DAVIS-17✓68.887.365.8✓71.085.167.6✓73.763.721.648.958.121.1✓✓✓74.687.468.4", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablations the training strategies. ■ is the default training scheme in our main experiments.", "figure_data": "reversion negative COCO-20 i FSS-1000 DAVIS-1770.086.166.3✓72.986.965.0✓73.186.564.3✓✓74.687.468.4", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablations on training data. ■ is the default data combination.Dataset Combination. We further investigate the effectiveness of each dataset under our joint in-context training framework. We group the training data into two types: semantic segmentation (COCO sem , ADE20k) and instance segmentation (COCO inst , LVIS). As shown in Table6, it is clear to see that when trained on COCO sem only, it achieves the best performance on COCO-20 i but is weak on the other. Based on this, we can enrich the training data from two directions: (1) use extra semantic segmentation data and (2) introduce instance segmentation tasks into training. For the former direction, the performance on FSS-1000 increases after introducing ADE20k. For the latter, we still use COCO as the training set but introduce its instance-level annotation. It can be seen a clear performance boost on DAVIS-17 J &F score (45.1→70.1), since video object segmentation requires instance-level understanding. 
Finally, we further enrich our training data with LVIS[16], achieving the best overall performance.", "figure_data": "semantic segmentation instance segmentation COCO-20 i FSS-1000 DAVIS-17 AVG COCOsem ADE20k COCOinst LVIS✓76.380.945.167.4✓✓75.682.340.166.0✓✓75.783.570.176.4✓✓✓✓74.687.468.476.8", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
Lingchen Meng; Shiyi Lan; Hengduo Li; Jose M Alvarez; Zuxuan Wu; Yu-Gang Jiang
[ { "authors": "Y Bai; X Geng; K Mangalam; A Bar; A Yuille; T Darrell; J Malik; A A Efros", "journal": "", "ref_id": "b0", "title": "Sequential modeling enables scalable learning for large vision models", "year": "2023" }, { "authors": "I Balažević; D Steiner; N Parthasarathy; R Arandjelović; O J Hénaff", "journal": "", "ref_id": "b1", "title": "Towards in-context scene understanding", "year": "2023" }, { "authors": "H Bao; L Dong; S Piao; F Wei", "journal": "ICLR", "ref_id": "b2", "title": "Beit: Bert pre-training of image transformers", "year": "2021" }, { "authors": "A Bar; Y Gandelsman; T Darrell; A Globerson; A Efros", "journal": "NeurIPS", "ref_id": "b3", "title": "Visual prompting via image inpainting", "year": "2022" }, { "authors": "H Bay; T Tuytelaars; L Van Gool", "journal": "", "ref_id": "b4", "title": "Surf: Speeded up robust features", "year": "2006" }, { "authors": "T Brown; B Mann; N Ryder; M Subbiah; J D Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell", "journal": "NeurIPS", "ref_id": "b5", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "N Carion; F Massa; G Synnaeve; N Usunier; A Kirillov; S Zagoruyko", "journal": "", "ref_id": "b6", "title": "Endto-end object detection with transformers", "year": "2020" }, { "authors": "M Caron; H Touvron; I Misra; H Jégou; J Mairal; P Bojanowski; A Joulin", "journal": "", "ref_id": "b7", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "L C Chen; G Papandreou; I Kokkinos; K Murphy; A L Yuille", "journal": "TPAMI", "ref_id": "b8", "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "year": "2017" }, { "authors": "T Chen; S Kornblith; M Norouzi; G Hinton", "journal": "", "ref_id": "b9", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "B Cheng; I Misra; A G Schwing; A Kirillov; R Girdhar", "journal": "", "ref_id": "b10", "title": "Masked-attention mask transformer for universal image segmentation", "year": "2022" }, { "authors": "B Cheng; A Schwing; A Kirillov", "journal": "NeurIPS", "ref_id": "b11", "title": "Per-pixel classification is not all you need for semantic segmentation", "year": "2021" }, { "authors": "H K Cheng; A G Schwing", "journal": "", "ref_id": "b12", "title": "Xmem: Long-term video object segmentation with an atkinson-shiffrin memory model", "year": "2022" }, { "authors": "S Cho; S Hong; S Kim", "journal": "TPAMI", "ref_id": "b13", "title": "Cats++: Boosting cost aggregation with convolutions and transformers", "year": "2022" }, { "authors": "X Gu; Y Cui; J Huang; A Rashwan; X Yang; X Zhou; G Ghiasi; W Kuo; H Chen; L C Chen", "journal": "", "ref_id": "b14", "title": "Dataseg: Taming a universal multi-dataset multi-task segmentation model", "year": "2023" }, { "authors": "A Gupta; P Dollar; R Girshick", "journal": "", "ref_id": "b15", "title": "Lvis: A dataset for large vocabulary instance segmentation", "year": "2019" }, { "authors": "K He; X Chen; S Xie; Y Li; P Dollár; R Girshick", "journal": "", "ref_id": "b16", "title": "Masked autoencoders are scalable vision learners", "year": "2022" }, { "authors": "K He; H Fan; Y Wu; S Xie; R Girshick", "journal": "", "ref_id": "b17", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "K He; G Gkioxari; P Dollár; R Girshick", "journal": "", "ref_id": "b18", "title": "Mask r-cnn", "year": 
"2017" }, { "authors": "S Hong; S Cho; J Nam; S Lin; S Kim", "journal": "", "ref_id": "b19", "title": "Cost aggregation with 4d convolutional swin transformer for few-shot segmentation", "year": "2022" }, { "authors": "G Ilharco; M Wortsman; R Wightman; C Gordon; N Carlini; R Taori; A Dave; V Shankar; H Namkoong; J Miller; H Hajishirzi; A Farhadi; L Schmidt", "journal": "", "ref_id": "b20", "title": "Openclip", "year": "2021" }, { "authors": "E Iqbal; S Safarov; S Bang", "journal": "", "ref_id": "b21", "title": "Msanet: Multi-similarity and attention guidance for boosting few-shot segmentation", "year": "2022" }, { "authors": "J Jain; J Li; M T Chiu; A Hassani; N Orlov; H Shi", "journal": "", "ref_id": "b22", "title": "Oneformer: One transformer to rule universal image segmentation", "year": "2023" }, { "authors": "C Jia; Y Yang; Y Xia; Y T Chen; Z Parekh; H Pham; Q Le; Y H Sung; Z Li; T Duerig", "journal": "", "ref_id": "b23", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "J Johnander; M Danelljan; E Brissman; F S Khan; M Felsberg", "journal": "", "ref_id": "b24", "title": "A generative appearance model for end-to-end video object segmentation", "year": "2019" }, { "authors": "S Kim; J Min; M Cho", "journal": "", "ref_id": "b25", "title": "Transformatcher: Match-to-match attention for semantic correspondence", "year": "2022" }, { "authors": "A Kirillov; E Mintun; N Ravi; H Mao; C Rolland; L Gustafson; T Xiao; S Whitehead; A C Berg; W Y Lo", "journal": "", "ref_id": "b26", "title": "Segment anything", "year": "2023" }, { "authors": "A Kirillov; Y Wu; K He; R Girshick", "journal": "", "ref_id": "b27", "title": "Pointrend: Image segmentation as rendering", "year": "2020" }, { "authors": "J Kossen; T Rainforth; Y Gal", "journal": "", "ref_id": "b28", "title": "In-context learning in large language models learns label relationships but is not conventional learning", "year": "2023" }, { "authors": "J Y Lee; J Degol; V Fragoso; S N Sinha", "journal": "", "ref_id": "b29", "title": "Patchmatch-based neighborhood consensus for semantic correspondence", "year": "2021" }, { "authors": "F Li; H Zhang; H Xu; S Liu; L Zhang; L M Ni; H Y Shum", "journal": "", "ref_id": "b30", "title": "Mask dino: Towards a unified transformer-based framework for object detection and segmentation", "year": "2023" }, { "authors": "H Li; J Zhu; X Jiang; X Zhu; H Li; C Yuan; X Wang; Y Qiao; X Wang; W Wang", "journal": "", "ref_id": "b31", "title": "Uni-perceiver v2: A generalist model for large-scale vision and vision-language tasks", "year": "2023" }, { "authors": "J Li; R Selvaraju; A Gotmare; S Joty; C Xiong; S C H Hoi", "journal": "", "ref_id": "b32", "title": "Align before fuse: Vision and language representation learning with momentum distillation", "year": "2021" }, { "authors": "X Li; T Wei; Y P Chen; Y W Tai; C K Tang", "journal": "", "ref_id": "b33", "title": "Fss-1000: A 1000-class dataset for few-shot segmentation", "year": "2020" }, { "authors": "T Y Lin; M Maire; S Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "", "ref_id": "b34", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Z Lin; T Yang; M Li; Z Wang; C Yuan; W Jiang; W Liu", "journal": "", "ref_id": "b35", "title": "Swem: Towards real-time video object segmentation with sequential weighted expectationmaximization", "year": "2022" }, { "authors": "S Liu; Z Zeng; T Ren; F Li; H Zhang; J Yang; C Li; J Yang; 
H Su; J Zhu", "journal": "", "ref_id": "b36", "title": "Grounding dino: Marrying dino with grounded pre-training for open-set object detection", "year": "2023" }, { "authors": "Y Liu; M Zhu; H Li; H Chen; X Wang; C Shen", "journal": "", "ref_id": "b37", "title": "Matcher: Segment anything with one shot using all-purpose feature matching", "year": "2023" }, { "authors": "Z Liu; H Mao; C Y Wu; C Feichtenhofer; T Darrell; S Xie", "journal": "", "ref_id": "b38", "title": "A convnet for the 2020s", "year": "2022" }, { "authors": "J Long; E Shelhamer; T Darrell", "journal": "", "ref_id": "b39", "title": "Fully convolutional networks for semantic segmentation", "year": "2015" }, { "authors": "I Loshchilov; F Hutter", "journal": "", "ref_id": "b40", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "D G Lowe", "journal": "IJCV", "ref_id": "b41", "title": "Distinctive image features from scale-invariant keypoints", "year": "2004" }, { "authors": "L Meng; X Dai; Y Chen; P Zhang; D Chen; M Liu; J Wang; Z Wu; L Yuan; Y G Jiang", "journal": "", "ref_id": "b42", "title": "Detection hub: Unifying object detection datasets via query adaptation on language embedding", "year": "2023" }, { "authors": "L Meng; X Dai; J Yang; D Chen; Y Chen; M Liu; Y L Chen; Z Wu; L Yuan; Y G Jiang", "journal": "", "ref_id": "b43", "title": "Learning from rich semantics and coarse locations for long-tailed object detection", "year": "2023" }, { "authors": "J Min; D Kang; M Cho", "journal": "", "ref_id": "b44", "title": "Hypercorrelation squeeze for few-shot segmentation", "year": "2021" }, { "authors": "J Min; J Lee; J Ponce; M Cho", "journal": "", "ref_id": "b45", "title": "Spair-71k: A large-scale benchmark for semantic correspondence", "year": "2019" }, { "authors": "K Nguyen; S Todorovic", "journal": "", "ref_id": "b46", "title": "Feature weighting and boosting for few-shot segmentation", "year": "2019" }, { "authors": "Q H Nguyen; T T Vu; A T Tran; K Nguyen", "journal": "NeurIPS", "ref_id": "b47", "title": "Dataset diffusion: Diffusion-based synthetic data generation for pixel-level semantic segmentation", "year": "2023" }, { "authors": "M Oquab; T Darcet; T Moutakanni; H Vo; M Szafraniec; V Khalidov; P Fernandez; D Haziza; F Massa; A El-Nouby", "journal": "", "ref_id": "b48", "title": "Dinov2: Learning robust visual features without supervision", "year": "2023" }, { "authors": "L Ouyang; J Wu; X Jiang; D Almeida; C Wainwright; P Mishkin; C Zhang; S Agarwal; K Slama; A Ray", "journal": "NeurIPS", "ref_id": "b49", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "J Pont-Tuset; F Perazzi; S Caelles; P Arbeláez; A Sorkine-Hornung; L Van Gool", "journal": "", "ref_id": "b50", "title": "The 2017 davis challenge on video object segmentation", "year": "2017" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "", "ref_id": "b51", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "A Radford; K Narasimhan; T Salimans; I Sutskever", "journal": "OpenAI blog", "ref_id": "b52", "title": "Improving language understanding by generative pre-training", "year": "2018" }, { "authors": "A Radford; J Wu; R Child; D Luan; D Amodei; I Sutskever", "journal": "OpenAI blog", "ref_id": "b53", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "S Ren; K He; R Girshick; J Sun", 
"journal": "NeurIPS", "ref_id": "b54", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "R Rombach; A Blattmann; D Lorenz; P Esser; B Ommer", "journal": "", "ref_id": "b55", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "C H Sudre; W Li; T Vercauteren; S Ourselin; M Jorge Cardoso", "journal": "", "ref_id": "b56", "title": "Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations", "year": "2017-09-14" }, { "authors": "M Tancik; P Srinivasan; B Mildenhall; S Fridovich-Keil; N Raghavan; U Singhal; R Ramamoorthi; J Barron; R Ng", "journal": "NeurIPS", "ref_id": "b57", "title": "Fourier features let networks learn high frequency functions in low dimensional domains", "year": "2020" }, { "authors": "L Tang; M Jia; Q Wang; C P Phoo; B Hariharan", "journal": "", "ref_id": "b58", "title": "Emergent correspondence from image diffusion", "year": "2023" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "NeurIPS", "ref_id": "b59", "title": "Attention is all you need", "year": "2017" }, { "authors": "X Wang; A Jabri; A A Efros", "journal": "", "ref_id": "b60", "title": "Learning correspondence from the cycleconsistency of time", "year": "2019" }, { "authors": "X Wang; W Wang; Y Cao; C Shen; T Huang", "journal": "", "ref_id": "b61", "title": "Images speak in images: A generalist painter for in-context visual learning", "year": "2023" }, { "authors": "X Wang; X Zhang; Y Cao; W Wang; C Shen; T Huang", "journal": "", "ref_id": "b62", "title": "Seggpt: Towards segmenting everything in context", "year": "2023" }, { "authors": "T Xiao; Y Liu; B Zhou; Y Jiang; J Sun", "journal": "", "ref_id": "b63", "title": "Unified perceptual parsing for scene understanding", "year": "2018" }, { "authors": "S M Xie; A Raghunathan; P Liang; T Ma", "journal": "ICLR", "ref_id": "b64", "title": "An explanation of in-context learning as implicit bayesian inference", "year": "2021" }, { "authors": "N Xu; L Yang; Y Fan; D Yue; Y Liang; J Yang; T Huang", "journal": "", "ref_id": "b65", "title": "Youtube-vos: A large-scale video object segmentation benchmark", "year": "2018" }, { "authors": "B Yan; Y Jiang; J Wu; D Wang; P Luo; Z Yuan; H Lu", "journal": "", "ref_id": "b66", "title": "Universal instance perception as object discovery and retrieval", "year": "2023" }, { "authors": "J Yu; Z Wang; V Vasudevan; L Yeung; M Seyedhosseini; Y Wu", "journal": "", "ref_id": "b67", "title": "Coca: Contrastive captioners are image-text foundation models", "year": "2022" }, { "authors": "J W Zhang; Y Sun; Y Yang; W Chen", "journal": "", "ref_id": "b68", "title": "Feature-proxy transformer for few-shot segmentation", "year": "2022" }, { "authors": "R Zhang; Z Jiang; Z Guo; S Yan; J Pan; H Dong; P Gao; H Li", "journal": "", "ref_id": "b69", "title": "Personalize segment anything model with one shot", "year": "2023" }, { "authors": "B Zhou; H Zhao; X Puig; T Xiao; S Fidler; A Barriuso; A Torralba", "journal": "IJCV", "ref_id": "b70", "title": "Semantic understanding of scenes through the ade20k dataset", "year": "2019" }, { "authors": "X Zhou; V Koltun; P Krähenbühl", "journal": "", "ref_id": "b71", "title": "Simple multi-dataset detection", "year": "2022" }, { "authors": "X Zou; Z Y Dou; J Yang; Z Gan; L Li; C Li; X Dai; H Behl; J Wang; L Yuan", "journal": "", "ref_id": "b72", "title": "Generalized decoding for pixel, 
image, and language", "year": "2023" }, { "authors": "X Zou; J Yang; H Zhang; F Li; L Li; J Gao; Y J Lee", "journal": "NeurIPS", "ref_id": "b73", "title": "Segment everything everywhere all at once", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 140.99, 235.32, 69, 8.81 ], "formula_id": "formula_0", "formula_text": "-We introduce" }, { "formula_coordinates": [ 5, 254.42, 150.91, 7.91, 19.08 ], "formula_id": "formula_1", "formula_text": "V / / G m 3 / j J N m D J h Y 0 F F X d d H c F i R Q G X f f b W V p e W V" }, { "formula_coordinates": [ 5, 241.98, 172.08, 2.43, 6.92 ], "formula_id": "formula_2", "formula_text": "4 p v F E 0 x G e M h 7 V k q c E x V k M 0 y T 9 G J V Q Y o S q R 9 Q q O Z +" }, { "formula_coordinates": [ 5, 260.81, 227.41, 2.59, 4.85 ], "formula_id": "formula_3", "formula_text": "n R s = \" > A A A B 7 H i c b V B N S w M x E J 3 1 s 9 a v q k c v w S J U k L I r f h 2 L X n q s 4 L a F d i n Z N N u G J t k l y Q p l 6 W / w 4 k E R r / 4 g b / 4 b 0 3 Y P 2 v p g 4 P H e D D P z w o Q z b V z 3 2 1 l Z X V v f 2 C x s F b d 3 d v f 2 S w e H T R 2 n i l C f x D x W 7 R B r y p m k v m G G 0 3 a i K B Y h p 6 1 w d D / 1 W 0 9 U a R b L R z N O a C D w Q L K I E W y s 5 F f q r f O z X q n s V t 0 Z 0 D L x c l K G H I 1 e 6 a v b j 0 k q q D S E Y 6 0 7 n p u Y I M P K M M L p p N h N N U 0 w G e E B 7 V g q s a A 6 y G b H T t C p V f o o i p U t a d B M / T 2 R Y a H 1 W I S 2 U 2 A z 1 I v e V" }, { "formula_coordinates": [ 5, 264.74, 137.89, 109.8, 53.67 ], "formula_id": "formula_4", "formula_text": "F C B 1 q D 6 1 R 8 q m s Z M A h X E G N 9 z E w g y o o F T w W a V f m p Y Q u i E j J h v q S Q x M 0 E 2 j z z D Z 1 Y Z 4 k h p + y T g u f p 7 I y O x M d M 4 t J N 5 R L P s 5 e J / n p 9 C d B t k X C Y p M E k X H 0 W p w K B w f j 8 e c s 0 o i K k l h G p u s 2 I 6 J p p Q s C 1 V b A n e 8 s m r p H N R 9 6 7 r V w + X t c Z d U U c Z n a B T d I 4 8 d I M a 6 B 6 1 U B t R p N A z e k V v D j g v z r v z s R g t O c X O M f o D 5 / M H d p i R Y w = = < / l a t e x i t > C < l a t e x i t s h a 1 _ b a s e 6 4 = \" i m T P j / Q n N U m W z 1 B C M P y + S G m P F 7 M = \" > A A A B 8 X i c b V D L S g M x F L 1 T X 7 W + q i 7 d B I v g q s y I r 2 X R j c s K 9 o F t K Z n 0 T h u a y Q x J R i h D / 8 K N C 0 X c + j f u / B s z 7 S y 0 9 U D g c M 6 9 5 N z j x 4 J r 4 7 r f T m F l d W 1 9 o 7 h Z 2 t r e 2 d 0 r 7 x 8 0 d Z Q o h g 0 W i U i 1 f a p R c I k N w 4 3 A d q y Q h r 7 A l j + + z f z W E y r N I / l g J j H 2 Q j q U P O C M G i s 9 d k N q R n 6 Q 0 m m / X H G r 7 g x k m X g 5 q U C O e r / 8 1 R 1 E L A l R G i a o 1 h 3 P j U 0 v p c p w J n B a 6 i Y a Y 8 r G d I g d S y U N U f f S W e I p O b H K g A S R s k 8 a M l N / b 6 Q 0 1 H o S + n Y y S 6 g X v U z 8 z + s k J r j u p V z G i U H J 5 h 8 F i S A m I t n 5 Z M A V M i M m l l C m u M 1 K 2 I g q y o w t q W R L 8" }, { "formula_coordinates": [ 5, 153.67, 162.19, 235.17, 57.96 ], "formula_id": "formula_5", "formula_text": "i V t E h p m n v q l B W z C G 0 e R P 9 4 = \" > A A A B 8 X i c b V D L S g M x F L 1 T X 7 W + q i 7 d B I v g q s y I r 2 X R j c s K 9 o F t K Z n 0 T h u a y Q x J R i h D / 8 K N C 0 X c + j f u / B s z 7 S y 0 9 U D g c M 6 9 5 N z j x 4 J r 4 7 r f T m F l d W 1 9 o 7 h Z 2 t r e 2 d 0 r 7 x 8 0 d Z Q o h g 0 W i U i 1 f a p R c I k N w 4 3 A d q y Q h r 7 A l j + + z f z W E y r N I / l g J j H 2 Q j q U P O C M G i s 9 d k N q R n 6 Q x t N + u e J W 3 R n I M v F y U o E c 9 X 7 5 q z u I W B K i N E x Q r T u e G 5 t e S p X h T O C 0 1 E 0 0 x p S N 6 R A 7 l k o a o u 6 l s 8 R T c m K V A Q k i Z Z 8 0 Z K b + 3 k h p q P U k 9 O 1 k l l A v e p n 4 n 9 d J T H D d S 7 m M E 4 O S z T 8 K E k F M R L L z y Y A r Z E Z M L K F M c Z u V s B F 
V l B l b U s m W 4 C 2 e v E y a Z 1 X v s n p x f 1 6 p 3 e R 1 F O E I j u E U P L i C G t x B H R r A Q M I z v M K b o 5 0 X 5 9 3 5 m I 8 W n H z n E P 7 A + f w B 9 F G R H g = = < / l a t e x i t > p < l a t e x i t s h a 1 _ b a s e 6 4 = \" x 1 E Z T K k g t B x N A + r x a 6 l E 2 + h x s I Q = \" > A A A B 8 3 i c b V D L S s N A F L 2 p r 1 p f V Z d u B o v g q i T i a 1 l 0 4 7 K C f U A T y 2 Q 6 a Y d O J m F m I o S Q 3 3 D j Q h G 3 / o w 7 / 8 Z J m 4 W 2 H h g 4 n H M v 9 8 z x Y 8 6 U t u 1 v q 7 K y u r a + U d 2 s b W 3 v 7 O 7 V 9 w + 6 K k o k o R 0 S 8 U j 2 f a w o Z 4 J 2 N N O c 9 m N J c e h z 2 v O n t 4 X f e 6 J S s U g 8 6 D S m X o j H g g W M Y G 0 k 1 w 2 x n v h B l j 7 K f F h v 2 E 1 7 B r R M n J I 0 o E R 7 W P 9 y R x F J Q i o 0 4 V i p g W P H 2 s u w 1 I x w m t f c R N E Y k y k e 0 4 G h A o d U e d k s c 4 5 O j D J C Q S T N E x r N 1 N 8 b G Q 6 V S k P f T B Y Z 1 a J X i P 9 5 g 0 Q H 1 1 7 G R J x o K s j 8 U J B w p C N U F I B G T F K i e W o I J p K Z r I h M s M R E m 5 p q p g R n 8 c v L p H v W d C 6 b F / f n j d Z N W U c V j u A Y T s G B K 2 j B H b S h A w R i e I Z X" }, { "formula_coordinates": [ 5, 383.55, 217.28, 5.38, 8.88 ], "formula_id": "formula_6", "formula_text": "N F s Q g 4 7 Q S T 2 9 z v P F G l W S Q f z D S m v s A j y U J G s L H S Y 1 9 g M w 7 C V G S D a s 2 t u z O g Z e I V p A Y F m o P q V 3 8 Y k U R Q a Q j H W v c 8 N z Z + i p V h h N O s 0 k 8 0 j T G Z 4 B H t W S q x o N p P Z 4 k z d G K V I Q o j Z Z 8 0 a K b + 3 k i x 0 H o q A j u Z J 9 S L X i 7 + 5 / U S E 1 7 7 K Z N x Y q g k 8 4 / C h C M T o f x 8 N G S K E s O n l m C i m M 2 K y B g" }, { "formula_coordinates": [ 5, 408.96, 148.68, 6.63, 6.92 ], "formula_id": "formula_7", "formula_text": "j U o 1 Z S 2 q h N L d k B g m u G Q t 4 C B Y N 9 G M x K F g n X B 8 m / u d J 6 Y N V / I R J g k L Y j K U P O K U g J X 8 X k x g R I n I 7 q b 9 a s 2 t u z P g Z e I V p I Y K N P v V r 9 5 A 0 T R m E q g g x v i e m 0 C Q E Q 2 c C j a t 9 F L D E k L H Z M h 8 S y W J m Q m y W e Q p P r H K A E d K 2 y c B z 9 T f G x m J j Z n E o Z 3 M I 5 p F L x f / 8 / w U o u s g 4 z J J g U k 6 / y h K B Q a F 8 / v x g G t G Q U w s I V R z m x X T E d G E g m 2 p Y k v w F k" }, { "formula_coordinates": [ 5, 385.09, 135.69, 10.48, 23.59 ], "formula_id": "formula_8", "formula_text": "V / / G m 3 / j J N m D J h Y 0 F F X d d H c F i R Q G X f f b W V p e W V" }, { "formula_coordinates": [ 5, 180.52, 213.23, 3.06, 7.02 ], "formula_id": "formula_9", "formula_text": "i P F Q s I g R r K 3 k + z H W o z D K b h / k t F + t u X V 3 B v S X e A W p Q Y F m v / r p D x J i Y i o 0 4 V i p n u e m O s i w 1 I x w O q 3 4 R t E U k z E e 0 p 6 l A s d U B d k s 8 x Q d W W W A o k T a J z S a q T 8 3 M h w r N Y l D O 5 l n V I t e L v 7 n 9 Y y O L o O M i d R o K s j 8 U G Q 4 0 g n K C 0 A D J i n R f G I J J p L Z r I i M s M R E 2 5 o q t g R v" }, { "formula_coordinates": [ 6, 239.62, 138.12, 240.97, 33.86 ], "formula_id": "formula_10", "formula_text": "f , f r = F(I), F(I r ) C = Upsample (Dist(f , f r )) ,(1)" }, { "formula_coordinates": [ 6, 252.58, 469.54, 228.01, 26.79 ], "formula_id": "formula_11", "formula_text": "a = y r /∥y r ∥ • C ∈ R H×W p = PE(Topk(a)),(2)" }, { "formula_coordinates": [ 7, 270.64, 139.51, 209.95, 18.92 ], "formula_id": "formula_12", "formula_text": "v = y r /∥y r ∥ • f r .(3)" }, { "formula_coordinates": [ 7, 274.15, 249.81, 206.44, 18.44 ], "formula_id": "formula_13", "formula_text": "m = F t (meta)(4)" }, { 
"formula_coordinates": [ 7, 275.23, 454.45, 201.12, 17.29 ], "formula_id": "formula_14", "formula_text": "o = D(f , c; q). (5" }, { "formula_coordinates": [ 7, 476.35, 455.4, 4.24, 8.8 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 7, 228.08, 604.84, 252.51, 33.86 ], "formula_id": "formula_16", "formula_text": "q p , q v , q m = P p (p), P v (v), P m (m) a ′ = M(a)(6)" }, { "formula_coordinates": [ 8, 250.97, 193.5, 229.63, 49.91 ], "formula_id": "formula_17", "formula_text": "f ′ = f + a ′ q ′ = Concat(q, q l , q s , q m ) o = D(f ′ ; q ′ ) (7)" } ]
2023-11-22
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b14", "b17", "b18", "b19", "b20", "b21", "b14", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b30", "b31", "b32", "b33", "b34", "b35" ], "table_ref": [], "text": "Large Language Models (LLMs) such as BERT [1], RoBERTA [2], T5 [3], and PaLM [4], are intricately designed architectures equipped with an extensive number of parameters. These models have been rigorously pre-trained on vast and diverse corpora, thereby enabling them to excel in a wide array of Natural Language Processing (NLP) tasks, from language understanding to both conditional and unconditional text generation [5], [6]. These advancements have been heralded as a step toward higher-bandwidth human-computer interactions. However, their deployment faces significant challenges. On one hand, LLMs exhibit a tendency for 'hallucinations' [7], [8], providing plausible yet nonfactual predictions. On the other hand, the black-box nature of LLMs compromises both interpretability and factual accuracy, often resulting in erroneous statements despite memorizing facts during training [9], [10].\nKnowledge in natural language can be externally sourced from a retrievable database, reducing hallucinations and enhancing the interpretability of LLMs [11]. Utilizing dense neural retrievers, which employ dense query and document vectors generated by a neural network [12], the system can evaluate the semantic similarity to an information-seeking query by calculating the embedding vector similarity across related concepts [13], [14].\nTo go beyond mere semantic similarity in information retrieval and augment the reasoning capabilities of LLMs, two advanced methodologies are particularly transformative: prompt engineering like the Chain-of-thought prompting, and the incorporation of Knowledge Graphs (KGs) [15]. The former, chain-of-thought prompting, provides a framework for advanced reasoning by generating paths of explanations and predictions that are cross-verified through knowledge retrieval [16], [17]. While this method offers significant benefits, it is not the primary focus of this study. As for the latter, KGs offer LLMs a structured and efficient way to address their limitations in factual accuracy and reasoning [15], [18]. KGs not only provide accurate and explicit knowledge crucial for various applications [19] but are also known for their symbolic reasoning capabilities to produce interpretable results [20]. These graphs are dynamic, continuously evolving with the addition of new knowledge [21], and can be specialized for domain-specific requirements [22].\nIn this study, our emphasis is on techniques of automated KG generation and incorporation with LLMs. Most of the works related to these two tasks rely intensively on the ongoing training of neural networks [15], [23], which is both difficult to employ and less flexible for on-the-fly updates. Traditional KG construction approach uses NLP techniques for entity recognition [24], [25], or keyword identification based on term frequency [26], [27], followed by determining relationship strength through word proximity [28]. Current automated techniques necessitate neural network training [29]- [31]. 
As for the interaction between KGs and LLMs, neural networks are trained to let LLMs understand the information retrieved from KGs [32], [33].

The recent advancements in LLMs suggest a much simpler way to think about the automatic generation of KGs and the integration of LLMs with KGs. State-of-the-art LLMs such as ChatGPT, BARD, and LLaMA [34] have demonstrated impressive reasoning capabilities [35], [36]. Given sufficient information, they can independently execute effective inference. This observation suggests an opportunity to simplify the KG structure: perhaps the intricate relational patterns found in traditional KGs could be simplified into basic strength indicators of association. Consequently, specific relationships are implicitly conveyed to the model through corpus blocks associated with the KG. In addition, we can provide retrieved keywords and the related corpus directly in the prompt rather than training a network to let LLMs understand the retrieved subgraph structure.

Motivated by these ideas, this study makes the following contributions:

1) We introduce AutoKG, an innovative method for automated KG generation, based on a knowledge base comprised of text blocks. AutoKG circumvents the need for training or fine-tuning neural networks, employs pretrained LLMs for extracting keywords as nodes, and applies graph Laplace learning to evaluate the edge weights between these keywords. The output is a simplified KG, where edges lack attributes and directionality, possessing only a weight that signifies the relevance between nodes. 2) We present a hybrid search strategy in tandem with prompt engineering, which empowers LLMs to effectively utilize information from the generated KGs. This approach simultaneously searches for semantically relevant corpora based on embedding vectors and the most pertinent adjacent information within the knowledge graphs.

The KG constructed here is a simplified version compared to traditional KGs, which are typically composed of relations in the form of triplets. Firstly, nodes in AutoKG are not entities in the usual sense; they are more abstract keywords. These keywords can represent entities, concepts, or any content that serves as a foundation for search. Additionally, instead of directed edges with specific semantic meanings found in traditional KGs, AutoKG utilizes undirected edges with a single weight value. The node keywords are extracted from the knowledge base with the aid of LLMs, while the graph structure is algorithmically derived. Such a KG can be efficiently stored with just a keyword list and a sparse adjacency matrix.

Section II explains the detailed process of automated KG generation, while Section III describes the hybrid search method. An essential highlight is that our proposed techniques require no neural network training or fine-tuning." }, { "figure_ref": [ "fig_0" ], "heading": "II. AUTOMATED KG GENERATION", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce our proposed approach, AutoKG, for automated KG generation. The training aspects of the LLM are not the focus of this article. We operate under the assumption that the LLM is already pre-trained and is accompanied by a corresponding vector embedding model. Specifically, we have employed OpenAI's gpt-4 or gpt-3.5-turbo-16k as the LLM and text-embedding-ada-002 as the embedding model.

Consider a scenario involving an external knowledge base, comprised of discrete text blocks.
AutoKG constructs a KG where the nodes represent keywords extracted from the external knowledge base. The edges between these nodes carry a single non-negative integer weight, signifying the strength of the association between the connected keywords. AutoKG encompasses two primary steps: the extraction of keywords, which correspond to the nodes in the graph, and the establishment of relationships between these keywords, represented by the edges in the graph. It is worth noting that the pretrained LLM is employed only in the keyword extraction step of the process. Figure 1 shows the flowchart of the KG construction." }, { "figure_ref": [], "heading": "A. Keywords Extraction", "publication_ref": [ "b36", "b37", "b38" ], "table_ref": [], "text": "Let the external knowledge base be denoted by X = {x_1, x_2, . . . , x_N}, where each x_i is a block of text with a maximum length of T tokens, represented as a string. The corresponding embedding vectors for these text blocks are encapsulated in V = {v(x_1), v(x_2), . . . , v(x_N)} ⊂ R^d, where v is the embedding projection from strings to R^d. We extract keywords from the knowledge base X with unsupervised clustering algorithms and the assistance of LLMs.

Algorithm 1 outlines the keyword extraction process. The algorithm takes as input all text blocks and their corresponding embedding vectors X and V, along with pre-defined parameters: n for the number of clusters, c for the number of text blocks to select, and l_1, l_2 as keyword extraction parameters. Additionally, the algorithm utilizes a parameter m to specify the number of sampled previous keywords. Two unsupervised clustering algorithms, K-means clustering [37], [38] and spectral clustering [39], are applied to cluster the knowledge base. For each cluster identified, we sample 2c text blocks, with c closest to the cluster center and c randomly selected, to capture both the global and the centered information. The LLM is used twice in this algorithm. First, it extracts keywords from a selection of 2c text blocks, guided by the parameters l_1 and l_2, while avoiding the sampled m previous keywords. Second, the same LLM is employed to filter and refine the extracted keywords.

The construction of the prompts for these applications strictly follows the format outlined in Table I. A specific prompt example for the keyword extraction is given in the Appendix. Specifically, each prompt is formed by concatenating the Task Information, Input Information, Additional Requirements, and Outputs. It is essential to note that within each task, the length of the prompt sections corresponding to Task Information and Additional Requirements is fixed.

For Task 1, which deals with keyword extraction, the maximum input length is set to 2cT + m(l_2 + 1), where T represents the token length of a single text block. Note that each keyword can have a length of up to l_2 + 1 tokens when accounting for potential separators such as commas. Similarly, the maximum output length is l_1(l_2 + 1), where l_1 is the maximum number of keywords and l_2 is the maximum token length of each keyword. Since Task 1 is applied once for each of the n clusters generated by the two clustering methods, the total maximum token usage for Task 1 would be 2n(2cT + (m + l_1)(l_2 + 1)). This process yields a maximum of 2n·l_1 extracted keywords. For Task 2, which involves filtering and refining the keywords, the maximum lengths for both the input and output are governed by the formula 2n·l_1(l_2 + 1).
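To make the per-cluster extraction loop of Algorithm 1 concrete, the following is a minimal Python sketch under simplifying assumptions: llm is a hypothetical callable wrapping a chat-completion API, texts and vecs hold the text blocks and their embedding vectors, the prompt wording only approximates the Task 1 and Task 2 templates of Table I, and cluster centers are taken as coordinate means for both clustering methods.

```python
import random
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

def extract_keywords(texts, vecs, llm, n=15, c=15, l1=10, l2=3, m=300):
    """Sketch of Algorithm 1: cluster the text blocks, sample 2c blocks per
    cluster, and ask the LLM for up to l1 keywords of at most l2 tokens."""
    keywords = []
    clusterings = [
        KMeans(n_clusters=n, n_init=10).fit_predict(vecs),
        SpectralClustering(n_clusters=n, affinity="nearest_neighbors").fit_predict(vecs),
    ]
    for labels in clusterings:
        for i in range(n):
            idx = np.where(labels == i)[0]
            center = vecs[idx].mean(axis=0)                      # cluster centroid
            dists = np.linalg.norm(vecs[idx] - center, axis=1)
            nearest = idx[np.argsort(dists)[:c]]                 # c blocks closest to the center
            rand = random.sample(list(idx), min(c, len(idx)))    # c randomly selected blocks
            sample = [texts[j] for j in set(nearest) | set(rand)]
            prev = random.sample(keywords, m) if len(keywords) > m else keywords
            prompt = (
                f"Extract at most {l1} keywords (each at most {l2} tokens) that "
                f"summarize the following text blocks. Avoid these existing "
                f"keywords: {', '.join(prev)}.\n\n" + "\n---\n".join(sample)
            )
            keywords += [k.strip() for k in llm(prompt).split(",") if k.strip()]
    # Second LLM call: filter and refine the accumulated keyword list.
    refined = llm("Remove duplicates and off-topic entries from this keyword list, "
                  "returning a comma-separated list: " + ", ".join(keywords))
    return [k.strip() for k in refined.split(",") if k.strip()]
```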
In summary, the maximum usage of tokens M_KG^tokens for the keyword extraction process is

M_KG^tokens = 2n(2cT + (m + 2l_1)(l_2 + 1)) + L_F,   (1)

where L_F is the fixed total length of tokens of the task information and additional requirement parts.

Algorithm 1 Keyword Extraction
Input: Text blocks X with embedding vectors V; parameters n, c, l_1, l_2, m
Output: Keyword list K
...
4: for i = 1, 2, . . . , n do
5:   Randomly select c text blocks and the c text blocks nearest to the cluster center from cluster V_i^P
6:   if |K| > m then
7:     Select a subset K_s ⊂ K such that |K_s| = m
8:     ...
12:  Use the LLM to extract up to l_1 keywords of maximum token length l_2, collected as K_i^P
13:  Update K = K ∪ K_i^P
14: end for
15: end for
16: Filter and refine K using a LLM to obtain the final keyword list
17: return K" }, { "figure_ref": [], "heading": "B. Graph Structure Construction", "publication_ref": [ "b39", "b40" ], "table_ref": [], "text": "In this section, we detail how to construct a KG based on the keywords extracted in Section II-A. Specifically, we establish whether there are edges between keywords and how to weight these edges. We propose a method based on label propagation on the graph, a step that does not require the involvement of any LLM.

Firstly, we create a graph G^t = (X, W^t), where X is the set of text blocks serving as the nodes of G^t, and W^t is the weight matrix for the edges. W^t_ij is determined by the similarity between the corresponding embedding vectors v_i and v_j. Define the similarity function

w(v_i, v_j) = exp( -∠(v_i, v_j)^2 / √(τ_i τ_j) ),   (2)

where ∠(v_i, v_j) = arccos( v_i^⊤ v_j / (∥v_i∥ ∥v_j∥) ) is the angle between the feature vectors v_i and v_j. The normalization constant τ_i is chosen according to the similarity to the K-th nearest neighbor of i (i.e., τ_i = ∠(v_i, v_{i_K}), where v_{i_K} is the K-th nearest neighbor to v_i).

For computational efficiency, we construct a sparse weight matrix W^t by considering only the K nearest neighbors [40] for each vertex. Let x_{i_k}, k = 1, 2, . . . , K, be the K nearest neighbors (KNN) of x_i (including x_i itself) according to the angle similarity. Define a sparse weight matrix by

W^t_ij = w(v_i, v_j) if j = i_1, i_2, . . . , i_K, and W^t_ij = 0 otherwise.   (3)

K is chosen to ensure that the corresponding graph G^t is connected; empirically, K = 30. We symmetrize the sparse weight matrix to obtain our final weight matrix W^t by redefining W^t_ij := (W^t_ij + W^t_ji)/2. Note that W^t is sparse, symmetric, and non-negative (i.e., W^t_ij ≥ 0). Next, we utilize the graph G^t = (X, W^t), constructed on text blocks, to establish a keyword KG G^k = (K, W^k). Here, K is the set of keywords, and W^k is the weight matrix for the edges. In this matrix, W^k_ij quantifies the strength of association between keywords k_i and k_j. Importantly, this association is not semantic but is reflected across the entire corpus in the knowledge base. Specifically, W^k_ij corresponds to the count of text blocks that are simultaneously associated with both keywords k_i and k_j.

Algorithm 2 establishes the relationship between a keyword and text blocks. The core idea is to select a subset of text blocks that are closest to the keyword as positive data, and another subset that is farthest as negative data. We then employ graph Laplace learning [41] based on the graph structure G^t = (X, W^t) that we have previously constructed for text blocks. Graph Laplace learning is a semi-supervised learning method on graphs, utilizing the harmonic property of the solution function u : X → [0, 1] to diffuse the label values from a subset of labeled nodes to other unlabeled nodes in the graph.
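The weight-matrix construction and the Laplace learning step just described can be sketched as follows. This is a simplified dense-similarity illustration (the actual implementation uses an approximate nearest-neighbor search for the KNN step), and the function names are illustrative rather than taken from the released code.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def knn_weight_matrix(V, K=30):
    """Sparse symmetric weight matrix W^t over the text-block embeddings V (N x d),
    using the angle-based similarity of Eq. (2) restricted to the K nearest neighbors."""
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
    ang = np.arccos(np.clip(Vn @ Vn.T, -1.0, 1.0))         # pairwise angles
    nbrs = np.argsort(ang, axis=1)[:, :K]                  # K nearest neighbors (incl. self)
    tau = np.maximum(ang[np.arange(len(V)), nbrs[:, -1]], 1e-8)  # angle to the K-th neighbor
    rows, cols, vals = [], [], []
    for i in range(len(V)):
        for j in nbrs[i]:
            rows.append(i)
            cols.append(j)
            vals.append(np.exp(-ang[i, j] ** 2 / np.sqrt(tau[i] * tau[j])))
    W = sp.csr_matrix((vals, (rows, cols)), shape=(len(V), len(V)))
    return (W + W.T) / 2                                    # symmetrize W^t

def associated_blocks(W, V, vk, n1=5, n2=35):
    """Algorithm 2 sketch: label the n1 blocks whose embeddings are nearest to the
    keyword embedding vk with 1 and the n2 farthest ones with 0, then compute the
    harmonic (graph Laplace learning) extension u on the remaining nodes."""
    dist = np.linalg.norm(V - vk, axis=1)
    order = np.argsort(dist)
    labeled = np.concatenate([order[:n1], order[-n2:]])
    y = np.concatenate([np.ones(n1), np.zeros(n2)])
    free = np.setdiff1d(np.arange(len(V)), labeled)
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W    # graph Laplacian L = D - W
    u = np.zeros(len(V))
    u[labeled] = y
    # Harmonic extension: solve L_uu u_u = -L_ul y_l on the unlabeled nodes.
    u[free] = spsolve(L[free][:, free].tocsc(), -(L[free][:, labeled] @ y))
    return np.where(u >= 0.5)[0]                           # text blocks tied to the keyword
```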
The text blocks that are classified towards the positive side (with a node function value u ≥ 0.5) are considered to be associated with the keyword. The association weight W^k_ij between k_i and k_j is defined as

W^k_ij = W^k_ji = |X_{k_i} ∩ X_{k_j}|.   (4)

With this, we complete the construction of the keyword-based KG G^k, which is built upon the text block graph G^t.

Algorithm 2 Identifying Keyword to Text Block Association
Input: Keyword k, set of text blocks X, forward relation parameter n_1, backward relation parameter n_2
Output: X_k ⊂ X, the subset of X associated with k
1: Obtain the embedding vector v(k).
2: Find the n_1 nearest vectors in X to v(k) (label them as 1) and the n_2 farthest vectors (label them as 0).
3: In the text-block graph G^t = (X, W^t), use the graph Laplace learning algorithm [41] to label the remaining nodes based on these n_1 + n_2 labeled nodes. Obtain a real-valued function u : X → [0, 1] on the graph nodes.
4: Define X_k = {x_i ∈ X : u(x_i) ≥ 0.5}
5: return X_k" }, { "figure_ref": [], "heading": "C. Time Complexity Analysis", "publication_ref": [ "b40", "b39", "b36", "b37", "b38" ], "table_ref": [], "text": "This section analyzes the efficiency of the AutoKG method. The token consumption required for KG construction in the AutoKG method has the upper bound given in Eq. (1). The efficiency of the algorithm is mainly influenced by three aspects:

1) Constructing the similarity graph based on text blocks G^t = (X, W^t): an approximate nearest neighbor search [40] is employed for the KNN search, leading to a complexity of O(N log N). 2) Clustering algorithm: since both K-means clustering [37], [38] and spectral clustering [39] are NP-hard, we bound the complexity by I_max, the preset maximum number of iterations. Spectral clustering is essentially the K-means method augmented with an eigendecomposition of the graph Laplacian.

O(N log N + N n d I_max + 2KN√κ) = O(N log N + N n),

where the number of clusters practically depends on N." }, { "figure_ref": [], "heading": "D. Remarks", "publication_ref": [], "table_ref": [], "text": "In the process of generating the entire KG, there are several points to be considered:

• Although the keywords are extracted from clusters of text blocks, we do not take into account the previous clustering results when establishing the relationship between keywords and text blocks. This is because the same keyword may be included in multiple clusters. • When constructing the relationship between keywords, we did not incorporate the embedding vectors of the keywords into the graph for the graph Laplace learning process. There are two reasons for this decision: first, we do not need to update the graph structure when selecting different keywords; second, empirically speaking, the embedding vectors of the keywords tend to be quite distant from the embedding vectors of the text blocks. Therefore, including them in the initial label data for Laplace learning might be meaningless.

Our approach considerably outperforms conventional methods based on term frequency and word proximity in both keyword extraction and relationship construction. The primary shortcoming of traditional techniques is their reliance on a fixed set of words, leading to a significant loss of related information and often producing overly localized insights. In terms of keyword extraction, our method leverages the capabilities of LLMs, allowing for the refining of keywords that are more central to the topic at hand, rather than merely being high-frequency terms.
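As a concrete illustration of how the keyword edge weights of Eq. (4) are assembled from the per-keyword association sets produced by Algorithm 2, a minimal sketch follows; the input assoc_sets is assumed to be a list of Python sets of text-block indices.

```python
import scipy.sparse as sp

def keyword_graph(assoc_sets):
    """Eq. (4) sketch: assoc_sets[i] is the set of text-block indices associated with
    keyword i; the edge weight is the number of blocks shared by two keywords."""
    M = len(assoc_sets)
    rows, cols, vals = [], [], []
    for i in range(M):
        for j in range(i + 1, M):
            w = len(assoc_sets[i] & assoc_sets[j])   # count of shared text blocks
            if w > 0:
                rows += [i, j]
                cols += [j, i]
                vals += [w, w]                       # symmetric, undirected weight
    return sp.csr_matrix((vals, (rows, cols)), shape=(M, M))

# Example: three keywords sharing some of six text blocks.
Wk = keyword_graph([{0, 1, 2}, {2, 3}, {4, 5}])
print(Wk.toarray())   # keywords 0 and 1 share one block; keyword 2 is isolated
```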
When it comes to relationship construction, our strategy is grounded in a macroscopic algorithm on the graph of all text blocks. This approach encompasses the information from the entire knowledge base of text blocks, providing a more comprehensive perspective compared to relationships derived from local distances." }, { "figure_ref": [], "heading": "III. HYBRID SEARCH: INCORPORATING KG AND LLM", "publication_ref": [], "table_ref": [], "text": "In this section, we propose a hybrid search approach, based on the KG generated according to Section II. For a given query, the search results using this hybrid search strategy include not only the text blocks that are semantically related to the query but also additional associative information sourced from the KG. This supplementary data serves to provide more detailed and in-depth reasoning for further analysis by the model. The incorporation of a KG allows us to capture complex relationships between different entities, thereby enriching the contextual understanding of the query.

In our proposed hybrid search approach, we have devised a multi-stage search process that incorporates both direct text block search and keyword-based searching guided by the KG. This process is detailed in Algorithm 3. Initially, we retrieve the text blocks that are closest to the given query embedding vector. Then, we turn to the KG and identify the keywords that are closest to the query, along with text blocks associated with these keywords. Lastly, we identify additional keywords that have the strongest association with the previously identified ones, based on the weight matrix in the KG, and accordingly search for related text blocks. The algorithm returns not just a set of text blocks that are highly relevant to the query, but also a set of keywords that are closely connected to the query.

Algorithm 3 Hybrid Search
Input: Query q, embedding vector v(q), parameters (s^t_0, s^k_1, s^t_1, s^k_2, s^t_2)
Output: Set X_final containing text blocks related to q, and set K_final containing keywords related to q
1: Step 1: Vector Similarity Search
2: Find the closest s^t_0 text blocks in X to v(q)
3: X_0 ← set of the closest s^t_0 text blocks
4: Step 2: Similar Keyword Search
5: Find the closest s^k_1 keywords in K to v(q)
...
14: X_final ← X_0 ∪ X_1 ∪ X_2
15: K_final ← K_1 ∪ K_2
16: return X_final, K_final

To estimate the maximum number of tokens returned by the hybrid search, we consider the maximum number of tokens T for a single text block and l_2 for a single keyword. The total number of keywords retrieved will be s^k_1 + s^k_1 · s^k_2, and the total number of text blocks will be s^t_0 + s^k_1 · s^t_1 + s^k_1 · s^k_2 · s^t_2. Therefore, the maximum number of tokens M_QA^tokens can be calculated as

M_QA^tokens = s^k_1 · l_2 · (1 + s^k_2) + T · (s^t_0 + s^k_1 · s^t_1 + s^k_1 · s^k_2 · s^t_2).   (5)

In practical applications, the actual number of tokens obtained through the search often falls below the theoretical maximum. This is because there is substantial overlap between the text blocks and keywords discovered via different search methods. Subsequently, the retrieved information is incorporated into the prompt to enhance the LLM's response to the original query. For details on prompt construction, one may refer to Task 3 in Table I. A specific prompt example is provided in the Appendix.
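The retrieval stages of Algorithm 3 can be sketched as follows; kw_to_blocks is a hypothetical precomputed mapping from each keyword to its associated text-block indices (here assumed to be ordered by relevance), and the rule for choosing which associated blocks to keep per keyword is simplified relative to the released implementation.

```python
import numpy as np

def hybrid_search(vq, text_vecs, kw_vecs, Wk, kw_to_blocks,
                  st0=15, sk1=5, st1=3, sk2=3, st2=2):
    """Sketch of Algorithm 3: combine vector-similarity retrieval of text blocks with
    keyword and adjacent-keyword retrieval on the keyword KG (Wk is a sparse matrix)."""
    def closest(vecs, v, k):
        return np.argsort(np.linalg.norm(vecs - v, axis=1))[:k]

    X0 = set(closest(text_vecs, vq, st0))            # Step 1: similar text blocks
    K1 = list(closest(kw_vecs, vq, sk1))             # Step 2: similar keywords
    X1 = {b for k in K1 for b in kw_to_blocks[k][:st1]}
    K2, X2 = [], set()                               # Step 3: adjacent keywords in the KG
    for k in K1:
        weights = np.asarray(Wk[k].todense()).ravel()
        for k2 in np.argsort(-weights)[:sk2]:
            if weights[k2] > 0:
                K2.append(int(k2))
                X2.update(kw_to_blocks[k2][:st2])
    return X0 | X1 | X2, set(K1) | set(K2)
```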
Importantly, an adaptive approach can be employed during the prompt construction to ensure that the maximum token limit for the LLM is not exceeded. Text blocks can be added sequentially until the token limit is reached." }, { "figure_ref": [], "heading": "IV. EXPERIMENTS AND RESULTS", "publication_ref": [ "b12", "b13", "b41" ], "table_ref": [], "text": "In this section, our primary goal is to demonstrate through experiments that our proposed AutoKG approach provides significantly better responses while maintaining a comparable efficiency, compared with the retrieval-augmented generation (RAG) method based on semantic vector similarity [13], [14]. Our approach that combines AutoKG and hybrid search extracts more valuable information for the model than RAG which relies on semantic vector similarity search.\nUnfortunately, we encountered challenges in identifying a suitable dataset to conduct these experiments. We attempted to utilize the WikiWhy dataset [42], which is designed to evaluate the reasoning capability of models. The dataset comprises approximately 9,000 entries. Each entry contains a paragraph of content, spanning between 100 to 200 words. Based on this content, every entry provides a \"why\" question along with its corresponding cause-effect relationship and explanation. When we employ the hybrid search based on AutoKG or the semantic vector similarity search of RAG, we can easily retrieve the content corresponding to the given question and instruct the model to answer based on that content. In both methods, the model's responses are almost identical. Since the 9,000 entries are relatively independent of each other, cross-entry data retrieval provided by our method doesn't significantly contribute to answering the questions.\nAs a consequence, we adopt qualitative approaches rather than employing numerical metrics to evaluate the experimental performance of our method. First, we provide a simple example to explain why our AutoKG with hybrid search approach has benefits compared to methods based on semantic vector similarity search. Next, we present a detailed example based on all 40 references of this article and the associated subgraph from the KG used during the query. Finally, we compare the efficiency of hybrid search and semantic vector similarity search from both theoretical and experimental perspectives." }, { "figure_ref": [], "heading": "A. A Simple Example: Why We Need KG?", "publication_ref": [ "b34", "b35" ], "table_ref": [], "text": "Consider a simple knowledge base that contains text blocks detailing a day in the life of an individual named Alex, along with related information. The core narrative is that after leaving his home in the morning, Alex goes to Cafe A to buy a coffee and then takes a bus to Company B for work. Interspersed within the knowledge base are numerous pieces of granular information such as conversations Alex had with the barista at the cafe, the coffee order details, dialogues on the bus, as well as conversations at his workplace, and so forth.\nThe point of interest here is how a model would answer the question: \"Was it raining this morning when Alex left his home?\" under the assumption that there is no direct answer to this question and no content about the weather in the knowledge base. We aim to compare the responses given the support information retrieved using our method versus that retrieved through semantic similarity search. 
Within the knowledge base, there are two indirect pieces of information hinting at the weather conditions:\n1) Related to Cafe A: \"Many people were chatting and drinking coffee in the square outside Cafe A.\" 2) Related to Company B: \"The car wash located downstairs of Company B was bustling with business today.\" Both these snippets subtly suggest that it was not raining.\nGiven that the question is primarily about Alex and the weather, the information retrieved from the knowledge base through semantic similarity vector search would only be about Alex (as there is no direct information about the weather). The search results would primarily outline his movements throughout the day. Even with an increase in search entries, it would mostly retrieve additional miscellaneous details, like his coffee order and dialogues. Unfortunately, these details do not contain any hints to infer the day's weather.\nOn the other hand, employing AutoKG with a hybrid search approach yields different results. During the KG generation process, we extract keywords such as Alex, Cafe A, and Company B. With the hybrid search, the initial step uses the input question to retrieve the keyword Alex. Then, the adjacency search identifies Cafe A and Company B as related keywords. Subsequently, text blocks are sought based on these keywords, resulting in the identification of implicit weatherrelated information. This example illustrates the utility of the hybrid search. Semantic similarity alone can lack crosstopic connections. It tends to retrieve many minor details within the scope of a given question. When searching with the KG constructed using the AutoKG method, the breadth and diversity of the retrieved information is enhanced. Moreover, prior work has easily substantiated GPT-4's capability to reason effectively with provided clues [35], [36].\nFrom the dialogue record with GPT-4 in the Appendix, it is evident that GPT-4 can accurately infer that it did not rain today when given clues about today's weather. However, when only provided with information about Alex from the semantic similarity vector search, it cannot make any predictions about today's weather." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "B. An Example with Article References", "publication_ref": [], "table_ref": [], "text": "We present a concrete example utilizing content from the 42 references cited in this paper. The resulting KG is interactively queried using the hybrid search method outlined above. Both the KG generation and subsequent querying processes were performed using the gpt-3.5-turbo-16k model, chosen to minimize cost. The 40 references, once segmented, comprise 5,261 text blocks, each less than 201 tokens in length. For the keyword extraction process, as per Algorithm 1, the parameters are: n = 15, c = 15, l 1 = 10, l 2 = 3, m = 300. For Algorithm 2, we use the parameters n 1 = 5 and n 2 = 35. The entire KG construction consumes 137,516 tokens, which is less than the theoretical maximum of 181,280 tokens given by Eq. 1. This calculation of the theoretical maximum does not account for the fixed total length of tokens pertaining to task information and additional requirement parts.\nThe constructed KG comprises 461 nodes (extracted keywords) with its adjacency matrix containing 40,458 non-zero elements. The node with the highest degree in the graph is connected to 289 neighbors. There are 353 nodes whose degree is less than 92, which is 20% of the maximum possible degree of 460. 
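The graph statistics reported above can be read directly from the stored sparse adjacency matrix; a small sketch follows, where Wk stands for the SciPy sparse matrix of the keyword KG and the 20% threshold mirrors the 92-out-of-460 figure.

```python
import numpy as np
import scipy.sparse as sp

def degree_stats(Wk):
    """Degree statistics of the keyword KG from its sparse adjacency matrix."""
    Wk = sp.csr_matrix(Wk)
    degrees = np.diff(Wk.indptr)                     # neighbors per node (non-zeros per row)
    return {
        "nodes": Wk.shape[0],
        "nonzeros": Wk.nnz,
        "max_degree": int(degrees.max()),
        "low_degree_nodes": int((degrees < 0.2 * (Wk.shape[0] - 1)).sum()),
    }
```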
The entire KG construction process took approximately ten minutes. All computations, excluding calls to the OpenAI API, are carried out on an Intel i9-9900 CPU. Keyword extraction and graph construction each took approximately five minutes. For the subsequent hybrid search described in Algorithm 3, we use the parameters ($s_{t_0} = 15$, $s_{k_1} = 5$, $s_{t_1} = 3$, $s_{k_2} = 3$, $s_{t_2} = 2$) and ensure, through an adaptive approach, that the input prompt remains under 10,000 tokens in length. The maximum response length is set to 1024 tokens. As an illustrative example, when querying "Please introduce PaLM in detail, and tell me about related applications.", the temporary KG structure during the hybrid search is shown in Figures 2 and 3. Both images represent subgraphs of the same KG, with the input query depicted in blue. The image on the left (Figure 2) shows only the keyword nodes (in green), while the image on the right (Figure 3) additionally includes the retrieved text blocks (pink nodes). The edges displayed are those connecting similar keywords directly retrieved from the query (shown as inner-circle nodes in the left figure), as well as edges connecting these similar keywords to the keywords obtained via adjacency search (connecting the inner and outer circles in the left figure). Although there may be edges between the outer-circle keywords, they are omitted from the visualization for clarity. The model's lengthy response is shown in the Appendix. For those interested in further exploration, all pertinent code and test cases are available at https://github.com/wispcarey/AutoKG." }, { "figure_ref": [], "heading": "C. Efficiency Analysis", "publication_ref": [], "table_ref": [], "text": "Given the flexibility in regulating the volume of retrieved information, both the proposed method and the RAG approach can, in theory, support knowledge bases of any size. This means they can encompass any number of text blocks, each subject to the maximum token limit. As outlined in Section II-C, the complexity of the AutoKG method for automated knowledge graph construction is $O(N \log N)$ when the number of text blocks $N$ is large.\nThe constructed keyword KG contains $M$ keywords, where $M < N$ (empirically, $M \approx 0.1N$). During the hybrid search process, with parameters ($s_{t_0}, s_{k_1}, s_{t_1}, s_{k_2}, s_{t_2}$), the overall time complexity of the search is\n$$O\big((s_{t_0} + s_{k_1}\cdot s_{t_1} + s_{k_1}\cdot s_{k_2}\cdot s_{t_2})N\big) + O\big((s_{k_1} + s_{k_1}\cdot s_{k_2})M\big). \quad (6)$$\nFor the semantic vector similarity search method to retrieve the same volume of text blocks, the time complexity is\n$$O\big((s_{t_0} + s_{k_1}\cdot s_{t_1} + s_{k_1}\cdot s_{k_2}\cdot s_{t_2})N\big). \quad (7)$$\nFrom the above, it is evident that the time complexity of our hybrid search approach is of the same order as that of the semantic vector similarity search. For sufficiently large $N$, both complexities tend towards $O(N)$.\nBased on the KG generated from the 40 references of this article, as described in Section IV-B, we perform a hybrid search using the parameters ($s_{t_0} = 15$, $s_{k_1} = 5$, $s_{t_1} = 3$, $s_{k_2} = 3$, $s_{t_2} = 2$). The theoretical maximum number of text blocks that can be retrieved with this configuration is 60. For comparison, we conduct a semantic vector similarity search for 30 text blocks. Using a query composed of 50 random characters, we run both the hybrid search and the semantic vector similarity search and record the time taken for each (this includes the embedding computation time). 
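A brute-force harness in the spirit of this measurement is sketched below. The embeddings, keyword-to-block mapping, and adjacency weights are random stand-ins, so the absolute timings will not match the numbers reported next.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
N, M, D = 5261, 461, 1536                  # text blocks, keywords, embedding dimension
block_emb = rng.standard_normal((N, D)); block_emb /= np.linalg.norm(block_emb, axis=1, keepdims=True)
kw_emb = rng.standard_normal((M, D)); kw_emb /= np.linalg.norm(kw_emb, axis=1, keepdims=True)
kw_adj = rng.random((M, M)); np.fill_diagonal(kw_adj, 0)       # stand-in for keyword adjacency weights
kw_blocks = [rng.choice(N, size=20, replace=False) for _ in range(M)]  # keyword -> associated blocks


def top_k(sims: np.ndarray, k: int) -> np.ndarray:
    return np.argpartition(-sims, k)[:k]


def semantic_search(q: np.ndarray, k: int = 30) -> set:
    return set(top_k(block_emb @ q, k))


def hybrid_search(q: np.ndarray, st0=15, sk1=5, st1=3, sk2=3, st2=2) -> set:
    blocks = set(top_k(block_emb @ q, st0))                    # direct text-block retrieval
    for k1 in top_k(kw_emb @ q, sk1):                          # keywords similar to the query
        idx = kw_blocks[k1]
        blocks |= set(idx[top_k(block_emb[idx] @ q, st1)])
        for k2 in top_k(kw_adj[k1], sk2):                      # adjacency search on the keyword graph
            jdx = kw_blocks[k2]
            blocks |= set(jdx[top_k(block_emb[jdx] @ q, st2)])
    return blocks


def avg_time(fn, reps: int = 100) -> float:
    q = rng.standard_normal(D); q /= np.linalg.norm(q)          # stands in for the query embedding
    start = time.perf_counter()
    for _ in range(reps):
        fn(q)
    return (time.perf_counter() - start) / reps


print(f"hybrid search:   {avg_time(hybrid_search):.4f} s")
print(f"semantic search: {avg_time(semantic_search):.4f} s")
```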
After repeating the experiment 100 times, we calculate the average time taken. The hybrid search method had an average duration of 0.0310 seconds, while the semantic vector similarity search took slightly less, with an average time of 0.0305 seconds. This experiment aligns well with our theoretical analysis of the time complexity." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [ "b30" ], "table_ref": [], "text": "This paper addressed the inherent challenges faced by semantic similarity search methods when linking LLMs to knowledge bases. Our method, AutoKG, presents a refined and efficient strategy for automated KG construction. In comparison to traditional KGs, the innovative architecture of AutoKG offers a lightweight and simplified version of KG, shifting the focus from specific entities to more abstract keywords and utilizing weighted undirected edges to represent the associations between keywords. Based on the generated KG, our approach harnesses these capabilities by presenting the LLMs with a more interconnected and comprehensive knowledge retrieval mechanism through the hybrid search strategy. By doing so, we ensure that the model's responses are not only richer in quality but also derive insights from a more diverse set of information nodes.\nWe tested AutoKG with a hybrid search in experimental evaluations. Because of dataset limitations, our tests were mostly qualitative. The outcome highlights the benefits of our method compared to typical RAG methods with semantic similarity search. In summary, AutoKG provides a valuable step to combine knowledge bases with LLMs. It is computationally lightweight and paves the way for more detailed interactions in LLM applications. Moreover, our hybrid search and the semantic vector similarity search have the same order of time complexity.\nFurther analysis of the AutoKG approach requires the identification or creation of an appropriate dataset to evaluate its integration with LLMs. Wang et al. [31] developed their own dataset to evaluate a similar idea to ours. While the evaluation criteria should resemble that of RAG, a more structurally intricate and complex dataset is desired. Another avenue for improvement revolves around keyword extraction. Currently, the method leverages prompt engineering; however, future work could explore fine-tuning larger models or even training specialized models to achieve enhanced results. " }, { "figure_ref": [], "heading": "ACKNOWLEDGMENT", "publication_ref": [], "table_ref": [], "text": "The authors acknowledge the assistance of ChatGPT-4 in a first draft of the exposition of the manuscript." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Bohan Chen is supported by the UC-National Lab In-Residence Graduate Fellowship Grant L21GF3606. This work is supported by NSF grants DMS-2027277 and DMS-2318817." }, { "figure_ref": [], "heading": "APPENDIX DIALOGUES WITH LLM", "publication_ref": [], "table_ref": [], "text": "Dialogue with GPT-4 for Section IV-A If we only provide information retrieved from the semantic vector similarity search, GPT-4 can not answer the question." }, { "figure_ref": [], "heading": "User:", "publication_ref": [], "table_ref": [], "text": "Answer the question \"Was it raining this morning when Alex left his home?\" with the following information: 1. After leaving his home in the morning, Alex goes to Cafe A to buy a coffee and then takes a bus to Company B for work; 2. Alex bought an iced Americano at Cafe A; 3. 
Alex talked to the staff at Coffee Shop A about whether they were planning to open a branch; 4. Alex was criticized by his boss at Company B for not completing work on time." }, { "figure_ref": [], "heading": "GPT-4:", "publication_ref": [], "table_ref": [], "text": "Based on the provided information, there is no direct evidence or indication about the weather condition when Alex left his home in the morning. Therefore, the answer to the question \"Was it raining this morning when Alex left his home?\" is \"The information provided does not specify.\"\nIf we provide information about the clue of weather retrieved from the hybrid search based on AutoKG, GPT-4 can complete reasoning and answer the question correctly." }, { "figure_ref": [], "heading": "User:", "publication_ref": [], "table_ref": [], "text": "Answer the question \"Was it raining this morning when Alex left his home?\" with the following information: 1. After leaving his home in the morning, Alex goes to Cafe A to buy a coffee and then takes a bus to Company B for work; 2. Many people were chatting and drinking coffee in the square outside Cafe A; 3. The car wash located down-stairs of Company B was bustling with business today." }, { "figure_ref": [], "heading": "GPT-4:", "publication_ref": [], "table_ref": [], "text": "Based on the provided information, it is likely that it was not raining this morning when Alex left his home. The clue is that \"many people gathered in the square in front of the Cafe A, drinking coffee and chatting,\" which indicates a pleasant outdoor environment, not typical of rainy weather. Additionally, the car wash shop doing very well suggests it might be a clear day since car washes tend to be less busy on rainy days." }, { "figure_ref": [], "heading": "Dialogue with GPT-3.5 for Section IV-B", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Input query:", "publication_ref": [], "table_ref": [], "text": "Please introduce PaLM in detail, and tell me about related applications." }, { "figure_ref": [], "heading": "Preprocess:", "publication_ref": [], "table_ref": [], "text": "With the input query, information is retrieved from the knowledge base through the hybrid search process based on the KG. The prompt is reconstructed with the retrieved information and sent to the gpt-3.5-turbo-16k model for the final output." }, { "figure_ref": [], "heading": "LLM response:", "publication_ref": [], "table_ref": [], "text": "PaLM, which stands for Pathways Language Model, is a large-scale language model that has been developed to achieve state-of-the-art results in few-shot learning across a wide range of natural language processing (NLP) tasks. It is a dense Transformer language model with 540 billion parameters and has been trained on 780 billion tokens of high-quality, diverse text.\nThe development of PaLM has been driven by the goal of exploring novel architectural choices and training schemes in order to create a large-scale, modularized system with broad generalization capabilities across multiple modalities. PaLM has been designed to be highly scalable, and it leverages the scaling capabilities of the Pathways infrastructure, which enables training a single model across thousands or tens of thousands of accelerator chips in an efficient manner.\nPaLM has demonstrated outstanding performance on various NLP tasks. 
It has achieved breakthrough results on tasks such as open-domain closed-book question answering, cloze and completion tasks, common sense reasoning, in-context reading comprehension, and more. It has also shown strong capabilities in multilingual tasks and source code generation. In fact, PaLM has outperformed the state-of-the-art on a suite of multi-step reasoning tasks and has even surpassed the average human performance on the recently released BIGbench benchmark.\nThe primary application of PaLM is research on language models, including NLP applications such as machine translation and question answering. It is being used within Google for research on a variety of open-ended text and code generation tasks, including reasoning and code synthesis. PaLM has also been evaluated for its translation capabilities across a variety of language pairs and settings, particularly focusing on English-centric language pairs. It is important to note that PaLM is primarily designed for research purposes and has not been extensively tested in settings outside of research. Further analysis is required before using PaLM for downstream applications. Additionally, ethical considerations and risks associated with large language models, including potential misuse, have been discussed in the context of PaLM.\nIn conclusion, PaLM is a highly scalable language model that has achieved state-of-the-art results in few-shot learning across a wide range of NLP tasks. Its applications include research on language models, machine translation, question answering, code generation, and more." }, { "figure_ref": [], "heading": "APPENDIX PROMPT EXAMPLES", "publication_ref": [], "table_ref": [], "text": "Task: Keyword Extraction This prompt requires the following preset variables: 1. 'main topic', a high-level topic(s) of the knowledge base; 2. 'l 1 ', the maximum number of extracted keywords; 3. 'l 2 ', the maximum length of each keyword; 4. 'language', the language of output keywords; 5. 'previous keywords', a list of previously extracted keywords. Once the text blocks are sampled from a certain cluster, we use the following prompt for keyword extraction: Prompt for Keyword Extraction: You are an advanced AI assistant, specializing in analyzing various pieces of information and providing precise summaries. Your task is to determine the core theme in the following series of *-separated information fragments, which are delimited by triple backticks. Ensure your answer focuses on the topic and avoids including unrelated content. DO NOT write complete sentences. You should obey the following rules when doing this task: 1, Keywords in your answer should related to the topic 'main topic'; 2, Your answer should include at most 'l 1 ' keywords; 3, Each keyword should be at most 'l 2 ' words long; 4, avoid already appeared theme keywords, marked inside ⟨⟩; 5, Write your answer in 'language'; 6, Separate your output keywords with commas (,); 7, Don't include any symbols other than keywords.\nInformation:' ' 'text blocks' ' ' Please avoid the following already appeared theme terms: ⟨'previous keywords'⟩ Your response:\nTask: Incorporation between KGs and LLMs For a given query q, we search for its related text blocks X final and keywords K final according to the hybrid search algorithm (Algorithm 3). 
Given a preset variable 'language' for the output language, we use the following prompt to provide retrieved information from the KG and original knowledge base: Prompt for Query Response: I want you to do a task, deal with a query, or answer a question with some information from a knowledge graph. You will be given a set of keywords directly related to a query, as well as adjacent keywords from the knowledge graph. Relevant texts will be provided, enclosed within triple backticks. These texts contain information pertinent to the query and keywords. Please note, you should not invent any information. Stick to the facts provided in the keywords and texts. These additional data are meant to assist you in accurately completing the task. Your response should be written in 'language'. Avoid showing any personal information, like Name, Email, WhatsApp, Skype, and Website in your polished response.\nKeywords information (directly related to the query or find via the adjacent search of the knowledge graph): K final Text information: ' ' ' X final ' ' ' Your task: q Your response:" } ]
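The query-response template above is filled at query time from the hybrid-search outputs. A minimal sketch of this assembly is shown below; the variable names are illustrative and the task description is abridged, so this is not the code released in the repository.

```python
def build_query_prompt(query: str, k_final: list[str], x_final: list[str],
                       language: str = "English") -> str:
    """Fill the query-response template with the retrieved keywords and text blocks."""
    keywords = ", ".join(k_final)
    texts = "\n".join(x_final)
    return (
        "I want you to do a task, deal with a query, or answer a question with some "
        "information from a knowledge graph. Stick to the facts provided in the keywords "
        "and texts.\n"
        f"Your response should be written in {language}.\n"
        "Keywords information (directly related to the query or found via the adjacent "
        f"search of the knowledge graph): {keywords}\n"
        f"Text information: '''{texts}'''\n"
        f"Your task: {query}\n"
        "Your response:"
    )


print(build_query_prompt("Please introduce PaLM in detail.",
                         ["PaLM", "few-shot learning"],
                         ["PaLM is a 540-billion-parameter dense Transformer language model ..."]))
```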
Traditional methods of linking large language models (LLMs) to knowledge bases via the semantic similarity search often fall short of capturing complex relational dynamics. To address these limitations, we introduce AutoKG, a lightweight and efficient approach for automated knowledge graph (KG) construction. For a given knowledge base consisting of text blocks, AutoKG first extracts keywords using a LLM and then evaluates the relationship weight between each pair of keywords using graph Laplace learning. We employ a hybrid search scheme combining vector similarity and graph-based associations to enrich LLM responses. Preliminary experiments demonstrate that AutoKG offers a more comprehensive and interconnected knowledge retrieval mechanism compared to the semantic similarity search, thereby enhancing the capabilities of LLMs in generating more insightful and relevant outputs.
AutoKG: Efficient Automated Knowledge Graph Generation for Language Models
[ { "figure_caption": "Fig. 1 .1Fig. 1. Flowchart of the KG Construction Process. This figure illustrates the different steps involved in the construction of the KG. The blue blocks represent the core components of the KG, yellow blocks indicate the embedding process, green blocks focus on keyword extraction, and the red blocks correspond to the establishment of relationships between keywords and the corpus as well as among the keywords themselves.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "these 2c text blocks and previous keywords K s in a prompt for keyword extraction 12:", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Subgraph Visualization: Keyword Nodes", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Algorithm for Keyword Extraction in AutoKG Input: All text blocks and their corresponding embedding vectors X and V, pre-defined parameters n (number of clusters), c (number of text blocks to select), l 1 , l 2 (keyword extraction parameters), m (number of sampled previous keywords) Output: A set of extracted keywords K = {k 1 , k 2 , . . . , k M }", "figure_data": "TABLE IPROMPT CONSTRUCTION FOR DIFFERENT TASKS USING LLMIDTask InformationInput InformationAdditional RequirementsOutputs1Keywords Extraction1.Sampled text blocks 2.Sampled previous keywords1. Avoid previous keywords 2. Output up to l 1 keywords 3. Each output keyword is at most l 2 tokensExtracted Keywords2Refining KeywordsOriginally extracted keywordsConcentration, deduplication, splitting, and deletionRefined Keywords3Response to the Query1. Original query 2. Related text blocks 3. Related keywordsIndicate the method used to search for texts and keywords: Direct, via keywords, or KG adjacency searchFinal ResponseAlgorithm 1", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The time complexity here is mainly dominated by the Kmeans method and is O(N ndI max ), where n is the number of clusters, and We have the upper bound for N as 2KN , where K is the number of nearest neighbors.Considering these factors, for large N and if preconditioning techniques can keep the condition number of the graph Laplacian matrix small, our automated KG construction algorithm should operate with a time complexity of", "figure_data": "d is the vector dimension (1536 for OpenAI's embeddingmodel).3) Graph Laplace learning: Given that our graph Lapla-cian matrix is sparse, employing the conjugate gradientmethod to solve the graph Laplace learning problem results in a time complexity of O( N √ κ), where Nrepresents the count of non-zero elements in the graph Laplacian matrix, and √ κ denotes the condition number.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "6: K 1 ← set of closest s k 1 keywords 7: For each k in K 1 , find the closest s t 1 text blocks in X 8: X 1 ← merged set of closest s t 1 text blocks for each k in K 1 9: Step 3: Keyword Adjacency Search 10: For each k in K 1 , find s k 2 strongest connected keywords according to W k 11: K 2 ← merged set of s k 2 strongest connected keywords for each k in K 1 12: For each k in K 2 , find the closest s t 2 text blocks in X 13: X 2 ← merged set of closest s t 2 text blocks for each k in K 2 14:", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
Bohan Chen; Andrea L Bertozzi
[ { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b0", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Y Liu; M Ott; N Goyal; J Du; M Joshi; D Chen; O Levy; M Lewis; L Zettlemoyer; V Stoyanov", "journal": "", "ref_id": "b1", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "C Raffel; N Shazeer; A Roberts; K Lee; S Narang; M Matena; Y Zhou; W Li; P J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b2", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "A Chowdhery; S Narang; J Devlin; M Bosma; G Mishra; A Roberts; P Barham; H W Chung; C Sutton; S Gehrmann", "journal": "", "ref_id": "b3", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "K Tirumala; A Markosyan; L Zettlemoyer; A Aghajanyan", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b4", "title": "Memorization without overfitting: Analyzing the training dynamics of large language models", "year": "2022" }, { "authors": "K Zhou; J Yang; C C Loy; Z Liu", "journal": "International Journal of Computer Vision", "ref_id": "b5", "title": "Learning to prompt for visionlanguage models", "year": "2022" }, { "authors": "S Welleck; I Kulikov; S Roller; E Dinan; K Cho; J Weston", "journal": "", "ref_id": "b6", "title": "Neural text generation with unlikelihood training", "year": "2019" }, { "authors": "Z Ji; N Lee; R Frieske; T Yu; D Su; Y Xu; E Ishii; Y J Bang; A Madotto; P Fung", "journal": "ACM Computing Surveys", "ref_id": "b7", "title": "Survey of hallucination in natural language generation", "year": "2023" }, { "authors": "F Petroni; T Rocktäschel; P Lewis; A Bakhtin; Y Wu; A H Miller; S Riedel", "journal": "", "ref_id": "b8", "title": "Language models as knowledge bases?", "year": "2019" }, { "authors": "T Scialom; T Chakrabarty; S Muresan", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Fine-tuned language models are continual learners", "year": "2022-12" }, { "authors": "G Mialon; R Dessì; M Lomeli; C Nalmpantis; R Pasunuru; R Raileanu; B Rozière; T Schick; J Dwivedi-Yu; A Celikyilmaz", "journal": "", "ref_id": "b10", "title": "Augmented language models: a survey", "year": "2023" }, { "authors": "A Asai; X Yu; J Kasai; H Hajishirzi", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b11", "title": "One question answering model for many languages with cross-lingual dense passage retrieval", "year": "2021" }, { "authors": "P Lewis; E Perez; A Piktus; F Petroni; V Karpukhin; N Goyal; H Küttler; M Lewis; W -T. 
Yih; T Rocktäschel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Retrievalaugmented generation for knowledge-intensive nlp tasks", "year": "2020" }, { "authors": "Y Luan; J Eisenstein; K Toutanova; M Collins", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b13", "title": "Sparse, dense, and attentional representations for text retrieval", "year": "2021" }, { "authors": "S Pan; L Luo; Y Wang; C Chen; J Wang; X Wu", "journal": "", "ref_id": "b14", "title": "Unifying large language models and knowledge graphs: A roadmap", "year": "2023" }, { "authors": "H He; H Zhang; D Roth", "journal": "", "ref_id": "b15", "title": "Rethinking with retrieval: Faithful large language model inference", "year": "2022" }, { "authors": "H Trivedi; N Balasubramanian; T Khot; A Sabharwal", "journal": "", "ref_id": "b16", "title": "Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions", "year": "2022" }, { "authors": "W Xiong; M Yu; S Chang; X Guo; W Y Wang", "journal": "", "ref_id": "b17", "title": "Improving question answering over incomplete kbs with knowledge-aware reader", "year": "2019" }, { "authors": "S Ji; S Pan; E Cambria; P Marttinen; S Y Philip", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b18", "title": "A survey on knowledge graphs: Representation, acquisition, and applications", "year": "2021" }, { "authors": "J Zhang; B Chen; L Zhang; X Ke; H Ding", "journal": "AI Open", "ref_id": "b19", "title": "Neural, symbolic and neural-symbolic reasoning on knowledge graphs", "year": "2021" }, { "authors": "T Mitchell; W Cohen; E Hruschka; P Talukdar; B Yang; J Betteridge; A Carlson; B Dalvi; M Gardner; B Kisiel", "journal": "Communications of the ACM", "ref_id": "b20", "title": "Never-ending learning", "year": "2018" }, { "authors": "B Abu-Salih", "journal": "Journal of Network and Computer Applications", "ref_id": "b21", "title": "Domain-specific knowledge graphs: A survey", "year": "2021" }, { "authors": "L Zhong; J Wu; Q Li; H Peng; X Wu", "journal": "", "ref_id": "b22", "title": "A comprehensive survey on automatic knowledge graph construction", "year": "2023" }, { "authors": "D Nadeau; S Sekine", "journal": "Lingvisticae Investigationes", "ref_id": "b23", "title": "A survey of named entity recognition and classification", "year": "2007" }, { "authors": "R Grishman; B M Sundheim", "journal": "", "ref_id": "b24", "title": "Message understanding conference-6: A brief history", "year": "1996" }, { "authors": "G Salton; C Buckley", "journal": "Information processing & management", "ref_id": "b25", "title": "Term-weighting approaches in automatic text retrieval", "year": "1988" }, { "authors": "J Ramos", "journal": "Citeseer", "ref_id": "b26", "title": "Using tf-idf to determine word relevance in document queries", "year": "2003" }, { "authors": "M Mintz; S Bills; R Snow; D Jurafsky", "journal": "", "ref_id": "b27", "title": "Distant supervision for relation extraction without labeled data", "year": "2009" }, { "authors": "L Luo; Y.-F Li; G Haffari; S Pan", "journal": "", "ref_id": "b28", "title": "Normalizing flow-based neural process for few-shot knowledge graph completion", "year": "2023" }, { "authors": "G Wan; S Pan; C Gong; C Zhou; G Haffari", "journal": "", "ref_id": "b29", "title": "Reasoning like human: Hierarchical reinforcement learning for knowledge graph reasoning", "year": "2021" }, { "authors": "Y Wang; N Lipka; R A Rossi; A 
Siu; R Zhang; T Derr", "journal": "", "ref_id": "b30", "title": "Knowledge graph prompting for multi-document question answering", "year": "2023" }, { "authors": "Y Tian; H Song; Z Wang; H Wang; Z Hu; F Wang; N V Chawla; P Xu", "journal": "", "ref_id": "b31", "title": "Graph neural prompting with large language models", "year": "2023" }, { "authors": "M Yasunaga; A Bosselut; H Ren; X Zhang; C D Manning; P S Liang; J Leskovec", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b32", "title": "Deep bidirectional language-knowledge graph pretraining", "year": "2022" }, { "authors": "H Touvron; L Martin; K Stone; P Albert; A Almahairi; Y Babaei; N Bashlykov; S Batra; P Bhargava; S Bhosale", "journal": "", "ref_id": "b33", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "2023" }, { "authors": "Y Bang; S Cahyawijaya; N Lee; W Dai; D Su; B Wilie; H Lovenia; Z Ji; T Yu; W Chung", "journal": "", "ref_id": "b34", "title": "A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity", "year": "2023" }, { "authors": "M Agarwal; P Sharma; A Goswami", "journal": "Cureus", "ref_id": "b35", "title": "Analysing the applicability of chatgpt, bard, and bing to generate reasoning-based multiple-choice questions in medical physiology", "year": "2023" }, { "authors": "J Macqueen", "journal": "", "ref_id": "b36", "title": "Some methods for classification and analysis of multivariate observations", "year": "1967" }, { "authors": "S Lloyd", "journal": "IEEE transactions on information theory", "ref_id": "b37", "title": "Least squares quantization in pcm", "year": "1982" }, { "authors": "U ; Von Luxburg", "journal": "Statistics and computing", "ref_id": "b38", "title": "A tutorial on spectral clustering", "year": "2007" }, { "authors": "S Arya; D M Mount; N S Netanyahu; R Silverman; A Y Wu", "journal": "Journal of the ACM (JACM)", "ref_id": "b39", "title": "An optimal algorithm for approximate nearest neighbor searching fixed dimensions", "year": "1998" }, { "authors": "X Zhu; Z Ghahramani; J Lafferty", "journal": "AAAI Press", "ref_id": "b40", "title": "Semi-supervised learning using gaussian fields and harmonic functions", "year": "2003" }, { "authors": "M Ho; A Sharma; J Chang; M Saxon; S Levy; Y Lu; W Y Wang", "journal": "", "ref_id": "b41", "title": "Wikiwhy: Answering and explaining cause-and-effect questions", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 72.86, 394.25, 227.17, 9.81 ], "formula_id": "formula_0", "formula_text": "M tokens KG = 2n(2cT + (m + 2l 1 )(l 2 + 1)) + L F ,(1)" }, { "formula_coordinates": [ 3, 105.61, 598.36, 194.42, 24.8 ], "formula_id": "formula_1", "formula_text": "w(v i , v j ) = exp - ∠(v i , v j ) 2 √ τ i τ j ,(2)" }, { "formula_coordinates": [ 3, 78.26, 631.72, 121.2, 16.28 ], "formula_id": "formula_2", "formula_text": "∠(v i , v j ) = arccos v ⊤ i vj ∥vi∥∥vj ∥" }, { "formula_coordinates": [ 3, 48.96, 672.02, 251.06, 23.18 ], "formula_id": "formula_3", "formula_text": "τ i = ∠(v i , v i K ), where v i K is the K th nearest neighbor to v i )." }, { "formula_coordinates": [ 3, 356.38, 408.82, 206.66, 23.37 ], "formula_id": "formula_4", "formula_text": "W t ij = w(v i , v j ), j = i 1 , i 2 , . . . , i K , 0, otherwise.(3)" }, { "formula_coordinates": [ 4, 54.72, 327.28, 100.85, 20.55 ], "formula_id": "formula_5", "formula_text": "for i = 1,2,. . . ,n do 5:" }, { "formula_coordinates": [ 4, 54.72, 349.68, 221.16, 45.97 ], "formula_id": "formula_6", "formula_text": "V P i 6: if |K| > m then 7: Select a subset K s ⊂ K such that |K s | = m 8:" }, { "formula_coordinates": [ 4, 118.72, 633.16, 181.3, 12.74 ], "formula_id": "formula_7", "formula_text": "W k ij = W k ji = |X ki ∩ X kj |.(4)" }, { "formula_coordinates": [ 4, 317.73, 313.78, 162.52, 22.46 ], "formula_id": "formula_8", "formula_text": "4: Define X k = {x i ∈ X : u(x i ) ≥ 0.5} 5: return X k" }, { "formula_coordinates": [ 5, 53.46, 62.93, 242.07, 17.47 ], "formula_id": "formula_9", "formula_text": "O(N log N + N ndI max + 2KN √ κ) = O(N log N + N n)," }, { "formula_coordinates": [ 5, 313.75, 314.88, 107.04, 33.72 ], "formula_id": "formula_10", "formula_text": "X final ← X 0 ∪ X 1 ∪ X 2 15: K final ← K 1 ∪ K 2 16: return X final , K final" }, { "formula_coordinates": [ 5, 479.53, 492.38, 48.34, 12.2 ], "formula_id": "formula_11", "formula_text": "k 1 + s k 1 • s k 2 ," }, { "formula_coordinates": [ 5, 463.2, 504.34, 99.84, 12.2 ], "formula_id": "formula_12", "formula_text": "t 0 + s k 1 • s t 1 + s k 1 • s k 2 • s t 2 ." }, { "formula_coordinates": [ 5, 316.96, 545.98, 229.48, 12.69 ], "formula_id": "formula_13", "formula_text": "M tokens QA = s k 1 •l 2 •(1+s k 2 )+T •(s t 0 +s k 1 •s t 1 +s k 1 •s k 2 •s t 2 )." }, { "formula_coordinates": [ 7, 53.95, 580.28, 242.21, 12.69 ], "formula_id": "formula_14", "formula_text": "O((s t 0 + s k 1 • s t 1 + s k 1 • s k 2 • s t 2 )N ) + O((s k 1 + s k 1 • s k 2 )M ).(6" }, { "formula_coordinates": [ 7, 105.84, 629.29, 194.18, 12.69 ], "formula_id": "formula_15", "formula_text": "O((s t 0 + s k 1 • s t 1 + s k 1 • s k 2 • s t 2 )N ).(7)" } ]
2023-11-26
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b1", "b8", "b9", "b21", "b24", "b10", "b15", "b16", "b22", "b23" ], "table_ref": [ "tab_0" ], "text": "Nowadays, since Internet penetration is increased to a high level, online shopping has become to a highly convenient manner for consumers. Everyday, millions of users search & browse products, and maybe finally place orders in e-commerce platforms. Consequently, the relevance of products exposed to users triggered by their queries plays a crucial role in users shopping experiences, and also the transaction efficiency. Therefore, it is essential to accurately judge whether the candidate products are relevant to the user intentions for an e-commerce search engine.\nTraditionally, relevance model [2,9,10,22,25] have primarily relied on textual information, such as query and product descriptions (title, attribute, etc) to judge the relevance between queries and products. However the product information also includes images which captures a significant portion of user attention during browsing products, thus it is becoming increasingly essential to incorporate image into relevance modeling. This integration of both image and text data has the potential to provide a more comprehensive understanding of the products and better capture user intent.\nSome core information may be missing from product titles, just like the cases in Table 1. In these cases, if only based on the product titles, it is hard to accurately match the relevant product with search query. However, product images can provide incremental valuable information for relevance judgement. In recent years, there has been a surge of vision-language pre-training models (VLP) [11,16,17,23,24]. As shown in Figure 1(a), these VLP models typically consist of both text and image encoders, and leverage text-image contrastive learning to align representations across different modalities. They have demonstrated remarkable performance in various general tasks, such as image captioning, visual question answering, and text-image retrieval. It is worth noting that these VLP models can extract image features to enhance the representation of products with ambiguous titles, or correct the representation of products with misleading titles in e-commerce relevance task. Current VLP models in e-commerce relevance task often employ a divide-andconquer approach. As shown in Figure 1(b), they first conduct VLP model extracts query, title, and image representations, and then add query-title similarity and query-image similarity as the final relevance score. However, different types of products contain varying weights of information in images and titles. For example, electronic products often have important parameters listed in their titles, while clothing items rely more on visual elements depicted in the images, such as designed style, texture, material, color, etc. In this paper, we propose a novel approach called Query-aware Language Image Fusion Embedding(referred as Query-LIFE) for e-commerce relevance modeling. As shown in Figure 1(a), it integrates query, title, and image into relevance tasks. Firstly, we randomly sample <query,title,image> triplet data from the online user behavior logs as pre-training data. Secondly, unlike the divide-andconquer approach, we propose a dynamic fusion as multi-modal representation of product. 
Hence, we use the inner product between the query and this multi-modal representation to measure their relevance, as shown in Figure 1(c). Thirdly, we adopt supervised contrastive learning and leverage the generation ability of both a multi-modal large model and a large language model to filter out false negative samples. Finally, we fine-tune the proposed model on manually annotated triplet data. Our contributions can be summarized as follows:\n• We propose a query-based multi-modal fusion module that dynamically integrates product titles and images according to the product type.\n• We propose query-based modal alignment, which uses supervised contrastive learning to align the multi-modal representations of products with user queries.\n• We design GenFilt, which leverages the generation capability of large models to filter out false negative samples in contrastive learning.\n• Offline experiments and online A/B tests on Miravia Search demonstrate that Query-LIFE improves both relevance and conversion efficiency." }, { "figure_ref": [], "heading": "RELATED WORK 2.1 Vision-Language Pre-training", "publication_ref": [ "b13", "b0", "b5", "b19", "b10", "b3", "b17", "b16", "b15", "b2", "b14" ], "table_ref": [], "text": "The emergence of pre-training models such as BERT [14], GPT-3 [1], and ViT [6] has led to significant advancements in NLP and CV tasks, achieving state-of-the-art results. Recently, researchers have extended the pre-training approach to the vision-language (VL) domain, resulting in several impressive VL models (e.g., CLIP [20] and ALIGN [11]). These VLP models have demonstrated impressive performance in various multi-modal downstream tasks, such as image captioning, visual question answering, and cross-modal retrieval. They achieve this by leveraging large-scale image-text pairs and employing contrastive learning to align images and text in a joint embedding space. VLP models fall into two categories: object-detector (OD)-based VLP models (e.g., UNITER [4], OSCAR [18]) and end-to-end VLP models (e.g., ALBEF [17], BLIP [16]). OD-based VLP models rely on bounding box annotations during pre-training and require high-resolution images for inference, making them both annotation-expensive and computationally demanding. In contrast, end-to-end VLP models directly feed image patch features into a pre-trained ViT model. This eliminates the need for costly annotations and significantly improves inference speed. As a result, end-to-end VLP models have gained traction in recent research [3, 15]. Therefore, we also adopt the end-to-end VLP design in this paper." }, { "figure_ref": [], "heading": "E-commerce VLP Model", "publication_ref": [ "b7", "b26", "b18", "b11", "b25" ], "table_ref": [], "text": "There are also some VLP models specifically targeted at e-commerce scenarios. FashionBERT [8] was the first vision-language pre-training model in this area, adopting a masked language loss and title-image contrastive learning. Later, Kaleido-BERT [27] further adopted several self-supervised tasks at different scales to focus more on title-image coherence. EI-CLIP [19] proposed an intervention-based entity-aware contrastive learning framework. KG-FLIP [12] proposed a knowledge-guided fashion-domain language-image pre-training framework and utilizes external knowledge to improve pre-training efficiency. MAKE [26] introduces a query encoder and proposes modal adaptation and keyword enhancement modules to improve text-to-multimodal matching. However, these e-commerce VLP models focus on multi-modal retrieval rather than the relevance task, so their losses are designed for retrieval. In contrast, our work focuses on the relevance task, and our motivation is to deeply integrate the VLP model to enhance the product relevance score in e-commerce scenarios."
}, { "figure_ref": [ "fig_2" ], "heading": "METHOD 3.1 Model Architecture", "publication_ref": [ "b5", "b12" ], "table_ref": [], "text": "In this section, we will introduce our model architecture in detail.\nAs shown in Figure 2, The entire model training is divided into internal alignment and external alignment. Internal alignment is used to align the features of product titles and images. External alignment is used to align the relevance between user queries and products. The model architecture consists of three modules: an image preprocessing backbone, a universal modal encoder and GenFilt.\nVisual transformer (ViT) [6] is deployed as our image preprocessing backbone, which divides the image into patches and encodes them as a sequence of embeddings with an additional [𝐶𝐿𝑆] token to represent the global image features. The universal modal encoder includes self-attention layer, cross-attention layer and feed-forward layer. It extracts different modalities features through the interaction of different layers. For the text modal, the input undergoes tokenization and then interacts with the self-attention layer and the feed-forward layer. Similarly, for the image modal, the input image is first processed by the ViT and follows the same process as the text modal. In the case of multimodal features, the text modal interacts with the image modal through the cross-attention layer after the self-attention layer. GenFilt is designed for filtering false negative sample during in-batch sampling.\nInspired by the relevance learning framework proposed by Jiang et al [13], we also propose a three-stage training framework. In the first stage, we leverage a large set of products title-image pairs for contrastive learning, which is pre-trained for internal alignment of products in section 3.2. In the second stage, we sample million <query,title,image> positive pairs from the online clicking log of Miravia Search, then pre-train for external alignent between user queries and products in the section 3.3 and 3.4. In the third stage, we utilize manually labeled <query, title, image> triplet data to finetune the alignment between products and user queries furtherly." }, { "figure_ref": [], "heading": "Vision-Language Pre-training", "publication_ref": [ "b16", "b20" ], "table_ref": [], "text": "The VLP model utilizes the Image-Text Contrastive (ITC) loss to align the image features and the text features, which makes positive image-text pairs have similar representations and reduces the similarity between negative pairs [17,21]. The ITC loss has proven to be an effective objective for enhancing vision and language representation even in the absence of labeled data. The formula is as follows:\nL I T C = - 1 𝑁 𝑁 ∑︁ 𝑖=1 𝑙𝑜𝑔 𝑒𝑥𝑝 (𝑍 𝑇 𝑖 • 𝑍 𝐼 𝑖 /𝜏) 𝑁 𝑗=1 𝑒𝑥𝑝 (𝑍 𝑇 𝑗 • 𝑍 𝐼 𝑗 /𝜏) . (1\n)\nwhere 𝑍 𝑇 and 𝑍 𝐼 are normalized text and image embeddings, 𝑍 𝐼 𝑖 is the 𝑖-th positive image sample in the batch, and 𝑁 and 𝜏 batch size and temperature parameter respectively." }, { "figure_ref": [], "heading": "Query-based Modal Alignment", "publication_ref": [], "table_ref": [], "text": "In e-commerce search scenarios, the relevance of products is highly dependent on the user queries. However, user queries are short and brief, while sellers often add redundant keywords in titles to hack the search engine indexing. Calculating relevance solely based on query and title can easily result in errors in relevance scoring. 
}, { "figure_ref": [], "heading": "Query-based Modal Alignment", "publication_ref": [], "table_ref": [], "text": "In e-commerce search scenarios, the relevance of products is highly dependent on the user query. However, user queries are short and brief, while sellers often add redundant keywords to titles to game the search engine indexing. Calculating relevance solely from the query and title can therefore easily lead to errors in relevance scoring. If misleading words or insufficient information are present in the title, this problem becomes even more severe.\nClearly, there is an information imbalance between the query and the product in the e-commerce relevance task. To mitigate the impact of this problem on relevance scoring, we introduce image information to improve the product representation. In addition, we introduce a title-image fusion representation of the product (referred to as the multi-modal representation, or M representation), defined as the interaction between the product title and image. Unlike the divide-and-conquer approach, we adopt the universal modal encoder to produce M and use the inner product to calculate the relevance between the item and the query. To further align the M representation with user queries, we adopt the query-M contrastive (QMC) loss. Additionally, we incorporate the query-title contrastive (QTC) loss to align the query with the title, and the query-image contrastive (QIC) loss to further align the product image with the query. These loss functions play a crucial role in aligning user queries with the different product modalities and in enhancing relevance scoring.\nUnsupervised contrastive learning uses large amounts of unlabeled data to increase the similarity between positive samples while decreasing the similarity between negative samples. However, in the e-commerce relevance task, the same query often forms positive pairs with several different products. Compared with the triplet loss and unsupervised contrastive learning, supervised contrastive learning introduces labeled negative samples and can accommodate multiple positives per anchor within a mini-batch, which makes it more suitable for the relevance task. The loss is defined as\n$$\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\left[\frac{1}{|P(i)|}\sum_{p\in P(i)}\log\frac{\exp(Q_i\cdot Z^x_p/\tau)}{\sum_{j=1}^{N}\exp(Q_i\cdot Z^x_j/\tau)}\right], \quad (2)$$\nwhere $P(i)$ is the set of positive samples for the $i$-th query in the batch, $Q$ is the normalized query embedding, and $Z^x_p$, $x \in \{I, T, M\}$, $p \in P(i)$, are the normalized image/text/M embeddings of the positive samples. $\tau$ and $N$ are the temperature parameter and batch size. By instantiating the modality $x \in \{I, T, M\}$, this loss yields the QIC, QTC, and QMC losses, respectively." }, { "figure_ref": [], "heading": "Query-based Modal Fusion", "publication_ref": [ "b10" ], "table_ref": [], "text": "We utilize image-text matching (ITM) to learn the M representation of the product. The objective of ITM is to learn a title-image fusion representation that captures the alignment between the image and text modalities. We frame ITM as a binary classification problem in which the model predicts whether an image-text pair is positive or negative. To obtain the matching score, we pass the model's output through a two-class linear classifier, yielding a logit. We employ a hard negative mining strategy [11] and leverage labeled data. The hard negative mining strategy samples negative pairs with higher contrastive similarity within a batch; these informative negative pairs contribute to better alignment between the image and text modalities. The ITM loss can be expressed as\n$$\mathcal{L}_{ITM} = -\mathbb{E}_{(I,T)\sim P}\big[\log P(y_{(I,T)} \mid (I,T))\big], \quad (3)$$\nwhere $P$ is the distribution of in-batch samples, $y_{(I,T)} \in \{0, 1\}$ indicates whether the image $I$ and the text $T$ are matched, and $P(y_{(I,T)} \mid (I,T))$ is the output of the multi-modal embedding followed by a two-class linear classifier. 
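A minimal PyTorch sketch of the supervised contrastive objective in Eq. (2) for one modality is given below. The binary positive mask marking which in-batch products are labeled relevant to each query is an assumed input, and this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def supervised_query_contrastive(q: torch.Tensor, z: torch.Tensor,
                                 pos_mask: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Supervised contrastive loss of Eq. (2) for one modality (image, text, or M).

    q:        (N, D) query embeddings
    z:        (N, D) product embeddings of the chosen modality
    pos_mask: (N, N) bool, pos_mask[i, j] is True if product j is a labeled
              positive for query i (at least one positive per row).
    """
    q = F.normalize(q, dim=-1)
    z = F.normalize(z, dim=-1)
    logits = q @ z.t() / tau                                   # (N, N) query-product similarities
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    mean_pos = (log_prob * pos_mask).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return -mean_pos.mean()


if __name__ == "__main__":
    N, D = 4, 128
    q, z = torch.randn(N, D), torch.randn(N, D)
    mask = torch.eye(N, dtype=torch.bool)
    mask[0, 1] = True                                          # query 0 has a second relevant product
    print(supervised_query_contrastive(q, z, mask).item())
```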
We acknowledge that image-text matching alone may not be sufficient, as different types of products contain varying amounts of information in their images and titles. For example, electronic products often have important parameters listed in their titles, while clothing items rely more on visual attributes such as material, color, and size displayed in the images. To enable the model to learn a more effective fusion representation, we introduce query-M matching (QMM). QMM not only allows the model to extract features from both the image and the title, but also assigns different weights to each modality based on the user query. This enables the model to generate fused representations with a query-aware bias. QMM and ITM share the loss function in Eq. 3. Finally, the total Query-LIFE loss combines the vision-language pre-training (VLP), query-based modal fusion (QMF), and query-based modal alignment (QMA) terms:\n$$\mathcal{L}_{total} = \underbrace{\mathcal{L}_{ITC}}_{VLP} + \underbrace{\mathcal{L}_{ITM} + \mathcal{L}_{QMM}}_{QMF} + \underbrace{\mathcal{L}_{QIC} + \mathcal{L}_{QTC} + \mathcal{L}_{QMC}}_{QMA}. \quad (4)$$" }, { "figure_ref": [ "fig_3" ], "heading": "GenFilt", "publication_ref": [ "b15", "b4" ], "table_ref": [], "text": "Most VLP models adopt in-batch sampling to generate image-title negative pairs. However, in the <query, title, image> triplet data, multiple user queries may be relevant to multiple products, so in-batch sampling introduces false negative samples: similar or even identical queries are mistakenly treated as negatives, which compromises the relevance score. Inspired by CapFilt [16], we propose a method called Generating and Filtering (GenFilt) to address the impact of false negative samples on the training process. It enhances the quality of the training data using the generation capabilities of large models. As illustrated in Figure 3, GenFilt consists of two modules. The first module is generation: we employ a large language model (LLM) and a multi-modal model (InstructBLIP) [5] to extract key text features from the product title and image, respectively. The second module is filtering: we calculate the similarity between the image and text features (I-T), between the query and image features (Q-I), and between the query and text features (Q-T). Finally, we set a threshold $\sigma$ on these similarities: pairs whose query-product similarity ((Q-I + Q-T)/2) and image-text similarity (I-T) exceed the threshold are corrected to positive samples."
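The filtering step can be sketched as a relabeling of the in-batch positive mask. How exactly the similarities are computed from the generated captions is an assumption here, and the function below is illustrative rather than the authors' implementation.

```python
import torch


def genfilt_relabel(sim_qi: torch.Tensor, sim_qt: torch.Tensor, sim_it: torch.Tensor,
                    pos_mask: torch.Tensor, sigma: float = 0.9) -> torch.Tensor:
    """Correct false in-batch negatives in the spirit of GenFilt.

    sim_qi, sim_qt, sim_it: (N, N) query-image, query-text, and image-text similarities,
    computed on the features extracted by the LLM / InstructBLIP (stand-ins here).
    pos_mask: (N, N) bool, current positive labels (e.g. the identity matrix for plain
    in-batch sampling). Pairs whose query-product similarity (Q-I + Q-T)/2 and
    image-text similarity both exceed sigma are flipped to positive.
    """
    query_product = (sim_qi + sim_qt) / 2
    corrected = (query_product > sigma) & (sim_it > sigma)
    return pos_mask | corrected


if __name__ == "__main__":
    N = 4
    eye = torch.eye(N, dtype=torch.bool)
    qp = torch.full((N, N), 0.5); qp[0, 1] = 0.95              # query 0 is also relevant to product 1
    it = torch.full((N, N), 0.95)
    print(genfilt_relabel(qp, qp, it, eye))
```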
}, { "figure_ref": [], "heading": "EXPERIMENTS 4.1 Baselines and Datasets", "publication_ref": [ "b13", "b19", "b15" ], "table_ref": [], "text": "Large-scale Industrial Datasets. There are three training datasets. The first dataset samples 5M product title-image pairs. The second dataset samples 1.3M <query,title,image> positive pairs from the online clicking log of Miravia Search. The third dataset contains 200,000 <query,title,image> labeled samples, of which 30K are selected as the evaluation set with a 1:1 ratio of positives to negatives.\nBaselines. In our experiments, we compare Query-LIFE with several strong baselines, including BERT [14], CLIP-zeroshot [20], BLIP2-zeroshot, and BLIP2-FT (fine-tuned) [16]. CLIP and BLIP2 are two-tower models, while BERT is a single-tower, text-only relevance model: only the <query, title> pairs from the <query, title, image> triplet data are used for its training, so it does not incorporate image information. These baselines have shown impressive performance in their respective domains. For CLIP and BLIP2, we concatenate the query and title as the text input and train the models together with the images. To ensure a fair comparison, all models are trained on the 200,000 <query, title, image> triplet data samples." }, { "figure_ref": [], "heading": "Experiment Implementation", "publication_ref": [ "b6", "b15", "b13" ], "table_ref": [], "text": "We select the state-of-the-art pre-trained vision transformer ViT-g/14 from EVA-CLIP [7] and freeze it. We remove the last layer of the ViT and use the output features of the second-to-last layer, which leads to slightly better performance and matches the BLIP2 [16] setting. The universal modal encoder is composed of 12 transformer layers, each containing a self-attention layer, a cross-attention layer, and a feed-forward layer. We initialize the universal modal encoder with the pre-trained weights of BERT-base [14], whereas the cross-attention layers are randomly initialized. In total, the universal modal encoder contains 188M parameters. We train the model for 10 epochs with a batch size of 512 on 16 NVIDIA A10 GPUs. We use the AdamW optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.98$, and a weight decay of 0.05. For learning rate scheduling, we employ a cosine decay strategy with a maximum learning rate of 1e-4 and a linear warmup of 2k steps. The GenFilt threshold $\sigma$ is set to 0.9." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b24", "b7" ], "table_ref": [], "text": "Offline Evaluation Metrics. We consider manual annotation as the ground truth, where relevance is indicated by the labels 1 (relevant) and 0 (irrelevant), so the task can be treated as a classification problem. In e-commerce scenarios, the Area Under the Curve (AUC) is commonly used as the evaluation metric [25]. Additionally, we use the Precision-Recall curve, which represents the trade-off between precision and recall, with recall on the x-axis and precision on the y-axis. We also employ Recall@K (R@K), which is widely used in search and recommendation systems. To calculate Recall@K, we randomly select 10K unique <query,title,image> triplets from the online clicking log of Miravia Search. For each query, we consider the ground-truth product as well as 100 other randomly sampled products as the candidate rank set. We compute the query → title, query → image, and query → M similarities and sort the candidate rank set accordingly. Recall@K measures the percentage of ground-truth matches that appear in the top-K ranked list [8].\nTable 2: Offline results compared with different baselines, reporting R@5, R@10, R@20, and AUC for Query → Title, Query → Image, and Query → M.
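A small NumPy sketch of these two offline metrics is shown below; tie handling and candidate construction are simplified, and this is not the evaluation code used for the paper.

```python
import numpy as np


def recall_at_k(scores: np.ndarray, k: int) -> float:
    """scores: (Q, 101) similarities of each query to its candidates, where column 0
    is the ground-truth product and columns 1..100 are randomly sampled products.
    Returns the fraction of queries whose ground truth appears in the top-k."""
    better = (scores > scores[:, :1]).sum(axis=1)   # candidates scored above the ground truth
    return float((better < k).mean())


def auc(labels: np.ndarray, scores: np.ndarray) -> float:
    """Pairwise AUC for binary relevance labels (1 relevant / 0 irrelevant)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sims = rng.random((10_000, 101))
    sims[:, 0] += 0.3                               # give the ground truth a head start
    print("R@10:", recall_at_k(sims, 10))
    print("AUC :", auc(rng.integers(0, 2, 1_000), rng.random(1_000)))
```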
\nOnline Evaluation Metrics. We adopt the number of orders (Order_cnt), the number of unique buyers (Order_uv), and GMV (Gross Merchandise Volume, the total value of sales) as online evaluation metrics. These metrics reflect changes in user order transactions.\nHuman Evaluation. We sampled 1,000 queries and selected the top-10 query-item pairs of the exposure page for each query for human relevance evaluation. The relevance of a query-item pair is graded into three levels: Excellent, Fair, and Bad. Excellent is the highest standard: the item's core product, functional attributes, and other attributes perfectly match the query requirements. Fair means the core product matches the query, but some functional attributes are inconsistent. Bad means the core product is different, or the core product is the same but key attributes such as the brand or other industry-critical attributes do not match. We report the proportions of the three grades for the different models." }, { "figure_ref": [ "fig_4", "fig_0" ], "heading": "Offline Experiments", "publication_ref": [], "table_ref": [], "text": "The baseline models calculate the cosine similarity between the query embedding and the title embedding (Query → Title) or the image embedding (Query → Image). In Query-LIFE, we introduce a further score that computes the cosine similarity between the query embedding and the M embedding (Query → M).\nThe AUC of the different models can be found in Table 2. It is worth noting that the AUC of the general VLP models (CLIP, BLIP2) is lower than that of BERT, because these general VLP models only focus on the internal alignment of product titles and images and ignore the imbalance problem between queries and products, while BERT performs external alignment of the query and product. This shows that external alignment improves query-product relevance. In addition, the AUC of query → M proposed by Query-LIFE is 0.021 higher than that of BERT (0.891 vs. 0.871), showing that the relevance score is effectively improved by introducing image information together with the external alignment of query and product.\nMoreover, within Query-LIFE, when comparing the different relevance scores, we observe that the AUC of query → M is also the highest. This indicates that the M representation is more comprehensive and robust than a single modality (text or image). To further evaluate these methods, we also plot the Precision-Recall (PR) curves of the three relevance scores in Figure 4. The PR curve of query → M consistently outperforms the others, indicating the superiority of query → M. The R@K results for the different models are also presented in Table 2. Firstly, the R@K of the general VLP models is lower than that of BERT and Query-LIFE, further illustrating the impact of query-product external alignment on the relevance score. Secondly, at R@10 and R@20, the query → M of Query-LIFE is 0.029 (0.215 vs. 0.186) and 0.035 (0.386 vs. 0.351) higher than the query → title of BERT. Query-LIFE enhances the external alignment between the product and the query, and effectively strengthens the M representation by making full use of image information.\nAs shown in Figure 1(b), previous VLP models usually adopt a divide-and-conquer approach to utilize image information in the e-commerce relevance task. They extract query, title, and image representations, and then add the query → title (Q→T) and query → image (Q→I) similarities as the relevance score. This assumes that product text and image contribute equally to the relevance judgement for all product types. Hence, we compare the performance of query → M with the divide-and-conquer approach. As shown in Table 3, query → M outperforms the divide-and-conquer approach on all metrics, clearly demonstrating the advantage of dynamic weighting for different types of products.\nTable 3: The R@K and AUC of the divide-and-conquer approach and query → M (columns: R@5, R@10, R@20, AUC)." }, { "figure_ref": [], "heading": "Online Experiment", "publication_ref": [], "table_ref": [ "tab_4", "tab_5", "tab_6" ], "text": "Furthermore, we carry out online A/B experiments, with BERT as the current baseline in Miravia Search. 
Annotators are invited to evaluate whether relevance is improved by the proposed method. 10K query-item pairs are sampled from the top-10 exposed items of the BERT and Query-LIFE buckets, respectively. The results are shown in Table 4: compared with BERT, the main improvement is that the Excellent ratio increases by 4.42% and the Bad ratio decreases by 2.79%. Since search relevance is a key aspect of user experience, conversion efficiency also improves as a consequence. As shown in Table 5, all efficiency metrics increase. The results verify that Query-LIFE attracts higher conversions for our platform. Query-LIFE has been deployed online and brings stable conversion improvements for Miravia Search.\nQMF is designed to extract information with different weights depending on the product type. For verification, we sample several different types of products and use Query-LIFE to extract the text, image, and M representations. We then calculate the similarity between the M representation and the text and image representations, respectively, and normalize them. As shown in Table 6, without QMF, the model assigns equal weights to the image and text representations for all product types. Query-LIFE, in contrast, dynamically adjusts the weights assigned to image and text to better represent the product. In particular, for dresses, the model pays more attention to the image than to the text: compared with the title, the image describes the material, size, pattern, and other visual characteristics of a dress more intuitively. Similar behavior is observed for other product types. In addition, without the QMF module, the AUC of query → title, query → image, and query → M drops by 0.009 (from 0.865 to 0.856), 0.008 (from 0.871 to 0.863), and 0.014 (from 0.891 to 0.877), respectively. Therefore, giving more weight to the image for such products is justified."
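One way the modality weights reported in Table 6 could be computed from the learned embeddings is sketched below. The paper only states that the M-to-text and M-to-image similarities are normalized, so the softmax used here is an assumption, and this is not the authors' code.

```python
import torch
import torch.nn.functional as F


def modality_weights(z_m: torch.Tensor, z_text: torch.Tensor, z_image: torch.Tensor) -> torch.Tensor:
    """Compare the fused (M) representation with the text and image representations
    and normalize the two similarities into weights, as in the Table 6 analysis."""
    sim_text = F.cosine_similarity(z_m, z_text, dim=-1)
    sim_image = F.cosine_similarity(z_m, z_image, dim=-1)
    # softmax is one possible normalization; a simple sum-normalization would also work
    return torch.softmax(torch.stack([sim_text, sim_image], dim=-1), dim=-1)


if __name__ == "__main__":
    z_m, z_t, z_i = torch.randn(3, 256), torch.randn(3, 256), torch.randn(3, 256)
    print(modality_weights(z_m, z_t, z_i))   # rows: [text weight, image weight] per product
```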
from titles and images through LLM and InstructBLIP. We adopt these prompts:
• LLM: As a product search engine, please understand the input of the product title, extract the core word, material, brand, color, and model parameters from the title and provide structured output. The input title: title. To solve the problem, please execute the following steps: Firstly, understand the input product title and extract the vocabulary that describes the main product as the core word. Secondly, analyze the main material of the product and replace it with "NULL" if none is specified. Thirdly, analyze the brand of the product and replace it with "NULL" if none is specified. Fourthly, analyze the color of the product and replace it with "NULL" if none is specified. Finally, output the structured parsing results. • InstructBLIP: Briefly summarize the items in the picture in a few words." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel approach for learning multimodal representations of products for e-commerce search relevance.
We design a query-based multi-modal fusion module that effectively generates dynamic fusion representations incorporating product images and text according to the product type. We also propose a query-based modal alignment module that utilizes supervised contrastive learning to align the multi-modal representations of products under the guidance of the search query. Additionally, we propose the GenFilt module, which leverages a large language model (LLM) and image-to-text generation to extract product information and address the problem of false negative sampling in contrastive learning. Experimental results demonstrate that Query-LIFE outperforms existing baselines on the relevance task. Moreover, Query-LIFE has been successfully deployed in Miravia Search, leading to improvements in both search relevance and conversion rate." } ]
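To make the GenFilt step above more concrete, the following minimal sketch shows one way the generated attribute strings and captions could be used to relabel false negatives before the in-batch contrastive loss, mirroring the supervised contrastive objective of Eq. (2). The encoder producing `query_emb` and `gen_emb`, the 0.8 threshold, and the function names are illustrative assumptions, not the exact implementation.

```python
import torch

# Sketch of GenFilt-style false-negative correction. Assumption: `query_emb` and
# `gen_emb` are L2-normalized embeddings of the queries and of the LLM/InstructBLIP
# generated product descriptions for one in-batch pairing, both shaped [N, D].
def correct_false_negatives(query_emb: torch.Tensor,
                            gen_emb: torch.Tensor,
                            threshold: float = 0.8) -> torch.Tensor:
    """Return an [N, N] 0/1 positive mask for in-batch contrastive learning.

    The diagonal (the clicked <query, product> pairs) is always positive; any
    off-diagonal pair whose query/generated-text similarity exceeds the
    threshold is relabeled as positive instead of being kept as a negative.
    """
    sim = query_emb @ gen_emb.t()                        # [N, N] cosine similarities
    pos_mask = torch.eye(sim.size(0), device=sim.device)
    pos_mask = torch.maximum(pos_mask, (sim > threshold).float())
    return pos_mask

def supervised_infonce(query_emb, item_emb, pos_mask, tau=0.07):
    """In-batch supervised contrastive loss over the corrected positive mask."""
    logits = (query_emb @ item_emb.t()) / tau            # [N, N]
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_mask.sum(dim=1)
    return loss.mean()
```

In this reading, an in-batch product whose generated description is highly similar to the query stops being treated as a negative, which is exactly the failure mode GenFilt is designed to remove.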
The relevance module plays a fundamental role in e-commerce search, as it is responsible for selecting relevant products from thousands of items based on user queries, thereby enhancing user experience and efficiency. The traditional approach models relevance based on product titles and queries, but the information in titles alone may be insufficient to describe the products completely. A more general optimization approach is to further leverage product image information. In recent years, vision-language pre-training models have achieved impressive results in many scenarios; they leverage contrastive learning to map both textual and visual features into a joint embedding space. In e-commerce, a common practice is to fine-tune the pre-trained model on e-commerce data. However, the performance is sub-optimal because vision-language pre-training models lack alignment specifically designed for queries. In this paper, we propose a method called Query-LIFE (Query-aware Language Image Fusion Embedding) to address these challenges. Query-LIFE utilizes query-based multimodal fusion to effectively incorporate the image and title according to the product type. Additionally, it employs query-aware modal alignment to enhance the accuracy of the comprehensive representation of products. Furthermore, we design GenFilt, which utilizes the generation capability of large models to filter out false negative samples and further improve the overall performance of the contrastive learning task in the model. Experiments have demonstrated that Query-LIFE outperforms existing baselines. We have conducted ablation studies and human evaluations to validate the effectiveness of each module within Query-LIFE. Moreover, Query-LIFE has been deployed on Miravia Search 1 , resulting in improvements in both relevance and conversion efficiency.
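As a side-by-side illustration of the two scoring schemes compared in the offline experiments (the fused query→M score versus the divide-and-conquer sum of query→title and query→image), a minimal sketch is given below; the embedding inputs are placeholders for whatever encoder produces them.

```python
import torch.nn.functional as F

def cosine(a, b):
    # Cosine similarity between two batches of embeddings, each shaped [N, D].
    return F.cosine_similarity(a, b, dim=-1)

# Divide-and-conquer scoring used by prior VLP baselines: the text and image
# contributions are added with a fixed, equal weight for every product type.
def score_divide_and_conquer(q_emb, title_emb, image_emb):
    return cosine(q_emb, title_emb) + cosine(q_emb, image_emb)

# Query-LIFE scoring: a single fused multimodal embedding M is compared against
# the query, so the text/image balance is decided per product by the fusion
# module rather than fixed inside the score.
def score_query_m(q_emb, m_emb):
    return cosine(q_emb, m_emb)
```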
Query-LIFE: Query-aware Language Image Fusion Embedding for E-Commerce Relevance
[ { "figure_caption": "Figure 1 :1Figure 1: (a) The relationship of relevance model, VLP model and Query-LIFE. (b) VLP model divide-and-conquer approach for relevance task. (c) Query-LIFE fusion approach for relevance task.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "We propose query-aware multi-modal fusion tailored for e-commerce relevance task. Instead of divide-and-conquer approach, Query-LIFE proposes multi-modal representation and adopt <query, multi-modal representation> similarity as the relevance score. It effectively generates dynamic fusion representations that incorporate product images and text based on the product types. • We also propose query-based modal alignment module utilizes supervised contrastive learning to align the multi-modal representation of products guided by the search query. improves the representation quality and enhances the matching between user queries and dynamic fusion representations of products. • Generating and Filtering (GenFilt) is designed to mitigate the impact of false negative samples during the training process. By generating additional positive samples and filtering out false negative ones, it helps to improve the quality of the training data, leading to enhanced model performance and robustness. • We carried out extensive experiments on both offline and online A/B experiments, which have shown that it outperforms existing VLP models and relevance models in e-commerce relevance task. Our model has been successfully deployed in Miravia Search.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Overview of Query-LIFE. The overall training process is divided into internal aligment and external alignment.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Item3Figure 3 :3Figure 3: Overview of GenFilt. GenFilt adopts LLM and InstructBLIP for feature generation. Then compare the similarity of query-product pairs and correct the false negative pairs. In addition, GenFilt can also calculate the similarity of image-title pairs and correct false negative pairs.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Precision-Recall curve of three types of representation", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Embedding space visualization of different models. 
The purple cross is the different modal representation of the negative samples, the blue dot is the query representation, the orange square is the M representation of the positive samples, the green five-pointed star is the title representation of the positive samples, and the red diamond is the image representation of the positive samples.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "QueryImageTitlemen's winter coatKoroshi Jacket in twocolors, water-repellent,with hood, for Menair-conditioningSplit 1x1 MUNDO-CLIMA MUPR12 H113027frig R32golden necklaceElegant necklace withcol-layered pearl gent", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results of human evaluation.", "figure_data": "ExcellentFairBadQuery-LIFE+4.42%+2.17% -2.79%", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Online A/B tests of Query-LIFE.", "figure_data": "Order_cnt Order_uv GMVQuery-LIFE+4.11%+3.06%+3.19%5 ABLATION STUDIES5.1 Effect of QMA and QMFIn this section, we will demonstrate what important role QMA andQMF play for Query-LIFE. As listed in Table 2, Query-LIFE w/oQMA, Query-LIFE w/o QMF are compared with Query-LIFE. It isseen that QMA significantly improves both R@K and AUC by en-hancing the similarity between query and products. Without QMAmodule, AUC of query→title, query→image and query→M de-creases 0.124 (0.865-0.741), 0.066 (0.871-0.805) and 0.107 (0.891-0.784)", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "The weight of different models on image information.", "figure_data": "Dress Monitor PhoneQuery-LIFE0.710.560.35Query-LIFE w/o QMF 0.490.520.51", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "GenFilt Results for some products.", "figure_data": "ImageTitle", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" } ]
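The modality weights reported in Table 6 are described as normalized similarities between the fused representation M and the separate text and image representations. A minimal sketch of that diagnostic is given below; the use of cosine similarity and the sum-to-one normalization are assumptions, since the exact normalization is not spelled out.

```python
import torch
import torch.nn.functional as F

def modality_weights(m_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     image_emb: torch.Tensor):
    """Diagnostic from Table 6: how much the fused embedding M leans on each modality.

    Computes cosine similarity of M to the text and image embeddings and
    normalizes the two values to sum to one (absolute values and sum-to-one
    normalization are assumptions for the sketch).
    """
    sim_text = F.cosine_similarity(m_emb, text_emb, dim=-1)    # [N]
    sim_image = F.cosine_similarity(m_emb, image_emb, dim=-1)  # [N]
    total = sim_text.abs() + sim_image.abs() + 1e-8
    return sim_image.abs() / total, sim_text.abs() / total     # (image weight, text weight)
```

Under this reading, a dress whose image weight comes out around 0.7, as in Table 6, is simply a product whose fused embedding sits much closer to its image embedding than to its title embedding.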
Hai Zhu; Yuankai Guo; Ronggang Dou; Kai Liu
[ { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Wei-Cheng Chang; Daniel Jiang; Hsiang-Fu Yu; Choon Hui Teo; Jiong Zhang; Kai Zhong; Kedarnath Kolluri; Qie Hu; Nikhil Shandilya; Vyacheslav Ievgrafov", "journal": "", "ref_id": "b1", "title": "Extreme multi-label learning for semantic matching in product search", "year": "2021" }, { "authors": "Kezhen Chen; Qiuyuan Huang; Yonatan Bisk; Daniel Mcduff; Jianfeng Gao", "journal": "", "ref_id": "b2", "title": "Kb-vlp: Knowledge based vision and language pretraining", "year": "2021" }, { "authors": "Yen-Chun Chen; Linjie Li; Licheng Yu; Ahmed El Kholy; Faisal Ahmed; Zhe Gan; Yu Cheng; Jingjing Liu", "journal": "", "ref_id": "b3", "title": "UNITER: UNiversal Image-TExt Representation Learning", "year": "2020" }, { "authors": "Wenliang Dai; Junnan Li; Dongxu Li; Anthony Meng; Huat Tiong; Junqi Zhao; Weisheng Wang; Boyang Li; Pascale Fung; Steven Hoi", "journal": "", "ref_id": "b4", "title": "InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning", "year": "2023" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "ICLR", "ref_id": "b5", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Yuxin Fang; Wen Wang; Binhui Xie; Quan Sun; Ledell Wu; Xinggang Wang; Tiejun Huang; Xinlong Wang; Yue Cao", "journal": "", "ref_id": "b6", "title": "Eva: Exploring the limits of masked visual representation learning at scale", "year": "2023" }, { "authors": "Dehong Gao; Linbo Jin; Ben Chen; Minghui Qiu; Peng Li; Yi Wei; Yi Hu; Hao Wang", "journal": "", "ref_id": "b7", "title": "Fashionbert: Text and image matching with adaptive loss for crossmodal retrieval", "year": "2020" }, { "authors": "Baotian Hu; Zhengdong Lu; Hang Li; Qingcai Chen", "journal": "Advances in neural information processing systems", "ref_id": "b8", "title": "Convolutional neural network architectures for matching natural language sentences", "year": "2014" }, { "authors": "Po-Sen Huang; Xiaodong He; Jianfeng Gao; Li Deng; Alex Acero; Larry Heck", "journal": "", "ref_id": "b9", "title": "Learning deep structured semantic models for web search using clickthrough data", "year": "2013" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig", "journal": "PMLR", "ref_id": "b10", "title": "Scaling up visual and visionlanguage representation learning with noisy text supervision", "year": "2021" }, { "authors": "Qinjin Jia; Yang Liu; Daoping Wu; Shaoyuan Xu; Huidong Liu; Jinmiao Fu; Roland Vollgraf; Bryan Wang", "journal": "", "ref_id": "b11", "title": "KG-FLIP: Knowledge-guided Fashion-domain Language-Image Pre-training for E-commerce", "year": "2023" }, { "authors": "Yunjiang Jiang; Yue Shang; Rui Li; Wen-Yun Yang; Guoyu Tang; Chaoyi Ma; Yun Xiao; Eric Zhao", "journal": "", "ref_id": "b12", "title": "A unified neural network approach to e-commerce relevance learning", "year": "2019" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton ; Lee Kristina; Toutanova ", "journal": "", "ref_id": "b13", "title": 
"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "Wonjae Kim; Bokyung Son; Ildoo Kim", "journal": "PMLR", "ref_id": "b14", "title": "Vilt: Vision-and-language transformer without convolution or region supervision", "year": "2021" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b15", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Junnan Li; Ramprasaath Selvaraju; Akhilesh Gotmare; Shafiq Joty; Caiming Xiong; Steven Chu; Hong Hoi", "journal": "Advances in neural information processing systems", "ref_id": "b16", "title": "Align before fuse: Vision and language representation learning with momentum distillation", "year": "2021" }, { "authors": "Xiujun Li; Xi Yin; Chunyuan Li; Pengchuan Zhang; Xiaowei Hu; Lei Zhang; Lijuan Wang; Houdong Hu; Li Dong; Furu Wei", "journal": "Springer", "ref_id": "b17", "title": "Oscar: Object-semantics aligned pre-training for vision-language tasks", "year": "2020-08-23" }, { "authors": "Haoyu Ma; Handong Zhao; Zhe Lin; Ajinkya Kale; Zhangyang Wang; Tong Yu; Jiuxiang Gu; Sunav Choudhary; Xiaohui Xie", "journal": "", "ref_id": "b18", "title": "Ei-clip: Entity-aware interventional contrastive learning for e-commerce cross-modal retrieval", "year": "2022" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b19", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b20", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Stephen Robertson; Hugo Zaragoza", "journal": "Foundations and Trends® in Information Retrieval", "ref_id": "b21", "title": "The probabilistic relevance framework: BM25 and beyond", "year": "2009" }, { "authors": "Wenhui Wang; Hangbo Bao; Li Dong; Johan Bjorck; Zhiliang Peng; Qiang Liu; Kriti Aggarwal; Owais Khan Mohammed; Saksham Singhal; Subhojit Som", "journal": "", "ref_id": "b22", "title": "Image as a Foreign Language: BEiT Pretraining for Vision and Vision-Language Tasks", "year": "2023" }, { "authors": "Zirui Wang; Jiahui Yu; Adams Wei Yu; Zihang Dai; Yulia Tsvetkov; Yuan Cao", "journal": "", "ref_id": "b23", "title": "SimVLM: Simple Visual Language Model Pretraining with Weak Supervision", "year": "2021" }, { "authors": "Shaowei Yao; Jiwei Tan; Xi Chen; Keping Yang; Rong Xiao; Hongbo Deng; Xiaojun Wan", "journal": "", "ref_id": "b24", "title": "Learning a product relevance model from click-through data in e-commerce", "year": "2021" }, { "authors": "Xiaoyang Zheng; Zilong Wang; Sen Li; Ke Xu; Tao Zhuang; Qingwen Liu; Xiaoyi Zeng", "journal": "", "ref_id": "b25", "title": "MAKE: Vision-Language Pre-training based Product Retrieval in Taobao Search", "year": "2023" }, { "authors": "Mingchen Zhuge; Dehong Gao; Deng-Ping Fan; Linbo Jin; Ben Chen; Haoming Zhou; Minghui Qiu; Ling Shao", "journal": "", "ref_id": "b26", "title": "Kaleido-bert: Vision-language pre-training on fashion domain", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 333.92, 284.32, 4.46, 7.7 ], "formula_id": "formula_0", "formula_text": "•" }, { "formula_coordinates": [ 3, 357.76, 281.46, 197.81, 24.96 ], "formula_id": "formula_1", "formula_text": "L I T C = - 1 𝑁 𝑁 ∑︁ 𝑖=1 𝑙𝑜𝑔 𝑒𝑥𝑝 (𝑍 𝑇 𝑖 • 𝑍 𝐼 𝑖 /𝜏) 𝑁 𝑗=1 𝑒𝑥𝑝 (𝑍 𝑇 𝑗 • 𝑍 𝐼 𝑗 /𝜏) . (1" }, { "formula_coordinates": [ 3, 555.57, 289.73, 3.17, 7.94 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 4, 66.11, 466.31, 228.47, 33.91 ], "formula_id": "formula_3", "formula_text": "L = - 1 𝑁 𝑁 ∑︁ 𝑖=1        1 |𝑃 (𝑖)| ∑︁ 𝑝 ∈𝑃 (𝑖 ) 𝑙𝑜𝑔 𝑒𝑥𝑝 (𝑄 𝑖 • 𝑍 𝑥 𝑝 /𝜏) 𝑁 𝑗=1 𝑒𝑥𝑝 (𝑄 𝑖 • 𝑍 𝑥 𝑗 /𝜏)        .(2)" }, { "formula_coordinates": [ 4, 364.8, 454.45, 190.77, 9.43 ], "formula_id": "formula_4", "formula_text": "L 𝐼𝑇 𝑀 = -𝐸 (𝐼,𝑇 )∼𝑃 [𝑙𝑜𝑔{𝑃 (𝑦 (𝐼,𝑇 ) |(𝐼,𝑇 )}] (3" }, { "formula_coordinates": [ 4, 555.57, 454.94, 3.17, 7.94 ], "formula_id": "formula_5", "formula_text": ")" }, { "formula_coordinates": [ 4, 324.44, 682.93, 234.3, 26.05 ], "formula_id": "formula_6", "formula_text": "L 𝑡𝑜𝑡𝑎𝑙 = L 𝐼𝑇𝐶 𝑉 𝐿𝑃 + L 𝐼𝑇 𝑀 + L 𝑄𝑀𝑀 𝑄𝑀𝐹 + L 𝑄𝐼𝐶 + L 𝑄𝑇𝐶 + L 𝑄𝑀𝐶 𝑄𝑀𝐴 .(4)" } ]
2023-11-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b36", "b6", "b9", "b20", "b42", "b43", "b44", "b14", "b42", "b44", "b20", "b43" ], "table_ref": [], "text": "Salient Object Detection (SOD) can locate the most salient objects in an image, and it is widely used as an important preprocessing method for many vision tasks such as image/video segmentation [11,37], video compression [7], and visual tracking [10]. Depending on the data to be processed, SOD can be divided into 2D (RGB) SOD and 3D (RGB-D, RGB-T) SOD. 3D SOD addresses challenging scenarios by introducing depth maps or thermal maps paired with RGB maps.\nGiven the detection requirements for RGB, RGB-D, and RGB-T data, current models [2,9,19,21,[43][44][45] can only handle a single type of data, and it cannot handle these three types of data simultaneously. Therefore, it is necessary to develop a model framework that can simultaneously meet the detection requirements for these three types of data. The model adopting this framework only needs to be trained once, and then it can use the same set of weight parameters to detect RGB, RGB-D, and RGB-T data.\nIn general, RGB SOD [15,43,45] requires only one network to extract RGB features, while RGB-D or RGB-T SOD [19, 21,44] requires two networks to extract features from two modalities separately. The 3D SOD model differs from the 2D SOD model in that it includes an additional feature learning network and a multi-modal feature fusion module. To achieve the detection of three types of data in one model, this model needs to be optimized based on a 3D SOD framework to effectively handle RGB, RGB-D, and RGB-T data. When processing RGB data, the multi-modal feature fusion module has less impact on the RGB saliency prediction results because it processes learned RGB features, which is equivalent to fusing multiple RGB features. Therefore, it is only necessary to consider feature extraction methods that can be used for RGB, RGB-D, and RGB-T data to simplify the model structure while ensuring performance.\nIn the training process, we consider the depth map or thermal map as a special kind of RGB map and merge them into an input batch with the RGB maps in an orderly manner, and extract the features with one transformer network as the backbone network, as shown in Fig. 1. The network learns features from different modalities by sharing weights, and replaces the Batchnorm with the Layernorm thus avoiding the interference of batch normalisation in learning features from different modalities. As shown in Fig. 2, a feature is obtained by concatenating RGB and Depth features in the Batch dimension. When BatchNorm is applied to this feature, the information from both modalities interferes with each other, whereas when LayerNorm is used, there is no interference. In summary, using a single transformer network with shared weights to extract multimodal features can prevent negative interference between the modalities, ensuring performance, and simplifying the model structure, effectively saving parameters. Therefore, this feature extraction method is suitable for single-modal RGB information as well as dual-modal RGB-D and RGB-T information.\nWe build a lightweight SOD model based on the proposed model framework, which is called AiOSOD due to its ability to perform saliency detection for all three data types in one model. Due to the large training sets of the three data, AiOSOD is designed to be lightweight in order to validate the experimental results faster. 
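The normalization argument illustrated in Fig. 2 can be reproduced with a small toy experiment: when RGB and depth features are concatenated along the batch dimension, BatchNorm pools statistics across both modalities, whereas LayerNorm normalizes each token independently. The feature shapes and statistics below are purely illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
C = 64
# Toy token features for one RGB sample and one depth sample, deliberately
# given very different statistics to mimic the modality gap.
rgb_feat = torch.randn(1, 196, C)
depth_feat = torch.randn(1, 196, C) * 3.0 + 5.0

# Concatenate along the batch dimension, as in the proposed framework.
x = torch.cat([rgb_feat, depth_feat], dim=0)               # [2, 196, C]

# BatchNorm normalizes each channel with statistics pooled over the whole batch,
# so RGB and depth statistics are mixed together.
bn = nn.BatchNorm1d(C)
bn_out = bn(x.transpose(1, 2)).transpose(1, 2)             # BatchNorm1d expects [N, C, L]
print(bn_out[0].mean().item(), bn_out[1].mean().item())    # opposite, nonzero means

# LayerNorm normalizes every token over its own channels, so each modality is
# normalized independently of what else is in the batch.
ln = nn.LayerNorm(C)
ln_out = ln(x)
print(ln_out[0].mean().item(), ln_out[1].mean().item())    # both ~0 regardless of batch mix
```

Because the two modalities deliberately have different means and variances here, the BatchNorm outputs stay entangled with the batch composition, while the LayerNorm outputs do not, which is why a shared LayerNorm-based transformer backbone can take mixed batches safely.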
AiOSOD employs a lightweight T2T-ViT-10 network [38] as the encoder and piggybacks on a lightweight decoder.\nIn conclusion, this paper has the following contributions: • This work is the first time to consider all three (RGB, RGB-D, and RGB-T) saliency detection tasks all in one model. For the task of saliency detection of three different types of data, we introduces a novel model framework. This innovative model framework provides a unified solution for three types of data. It means that one weight file obtained through once training can be used universally for RGB, RGB-D, and RGB-T SOD. And the framework is successfully migrated in some 3D SOD models.\n• Proposed framework employs a single-stream transformer network with shared weight parameters for extracting multi-modal features. This feature extraction method ensures comprehensive training for all three data types while preventing interference between multi-modal features. This not only provides a unified solution for the detection of all three data types but also ensures precision in detecting these data types. In comparison to models trained on only a single type of data, this framework achieves close performance and even make breakthroughs on some datasets. The effectiveness of this framework has been validated by our proposed model (AiOSOD) . • We propose a simple general model called AiOSOD to validate the proposed model framework. Thanks to the joint training of the three types of data and the feature extraction method, even though AiOSOD is a lightweight model, it achieves state-of-the-art performance in all three saliency detection tasks." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b28", "b2", "b20", "b40", "b16", "b2" ], "table_ref": [], "text": "CNN-based SOD methods have achieved many impressive results. In recent years, transformer networks have evolved in the field of computer vision, demonstrating excellent performance. The transformer structure is commonly used to model global remote dependencies between word sequences in machine translation tasks [29]. The selfattention mechanism is the core idea of transformer, which establishes associations between different positions in a sequence by calculating the correlation between a query and a key. Transformer captures long-distance dependencies by stacking multiple layers of self-attention layers. Vision Transformer (ViT) [4] is the first application of the transformer structure to image classification tasks, being able to understand images from a holistic point of view. The ViT is suitable for a variety of vision tasks such as highlevel classification and low-level dense prediction. Salient object detection belongs to the pixel-level dense prediction task. Therefore, more and more saliency detection models [13,19,21] have adopted the transformer structure to capture the global correlation information in images. JL-DCF [9] first introduces siamese networks into RGB-D SOD, which employs a single convolutional network with shared parameters to extract common features of the two modalities. However, during training, the batch of JL-DCF can only be set to 1. If the batch size exceeds 1, its performance will decrease significantly. CSNet [41] employs a large kernel convolutional neural network as a siamese network to improve the performance. CCFENet [17] introducs a cross-modal interaction module on the basis of weight sharing to enhance the siamese network's ability to learn features. 
However, neither of CSNet and CCFENet solve the problems of JL-DCF. In contrast, SiaTrans [13] addresses this issue. It finds that multimodal features inter-fering with each other due to the effects of batch normalisation in multiple batches of training. SiaTrans adopts the transformer network as the siamese network, which conquers this drawback, makes full use of the computational power of the GPU and improves the training efficiency. Based on this property of the transformer, we can infer that the transformer network is suitable for the joint training of RGB, RGB-D, and RGB-T data." }, { "figure_ref": [], "heading": "Proposed model", "publication_ref": [], "table_ref": [], "text": "In order to achieve saliency detection across three types of data, we propose a lightweight AiOSOD model (as illustrated in Fig. 3). The model is mainly composed of three components: an encoder, a token fusion module (TFM), and a decoder. The encoder employs the T2T-ViT-10 network with 5.31M parameters. The token fusion module facilitates high-level cross-modal information integration, containing 0.29M parameters. The decoder, structured with convolutional architecture, consists of three feature fusion modules (FFM) and a multi-level feature fusion module (MFFM), totally containing 0.64M parameters." }, { "figure_ref": [], "heading": "Encoder", "publication_ref": [ "b31", "b19" ], "table_ref": [], "text": "Considering the performance and parameters of networks like ViT [4], T2T-ViT [38], PVTv2 [32], and Swin Transformer [20], we adopt the T2T-ViT-10 in the work [38] as the backbone network for our AiOSOD model. T2T-ViT [38] is an improvement for the lack of local modeling capability of ViT [4]. T2T-ViT incorporates the Tokens-to-Token (T2T) operation, whiche merges adjacent tokens into new tokens, effectively reducing token length and enabling local dependency modeling within images.\nThe token sequence T 0 ∈ R l×c derivs from input image I ∈ R h×w×c after undergoing transformations, and serves as the input of the backbone network. Through successive transformer operations and Tokens-to-Token (T2T) processes, multilevel tokens sequences, namely T 1 , T 2 , and T 3 , are generated.\nT i = Backbone (T 0 ) ,(1)\nwhere i = 1, 2, 3. In order to reduce the parameters, T i is subsequently dimensionally transformed to be 64. After that T i needs to be splited in the order of concatenating of input I to obtain T i1 , T i2 sequentially, as shown in Fig. 3.\nT i1 , T i2 = Split (T i ) .(2)\nT i1 and T i2 need to be reshaped into four-dimensional tensors to serve as inputs for the decoder consisting of a convolutional architecture. Additionally, T 31 and T 32 also serve as inputs of the tokens fusion module. The tokens fusion module (TFM), as depicted in Fig. 3, is employed to integrate top-level tokens information, with a parameter count of 0.288M. Generally, with equivalent parameters, the computational complexity of a transformer block significantly exceeds that of a convolutional block. Therefore, considering computation and performance comprehensively, AiOSOD only fuses the top-level tokens of the two modalities.\nThe \"scaled dot-product attention\" [4] in the multi-head attention can be written as:\nAttention(Q, K, V ) = softmax(QK T / d k )V, (3) where Q is Query, K is Key, V is Value, d k\nis the length of the Key vector. AiOSOD may have three types of inputs, which are (RGB, RGB) pairs, (RGB, depth) pairs, and (RGB, thermal) pairs. When the input consists of RGB and depth modalities, according to Eq. 
(3), TFM facilitates cross-modal interaction through the following process:
$$\mathrm{Attention}(Q_R, K_D, V_D) = \mathrm{softmax}\!\left(Q_R K_D^{T} / \sqrt{d_k}\right)V_D,\qquad \mathrm{Attention}(Q_D, K_R, V_R) = \mathrm{softmax}\!\left(Q_D K_R^{T} / \sqrt{d_k}\right)V_R. \tag{4}$$
Similarly, RGB-T flows can be processed according to Eq. (4). When the input involves two RGB flows, the Key and Value of the two flows are the same, and TFM essentially reinforces self-attention for the RGB tokens individually. Exactly because TFM can effectively handle these different pairs of information flows, which makes it highly suitable for AiOSOD, we adopt TFM to fuse the top-level tokens." }, { "figure_ref": [], "heading": "Decoder", "publication_ref": [ "b32" ], "table_ref": [], "text": "The decoder of the AiOSOD model consists of three feature fusion modules (FFM) and one multi-level feature fusion module (MFFM). The schematic diagrams of FFM and MFFM are provided in Fig. 3. Each FFM contains 0.071M parameters, while the MFFM contains 0.141M. The FFMs within the decoder not only serve as the components that aggregate features from high to low levels in the decoder network but also effectively fuse features. Each FFM module is composed of a CBAM attention [33] and two convolutional blocks. Through element-wise addition and multiplication, the FFM enhances salient features.
The multi-level feature fusion module (MFFM) is employed to fuse the three distinct-level features obtained from the FFMs, enhancing the accuracy of predictions. These features, denoted as F1, F2, and F3 respectively, are upsampled to the size of 56 × 56, matching the dimensions of F3. Subsequently, the three features are fused using channel-wise attention and convolutional computations within the MFFM to achieve multi-level feature fusion. " }, { "figure_ref": [], "heading": "Implementation details", "publication_ref": [ "b30", "b24", "b4", "b15" ], "table_ref": [], "text": "(Figure 3. Framework of our proposed AiOSOD.)
Training dataset. AiOSOD is trained jointly on three types of data. The training dataset consists of the following subsets: the RGB dataset DUTS-TR [31] with 10,553 images, the RGB-T dataset VT5000 [28] with 2,500 image pairs, and the RGB-D datasets NJUD [14] with 1,485 image pairs, NLPR [24] with 700 image pairs, and DUTLF-Depth [25] with 800 image pairs.
Loss function. We utilize the cross-entropy loss function:
$$L(P, G) = -\sum_{i}\left[g_i \log(p_i) + (1 - g_i)\log(1 - p_i)\right], \tag{5}$$
where $P \in [0, 1]^{224 \times 224}$ and $G \in [0, 1]^{224 \times 224}$ represent the prediction map and ground truth (GT) map, respectively, and $g_i \in G$, $p_i \in P$ denote individual pixel values.
Training settings. Our proposed model is implemented in PyTorch [23] and trained on an RTX 2080Ti (11GB). We resize each image to 256 × 256 pixels and then randomly crop 224 × 224 image regions as the model input. We employ the Adam optimizer [16] with an initial learning rate of 0.0001 and a batch size of 16. The training process includes 300,000 steps. The learning rate is reduced by a factor of 10 at steps 100,000 and 200,000." }, { "figure_ref": [], "heading": "Benchmarking evaluation result", "publication_ref": [], "table_ref": [], "text": "In this study, we conduct benchmark tests on RGB, RGB-D, and RGB-T datasets. We compare AiOSOD against a total of 16 state-of-the-art (SOTA) models on these datasets." }, { "figure_ref": [], "heading": "Evaluation metrics", "publication_ref": [ "b0", "b0", "b4", "b5" ], "table_ref": [], "text": "MAE. The mean absolute error (MAE) [1] represents the average absolute pixel difference between the saliency prediction map P and the ground truth map G, and it is calculated as:
$$MAE = \frac{1}{W \times H}\sum_{x=1}^{W}\sum_{y=1}^{H}\left|P(x, y) - G(x, y)\right|, \tag{6}$$
where W and H represent the width and height of the saliency map, respectively. A smaller error indicates a closer match between the prediction and the ground truth, and thus a more accurate prediction. F-Measure. The F-measure [1] is a comprehensive performance metric calculated as the weighted harmonic mean of precision and recall:
$$F\text{-}measure = \frac{(1 + \beta^{2}) \times Precision \times Recall}{\beta^{2} \times Precision + Recall}, \tag{7}$$
where $\beta^{2}$ is set to 0.3. We use the maximum F-measure as the evaluation metric, where a higher value indicates better prediction performance. S-Measure. The S-measure [5] focuses on evaluating the structural information within saliency maps and is considered closer to the human visual system than the F-measure. The S-measure is expressed as:
$$S = \gamma S_{0} + (1 - \gamma) S_{\gamma}, \tag{8}$$
where $S_{0}$ and $S_{\gamma}$ denote region-aware and object-aware structural similarities, respectively. The parameter $\gamma$ is set to a default value of 0.5. A higher S-measure value indicates more accurate predictions in terms of capturing structural information within the saliency maps. E-Measure. The E-measure [6] is employed to quantify both global and local saliency differences and can be expressed as:
$$E_{m} = \frac{1}{W \times H}\sum_{x=1}^{W}\sum_{y=1}^{H}\phi(x, y), \tag{9}$$
where $\phi(\cdot)$ represents the enhanced alignment matrix operation. A larger value of $E_{m}$ indicates a more accurate prediction. FPS, Parameters, and FLOPs. In Tabs. 1 and 2, we report the FPS, parameters, and FLOPs for these methods. The FPS calculation code is sourced from MobileSal [34] and is tested on the same RTX 2080Ti (11G) platform. The code for calculating the parameters and FLOPs is obtained from the Python library 'thop'." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Comparison with SOTA RGB, RGB-D, and RGB-T SOD models", "publication_ref": [ "b30", "b35", "b44", "b42", "b14", "b17", "b1", "b24", "b7", "b38", "b21", "b16" ], "table_ref": [], "text": "In Tab. 1, we compare our model (AiOSOD) with some SOTA models on RGB, RGB-D, and RGB-T datasets, respectively. RGB datasets include the test sets of DUTS [31] (5019 images), DUT-OMRON [36] (5166 images), and ECSSD [35] (1000 images). RGB SOD models include ICON (PAMI22 [45]), ITSD (CVPR20 [43]), RCSB (WACV22 [15]), VST (ICCV21 [19]), CII (TIP21 [18]), and CTDNet (ACM MM21 [42]). RGB-D datasets include the test sets of DUTLF-Depth [25] (400 pairs of images), NJUD [14] (500 pairs of images), NLPR [24] (300 pairs of images), and SIP [8] (929 pairs of images). RGB-D SOD models include C2DFNet (IEEE TM [39]), CAVER (TIP23 [22]), CCFENet (TCSVT22 [17]), JL-DCF (PAMI [9]), SwinNet (TCSVT21 [21]), and VST. RGB-T datasets include the test set of VT5000 [28] (2500 pairs of images), along with VT821 [30] (821 pairs of images) and VT1000 [27] (1000 pairs of images). RGB-T SOD models include CCFENet, TNet (IEEE TM22 [3]), LSNet (TIP23 [44]), CSRNet (TCSVT21 [12]), SwinNet, and VST. In Tab. 2, our model is compared with three lightweight RGB-D models: LSNet, DFM-Net (ACM MM21 [40]), and MobileSal (PAMI21 [34]).
Tabs. 1a to 1c present the comparative results on the RGB, RGB-D, and RGB-T datasets, respectively. Overall, benefiting from the proposed model framework, AiOSOD demonstrates a competitive advantage on RGB, RGB-D, and RGB-T datasets. It achieves excellent performance with low parameters, low computational requirements, and high speed. In terms of FPS, parameters, and FLOPs, AiOSOD achieves the best results among RGB and RGB-D saliency detection methods, and it is second-best in RGB-T saliency detection. Regarding performance, AiOSOD delivers moderate performance on the RGB dataset.
However, surprisingly, AiOSOD, as a lightweight model, achieves second-best performance across multiple datasets in RGB-D and RGB-T. The data in Tab. 1 demonstrates that AiOSOD is able to efficiently process RGB, RGB-D, and RGB-T data, maintaining a balance between performance, size and speed.\nTab. 2 shows the results of the comparison between AiOSOD and the three lightweight RGB-D SOD models shows representative RGB images where AiOSOD consistently generates high-quality predictions, even in challenging conditions such as blurry boundaries, low contrast and small objects. Some representative RGB-D examples are shown in Fig. 4b. AiOSOD effectively utilizes depth information to enhance saliency detection, maintaining good performance even when the depth maps are blurry. Fig. 4c shows several representative RGB-T images." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Function of proposed components", "publication_ref": [], "table_ref": [], "text": "Tab. 3 presents the results of ablation experiments on there datasets concerning the proposed components. In these experiments, we start with a baseline model that adopts a clas- In the final model (Base+TFM+FFM+MFFM model), we introduce a multi-level feature fusion module, namely AiOSOD. With this module, the performance of the model is successfully improved, thus improving all the metrics for the three types of datasets. Overall, these ablation experiment results confirm the effectiveness of our proposed model framework and individual components, showcasing their performance-enhancing effects across diverse datasets." }, { "figure_ref": [], "heading": "Function of proposed model framework", "publication_ref": [ "b31", "b19" ], "table_ref": [], "text": "This subsection conducts experiments with JL-DCF [9] and VST [19] to explore the impact of extracting features with weight sharing in both convolutional and transformer networks. JL-DCF is the first RGB-D SOD model that extracts RGB and depth features through a convolutional network with shared weights, but its batch can only be set to 1. As shown in Fig. 2, the convolutional network uses BatchNorm to normalize the RGB and depth features in the batch dimension. The experimental results of JL-DCF are shown in Tab. 4. When batch size is 1, only one pair of RGB and depth features is normalized, which has little effect on the prediction results. However, when the batch size exceeds 1, more than one pair of RGB and depth features are normal- ized, which affect each other and cause performance degradation. For the VST model, using a single transformer network with weight sharing to extract multi-modal features produces results comparable to the original model and is not limited by batch size. These experiments confirm that convolutional networks are not suited to providing a unified solution for three data types and achieving optimal performance. Therefore, using a transformer network as the feature extraction network in our proposed model framework is undoubtedly the best choice to meet the task requirements. Tab. 5 presents the results of AiOSOD models with different backbone networks (T2T-ViT-10 [38], PVTv2-B0 [32], and Swin Tiny [20]), including trained jointly on the RGB, RGB-D, and RGB-T datasets, as well as models trained separately on each of these datasets. Among these models based on PVTv2-B0 and Swin Tiny architectures, the original model structure is retained, and an additional FFM module is introduced. 
Of course, the number of convolutional kernels in each layer has also been adjusted based on the backbone network. Experimental results on T2T-ViT-10, PVTv2-B0, and Swin Tiny consistently show that joint training using all three data types produces better results compared to training on a single data type. This outcome can be attributed to the effectiveness of the proposed model framework, which not only reduces interference between different data types but also significantly increases the effective training sample size, thereby enhancing predictive performance. The data in Tab. 5 confirms that the proposed model framework not only offers a unified solution for all three data types but also improves performance metrics across various testing datasets." }, { "figure_ref": [], "heading": "Applying proposed model framework to other models", "publication_ref": [ "b20", "b43", "b25", "b25" ], "table_ref": [], "text": "We apply the proposed model framework to two transformer-based 3D SOD models, VST [19] and SwinNet [21]. In addition, a CNN-based LSNet [44] (MobileV2Net [26]) is used as a control for comparison. VST, SwinNet, and LSNet are 3D SOD models, and VST also provides a version for RGB detection. Therefore, Tab. 6 presents the results of VST, SwinNet, and LSNet on RGB-D and RGB-T datasets, as well as the results of VST on RGB datasets. In Tab. 6, the first row of each model represents its original model architecture, and the first row of data is sourced from Tab. 1. For the VST and SwinNet, the results of both models trained jointly (Dual-s-JT and Single-s-JT) outperform the results of the model trained with a single dataset (Duals-ST) on these datasets. Moreover, most metrics of the Single-s-JT model are superior to those of the Dual-s-JT model. Since VST and SwinNet use the transformer network as the backbone, dual-stream VST and SwinNet with joint training (Dual-s-JT) still achieve relatively good results. One of the learning networks of the dual-stream VST or SwinNet extracts RGB images, while the input to the other learning network is a mixture of RGB, depth, and thermal images. Although Dual-s-JT-VST and Duals-JT-SwinNet perform well on all three types of datasets, RGB saliency detection requires only a single-stream network. Therefore the proposed model framework is more suitable for saliency detection of RGB, RGB-D, and RGB- Compared to the original LSNet (Dual-s-ST), LSNet (Single-s-JT) using a single-stream network with joint training exhibits performance decrease on the RGB-D dataset and an improvement on the RGB-T dataset. In Tab. 6, the LSNet trained jointly (Dual-s-JT and Single-s-JT ) outperforms the original LSNet (Dual-s-ST) in the RGB-T dataset. We infer that this result is likely due to the similarity between thermal and RGB images, as both are threechannel images with rich color information. Due to the large proportion of RGB data in the joint training dataset, [26]. This model, adapts for MobileV2Net, removing the TFM and MFFM modules, adding an additional FFM module. In Tab. 7, LSNet and AiOSOD both achieve optimal results by using a single-stream learning network and undergoing joint training. Moreover, when the training set contains only RGB-T data, Single-s-AiOSOD outperforms Dual-s-AiOSOD, and Single-s-LSNet performs close to Dual-s-LSNet. These experimental result confirms our inference. 
Additionally, it can be observed that when (RGB, thermal) pairs are concatenated in the batch dimension, batch normalization has a minor impact on the prediction results, unlike the case with RGB-D." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we are the first to consider a unified solution that realizes RGB, RGB-D, and RGB-T SOD, requiring only one model and the same weights to perform SOD on all three modalities. To achieve the unified solution, we propose a model framework and develop a lightweight model for validation (AiOSOD). The lightweight AiOSOD model demonstrates excellent performance on RGB, RGB-D, and RGB-T datasets, effectively balancing performance and speed. Our proposed model framework takes the three types of data as the training set, concatenating them in the batch dimension and extracting features through a transformer network. With this framework, the model can learn from all three types of data and avoid performance degradation due to interference between multimodal features. Importantly, the proposed model framework can be applied to other 3D SOD models, reducing model size and providing a unified solution for RGB, RGB-D, and RGB-T SOD. In the future, we will continue to explore more efficient unified solutions for SOD." } ]
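To make the token fusion of Eq. (4) concrete, a rough single-head sketch of the cross-modal attention in the TFM is given below; the real module is multi-head and wrapped in a transformer layer, so this is a simplification rather than the exact architecture.

```python
import math
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Single-head sketch of Eq. (4): each modality queries the other's keys/values."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = 1.0 / math.sqrt(dim)

    def attend(self, q, k, v):
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v

    def forward(self, tokens_a, tokens_b):
        qa, ka, va = self.q(tokens_a), self.k(tokens_a), self.v(tokens_a)
        qb, kb, vb = self.q(tokens_b), self.k(tokens_b), self.v(tokens_b)
        # RGB tokens attend to depth/thermal tokens and vice versa; with two
        # identical RGB streams this collapses to plain self-attention.
        fused_a = self.attend(qa, kb, vb)
        fused_b = self.attend(qb, ka, va)
        return fused_a, fused_b

# Example: top-level token sequences T31 (RGB) and T32 (depth/thermal/RGB), [B, L, 64].
tfm = CrossModalAttention(dim=64)
t31, t32 = torch.randn(2, 49, 64), torch.randn(2, 49, 64)
f31, f32 = tfm(t31, t32)
```

When both inputs are the same RGB token sequence, the cross terms reduce to ordinary self-attention, matching the behaviour described above for pure RGB inputs.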
Salient object detection (SOD) aims to identify the most attractive objects within an image. Depending on the type of data being processed, SOD can be categorized into various forms, including RGB, RGB-D (Depth), RGB-T (Thermal), and light field SOD. Previous research has focused on saliency detection for individual data types; if an RGB-D SOD model is forced to detect RGB-T data, it performs poorly. We propose an innovative model framework that provides a unified solution to the salient object detection task for three types of data (RGB, RGB-D, and RGB-T). The three types of data can be handled in one model (all in one) with the same weight parameters. In this framework, the three types of data are concatenated in an ordered manner within a single input batch, and features are extracted using a transformer network. Based on this framework, we propose an efficient lightweight SOD model, namely AiOSOD, which can detect any RGB, RGB-D, or RGB-T data at high speed (780 FPS for RGB data, 485 FPS for RGB-D or RGB-T data). Notably, with only 6.25M parameters, AiOSOD achieves excellent performance on RGB, RGB-D, and RGB-T datasets.
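A minimal sketch of the "ordered concatenation" described above is shown below: each sample contributes its RGB image and an auxiliary map (depth, thermal, or a copy of the RGB image for RGB-only samples), and the two groups are stacked in a fixed order along the batch dimension so that backbone features can later be split apart. The field names and the particular ordering are assumptions for illustration.

```python
import torch

def build_joint_batch(samples):
    """samples: list of dicts with 'rgb' [3,H,W] and optional 'aux' (depth/thermal map).

    Returns one tensor of shape [2N, 3, H, W]: the first N entries are RGB maps,
    the last N are the paired auxiliary maps (or duplicated RGB for RGB-only data),
    kept in the same order so features can be split back after the shared backbone.
    """
    rgb = torch.stack([s["rgb"] for s in samples])                # [N, 3, H, W]
    aux = torch.stack([s.get("aux", s["rgb"]) for s in samples])  # [N, 3, H, W]
    return torch.cat([rgb, aux], dim=0)                           # [2N, 3, H, W]

def split_features(tokens, n):
    # Inverse of the ordered concatenation, applied to backbone outputs.
    return tokens[:n], tokens[n:]
```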
All in One: RGB, RGB-D, and RGB-T Salient Object Detection
[ { "figure_caption": "Figure 1 .Figure 2 .12Figure 1. Diagram of proposed model framework. The framework extracts RGB, RGB-D, and RGB-T data simultaneously with a single transformer network with shared weights.", "figure_data": "", "figure_id": "fig_0", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Training dataset. AiOSOD is trained jointly using three different types of data. The training dataset consists of the", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "[9]), SwinNet (TCSVT21[21]), and VST. RGB-T datasets include the testset of VT5000 [28] (2500 pairs of images), along with VT821[30] (821 pairs of images), and VT1000[27] (1000 pairs of images). RGB-T SOD models include CCFENet, TNet (IEEE TM22 [3]), LSNet (TIP23[44]), CSRNet (TCSVT21 [12]), SwinNet, and VST. And in Tab. 2 our model is compared with three lightweight RGB-D models, LSNet, DFM-Net (ACM MM21 [40]), and Mo-bileSal (PAMI21 [34]).", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Qualitative comparison with SOTA RGB, RGB-D, and RGB-T SOD methods.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "sical U-shaped architecture, utilizing the TiT-ViT-10 network as the encoder and three dual convolution layers as the decoder. The baseline model also follows our proposed model framework and integrates cross-modal information through element-wise addition. From the experimental outcomes, it is evident that the baseline model performs well on the RGB, RGB-D, and RGB-T datasets, thanks to our proposed framework.Subsequently, we add a tokens fusion module on the top level of the baseline model (Base+TFM model), which increases 0.3M parameters. Compared to the baseline model, Base+TFM model demonstrates performance improvements across all there datasets, affirming the effectiveness of the tokens fusion module.Building upon Base+TFM model, we further introduce there feature fusion modules as model decoder (Base+TFM+FFM model). With very few additional parameters compared to the Base+TFM model, the Base+TFM+FFM model improves the detection of RGB-D and RGB-T data. However, due to the FFM's primary focus on cross-modal feature fusion, its performance on the RGB dataset lags behind Base+TFM and baseline models.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Quantitative comparison of our model with other SOTA RGB, RGB-D, and RGB-T SOD methods on benchmark datasets. 
The best and second best results are highlighted in red and blue.", "figure_data": "(a) RGB SOD methodsMethodICONITSDRCSBVSTCIICTDNetAiOSODBackboneResNet50 ResNet50 ResNet50 T2T-ViT-t14 ResNet18 ResNet18 T2T-ViT-10Speed(FPS)18290--76293551780Params(M)31.5124.8731.9142.0511.3411.286.25Flops(G)19.5314.87265.1221.6414.895.722.04DUT-OMRONSm ↑0.84420.84010.83500.85010.83880.84420.8447F max β E max ϕ↑ ↑0.7985 0.88420.7923 0.87950.7727 0.86590.8001 0.88780.7817 0.87570.7985 0.88420.7890 0.8823M AE ↓0.05690.06080.04920.05790.05380.05690.0549DUTS-TESm ↑0.88860.88490.88080.89610.88760.88860.8821F max β E max ϕ↑ ↑0.8768 0.93160.8680 0.92940.8676 0.92500.8779 0.93930.8696 0.92900.8768 0.93160.8562 0.9272M AE ↓0.03730.04100.03500.03740.03650.03730.0408ECSSDSm ↑0.92900.92480.92170.93220.92610.92900.9280F max β E max ϕ↑ ↑0.9433 0.96030.9394 0.95890.9355 0.95450.9442 0.96410.9395 0.95620.9433 0.96030.9387 0.9623M AE ↓0.03180.03450.03350.03290.03340.03180.0339(b) RGB-D SOD methodsMethodC2DFNetCAVERCCFENetJL-DCFSwinNetVSTAiOSODBackboneResNet50 ResNet50 ResNet50 ResNet50 Swin-B T2T-ViT-t14 T2T-ViT-10Speed(FPS)28314446172158485Params(M)45.3153.2127.33118.76189.5779.216.25Flops(G)10.3220.3615.93787.88116.1628.953.24DUTLF-DepthSm ↑0.93260.93100.93280.89370.94690.94260.9455F max β E max ϕ↑ ↑0.9440 0.96380.9404 0.96390.9445 0.96360.8920 0.92830.9579 0.97520.9493 0.97080.9526 0.9746M AE ↓0.02520.02830.02630.04880.02260.02460.0239NJUDSm ↑0.90790.92030.91650.91040.92550.92240.9248F max β E max ϕ↑ ↑0.9086 0.94230.9235 0.95340.9210 0.95430.9119 0.95060.9283 0.95730.9195 0.95100.9233 0.9567M AE ↓0.03890.03140.03230.04100.03140.03430.0330NLPRSm ↑0.92790.92900.92680.93060.92960.93140.9273F max β E max ϕ↑ ↑0.9166 0.96050.9211 0.96380.9184 0.96240.9182 0.96480.9170 0.96240.9201 0.96230.9115 0.9582M AE ↓0.02170.02200.02080.02210.02250.02330.0250SIPSm ↑0.87150.89340.88160.88520.90090.90360.9069F max β E max ϕ↑ ↑0.8770 0.91600.9064 0.93440.8977 0.92380.8935 0.93050.9122 0.93960.9150 0.94390.9216 0.9492M AE ↓0.05290.04240.04730.04900.04090.03960.0375(c) RGB-T SOD methodsMethodCCFENetTNetSwinNetVSTLSNetCSRNetAiOSODBackboneResNet50 ResNet50 Swin-B T2T-ViT-t14 MobileNetV2 ESPNetv2 T2T-ViT-10Speed(FPS)46672158896--485Params(M)27.3383.35189.5779.214.35--6.25Flops(G)15.9351.13116.1628.951.154.203.24VT800Sm ↑0.89960.89890.89350.88320.87860.88470.9049F max β E max ϕ↑ ↑0.8819 0.93670.8884 0.93820.8707 0.92880.8541 0.91720.8448 0.92050.8579 0.92260.8821 0.9372M AE ↓0.02730.03020.03340.04120.03320.03760.0283VT1000Sm ↑0.93410.92860.93600.93290.92560.91830.9410F max β E max ϕ↑ ↑0.9336 0.96840.9296 0.96620.9392 0.97270.9314 0.97050.9216 0.96260.9083 0.95250.9405 0.9741M AE ↓0.01820.02120.01790.02110.02270.02420.0202VT5000Sm ↑0.89600.89520.90460.88700.87740.86770.8958F max β E max ϕ↑ ↑0.8802 0.93890.8809 0.93740.8920 0.94810.8610 0.92860.8499 0.92400.8372 0.91380.8750 0.9375M AE ↓0.03040.03280.02900.03830.03700.04160.0346on NJUD and SIP datasets. Compared to the lightweightmodels DFM-Net, LSNet, and MobileSal, AiOSOD's pa-rameters and FLOPs are slightly higher but it maintains thestrongest performance. AiOSOD can process RGB-D im-ages at 485 FPS, slower than DFM-Net and LSNet, andfaster than MobileSal. It can be seen that AiOSOD is ableto maintain high speed while maintaining a high accuracy.As shown in Fig. 4, we demonstrate the generation ofsaliency maps in various challenging scenarios. Fig. 
4a", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison with lightweight RGB-D SOD models.", "figure_data": "MethodDFM-NetLSNetMobileSalAiOSODBackboneMobileNetV2 MobileNetV2 MobileNetV2 T2T-ViT-10Speed(FPS)692896459485Params(M)3.844.356.246.25Flops(G)2.681.151.513.24NJUDSm ↑0.90720.91110.90970.9248F max β E max ϕ↑ ↑0.9130 0.95250.9144 0.94980.9117 0.95020.9233 0.9567M AE ↓0.04280.03860.03700.0330SIPSm ↑0.88310.88610.87320.9069F max β E max ϕ↑ ↑0.8873 0.92590.8952 0.93060.8795 0.91620.9216 0.9492M AE ↓0.05070.04960.05280.0375RGB RGBGT GTICON ICONITSD ITSDRCSB RCSBVST VSTCII CIICTDNet CTDNetAiOSOD AiOSODRGB RGBGT GTICON-R ICON-R(a) RGB SOD methods ITSD-R RCSB VST ITSD-R RCSB VSTCII-R18 CII-R18CTDNet-R18 CTDNet-R18AiOSOD AiOSODRGB RGBGT GTICON-R ICON-RITSD-R ITSD-RRCSB RCSBVST VSTCII-R18 CII-R18CTDNet-R18 CTDNet-R18AiOSOD AiOSODRGB RGBDepth DepthGT GTC2DFNet C2DFNetVST VSTDFM-Net DFM-NetLSNet LSNetMobileSal MobileSalAiOSOD AiOSODRGB RGBDepth DepthGT GTC2DFNet C2DFNetVST VSTDFM-Net DFM-NetLSNet LSNetMobileSal MobileSalAiOSOD AiOSODRGB RGBDepth DepthGT GTC2DFNet C2DFNetVST VSTDFM-Net DFM-NetLSNet LSNetMobileSal MobileSalAiOSOD AiOSODRGB RGBThermal ThermalGT GTCCFENet CCFENetTNet TNetVST VSTLSNet LSNetCSRNet CSRNetAiOSOD AiOSODRGB RGBThermal ThermalGT GTCCFENet CCFENetTNet TNetVST VSTLSNet LSNetCSRNet CSRNetAiOSOD AiOSODRGB RGBThermal ThermalGT GTCCFENet CCFENetTNet TNetVST VSTLSNet LSNetCSRNet CSRNetAiOSOD AiOSOD", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Quantitative evaluation results of ablation experiments for proposed components.", "figure_data": "Params(M)DUTLF-Depth M AE ↓ F max βVT821 ↑ M AE ↓ F max βECSSD ↑ M AE ↓ F max β↑Baseline5.800.0308 0.9407 0.0366 0.8685 0.0356 0.9369Base+TFM6.100.0282 0.9434 0.0335 0.8788 0.0350 0.9378Base+TFM+FFM6.100.0264 0.9501 0.0321 0.8697 0.0370 0.9321Base+TFM+FFM+MLF6.250.0239 0.9526 0.0283 0.8821 0.0339 0.9387", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Impact of backbone network weight sharing on the prediction results of two RGB-D SOD models, JL-DCF and VST.", "figure_data": "Size Batch (Mb)NJUD M AE ↓ F max βSIP ↑ M AE ↓ F max βNLPR ↑ M AE ↓ F max β↑JL-DCF475 4751 60.0454 0.05110.8975 0.90380.0491 0.06140.8890 0.86040.0216 0.02920.9176 0.9030VST320 2386 60.0351 0.03240.9195 0.92750.0403 0.03950.9150 0.91490.0236 0.02340.9201 0.9232", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison of training results for AiOSOD models using different backbone networks. 
JT means joint training on RGB, RGB-D, and RGB-T datasets.", "figure_data": "(a) RGB DatasetsBackboneT2T-ViT-10PVTv2-B0Swin TinyTrain datasetRGBJTRGBJTRGBJTDUTS-TESm ↑0.8786 0.8821 0.8790 0.8797 0.8905 0.8931F max β E max ϕ↑ ↑0.8521 0.8562 0.8519 0.8539 0.8659 0.8701 0.9234 0.9272 0.9239 0.9262 0.9329 0.9365M AE ↓0.0418 0.0408 0.0418 0.0412 0.0387 0.0381DUT-OMRONSm ↑0.8365 0.8447 0.8383 0.8472 0.8445 0.8535F max β E max ϕ↑ ↑0.7773 0.7890 0.7772 0.7922 0.7854 0.8018 0.8728 0.8823 0.8746 0.8875 0.8788 0.8928M AE ↓0.0599 0.0549 0.0566 0.0531 0.0548 0.0528(b) RGB-D DatasetsBackboneT2T-ViT-10PVTv2-B0Swin TinyTrain dataset RGB-DJTRGB-DJTRGB-DJTNJUDSm ↑0.9136 0.9248 0.9133 0.9247 0.9207 0.9309F max β E max ϕ↑ ↑0.9093 0.9233 0.9092 0.9241 0.9209 0.9316 0.9474 0.9567 0.9459 0.9552 0.9549 0.9619M AE ↓0.0386 0.0330 0.0395 0.0343 0.0366 0.0308SIPSm ↑0.8971 0.9069 0.8876 0.9001 0.8949 0.9112F max β E max ϕ↑ ↑0.9082 0.9216 0.8965 0.9116 0.9043 0.9243 0.9403 0.9492 0.9349 0.9427 0.9403 0.9515M AE ↓0.0423 0.0375 0.0471 0.0414 0.0439 0.0370(c) RGB-T DatasetsBackboneT2T-ViT-10PVTv2-B0Swin TinyTrain dataset RGB-TJTRGB-TJTRGB-TJTVT1000Sm ↑0.9263 0.9410 0.9297 0.9388 0.9309 0.9428F max β E max ϕ↑ ↑0.9251 0.9405 0.9312 0.9406 0.9293 0.9447 0.9654 0.9741 0.9694 0.9745 0.9699 0.9775M AE ↓0.0240 0.0202 0.0240 0.0202 0.0225 0.0187VT5000Sm ↑0.8803 0.8958 0.8793 0.8940 0.8851 0.9034F max β E max ϕ↑ ↑0.8467 0.8750 0.8508 0.8753 0.8586 0.8880 0.9238 0.9375 0.9251 0.9379 0.9338 0.9457M AE ↓0.0404 0.0346 0.0414 0.0364 0.0377 0.0325", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Applying proposed model framework to other models results. ST means training on a single type of data. JT means joint training on RGB, RGB-D, and RGB-T datasets. 
Dual-s and Single-s indicate that the model employs a dual-stream backbone network or a single-stream backbone network.", "figure_data": "(a) RGB DatasetsModelDUTS-TE Sm ↑ F max β ↑ E max ϕDUT-OMRON ↑ M AE ↓ Sm ↑ F max β ↑ E max ϕ ↑ M AE ↓Single-s-ST 0.8961 0.87790.93930.03740.8501 0.80010.88780.0579VSTDual-s-JT0.8993 0.88190.93900.03570.8597 0.81520.89620.0535Single-s-JT 0.8974 0.88060.93900.03640.8597 0.81830.89730.0544Dual-s-ST----------------SwinNetDual-s-JT0.9025 0.88730.94030.03160.8675 0.82450.90480.0438Single-s-JT 0.9051 0.89150.94370.03120.8683 0.82680.90680.0458Dual-s-ST----------------LSNetDual-s-JT0.8570 0.82620.90720.04980.8286 0.77000.87060.0611Single-s-JT 0.8564 0.82820.90750.05100.8293 0.77650.87690.0632(b) RGB-D DatasetsModelSm ↑ F max βNJUD ↑ E max ϕ↑ M AE ↓ Sm ↑ F max βSIP ↑ E max ϕ↑ M AE ↓Dual-s-ST 0.9224 0.91950.95100.03430.9036 0.91500.94390.0396VSTDual-s-JT0.9274 0.92410.95400.03240.9147 0.92310.95160.0342Single-s-JT 0.9301 0.93100.95890.03140.9120 0.92220.95040.0354Dual-s-ST 0.9255 0.92830.95730.03140.9009 0.91220.93960.0409SwinNetDual-s-JT0.9230 0.92770.95370.03380.9046 0.91580.94400.0385Single-s-JT 0.9274 0.93050.95780.03160.9156 0.93180.95370.0332Dual-s-ST 0.9111 0.91440.94980.03860.8861 0.89520.93060.0496LSNetDual-s-JT0.8786 0.86700.91710.05360.8730 0.87610.92510.0560Single-s-JT 0.8907 0.88470.92780.05120.8864 0.88860.92860.0521(c) RGB-T DatasetsModelSm ↑ F max βVT1000 ↑ E max ϕ↑ M AE ↓ Sm ↑ F max βVT5000 ↑ E max ϕ↑ M AE ↓Dual-s-ST 0.9329 0.93140.97050.02110.8870 0.86100.92860.0383VSTDual-s-JT0.9418 0.94430.97450.01920.9048 0.88910.94410.0317Single-s-JT 0.9418 0.94460.97510.01830.9048 0.89050.94610.0314Dual-s-ST 0.9360 0.93920.97270.01790.9046 0.89200.94810.0290SwinNetDual-s-JT0.9365 0.93800.97270.01900.9119 0.90320.95500.0261Single-s-JT 0.9449 0.94750.97850.01670.9191 0.91130.95900.0239Dual-s-ST 0.9256 0.92160.96260.02270.8774 0.84990.92400.0370LSNetDual-s-JT0.9264 0.92220.96130.02290.8843 0.86140.92880.0364Single-s-JT 0.9286 0.92660.96440.02240.8884 0.87080.93380.0367T data. It's worth noting that the single-stream VST modelshows an approximate 18% increase in training speed com-pared to the dual-stream VST model, and the model sizedecreases from 320MB to 238MB. Similarly, the single-stream SwinNet model exhibits about a 15% increase intraining speed compared to the dual-stream SwinNet model,with the model size decreasing from 786MB to 441MB.The proposed model framework has successfully migratedon the VST and SwinNet, providing a unified solution forRGB, RGB-D, and RGB-T SOD, resulting in improvedmodel performance and reducing parameters. Combiningthe experimental results of VST and SwinNet, the proposedmodel framework can be applied to other 3D SOD models,thus providing a unified solution for all three types of dataand reducing model size.", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "LSNet and MobileV2Net-based AiOSOD detection results on RGB-T dataset. JT means joint training on RGB, RGB-D, and RGB-T datasets. Dual-s and Single-s indicate that the model employs a dual-stream backbone network or a single-stream backbone network. the jointly trained model may perform better for RGB-T data because of this similarity. 
To confirm this inference, we conduct experiments by replacing the backbone network of AiOSOD with MobileV2Net", "figure_data": "ModelLSNet Dual-s Single-s Single-s Dual-s Single-s Single-s AiOSODTrain DatasetRGB-T RGB-TJTRGB-T RGB-TJTSm ↑0.87860.87830.90230.84080.84240.8984VT821F max β E max ϕ↑ 0.8448 ↑ 0.92050.8524 0.92260.8872 0.94310.7822 0.87860.7813 0.87490.8759 0.9392M AE ↓ 0.03320.03760.02900.06410.05700.0315Sm ↑0.92560.92100.92860.91330.91670.9332VT1000F max β E max ϕ↑ 0.9216 ↑ 0.96260.9186 0.96180.9266 0.96440.9070 0.95350.9094 0.95520.9329 0.9704M AE ↓ 0.02270.02620.02240.02890.02880.0234Sm ↑0.87740.87840.88840.84960.84960.8883VT5000F max β E max ϕ↑ 0.8499 ↑ 0.92400.8553 0.93130.8708 0.93380.8035 0.89550.8043 0.89570.8647 0.9337M AE ↓ 0.03700.03870.03670.05110.05130.0380", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" } ]
Xingzhao Jia; Zhongqiu Zhao; Changlei Dongye; Zhao Zhang
[ { "authors": "Ali Borji; Ming-Ming Cheng; Huaizu Jiang; Jia Li", "journal": "IEEE Transactions on Image Processing", "ref_id": "b0", "title": "Salient object detection: A benchmark", "year": "2015" }, { "authors": "Xiaolong Cheng; Xuan Zheng; Jialun Pei; He Tang; Zehua Lyu; Chuanbo Chen", "journal": "IEEE Transactions on Multimedia", "ref_id": "b1", "title": "Depth-induced gapreducing network for rgb-d salient object detection: An interaction, guidance and refinement approach", "year": "2022" }, { "authors": "Runmin Cong; Kepu Zhang; Chen Zhang; Feng Zheng; Yao Zhao; Qingming Huang; Sam Kwong", "journal": "IEEE Transactions on Multimedia", "ref_id": "b2", "title": "Does thermal really always matter for rgb-t salient object detection", "year": "2022" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b3", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Deng-Ping Fan; Ming-Ming Cheng; Yun Liu; Tao Li; Ali Borji", "journal": "", "ref_id": "b4", "title": "Structure-measure: A new way to evaluate foreground maps", "year": "2017" }, { "authors": "Deng-Ping Fan; Cheng Gong; Yang Cao; Bo Ren; Ming-Ming Cheng; Ali Borji", "journal": "International Joint Conferences on Artificial Intelligence Organization", "ref_id": "b5", "title": "Enhancedalignment measure for binary foreground map evaluation", "year": "2018" }, { "authors": "Deng-Ping Fan; Wenguan Wang; Ming-Ming Cheng; Jianbing Shen", "journal": "", "ref_id": "b6", "title": "Shifting more attention to video salient object detection", "year": "2019" }, { "authors": "Deng-Ping Fan; Zheng Lin; Zhao Zhang; Menglong Zhu; Ming-Ming Cheng", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b7", "title": "Rethinking rgb-d salient object detection: Models, data sets, and large-scale benchmarks", "year": "2021" }, { "authors": "Keren Fu; Deng-Ping Fan; Ge-Peng Ji; Qijun Zhao; Jianbing Shen; Ce Zhu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b8", "title": "Siamese network for rgbd salient object detection and beyond", "year": "2022" }, { "authors": "Yuan Gao; Miaojing Shi; Dacheng Tao; Chao Xu", "journal": "IEEE Transactions on Multimedia", "ref_id": "b9", "title": "Database saliency for fast image retrieval", "year": "2015" }, { "authors": "Chenlei Guo; Liming Zhang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b10", "title": "A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression", "year": "2010" }, { "authors": "Fushuo Huo; Xuegui Zhu; Lei Zhang; Qifeng Liu; Yu Shu", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b11", "title": "Efficient context-guided stacked refinement network for rgb-t salient object detection", "year": "2022" }, { "authors": "Xingzhao Jia; Changlei Dongye; Yanjun Peng", "journal": "Image and Vision Computing", "ref_id": "b12", "title": "Siatrans: Siamese transformer network for rgb-d salient object detection with depth image classification", "year": "2022" }, { "authors": "Ran Ju; Ling Ge; Wenjing Geng; Tongwei Ren; Gangshan Wu", "journal": "", "ref_id": "b13", "title": "Depth saliency based on anisotropic center-surround difference", "year": "2014" }, { "authors": "Yun Yi; Ke ; Takahiro Tsubono", 
"journal": "", "ref_id": "b14", "title": "Recursive contoursaliency blending network for accurate salient object detection", "year": "2022" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b15", "title": "Adam: A method for stochastic optimization", "year": "" }, { "authors": "Guibiao Liao; Wei Gao; Ge Li; Junle Wang; Sam Kwong", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b16", "title": "Cross-collaborative fusion-encoder network for robust rgb-thermal salient object detection", "year": "2022" }, { "authors": "Jiang-Jiang Liu; Zhi-Ang Liu; Pai Peng; Ming-Ming Cheng", "journal": "IEEE Transactions on Image Processing", "ref_id": "b17", "title": "Rethinking the u-shape structure for salient object detection", "year": "2021" }, { "authors": "Nian Liu; Ni Zhang; Kaiyuan Wan; Ling Shao; Junwei Han", "journal": "", "ref_id": "b18", "title": "Visual saliency transformer", "year": "2021" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b19", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Zhengyi Liu; Yacheng Tan; Qian He; Yun Xiao", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b20", "title": "Swinnet: Swin transformer drives edge-aware rgb-d and rgb-t salient object detection", "year": "2022" }, { "authors": "Youwei Pang; Xiaoqi Zhao; Lihe Zhang; Huchuan Lu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b21", "title": "Caver: Cross-modal view-mixed transformer for bi-modal salient object detection", "year": "2023" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga", "journal": "Advances in neural information processing systems", "ref_id": "b22", "title": "Pytorch: An imperative style, highperformance deep learning library", "year": "2019" }, { "authors": "Houwen Peng; Bing Li; Weihua Xiong; Weiming Hu; Rongrong Ji", "journal": "Springer International Publishing", "ref_id": "b23", "title": "Rgbd salient object detection: A benchmark and algorithms", "year": "2014" }, { "authors": "Yongri Piao; Wei Ji; Jingjing Li; Miao Zhang; Huchuan Lu", "journal": "", "ref_id": "b24", "title": "Depth-induced multi-scale recurrent attention network for saliency detection", "year": "2019" }, { "authors": "Mark Sandler; Andrew Howard; Menglong Zhu; Andrey Zhmoginov; Liang-Chieh Chen", "journal": "", "ref_id": "b25", "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "year": "2018" }, { "authors": "Zhengzheng Tu; Tian Xia; Chenglong Li; Xiaoxiao Wang; Yan Ma; Jin Tang", "journal": "IEEE Transactions on Multimedia", "ref_id": "b26", "title": "Rgb-t image saliency detection via collaborative graph learning", "year": "2020" }, { "authors": "Zhengzheng Tu; Yan Ma; Zhun Li; Chenglong Li; Jieming Xu; Yongtao Liu", "journal": "IEEE Transactions on Multimedia", "ref_id": "b27", "title": "Rgbt salient object detection: A large-scale dataset and benchmark", "year": "2022" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b28", "title": "Attention is all you need", "year": "2017" }, { "authors": "Guizhao Wang; Chenglong Li; Yunpeng Ma; Aihua Zheng; Jin Tang; Bin 
Luo", "journal": "Springer Singapore", "ref_id": "b29", "title": "Rgb-t saliency detection benchmark: Dataset, baselines, analysis and a novel approach", "year": "2018" }, { "authors": "Lijun Wang; Huchuan Lu; Yifan Wang; Mengyang Feng; Dong Wang; Baocai Yin; Xiang Ruan", "journal": "", "ref_id": "b30", "title": "Learning to detect salient objects with image-level supervision", "year": "2017" }, { "authors": "Wenhai Wang; Enze Xie; Xiang Li; Deng-Ping Fan; Kaitao Song; Ding Liang; Tong Lu; Ping Luo; Ling Shao", "journal": "Computational Visual Media", "ref_id": "b31", "title": "Pvt v2: Improved baselines with pyramid vision transformer", "year": "2022" }, { "authors": "Sanghyun Woo; Jongchan Park; Joon-Young Lee; In So Kweon", "journal": "", "ref_id": "b32", "title": "Cbam: Convolutional block attention module", "year": "2018" }, { "authors": "Yu-Huan Wu; Yun Liu; Jun Xu; Jia-Wang Bian; Yu-Chao Gu; Ming-Ming Cheng", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b33", "title": "Mobilesal: Extremely efficient rgb-d salient object detection", "year": "2022" }, { "authors": "Qiong Yan; Li Xu; Jianping Shi; Jiaya Jia", "journal": "", "ref_id": "b34", "title": "Hierarchical saliency detection", "year": "2013" }, { "authors": "Chuan Yang; Lihe Zhang; Huchuan Lu; Xiang Ruan; Ming-Hsuan Yang", "journal": "", "ref_id": "b35", "title": "Saliency detection via graphbased manifold ranking", "year": "2013" }, { "authors": "Linwei Ye; Zhi Liu; Lina Li; Liquan Shen; Cong Bai; Yang Wang", "journal": "IEEE Transactions on Multimedia", "ref_id": "b36", "title": "Salient object segmentation via effective integration of saliency and objectness", "year": "2017" }, { "authors": "Li Yuan; Yunpeng Chen; Tao Wang; Weihao Yu; Yujun Shi; Zi-Hang Jiang; Francis E H Tay; Jiashi Feng; Shuicheng Yan", "journal": "", "ref_id": "b37", "title": "Tokens-to-token vit: Training vision transformers from scratch on imagenet", "year": "2021" }, { "authors": "Miao Zhang; Shunyu Yao; Beiqi Hu; Yongri Piao; Wei Ji", "journal": "IEEE Transactions on Multimedia", "ref_id": "b38", "title": "C 2 dfnet: Criss-cross dynamic filter network for rgb-d salient object detection", "year": "2022" }, { "authors": "Wenbo Zhang; Ge-Peng Ji; Zhuo Wang; Keren Fu; Qijun Zhao", "journal": "Association for Computing Machinery", "ref_id": "b39", "title": "Depth quality-inspired feature manipulation for efficient rgb-d salient object detection", "year": "2021" }, { "authors": "Yunhua Zhang; Hangxu Wang; Gang Yang; Jianhao Zhang; Congjin Gong; Yutao Wang", "journal": "The Visual Computer", "ref_id": "b40", "title": "Csnet: a convnext-based siamese network for rgb-d salient object detection", "year": "2023" }, { "authors": "Zhirui Zhao; Changqun Xia; Chenxi Xie; Jia Li", "journal": "Association for Computing Machinery", "ref_id": "b41", "title": "Complementary trilateral decoder for fast and accurate salient object detection", "year": "2021" }, { "authors": "Huajun Zhou; Xiaohua Xie; Jian-Huang Lai; Zixuan Chen; Lingxiao Yang", "journal": "", "ref_id": "b42", "title": "Interactive two-stream decoder for accurate and fast saliency detection", "year": "2020" }, { "authors": "Wujie Zhou; Yun Zhu; Jingsheng Lei; Rongwang Yang; Lu Yu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b43", "title": "Lsnet: Lightweight spatial boosting network for detecting salient objects in rgb-thermal images", "year": "2023" }, { "authors": "Mingchen Zhuge; Deng-Ping Fan; Nian Liu; Dingwen Zhang; Dong Xu; Ling Shao", 
"journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b44", "title": "Salient object detection via integrity learning", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 123.53, 549.74, 162.83, 9.65 ], "formula_id": "formula_0", "formula_text": "T i = Backbone (T 0 ) ,(1)" }, { "formula_coordinates": [ 3, 123.84, 626.97, 162.53, 9.65 ], "formula_id": "formula_1", "formula_text": "T i1 , T i2 = Split (T i ) .(2)" }, { "formula_coordinates": [ 3, 308.86, 178.31, 236.25, 33.27 ], "formula_id": "formula_2", "formula_text": "Attention(Q, K, V ) = softmax(QK T / d k )V, (3) where Q is Query, K is Key, V is Value, d k" }, { "formula_coordinates": [ 3, 319.03, 280.64, 226.08, 27.6 ], "formula_id": "formula_3", "formula_text": "Attention(QR, KD, VD) = softmax(QRK T D / d k )VD, Attention(QD, KR, VR) = softmax(QDK T R / d k )VR.(4)" }, { "formula_coordinates": [ 4, 211.09, 96.66, 291.07, 179.16 ], "formula_id": "formula_4", "formula_text": "V K Q Q K V Transformer Layer C TFM i T 2 i T 1 i T 1 i T 2 i T 31 T 32 T 1 F 2 F 3 F Figure 3. Framework of our proposed AiOSOD." }, { "formula_coordinates": [ 4, 63.02, 407.12, 199.97, 17.67 ], "formula_id": "formula_5", "formula_text": "L (P, G) = - i [gilog (pi) + (1 -gi) log (1 -pi)]," }, { "formula_coordinates": [ 4, 77.83, 435.67, 208.53, 11.92 ], "formula_id": "formula_6", "formula_text": "P ∈ [0, 1] 224×224 and G ∈ [0, 1] 224×224 represent" }, { "formula_coordinates": [ 4, 50.11, 462.55, 195.45, 9.65 ], "formula_id": "formula_7", "formula_text": "g i ∈ G, p i ∈ P represent individual pixel values." }, { "formula_coordinates": [ 4, 324.43, 317.59, 220.68, 30.2 ], "formula_id": "formula_8", "formula_text": "M AE = 1 W × H W x=1 H y=1 |P (x, y) -G (x, y)|,(6)" }, { "formula_coordinates": [ 4, 325.97, 443.97, 215.27, 24.1 ], "formula_id": "formula_9", "formula_text": "F -measure = (1 + β 2 ) × Precision × Recall β 2 × Precision + Recall , (7" }, { "formula_coordinates": [ 4, 541.24, 452.61, 3.87, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 4, 379.64, 565.42, 165.48, 9.65 ], "formula_id": "formula_11", "formula_text": "S = γS 0 + (1 -γ) S γ ,(8)" }, { "formula_coordinates": [ 4, 361.93, 684.78, 183.18, 30.2 ], "formula_id": "formula_12", "formula_text": "E m = 1 W × H W x=1 H y=1 ϕ (x, y),(9)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b17" ], "table_ref": [], "text": "In recent years, machine learning has made substantial advancements, with innovations covering a wide range of applications such as image categorization and natural language interpretation. Among the evolving paradigms, Zero-shot Learning (ZSL) has gained attention, aiming at overcoming the challenge of model generalization to unseen categories with scarce labeled instances. Conventional ZSL methodologies use auxiliary information like class attributes and class embeddings but struggle with unseen class * VinUniversity † Kyushu University ‡ VNU University of Engineering and Technology § VinUniversity ¶ University of Western Australia combinations. In this context, Compositional Zero-Shot Learning (CZSL) is emerging as a viable solution, especially in structured or high multiclass situations.\nFigure 1. The goal of zero-shot compositional learning is to create a classifier capable of identifying visual concepts denoted by attribute-object pairs (such as \"bright street\") even when no training images of that specific composition exist. Our approach involves generating synthetic features for these new compositions by leveraging knowledge acquired from observed compositions like \"bright ocean\" and \"sunny street.\" These synthetic features are then used to train the classifier directly.\nCompositional Zero-Shot Learning (CZSL) is a task aimed at recognizing unseen compositional visual concepts by training models on learned concepts. Fig 1 demonstrates the basic idea of CZSL. It primarily involves modeling attributes that interact with different objects and addressing the challenges posed by diversity, informativeness, and entanglement between visual primitives. CZSL effectively incorporates compositional reasoning, which facilitates models to accurately predict new class combinations by considering the hierarchical and structural relationships among classes. This is achieved through the generation of embeddings based on constituent components, thereby offering a robust strategy towards tackling Zero-Shot Learning (ZSL) challenges. Through this methodical approach, CZSL not only broadens the scope of recognizing new visual compositions but also significantly contributes towards enhancing arXiv:2311.14747v1 [cs.CV] 23 Nov 2023 the model's generalization capabilities in unseen scenarios.\nA common CZSL strategy involves merging scores from different classifiers, independently trained for attributes and objects [28]. Nevertheless, such separate predictions often overlook the contextual interactions originating from the combination of different primitives, termed as contextuality within compositions [28]. Previous approaches have tackled this challenge by modeling compositional label embeddings for each class, making use of external knowledge repositories like Glove [27] to extract semantic vectors for attributes and objects. These vectors are subsequently concatenated through multiple layers to form the compositional label embedding. The resulting embeddings are then aligned with visual attributes in a shared embedding space, effectively transforming the recognition of unseen images into a nearest neighbor search problem [25]. However, the extent of contextual consideration within the visual domain remains constrained. 
Prior investigations have also underscored the significance of discriminability in visual features, a factor influencing generalization performance in recognizing unseen compositions. Disentanglement emerges as a favored solution, with dedicated layers for extracting intermediate visual representations of attributes and objects [18,38,39,44].\nIn this paper, we propose a novel methodology intended to faithfully mimic the complex process wherein humans inherently fuse various visual elements to create compositions. Our assertion on the profound impact of memory on human compositionality is further corroborated by the way individuals employ towards novel concepts. Upon encountering unfamiliar ideas or scenarios, humans frequently reason and formulate logical conjectures rooted in their preexisting knowledge and memory [2]. Additionally, the interactions between diverse attributes and objects are unique, necessitating varied composition techniques.\nMotivated by this, we propose HOMOE, a novel and efficient model centered on learning primitive representations, retrieving, and combining them to create compositional and joint representations. To learn the joint representations, HOMOE exploits the soft prompt from DFSP [23]. Furthermore, we develop a set of learnable features, initially represented by the mean embeddings of random images corresponding to the same classes, and use the Modern Hopfield Network [36] to retrieve representations of akin attributes or objects predicated on the input image. The Modern Hopfield Network, drawing from biological concepts, aids in establishing an associative memory framework crucial for the retrieval process, emulating a human-like approach toward recognizing and categorizing new or unseen objects or scenarios. Associative memory allows the network to recall patterns or representations based on partial or noisy inputs, providing a robust mechanism for retrieval even in the face of incomplete or imperfect cues. This property is essential for real-world applications where inputs are often ambiguous or occluded. Lastly, we deploy a Soft Mixture of Experts (Soft MoE) [34] to adapt the input image embeddings based on the retrieved features. The \"soft\" in Soft Mixture of Experts refers to the manner in which the model allocates the handling of inputs among different \"experts\" within the model. Unlike a \"hard\" Mixture of Experts model where each input is directed to exactly one expert based on the gating network, a Soft Mixture of Experts model allows for a more nuanced allocation, permitting each input to be handled by multiple experts with varying degrees of weighting determined by the gating network. This soft assignment enables a more flexible and nuanced handling of inputs, which could be particularly beneficial in complex or multifaceted tasks, providing a form of regularization and a richer representation of the data as it gathers insights from multiple experts for each input. The sparsity of the Soft MoE model is perceived as a form of specialization, with certain experts dedicated to managing specific attributes.\nTo summarize, the primary contributions of this paper are as follows:\n• We introduce an innovative framework -HOMOE, inspired by human memory, leveraging external references to effectively recognize novel combinations using exclusive primitives. 
• For the first time, our proposed framework employs a Modern Hopfield Network and Mixture of Expert models for Compositional Zero-Shot Learning (CZSL), capitalizing on their synergy to enhance the classification of unseen compositions. • We also devise additional loss functions aimed at progressively augmenting the memory's composability while preserving the primitive clustering.\nTo evaluate these contributions, we conduct extensive experiments examining the influence of memory design and auxiliary loss functions on the overall model performance, alongside the effectiveness of other elements within our methodology." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Compositional Zero-Shot Learning", "publication_ref": [ "b19", "b30", "b0", "b16", "b20", "b1", "b30", "b17", "b19", "b17", "b30", "b31", "b25", "b11", "b18", "b25", "b41" ], "table_ref": [], "text": "Compositional Zero-Shot Learning (CZSL) [20,27,28,31] is a task that mimics the human ability to imagine and identify new concepts based on prior knowledge, marking it as a crucial part of Zero-Shot Learning (ZSL) [1,6,7,14,17,21,22]. While traditional ZSL uses attributes to recognize unseen objects, CZSL views classes as mixtures of states and objects. Initially, CZSL methods involved training a classifier for identification along with a transformation module to change states or objects [28,31]. However, newer strategies use two distinct classifiers to identify states and objects individually [11,18,20,28]. Some methods even combine the encoded attribute/state and object features using late fusion with a multi-layer perceptron [35]. Li et al. [18] brought contrastive learning into CZSL, designing a siamese network to identify states and objects in a contrastive space. Other strategies [31,32] aim at representing compositions together, learning an embedding space to map compositions like in ZSL. Moreover, methods have been using graph networks to represent the relationship between states and objects and to learn their compositions [26,38]. Lately, the attention mechanism has been modified to disentangle compositions into primitives and compute new compositions [8,12,19]. Nihal et al. [33] pioneered the use of a VLM (Vision-Language Model) for CZSL, substituting prompt classes with a learnable combined state and object vector representation. Additionally, the challenge of handling compositions during testing is not just restricted to the test data compositions, but often extends to real-world scenarios, accounting for all potential compositions. Contrary to the earlier closed-world setting (where the system only encounters scenarios it was trained on), some research aims to tackle the open-world situation (where the system may encounter completely new, unseen scenarios) by using external knowledge to eliminate unlikely compositions [11,25,26,42]." }, { "figure_ref": [], "heading": "Associative Memory", "publication_ref": [ "b23" ], "table_ref": [], "text": "In our proposed approach, using Associative Memory is key for efficient storage and retrieval of learned patterns, enabling precise mapping of new compositional representations. This capability allows our framework to efficiently navigate the complexities of open-world scenarios, drawing from stored knowledge to deduce the characteristics of unseen compositions, thereby significantly enhancing the accuracy and robustness of CZSL in real-world applications. 
Looking more closely into the technical aspects, associative memory networks play a central role in linking an input with the most closely resembling pattern, focusing on both the storage and retrieval of patterns. This type of neural network, which is built on the framework of the classic Hopfield Network [10], stores multidimensional vectors as memories through recurrent dynamical systems, thus facilitating data clustering by establishing fixed point attractor states. Yet, a significant limitation lies in its restricted memory capacity, which can only hold around 0.14 times the dimensionality (d) in random memories for a d-dimensional data domain. This poses a challenge for clustering, especially when the number of clusters should not be directly related to data dimensionality.\nTo address the drawbacks associated with the classical Hopfield Network, Krotov, and Hopfield introduced an updated version known as the Modern Hopfield Network, or Dense Associative Memory (Dense AM) [13]. This improved model incorporates rapidly advancing nonlinearities (activation functions) into the dynamical system, which results in a denser memory arrangement and significantly boosts the memory capacity, a benefit that is particularly pronounced in high-dimensional data spaces. Some of the activation functions employed in Dense AMs have the potential to lead to power-law or even exponential memory capacity [3,24].\nMoreover, as pointed out by Ramsauer et al., the attention mechanism inherent in transformers[41] can be interpreted as a particular instance of Dense AMs when utilizing the softmax activation function [36]. They illustrated the capacity to store an exponentially large number of patterns, retrieve patterns with a single update, maintain an exponentially small retrieval loss, and even can learn the memory." }, { "figure_ref": [], "heading": "Sparse Mixture of Experts", "publication_ref": [ "b36" ], "table_ref": [], "text": "Expanding on the capabilities of associative memory, using a framework like the Sparsely-gated Mixture of Experts (MoE) [40] can further improve the performance in managing and interpreting complex data structures, especially in the field of CZSL. The MoE model has been groundbreaking, displaying substantial improvements in model capacity, training time, or model quality through the incorporation of gating. Following this, the Switch Transformer [4] simplified the gating process by choosing only the top expert per token using a softmax over the hidden state, which displayed better scaling compared to earlier efforts. However, a common necessity across these improvements has been the use of an auxiliary loss to actively promote balancing within the model. This auxiliary loss needs meticulous weighting to prevent overshadowing the primary loss, yet it doesn't ensure a perfect balance, necessitating a hard capacity factor. This scenario might lead to many tokens remaining unprocessed by the MoE layer. The introduction of Hard MoE [5]with a singular decoding layer showed efficient training yielding positive outcomes on large-scale hashtag prediction tasks. Furthermore, Base Layers [16] devised a linear assignment strategy to maximize token-expert affinities while ensuring an equitable distribution of tokens to each expert. Recently, several innovative methods have surfaced, proposing ways to selectively trigger token paths across the network in various domains including language [4, 15], vision [37], and multimodal models [29]. 
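As a rough illustration of the routing ideas discussed above, the toy sketch below contrasts hard top-1 gating with a soft, fully differentiable slot weighting in the spirit of Soft MoE. The dimensions, random slot parameters, and single-linear-layer "experts" are illustrative assumptions rather than the configuration of any cited model.
```python
# Toy contrast between hard (top-1) gating and soft slot-based routing.
import torch

torch.manual_seed(0)
n_tokens, d_model, n_experts, n_slots = 6, 16, 4, 4
tokens = torch.randn(n_tokens, d_model)
experts = [torch.nn.Linear(d_model, d_model) for _ in range(n_experts)]

# Hard routing (Switch-style): each token is sent to exactly one expert.
gate_logits = torch.randn(n_tokens, n_experts)
hard_choice = gate_logits.argmax(dim=-1)
hard_out = torch.stack([experts[e](tokens[i])
                        for i, e in enumerate(hard_choice.tolist())])

# Soft routing: every slot is a weighted mix of all tokens and every token is a
# weighted mix of all slot outputs, so no token is dropped and no auxiliary
# load-balancing loss is required.
slot_params = torch.randn(d_model, n_slots)            # learnable per-slot parameters
slot_logits = tokens @ slot_params                     # token-slot affinity scores
dispatch = slot_logits.softmax(dim=0)                  # per-slot weights over tokens
combine = slot_logits.softmax(dim=1)                   # per-token weights over slots
slot_in = dispatch.t() @ tokens                        # (n_slots, d_model)
slot_out = torch.stack([experts[s](slot_in[s]) for s in range(n_slots)])  # one slot per expert
soft_out = combine @ slot_out                          # (n_tokens, d_model)
print(hard_out.shape, soft_out.shape)
```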
This diverse array of strategies underscores the potential of leveraging sophisticated models like Sparse Mixture of Experts in addressing the complex challenges posed by the openworld setting of CZSL, making a compelling case for its inclusion in our proposed framework." }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "CZSL is a classification problem that aims to train a model using a limited set of attribute-object combinations, enabling the recognition of images with unseen combinations. To elaborate, consider an attribute set A = {a 0 , a 1 , . . . , a n } and an object set O = {o 0 , o 1 , . . . , o m }. By composing these two labels, a composition set is formed as \nC = A × O = {(a 0 , o 0 ), (a 1 , o 0 ), . . . , (a n , o m )}." }, { "figure_ref": [], "heading": "Our Approach", "publication_ref": [], "table_ref": [], "text": "Most recent research has primarily treated Compositional Zero-shot Learning (CZSL) as a multi-classification task, focusing on learning visual primitives separately and then combining them later on. However, this approach often runs into challenges due to the tight interweaving of image features, leading to a possible loss of their joint representation, which makes CZSL a challenging task.\nIn contrast, our proposed approach uses modern Hopfield networks as a retrieval mechanism to extract visual prototypes from a learnable memory, preserving their joint representations. Moreover, we use Transformers [41] alongside Soft Mixture of Experts [34] to use and integrate this knowledge by retrieving it from the memory. Additionally, our model is trained using a hybrid approach where the memory contrastive loss is incorporated after finishing an epoch with it.\nSpecifically, our proposed model (HOMOE) consists of three primary steps: (1) data encoding, (2) retrieval using the modern Hopfield network, and (3) prototype composition. The framework of HOMOE is depicted in Fig 2 . More detailed explanation of our approach is provided in the following sections." }, { "figure_ref": [], "heading": "HOMOE", "publication_ref": [], "table_ref": [], "text": "Data encoding. In the training phase, we deal with an image denoted by x i and a collection of labels C i , both of which are sampled from the training dataset D tr . Our approach involves encoding both the image and labels using CLIP (Contrastive Language-Image Pre-training, a model designed to learn visual concepts from natural language descriptions), which consists of an image encoder E img (x) and a text encoder E text (y), where x symbolizes an image, and y denotes a sentence .\nInitially, the image is directly fed into the image encoder, yielding latent vectors F v = E img (x i ). On the other hand, when it comes to text processing, we use the soft prompt strategy from DFSP (Decomposed Fusion with Soft Prompt, a technique for enhancing language models by introducing soft prompts) [23] to achieve enhanced linguistic joint representations. Specifically, DFSP transforms labels\nC i = (a i , o i ) into a soft prompt P θ (C i ) = [θ 1 ] [θ 2 ] [θ 3 ] [a i ] [o i ], with [θ 1 ] [θ 2 ] [θ 3 ]\nbeing fully learnable prompts (soft prompts). 
Following this, the transformed labels are tokenized and sent through the frozen CLIP text encoder, resulting in F_t = E_text(P_θ(C_i)).
Retrieving with the modern Hopfield Network. Transformers have become common in several fields such as natural language processing, computer vision, and reinforcement learning, owing to their outstanding capacity to extract relevant information using multi-head self-attention mechanisms. In the context of CZSL, our objective is to retrieve key visual prototypes while maintaining a generalized visual representation as memory. To achieve this, we turn to the Modern Hopfield Network, which enables us to retrieve representations of other labels and to keep updating these representations continually. Going into the details, we start with two sets of memory, M_a and M_o, where M_a, M_o ∈ R^{|C_tr| × D}, representing the memory of attributes and objects with a latent dimension D, respectively. We encode M_a and M_o by taking the mean of visual and linguistic prototypes for each label. For instance, for the label "red tomato," the linguistic prototype is obtained from the output of the frozen CLIP text encoder with the prompt "A photo of a red object," while the visual prototype is the mean of several image embeddings produced by the frozen CLIP image encoder for images labeled "red tomato". The rationale behind encoding two distinct sets of features is to capture the interplay of one primitive with another. Rather than representing just an attribute or an object, we construct M_a and M_o so that they match the number of training-set classes, allowing us to store noticeable variations of primitives for improved compositions. For the Hopfield Network to retrieve l different patterns, we first use a trainable linear layer to project F_v into l vectors, denoted by Z_l = Z_θ(F_v). Subsequently, the modern Hopfield network retrieves akin patterns, V = [V_1, V_2, ..., V_l], from M_a and M_o, with each memory contributing l/2 patterns. The following equation is adapted from the modern Hopfield Network [36]:
V_i = \begin{cases} \mathrm{softmax}(Z_i \cdot M_a^{T}) \cdot M_a, & i < l/2 \\ \mathrm{softmax}(Z_i \cdot M_o^{T}) \cdot M_o, & i \ge l/2 \end{cases} \quad (1)
In this equation, s_i = \mathrm{softmax}(Z_i \cdot M_a^{T}) can be viewed as the probability of the query vector correlating with each memory entry. To ensure that the Modern Hopfield Network retrieves [V_1, V_2, ..., V_n] as meaningful information, we incorporate two auxiliary losses.
Figure 2. Illustration of how an image and its associated label are processed using the CLIP model with an adaptable prompt to generate the visual embedding F_v along with a series of textual embeddings F_t^1, F_t^2, .... Visual memories consist of learnable features that represent basic visual elements and their interconnections, while text memories capture the semantic essence of these elements. The Modern Hopfield Network then selects a matching subset of visual and textual features based on F_v. Subsequently, the Soft Mixture of Experts model uses F_v and these retrieved representations to produce a refined composite feature F_v^c. The similarity between F_v^c and the series of F_t features is then used to compute the cross-entropy loss. Additionally, the model incorporates the Soft Prompt Loss and the Clustering & Retrieval Loss to refine its understanding of compositional representation and to ensure the relevance of the retrieval process.
Elaborating further, we follow the retrieval equation to compute the cross-entropy loss with the labels (a_t, o_t) of the input image:
L_{re}(s) = \sum_{i=1}^{|C_{tr}|} I(a_t)_i \log\left(\frac{1}{l/2}\sum_{j=1}^{l/2} s_{j,i}\right) + \sum_{i=1}^{|C_{tr}|} I(o_t)_i \log\left(\frac{1}{l/2}\sum_{j=l/2}^{l} s_{j,i}\right), \quad (2)
where I(·) is the one-hot encoder.
Additionally, we compute the InfoNCE loss (as outlined in [9]) on the Modern Hopfield model's attribute output and object output, respectively. Specifically, we first generate all possible combinations of positive sets, denoted V^+, from the set V, selecting pairs of vectors (V_i, V_j) such that arg(s_i) = arg(s_j). For each of these positive sets, we calculate the InfoNCE loss using the following equation:
L_N = -\sum_{i=0}^{l} \log \frac{\exp(V^{+}_{i,1} \cdot V^{+}_{i,2}/\tau)}{\sum_{j=0}^{l} \exp(V_{i,1} \cdot V_j/\tau)}, \quad (3)
where V^{+}_{i,1} is the first element of the i-th positive set and τ is a temperature hyper-parameter as per [43]. To avoid the scenario of positive sets being empty, we randomly include one element from M while forming V^+.
The underlying idea of this loss is to promote the clustering of similar attributes and objects in memory: the Modern Hopfield Network identifies the k-nearest neighbors to the input image, and effective clustering boosts the relevance of the retrieval mechanism.
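A minimal sketch of the retrieval step in Eq. (1) is given below: the image embedding is projected into l query vectors, and each query recalls a prototype from the attribute or object memory through a softmax-weighted sum (one modern Hopfield update). The memory contents, dimensions, and projection weights are random placeholders rather than the trained components of our model.
```python
# Sketch of Eq. (1): softmax-weighted recall from attribute/object memories.
import torch

torch.manual_seed(0)
n_train_classes, d, l = 10, 64, 8       # |C_tr|, latent dimension D, number of retrieved patterns
M_a = torch.randn(n_train_classes, d)   # attribute memory (placeholder for learned prototypes)
M_o = torch.randn(n_train_classes, d)   # object memory

F_v = torch.randn(d)                    # CLIP image embedding (placeholder)
Z_theta = torch.nn.Linear(d, l * d)     # trainable projection producing l query vectors
Z = Z_theta(F_v).view(l, d)

retrieved, probs = [], []
for i in range(l):
    M = M_a if i < l // 2 else M_o      # first half queries attributes, second half objects
    s_i = torch.softmax(Z[i] @ M.t(), dim=-1)   # similarity distribution over stored prototypes
    probs.append(s_i)                   # these distributions feed the retrieval cross-entropy loss
    retrieved.append(s_i @ M)           # one Hopfield update = softmax-weighted recall
V = torch.stack(retrieved)              # (l, d) recalled attribute/object prototypes
print(V.shape)
```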
Prototypes aggregation. The significant variations in the relationships between states and objects, which may depend heavily on the specific state and object in question, act as a strong driving force for our approach. When combining different attributes with an object, it is essential to customize the composition method accordingly. Given this incentive, we adopt the Soft Mixture of Experts (Soft MoE) [34] as the compositional composer in our model. This choice allows us to infuse diversity into the model's capacity to handle a broad spectrum of patterns and complexities present in the data.
The Soft MoE approach employs a differentiable routing algorithm, which relies on adjustable parameters for each slot to compute importance scores for pairs of input tokens and slots. Each slot then uses these scores to perform a weighted summation of the input tokens. Each "expert" is essentially a unique Multi-Layer Perceptron (MLP) layer tasked with processing its assigned slot (in our experiments, we allocate one slot per expert). Finally, the initial importance scores are also used to combine the outputs from all the slots. In alignment with their work, we also incorporate Transformers [41] and substitute the second half of the MLP blocks with Soft MoE layers as the composer for our model.
Given an input image embedding denoted as F_v and a set of retrieved visual embeddings, namely V_1, V_2, ..., V_n, we also acquire the corresponding linguistic embeddings, labeled T_1, T_2, ..., T_n. For instance, when we input an image of a "red car" into the Hopfield Network, the network produces a set of prototypes like "red tomato," "old car," and so forth. The textual retrieval, in this scenario, would include words like "red," "old," "car," "tomato," and others. We opt to keep these retrieved embeddings as representations of individual primitives because they are easier to combine than representations of entire compositions such as "red tomato" and "old car".
Then, we concatenate the input image embedding with the retrieved embeddings to form [F_v, V_1, V_2, ..., V_n, T_1, T_2, ..., T_n] and feed it forward to the Soft MoE. The output is [F_v', V_1', V_2', ..., V_n', T_1', T_2', ..., T_n']. The final input embedding is the combination of F_v and F_v' according to
F_v^c = w \, F_v + (1 - w) \, F_v', \quad 0 < w < 1. \quad (4)" }, { "figure_ref": [], "heading": "Loss function", "publication_ref": [], "table_ref": [], "text": "The class probability is computed as:
p_{spm}(y = (s, o) \mid x; \theta) = \frac{\exp(F_v \cdot F_t)}{\sum_{(\bar{s},\bar{o}) \in C_s} \exp(F_v \cdot F_t)}. \quad (5)
To minimize the cross-entropy loss in the soft prompt module, L_{spm}, we use the formula:
L_{spm} = -\frac{1}{|C_s|} \sum_{(x,y) \in C_s} \log\big(p_{spm}(y = (s, o) \mid x; \theta)\big). \quad (6)
To ensure that the soft prompt's representation mirrors the joint representation of primitives, we apply the decomposition technique from DFSP [23] and compute the cross-entropy loss on the decomposed features. Let F_{ts} and F_{to} represent the decomposed attribute and object representations from the text embeddings, respectively. DFSP defines the class probabilities as:
p(y = s \mid x; \theta) = \frac{\exp(F_v^c \cdot F_{ts})}{\sum_{\bar{s} \in A} \exp(F_v^c \cdot F_{ts})}, \quad (7)
p(y = o \mid x; \theta) = \frac{\exp(F_v^c \cdot F_{to})}{\sum_{\bar{o} \in O} \exp(F_v^c \cdot F_{to})}. \quad (8)
The cross-entropy loss is then computed using:
L_{decompose} = -\frac{1}{|A|} \sum_{(x,y) \in C_s} \log\big(p(y = s \mid x; \theta)\big) \quad (9)
\qquad\qquad -\frac{1}{|O|} \sum_{(x,y) \in C_s} \log\big(p(y = o \mid x; \theta)\big). \quad (10)
The composed class probability is defined analogously:
p_c(y = (s, o) \mid x; \theta) = \frac{\exp(F_v^c \cdot F_t)}{\sum_{(\bar{s},\bar{o}) \in C_s} \exp(F_v^c \cdot F_t)}. \quad (11)
Finally, the corresponding cross-entropy loss is minimized:
L_{compose} = -\frac{1}{|C_s|} \sum_{(x,y) \in C_s} \log\big(p_c(y = (s, o) \mid x; \theta)\big). \quad (12)
In conclusion, our final loss is computed as the weighted sum of its parts, where 0 < α, β, γ < 1:
L = L_{compose} + \alpha L_{decompose} + \beta L_{spm} + \gamma (L_{retrieval} + L_{clustering}). \quad (13)
5. Experiment" }, { "figure_ref": [], "heading": "Experiment Setup", "publication_ref": [], "table_ref": [], "text": "Datasets. Our experiments are conducted on three real-world, challenging benchmark datasets: MIT-States, UT-Zappos, and C-GQA. The MIT-States dataset is frequently used in CZSL research, containing 115 states, 245 objects, and 1962 compositions, and has been embraced by many previous methods. UT-Zappos includes 50025 images of shoes, featuring 16 states and 12 objects. On the other hand, C-GQA is a newly introduced, large-scale benchmark derived from in-the-wild images, offering more generalized compositions. Being the most popular dataset for CZSL, C-GQA contains 453 states, 870 objects, and a total of 39298 images, which include over 9500 compositions.
Metrics. Following the framework in [25], we evaluate prediction accuracy considering both seen and unseen compositions under both closed-world and open-world scenarios. Specifically, "Seen" (S) denotes accuracy evaluated solely on known compositions, while "Unseen" (U) denotes accuracy evaluated exclusively on unknown compositions. Moreover, we calculate the Harmonic Mean (HM) of the S and U metrics. Given the tendency of zero-shot models to favor known compositions, we generate a seen-unseen accuracy curve at various operating points, ranging from a bias of negative infinity to positive infinity, and determine the Area Under the Curve (AUC). In summary, our evaluation metrics encompass S, U, HM, and AUC.
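The sketch below illustrates, on random placeholder scores, how such a seen-unseen curve and the resulting HM and AUC can be computed by sweeping a calibration bias added to the seen-class scores; the exact protocol follows [25], and the bias range and data here are arbitrary assumptions for demonstration only.
```python
# Toy computation of the Seen/Unseen accuracy curve, best HM, and AUC.
import numpy as np

rng = np.random.default_rng(0)
n_img, n_seen, n_unseen = 200, 15, 10
scores = rng.normal(size=(n_img, n_seen + n_unseen))     # placeholder composition scores
unseen_img = rng.random(n_img) < 0.5                     # which test images hold unseen pairs
labels = np.where(unseen_img,
                  n_seen + rng.integers(0, n_unseen, n_img),
                  rng.integers(0, n_seen, n_img))

seen_acc, unseen_acc = [], []
for bias in np.linspace(-3.0, 3.0, 61):                  # sweep the calibration bias
    shifted = scores.copy()
    shifted[:, :n_seen] += bias                          # favour or penalise seen compositions
    correct = shifted.argmax(axis=1) == labels
    seen_acc.append(correct[~unseen_img].mean())
    unseen_acc.append(correct[unseen_img].mean())

seen_acc, unseen_acc = np.array(seen_acc), np.array(unseen_acc)
hm = 2 * seen_acc * unseen_acc / np.clip(seen_acc + unseen_acc, 1e-8, None)
order = np.argsort(seen_acc)                             # integrate unseen acc. over seen acc.
auc = np.trapz(unseen_acc[order], seen_acc[order])
print(f"best HM = {hm.max():.3f}, AUC = {auc:.3f}")
```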
Implementation Details. Our HOMOE system is implemented using PyTorch version 1.12.1 and is optimized using the Adam optimizer over 10 epochs. The image and text encoders within this model are both constructed on top of the pre-trained CLIP ViT-L/14 architecture. Furthermore, we configure the Soft MoE with 2 layers. The complete model was trained and evaluated on a single NVIDIA RTX 3090 GPU." }, { "figure_ref": [], "heading": "Comparison", "publication_ref": [ "b30", "b29", "b19", "b29", "b25", "b17", "b19" ], "table_ref": [], "text": "We present experimental comparisons with recent learning methods, including AoP [31], LE+ [30], TMN [35], SymNet [20], CompCos [25], CGE [30], Co-CGE [26], SCEN [18], KG-SP [11], CSP [33] and DFSP [23].
The performance is evaluated in both closed-world and open-world scenarios. In the closed-world scenario, our HOMOE model sets a new benchmark on the MIT-States and UT-Zappos datasets, displaying superior performance over the recent state-of-the-art method DFSP [23]. Specifically, on the MIT-States dataset, HOMOE surpasses DFSP [23] by increasing the unseen accuracy by 2.6%, seen accuracy by 3.6%, harmonic mean by 2.6%, and AUC by 2.7%. On the UT-Zappos dataset, HOMOE exhibits an improvement of 1.7%, 2.2%, 1.9%, and 1.5% on seen accuracy, unseen accuracy, harmonic mean, and AUC, respectively, compared to DFSP [23]. Regarding the C-GQA dataset, the performance of HOMOE provides insightful data, prompting a detailed ablation study to better understand the dynamics, as elaborated in the subsequent section." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "Our method shows optimal performance on the MIT-States dataset, prompting us to focus our ablation studies there to identify the key factors contributing to its success and to understand how these can be applied to improve results on other datasets." }, { "figure_ref": [ "fig_1" ], "heading": "Memory Updates", "publication_ref": [], "table_ref": [], "text": "When considering the memory update mechanism during training, its configuration plays a pivotal role in the model's overall effectiveness. Specifically, while our method excels with the MIT-States and UT-Zappos datasets, it struggles with the C-GQA dataset. T-SNE visualizations of the memory in Figure 3 highlight that the C-GQA dataset's memory structure is not as well organized, mainly because many classes are represented by only a few images. This limited representation fails to capture a comprehensive class essence, which is why our method underperforms on the C-GQA dataset." }, { "figure_ref": [ "fig_1" ], "heading": "Contrastive Loss Impacts", "publication_ref": [], "table_ref": [], "text": "The incorporation of the contrastive loss presents a distinct training dynamic. Without it, the model tends to classify input images using retrieved features as auxiliary support. Introducing the contrastive loss shifts the focus towards compositional learning, significantly boosting performance on unseen combinations at the expense of seen ones. A balanced training strategy that combines both approaches yields the best results. Figure 3 also shows that the contrastive loss helps maintain feature clusters, preventing the model from updating only those features derived directly from the CLIP loss. It is important to recognize the contrastive loss's sensitivity to hyperparameters, especially batch size, due to the InfoNCE loss being computed across image batches. Intuitively, a larger batch allows for more extensive feature updates, potentially enhancing memory feature clustering. We believe that better performance across all datasets could be achieved by increasing the batch size. However, such adjustments would require computational resources that exceed our current budget.
5.3.3 Analysis of Expert Allocation in Soft MoE. Our analysis examines how the Soft Mixture of Experts (Soft MoE) distributes inputs based on their attributes or objects. By contrasting this with a cross-attention approach, we find that implementing Soft MoE enhances unseen accuracy by 1%. Visual inspection in Fig 4 reveals that the Soft MoE tends to assign images with similar attributes or objects to the same experts, indicating a degree of efficient routing. Yet, this pattern is confined to a few experts, with many experts still handling a mix of unrelated inputs. 
This selective engagement of experts is noteworthy, as it implies that the model does not engage all patterns or features equally. Instead, it adopts a tailored strategy for image recognition, focusing on distinctive features to potentially improve its performance on new, unseen data." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose HOMOE, a novel framework to Compositional Zero-Shot Learning (CZSL) that mimics human adaptability to new state-object combinations. By integrating the Modern Hopfield Network with a Mixture of Experts, our framework effectively recalls and assembles relevant primitives into novel compositions. Additional loss functions are proposed to ensure that each part of the system contributes optimally to information recall and association. Our extensive evaluations and detailed analyses establish the superior performance of HOMOE across various standard datasets compared to SOTA and provide insights into its limitations with the C-GQA dataset." } ]
Compositional Zero-Shot Learning (CZSL) has emerged as an essential paradigm in machine learning, aiming to overcome the constraints of traditional zero-shot learning by incorporating compositional thinking into its methodology. Conventional zero-shot learning has difficulty managing unfamiliar combinations of seen and unseen classes because it depends on pre-defined class embeddings. In contrast, Compositional Zero-Shot Learning uses the inherent hierarchies and structural connections among classes, creating new class representations by combining attributes, components, or other semantic elements. In our paper, we propose a novel framework that for the first time combines the Modern Hopfield Network with a Mixture of Experts (HOMOE) to classify the compositions of previously unseen objects. Specifically, the Modern Hopfield Network creates a memory that stores label prototypes and identifies relevant labels for a given input image. Following this, the Mixture of Expert models integrates the image with the fitting prototype to produce the final composition classification. Our approach achieves SOTA performance on several benchmarks, including MIT-States and UT-Zappos. We also examine how each component contributes to improved generalization.
HOMOE: A Memory-Based and Composition-Aware Framework for Zero-Shot Learning with Hopfield Network and Soft Mixture of Experts
[ { "figure_caption": "The set C is used to label images with seen classes C s ⊂ C and unseen classes C u ⊂ C where C s ∩ C u = ∅ and each image only has one c ∈ C label. This division is also used to separate training D tr = {(X tr , C tr )} and testing D test = {(X test , C test )} datasets, where X tr is labelled with seen classes C s only, so C tr ⊆ P owerset(C s ). Whereas X test are labeled with C s and C u classes. [11] defined the open-world evaluation as C test labels are from C s ∪ C u , while for closed-world evaluation C test labels are from C s ∪ C u ′ where C u ′ ⊂ C u . The aim of CZSL is to train a classification model f θ (X) on D tr but it should also be capable of correctly predicting D test .", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. The visualization of attribute memory embeddings in different memory configurations and datasets", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Visualization of Soft Mixture of Experts. Each 'expert' focuses on distinct categories of data -Expert 3 on food items, Expert 4 on apparel, etc.-demonstrating the model's ability to assign and weigh inputs across different neural network sub-models for enhanced specialization and accuracy in classification taskswith many experts still handling a mix of unrelated inputs. This selective engagement of experts is noteworthy, as it implies that the model does not engage all patterns or features equally. Instead, it adopts a tailored strategy for image recognition, focusing on distinctive features to potentially improve its performance on new, unseen data.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "and Ta-", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Closed World Evaluation. Comparison to state-of-the-art models", "figure_data": "", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "also shows that contrastive loss also Open World Evaluation. Comparison to state-of-the-art models helps maintain feature clusters, preventing the model from updating only those features derived directly from the CLIP loss. 
It is important to recognize the contrastive loss's sen-InfoNCE loss 47.5 52.4 37.8 20.9with InfoNCE loss 45.5 55.6 36.5 19.8", "figure_data": "MethodSMIT-States U HAUCSUT-Zappos U HAUCSC-GQA U HAUCAoP [31]16.65.74.70.750.9 34.2 29.4 13.7----LE+ [30]14.22.52.70.360.4 36.5 30.5 16.3 19.2 0.71.00.08TMN [33]12.60.91.20.155.9 18.1 21.78.4----SymNet [20]21.47.05.80.853.3 44.6 34.5 18.5 26.7 2.23.30.43CompCos [25]25.4 10.08.91.659.3 46.8 36.9 21.3----CGE [30]32.45.16.01.061.7 47.7 39.0 23.1 32.7 1.82.90.47Co-CGEˆClosed [26] 31.15.86.41.162.0 44.3 40.3 23.1 32.1 2.03.40.53Co-CGEˆOpen [26]30.3 11.2 10.72.361.2 45.8 40.8 23.3 32.1 3.04.80.78KG-SP [11]28.47.57.41.361.8 52.1 42.3 26.5 31.5 2.94.70.78DRANet [19]29.87.87.91.565.1 54.3 44.0 28.8 31.3 3.96.01.05CSP [33]46.3 15.7 17.45.764.1 44.1 38.9 22.7 28.7 5.26.91.2DFSP [23]47.5 18.5 19.35.866.8 60.0 44.0 30.3 38.3 7.2 10.42.4HOMOE50.4 19.7 20.77.968.4 61.9 45.1 31.1 35.7 6.69.02.0MethodMIT-StatesSUHAUCwithout", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Closed world evaluation on different memory configurations", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Closed world evaluation result of different batch size after 10 training epochs5.3.3 Analysis of Expert Allocation in Soft MoEOur analysis examines how the Soft Mixture of Experts (Soft MoE) distributes inputs based on their attributes or objects. By contrasting this with a cross-attention approach, we find that implementing Soft MoE enhances unseen accuracy by 1%. Visual inspection in Fig 4 reveals that the Soft MoE tends to assign images with similar attributes or objects to the same experts, indicating a degree of efficient routing. Yet, this pattern is confined to a few experts,", "figure_data": "Batch sizeMIT-StatesSUHAUC849.5 52.6 38.7 21.91649.7 54.1 39.5 22.83250.5 54.6 39.9 23.36450.1 55.3 39.9 23.5", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Do Huu Dat; Po Yuan; Tien Hoang Nguyen; Wray Buntine; Mohammed Bennamoun
[ { "authors": "Zhuo Chen; Yufeng Huang; Jiaoyan Chen; Yuxia Geng; Wen Zhang; Yin Fang; Jeff Z Pan; Huajun Chen", "journal": "", "ref_id": "b0", "title": "Duet: Crossmodal semantic grounding for contrastive zero-shot learning", "year": "2023" }, { "authors": "Michael Crossley; György Paul R Benjamin; Kevin Kemenes; Ildikó Staras; Kemenes", "journal": "Science Advances", "ref_id": "b1", "title": "A circuit mechanism linking past and future learning through shifts in perception", "year": "2023" }, { "authors": "Mete Demircigil; Judith Heusel; Matthias Löwe; Sven Upgang; Franck Vermet", "journal": "Journal of Statistical Physics", "ref_id": "b2", "title": "On a model of associative memory with huge storage capacity", "year": "2017" }, { "authors": "William Fedus; Barret Zoph; Noam Shazeer", "journal": "The Journal of Machine Learning Research", "ref_id": "b3", "title": "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity", "year": "2022" }, { "authors": "Sam Gross; Marc'aurelio Ranzato; Arthur Szlam", "journal": "", "ref_id": "b4", "title": "Hard mixtures of experts for large scale weakly supervised vision", "year": "2017" }, { "authors": "Jingcai Guo; Song Guo", "journal": "IEEE Transactions on Multimedia", "ref_id": "b5", "title": "A novel perspective to zero-shot learning: Towards an alignment of manifold structures via semantic feature expansion", "year": "2020" }, { "authors": "Jingcai Guo; Song Guo; Qihua Zhou; Ziming Liu; Xiaocheng Lu; Fushuo Huo", "journal": "", "ref_id": "b6", "title": "Graph knows unknowns: Reformulate zero-shot learning as sample-level graph recognition", "year": "2023" }, { "authors": "Shaozhe Hao; Kai Han; Kwan-Yee K Wong", "journal": "", "ref_id": "b7", "title": "Learning attention as disentangler for compositional zero-shot learning", "year": "2023" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b8", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "J John; Hopfield", "journal": "Proceedings of the national academy of sciences", "ref_id": "b9", "title": "Neural networks and physical systems with emergent collective computational abilities", "year": "1982" }, { "authors": "Shyamgopal Karthik; Massimiliano Mancini; Zeynep Akata", "journal": "", "ref_id": "b10", "title": "Kg-sp: Knowledge guided simple primitives for open world compositional zero-shot learning", "year": "2022" }, { "authors": "Hanjae Kim; Jiyoung Lee; Seongheon Park; Kwanghoon Sohn", "journal": "", "ref_id": "b11", "title": "Hierarchical visual primitive experts for compositional zero-shot learning", "year": "2023" }, { "authors": "Dmitry Krotov; John J Hopfield", "journal": "Advances in neural information processing systems", "ref_id": "b12", "title": "Dense associative memory for pattern recognition", "year": "2016" }, { "authors": "Hannes Christoph H Lampert; Stefan Nickisch; Harmeling", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b13", "title": "Attribute-based classification for zero-shot visual object categorization", "year": "2013" }, { "authors": "Dmitry Lepikhin; Hyoukjoong Lee; Yuanzhong Xu; Dehao Chen; Orhan Firat; Yanping Huang; Maxim Krikun; Noam Shazeer; Zhifeng Chen", "journal": "", "ref_id": "b14", "title": "Gshard: Scaling giant models with conditional computation and automatic sharding", "year": "2020" }, { "authors": "Mike Lewis; Shruti Bhosale; Tim Dettmers; Naman Goyal; Luke 
Zettlemoyer", "journal": "PMLR", "ref_id": "b15", "title": "Base layers: Simplifying training of large, sparse models", "year": "2021" }, { "authors": "Xiangyu Li; Zhe Xu; Kun Wei; Cheng Deng", "journal": "", "ref_id": "b16", "title": "Generalized zero-shot learning via disentangled representation", "year": "2021" }, { "authors": "Xiangyu Li; Xu Yang; Kun Wei; Cheng Deng; Muli Yang", "journal": "", "ref_id": "b17", "title": "Siamese contrastive embedding network for compositional zero-shot learning", "year": "2022" }, { "authors": "Yun Li; Zhe Liu; Saurav Jha; Lina Yao", "journal": "", "ref_id": "b18", "title": "Distilled reverse attention network for open-world compositional zeroshot learning", "year": "2023" }, { "authors": "Yong-Lu Li; Yue Xu; Xiaohan Mao; Cewu Lu", "journal": "", "ref_id": "b19", "title": "Symmetry and group in attribute-object compositions", "year": "2020" }, { "authors": "Yang Liu; Lei Zhou; Xiao Bai; Yifei Huang; Lin Gu; Jun Zhou; Tatsuya Harada", "journal": "", "ref_id": "b20", "title": "Goal-oriented gaze estimation for zero-shot learning", "year": "2021" }, { "authors": "Ziming Liu; Song Guo; Jingcai Guo; Yuanyuan Xu; Fushuo Huo", "journal": "IEEE Transactions on Multimedia", "ref_id": "b21", "title": "Towards unbiased multi-label zero-shot learning with pyramid and semantic attention", "year": "2022" }, { "authors": "Xiaocheng Lu; Song Guo; Ziming Liu; Jingcai Guo", "journal": "", "ref_id": "b22", "title": "Decomposed soft prompt guided fusion enhancing for compositional zero-shot learning", "year": "2023" }, { "authors": "Carlo Lucibello; Marc Mézard", "journal": "", "ref_id": "b23", "title": "The exponential capacity of dense associative memories", "year": "2023" }, { "authors": "Massimiliano Mancini; Muhammad Ferjad Naeem; Yongqin Xian; Zeynep Akata", "journal": "", "ref_id": "b24", "title": "Open world compositional zeroshot learning", "year": "2021" }, { "authors": "Massimiliano Mancini; Muhammad Ferjad Naeem; Yongqin Xian; Zeynep Akata", "journal": "IEEE Transactions on pattern analysis and machine intelligence", "ref_id": "b25", "title": "Learning graph embeddings for open world compositional zero-shot learning", "year": "2022" }, { "authors": "Tomas Mikolov; Ilya Sutskever; Kai Chen; Greg S Corrado; Jeff Dean", "journal": "Advances in neural information processing systems", "ref_id": "b26", "title": "Distributed representations of words and phrases and their compositionality", "year": "2013" }, { "authors": "Ishan Misra; Abhinav Gupta; Martial Hebert", "journal": "", "ref_id": "b27", "title": "From red wine to red tomato: Composition with context", "year": "2017" }, { "authors": "Basil Mustafa; Carlos Riquelme; Joan Puigcerver; Rodolphe Jenatton; Neil Houlsby", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b28", "title": "Multimodal contrastive learning with limoe: the language-image mixture of experts", "year": "2022" }, { "authors": "Muhammad Ferjad Naeem; Yongqin Xian; Federico Tombari; Zeynep Akata", "journal": "", "ref_id": "b29", "title": "Learning graph embeddings for compositional zero-shot learning", "year": "2021" }, { "authors": "Tushar Nagarajan; Kristen Grauman", "journal": "", "ref_id": "b30", "title": "Attributes as operators: factorizing unseen attribute-object compositions", "year": "2018" }, { "authors": "Zhixiong Nan; Yang Liu; Nanning Zheng; Song-Chun Zhu", "journal": "", "ref_id": "b31", "title": "Recognizing unseen attribute-object pair with generative model", "year": "2019" }, { "authors": "Peilin Nihal 
V Nayak; Stephen Yu; Bach", "journal": "", "ref_id": "b32", "title": "Learning to compose soft prompts for compositional zero-shot learning", "year": "2022" }, { "authors": "Joan Puigcerver; Carlos Riquelme; Basil Mustafa; Neil Houlsby", "journal": "", "ref_id": "b33", "title": "From sparse to soft mixtures of experts", "year": "2023" }, { "authors": "Senthil Purushwalkam; Maximilian Nickel; Abhinav Gupta; Marc'aurelio Ranzato", "journal": "", "ref_id": "b34", "title": "Task-driven modular networks for zero-shot compositional learning", "year": "2019" }, { "authors": "Hubert Ramsauer; Bernhard Schäfl; Johannes Lehner; Philipp Seidl; Michael Widrich; Thomas Adler; Lukas Gruber; Markus Holzleitner; Milena Pavlović; Geir Kjetil Sandve", "journal": "", "ref_id": "b35", "title": "Hopfield networks is all you need", "year": "2020" }, { "authors": "Carlos Riquelme; Joan Puigcerver; Basil Mustafa; Maxim Neumann; Rodolphe Jenatton; André Susano Pinto; Daniel Keysers; Neil Houlsby", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b36", "title": "Scaling vision with sparse mixture of experts", "year": "2021" }, { "authors": "Frank Ruis; Gertjan Burghouts; Doina Bucur", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b37", "title": "Independent prototype propagation for zero-shot compositionality", "year": "2021" }, { "authors": "Nirat Saini; Khoi Pham; Abhinav Shrivastava", "journal": "", "ref_id": "b38", "title": "Disentangling visual embeddings for attributes and objects", "year": "2022" }, { "authors": "Noam Shazeer; Azalia Mirhoseini; Krzysztof Maziarz; Andy Davis; Quoc Le; Geoffrey Hinton; Jeff Dean", "journal": "", "ref_id": "b39", "title": "Outrageously large neural networks: The sparsely-gated mixtureof-experts layer", "year": "2017" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b40", "title": "Attention is all you need", "year": "2017" }, { "authors": "Qingsheng Wang; Lingqiao Liu; Chenchen Jing; Hao Chen; Guoqiang Liang; Peng Wang; Chunhua Shen", "journal": "", "ref_id": "b41", "title": "Learning conditional attributes for compositional zero-shot learning", "year": "2023" }, { "authors": "Zhirong Wu; Yuanjun Xiong; Stella X Yu; Dahua Lin", "journal": "", "ref_id": "b42", "title": "Unsupervised feature learning via non-parametric instance discrimination", "year": "2018" }, { "authors": "Tian Zhang; Kongming Liang; Ruoyi Du; Xian Sun; Zhanyu Ma; Jun Guo", "journal": "Springer", "ref_id": "b43", "title": "Learning invariant visual representations for compositional zero-shot learning", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 61.54, 99.07, 204.32, 9.65 ], "formula_id": "formula_0", "formula_text": "C = A × O = {(a 0 , o 0 ), (a 1 , o 0 ), . . . , (a n , o m )}." }, { "formula_coordinates": [ 4, 308.86, 121.4, 236.25, 23.18 ], "formula_id": "formula_1", "formula_text": "C i = (a i , o i ) into a soft prompt P θ (C i ) = [θ 1 ] [θ 2 ] [θ 3 ] [a i ] [o i ], with [θ 1 ] [θ 2 ] [θ 3 ]" }, { "formula_coordinates": [ 4, 334.33, 170.8, 84.7, 9.65 ], "formula_id": "formula_2", "formula_text": "F t = E text (P θ (C i ))." }, { "formula_coordinates": [ 4, 338.9, 608.71, 206.22, 24.16 ], "formula_id": "formula_3", "formula_text": "V i = sof tmax(Z i • M T a ) • M a , i < l/2 sof tmax(Z i • M T o ) • M o , i ≥ l/2(1)" }, { "formula_coordinates": [ 5, 85.97, 417.52, 164.01, 74 ], "formula_id": "formula_4", "formula_text": "L re (s) = |Ctr| i=1 I(a t ) i log   1 l/2 l/2 j=1 s j,i   + |Ctr| i=1 I(o t ) i log   1 l/2 l j=l/2 s j,i  " }, { "formula_coordinates": [ 5, 84.33, 629.11, 202.04, 30.32 ], "formula_id": "formula_5", "formula_text": "L N = - l i=0 log exp(V + i,1 • V + i,2 /τ ) l j=0 exp(V i,1 • V j /τ ) ,(3)" }, { "formula_coordinates": [ 6, 135.25, 266.44, 133.43, 9.65 ], "formula_id": "formula_6", "formula_text": "[F v , V 1 , V 2 , . . . V n , T 1 , T 2 , . . . T n ]" }, { "formula_coordinates": [ 6, 50.11, 288.78, 137, 12.2 ], "formula_id": "formula_7", "formula_text": "[F ′ v , V ′ 1 , V ′ 2 , . . . V ′ n , T ′ 1 , T ′ 2 , . . . T ′ n ]." }, { "formula_coordinates": [ 6, 58.71, 320.27, 227.65, 12.69 ], "formula_id": "formula_8", "formula_text": "F c v = F v * w + (1 -w) * F ′ v with 0 < w < 1 . (4)" }, { "formula_coordinates": [ 6, 62.95, 385.91, 223.41, 24.72 ], "formula_id": "formula_9", "formula_text": "p spm (y = (s, o)|x; θ) = exp(F v • F t ) (s,ō)∈Cs exp(F v • F t ) .(5)" }, { "formula_coordinates": [ 6, 58.93, 450.55, 227.44, 27.27 ], "formula_id": "formula_10", "formula_text": "L spm = - 1 |C s | (x,y)∈Cs log (p spm (y = (s, o)|x; θ)) . (6)" }, { "formula_coordinates": [ 6, 99.39, 568.63, 186.97, 26.29 ], "formula_id": "formula_11", "formula_text": "p( y = s x, θ ) = exp(F c v • F ts ) (s)∈A exp(F c v • F ts )(7)" }, { "formula_coordinates": [ 6, 93.35, 598.39, 193.01, 26.29 ], "formula_id": "formula_12", "formula_text": "p( y = o x, θ ) = exp(F c v • F to ) (ō)∈O exp(F c v • F to ) .(8)" }, { "formula_coordinates": [ 6, 74.5, 651.8, 211.87, 26.8 ], "formula_id": "formula_13", "formula_text": "L decompose = - 1 |A| x,y∈C s log(p( y = s x; θ ))(9)" }, { "formula_coordinates": [ 6, 136.44, 683.2, 149.92, 26.8 ], "formula_id": "formula_14", "formula_text": "- 1 |O| x,y∈C s log(p y = o x; θ ))(10)" }, { "formula_coordinates": [ 6, 323.12, 71.09, 221.99, 26.29 ], "formula_id": "formula_15", "formula_text": "p c (y = (s, o)|x; θ) = exp(F c v • F t ) (s,ō)∈Cs exp(F c v • F t ) .(11)" }, { "formula_coordinates": [ 6, 316.33, 130.49, 228.79, 27.27 ], "formula_id": "formula_16", "formula_text": "L compose = - 1 |C s | (x,y)∈Cs log (p c (y = (s, o)|x; θ)) (12)" }, { "formula_coordinates": [ 6, 343.51, 199.34, 201.6, 24.6 ], "formula_id": "formula_17", "formula_text": "L =L compose + αL decompose + βL spm + γ(L retrieval + L clustering ) .(13)" } ]
2023-11-23
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b18", "b10", "b33", "b3", "b11", "b38" ], "table_ref": [], "text": "\"There's more than one way to skin a cat.\"\nWhat enables us humans to recognize new concepts we have never encountered before? It all comes down to our capacity to generalize learned knowledge to unseen domains. For instance, when presented with concepts \"green apple\" and \"yellow banana\", we can instantly recognize and imagine the concept \"green banana\" by combining state \"green\" with object \"banana\". Inspired by this innate cognitive ability of humans, Compositional Zero-Shot Learning (CZSL) emerges to tackle the challenge of recognizing unseen state-object compositions (e.g., \"green banana\") by leveraging visible primitives (i.e., state and object) in compositional concepts during training and applying the knowledge during inference [19,25,29].\nEffectively modeling the interactions between state and object primitives, as well as extrapolating the understanding of seen compositions to unseen ones, poses major challenges in CZSL. Concretely, it revolves around two critical factors: 1) Object-conditioned Variance: Wherein the visual representations of the same state category can vary considerably when different objects are involved. As depicted in Figure 1(a), considering the state \"old\" in the context of modifying a \"car\" and a \"cat\", it may refer to a vintage design with classic curves and retro elements for the \"car\", evoking a sense of nostalgia and history, whereas for the \"cat\", it denotes the senior age of feline with features like grey fur, reflecting the passage of time and aging process. 2) State-conditioned Variance: It pertains to the variations in the appearance of an object when combined with different states. As shown in Figure 1(a), for composition \"peeled banana\", the \"banana\" exhibits a smooth texture and a pale appearance, as the outer peel is removed. In contrast, for the composition \"sliced banana\", the \"banana\" takes on a sliced appearance with visible segments.\nPrevious approaches in CZSL often construct separate classifiers for recognizing states and objects simultaneously, overlooking their intrinsic relationship. Recent efforts have made strides in addressing the first factor by adopting a two-stage method with an object-then-state order [11,15,36]. Prioritizing the prediction of the object primitive allows the model to capture salient visual cues (e.g., shapes), thereby enhancing the overall comprehension of compositions. Subsequently, armed with the knowledge of the object primitives, the CZSL model sequentially refines its understanding by classifying the state primitives conditioned on guided object features.\nNonetheless, we argue that there is more than one way to skin a cat, and the human cognition process will progressively collect different observations for specific compositions, in a simple to complex manner. In certain cases, such as the composition \"ripe banana\" in Figure 1(b), the object itself, \"banana\", possesses highly salient visual cues that make it easily recognizable due to its curving shape and vibrant yellow color. Once we establish that it is a \"banana\", we can then further analyze its state and recognize it as a \"ripe banana\" by observing additional visual cues, e.g., the presence of brown spots on the yellow skin. 
In contrast, compositions like \"mashed banana\" possess distinct visual features primarily related to the state \"mashed\" rather than the object. The mushy texture becomes the prominent aspect that captures our attention. Consequently, through further analysis of extra visual features, e.g., the yellow and sticky material, we refine our recognition and discern it as a \"mashed banana\".\nIn this paper, inspired by this human-like cognition process, we propose a novel approach, Progressive Language-based Observations (PLO), for CZSL. Specifically, PLO dynamically determines the order of progressive observations in the form of language, building upon pre-trained vision-language models (VLMs), e.g., CLIP [34]. These observations comprise a series of languages that allow the model to observe the image's content step by step. Having been trained on image-text pairs, these VLMs endow the model with observing capability by measuring the similarity between the two modalities within the same space. For dynamic progressive observation, we propose two variants: PLO-VLM and PLO-LLM. In PLO-VLM, we introduce a two-step observation strategy that adopts a VLM-based pre-observing classifier to dynamically determine, from the image features, the order of the primitive-containing languages. Subsequently, leveraging the observed primitive knowledge (semantic features from primitive prompts), we integrate this information via a cross-modal attention module for category prediction of the remaining primitive. In PLO-LLM, we further extend to a multi-step observation scheme, employing large language models (LLMs), e.g., GPT [4], to design composition-specific prompts (e.g., \"yellow, mushy substance\" in Figure 1(b)) for each composition category, thus obtaining a composition-specific observation order. This method allows us to selectively extract features at each observation step, boosting the model's ability to understand and recognize composition categories more effectively.\nThree popular and challenging CZSL datasets, MIT-States [12], UT-Zappos [29], and C-GQA [39], are used for evaluation. Extensive results show that our PLO exceeds the current state-of-the-art CZSL methods with significant gains in both closed-world and open-world settings. In summary, the main contributions of our work are three-fold: • We propose the novel PLO for CZSL. To the best of our knowledge, it is the first work to dynamically allocate the order of observations using language, enabling effective prediction of unseen state-object compositions. • We devise two variants: PLO-VLM, a two-step method whose pre-observing classifier dynamically decides the observation order of the two primitives, and PLO-LLM, a multi-step scheme that employs LLMs to craft composition-specific prompts for step-by-step observing. • Extensive experiments on MIT-States, UT-Zappos, and C-GQA demonstrate that PLO outperforms state-of-the-art methods in both closed-world and open-world settings. In this section, we first introduce how to endow models with observing capabilities in Sec. 3.1. Then, we describe how to determine the step-by-step observation sequence for progressive comprehension, i.e., PLO-VLM and PLO-LLM, in Sec. 3.2 and Sec. 3.3, respectively." }, { "figure_ref": [], "heading": "CLIP-based Observing", "publication_ref": [ "b33", "b6" ], "table_ref": [], "text": "To enable the observing ability, our PLO builds upon a pre-trained vision-language model, CLIP [34], which consists of an image encoder En_v(·) and a text encoder En_t(·) capable of mapping image features and semantic features from prompts (natural languages with category information, e.g., \"a photo of ripe banana\") into a shared semantic space. By comparing the similarities between the two modalities in this shared space, we can discern whether a language-based observation exists in a given image.\n1) Image Encoder En_v(·).
We enhance the image encoder of CLIP with lightweight parameter-efficient fine-tuning (PEFT) strategies [6, 7], allowing it to effectively handle image features in CZSL. These PEFT strategies allow the encoder to achieve performance comparable to full fine-tuning by transferring knowledge from CLIP, while introducing only a small number of learnable parameters and thus avoiding strong training biases.\nSpecifically, the image encoder En_v(·) splits the input image into non-overlapping patches (N in total), prepends a pre-trained [CLS] token, adds positional embeddings, and then generates a sequence of patch tokens. After that, the self-attention-based blocks, including the inserted learnable layers, are used to update the token sequence V = {v_[CLS], v_1, v_2, ..., v_N}. By optimizing the parameters of these inserted learnable layers during training while keeping the original image encoder frozen, PLO effectively incorporates primitive-specific information to improve its observation capabilities. Finally, a linear layer is utilized to project the output [CLS] token v_[CLS], yielding the image representation v in the cross-modal shared space.\n2) Text Encoder En_t(·). Following [22], we employ soft prompt tuning, making prompt tokens better adapted to CZSL. Specifically, we formulate the prompt as a set comprising prefix context and category representations. By converting the prompt into learnable embeddings, we provide the model with the flexibility to adapt its language-based observation. The prompts of state, object, and composition, denoted as P_s, P_o, and P_c, are formulated as:\nP_s = [x_0; x_1; ...; x_M; x_s; o],  P_o = [x_0; x_1; ...; x_M; x_o],  P_c = [x_0; x_1; ...; x_M; x_s; x_o],  (1)\nwhere [x_0; ...; x_M] represents the prefix context initialized with the word embedding of \"a photo of\", x_s and x_o represent the word embeddings of the state and object categories, and o represents the word embedding of the word \"object\". The context length is denoted by M. Similar to the token representations in the visual part, the textual prompts are fed into the text encoder En_t(·), yielding the text representations t_s, t_o, and t_c of the state, object, and composition prompts in the same cross-modal shared space.
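To make the observing mechanism concrete, the following is a minimal PyTorch sketch of how the soft prompts of Eq. (1) and the shared-space similarity scoring could be wired together. It is not the authors' released code: the two encoders are replaced by frozen random projections, the prompts are mean-pooled rather than run through a transformer, and all names and sizes (word_dim, prefix_len, the 1024-d stand-in image feature, etc.) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
word_dim, embed_dim, prefix_len = 512, 512, 3          # assumed sizes, not taken from the paper

# Frozen stand-ins for CLIP's encoders En_v(.) and En_t(.); the real encoders are
# transformers, random linear maps are used only so the sketch runs end-to-end.
image_encoder = nn.Linear(1024, embed_dim).requires_grad_(False)
text_encoder = nn.Linear(word_dim, embed_dim).requires_grad_(False)

# Learnable soft-prompt pieces (Eq. 1): a shared prefix context [x_0; ...; x_M],
# initialized randomly here instead of from the embedding of "a photo of".
prefix_ctx = nn.Parameter(0.02 * torch.randn(prefix_len, word_dim))
state_emb = nn.Parameter(torch.randn(word_dim))        # x_s, e.g., "mashed"
object_emb = nn.Parameter(torch.randn(word_dim))       # x_o, e.g., "banana"
generic_obj = nn.Parameter(torch.randn(word_dim))      # embedding of the word "object"

def encode_prompt(tokens):
    # Mean-pool the prompt tokens and project into the shared space (a simplification
    # of the transformer text encoder), then L2-normalize for cosine similarity.
    return F.normalize(text_encoder(tokens.mean(dim=0)), dim=-1)

def prompt_features():
    # Build P_s, P_o, P_c by concatenating the prefix context with category embeddings.
    p_s = torch.cat([prefix_ctx, state_emb[None], generic_obj[None]], dim=0)
    p_o = torch.cat([prefix_ctx, object_emb[None]], dim=0)
    p_c = torch.cat([prefix_ctx, state_emb[None], object_emb[None]], dim=0)
    return encode_prompt(p_s), encode_prompt(p_o), encode_prompt(p_c)

def observe(image_feat):
    # An "observation" is the cosine similarity between the image and a prompt
    # in the shared space: a high score suggests the described primitive is present.
    v = F.normalize(image_encoder(image_feat), dim=-1)
    t_s, t_o, t_c = prompt_features()
    return {"state": float(v @ t_s), "object": float(v @ t_o), "composition": float(v @ t_c)}

print(observe(torch.randn(1024)))

In a full implementation, only the prefix context, the category embeddings, and the inserted PEFT layers would receive gradients, while the CLIP backbone is kept frozen, as described above.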
" }, { "figure_ref": [], "heading": "PLO-VLM", "publication_ref": [], "table_ref": [], "text": "As illustrated in Figure 2(a), PLO-VLM adopts a dynamic two-step observation framework to decide whether to prioritize observing the state or the object primitive. Next, the prompt features of the selected primitive are integrated into the image using cross-modal attention (CA), which highlights regions of interest related to the observation. By further comparing the similarities between the refined image features and the text features of the remaining primitives, PLO-VLM effectively recognizes state-object compositions.\n1) Pre-observing. The pre-observing classifier recognizes the first primitive by measuring the cosine similarity S(·, ·) between the image representation v and the text representations of the state primitive t_s and the object primitive t_o:\nF^pre_obs(v, t_s, t_o) = S(v, t_s) ⊕ S(v, t_o),  (2)\nwhere ⊕ denotes concatenation. The first observation is the prompt selected by argmax(F^pre_obs(·)), and its corresponding text representation is t^pre_obs. For instance, in Figure 2(a), the first observation is the prompt of the state \"mashed object\", and t^pre_obs corresponds to the text representation t_s.\n2) Post-observing. After obtaining the first observation, we employ learnable residual cross-modal attention modules, which are widely adopted in existing methods [3, 10, 22], to extract the features of interest indicated by the first observation and produce the refined image representation \hat{v}. The CA module is defined as:\nCA(q, K, V) = q + FFN(LN(q + MHA(q, K, V))),  (3)\nwhere q, K, and V are the query, key, and value features, respectively, and FFN, LN, and MHA denote the feed-forward network, layer normalization, and multi-head attention modules, respectively. The refined image representations are derived based on the first observation as follows:\n\hat{V} = CA(V, T^pre_obs, T^pre_obs),  \hat{v} = FC(\hat{V}),  (4)\nwhere T^pre_obs denotes the output patch tokens of the prompt corresponding to t^pre_obs. The [CLS] token of \hat{V} is then fed into a linear layer to obtain the refined image representation \hat{v}. The refined image representation is utilized to calculate the similarity with the prompt of the remaining primitive, e.g., the object \"banana\" in Figure 2(a), written as:\np(s|I) = π(S(\hat{v}, t^post_obs), τ), if t^post_obs ← t_s;  p(o|I) = π(S(\hat{v}, t^post_obs), τ), if t^post_obs ← t_o,  (5)\nwhere π(·) is the softmax function and τ denotes the temperature hyper-parameter. Here, t^post_obs represents the text representation of the prompt of the remaining state or object primitive, respectively. Additionally, to ensure that the refined feature encompasses both state and object information, we also compare its similarity with the text representation of the composition prompt, which is defined as follows:\np(c|I) = π(S(\hat{v}, t_c), τ),  (6)\nwhere t_c corresponds to the text representation of the prompt of the composition category.
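The two-step PLO-VLM pipeline described above can be summarized in the following sketch, again a simplified stand-in rather than the paper's implementation: a single residual attention block plays the role of the CA module of Eq. (3), all prompt features and token sequences are random placeholders, and the pre-observing decision is taken here as a simple argmax over prompt similarities, whereas the paper trains a dedicated pre-observing classifier with a multi-label loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d, num_states, num_objects, prompt_len = 512, 5, 7, 8   # assumed sizes

class ResidualCA(nn.Module):
    # Residual cross-modal attention in the spirit of Eq. (3):
    # q + FFN(LN(q + MHA(q, K, V))).
    def __init__(self, dim, heads=8):
        super().__init__()
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, q, kv):
        h = q + self.mha(q, kv, kv, need_weights=False)[0]
        return q + self.ffn(self.ln(h))

ca, fc = ResidualCA(d), nn.Linear(d, d)

def plo_vlm(patch_tokens, state_feats, object_feats, comp_feats,
            state_tokens, object_tokens, tau=0.07):
    # patch_tokens: (1+N, d) image tokens, [CLS] first; *_feats: (K, d) prompt features;
    # *_tokens: (K, L, d) prompt token sequences used as keys/values for the CA module.
    v = F.normalize(patch_tokens[0], dim=-1)

    # Pre-observing (Eq. 2): is a state prompt or an object prompt the better match?
    s_best, s_idx = (state_feats @ v).max(0)
    o_best, o_idx = (object_feats @ v).max(0)
    state_first = bool(s_best > o_best)

    # Refine the image tokens with the tokens of the pre-observed prompt (Eqs. 3-4).
    T_pre = state_tokens[s_idx] if state_first else object_tokens[o_idx]
    v_hat = F.normalize(fc(ca(patch_tokens[None], T_pre[None])[0, 0]), dim=-1)

    # Post-observing (Eqs. 5-6): distributions over the remaining primitive and compositions.
    remaining = object_feats if state_first else state_feats
    p_post = F.softmax(remaining @ v_hat / tau, dim=-1)
    p_comp = F.softmax(comp_feats @ v_hat / tau, dim=-1)
    return state_first, p_post, p_comp

first_is_state, p_post, p_comp = plo_vlm(
    torch.randn(1 + 49, d),
    F.normalize(torch.randn(num_states, d), dim=-1),
    F.normalize(torch.randn(num_objects, d), dim=-1),
    F.normalize(torch.randn(num_states * num_objects, d), dim=-1),
    torch.randn(num_states, prompt_len, d),
    torch.randn(num_objects, prompt_len, d),
)
print(first_is_state, p_post.shape, p_comp.shape)

The sketch only covers the forward pass; during training, the pre-observing scores are additionally supervised with the multi-label loss and the three cross-entropy losses introduced in the training objectives below.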
" }, { "figure_ref": [], "heading": "PLO-LLM", "publication_ref": [ "b3" ], "table_ref": [], "text": "As shown in Figure 2(b), PLO-LLM employs a multi-step observation process and starts by generating a sequence of observation prompts for each composition category using an LLM, i.e., GPT [4]. Then, the final compositional classification is determined by observing, step by step, whether the observation prompts exist in the image.\n1) Observation Prompts Generation. To elaborate, for each composition category c ∈ C, a sequence of observation hard prompts P_c = {P_c^{(i)}}_{i=1}^{n} is generated using an LLM with the designed prompt (refer to the Appendix), where n denotes the number of observation steps. Note that these prompts are frozen. For instance, in Figure 2(b), the observation prompts are listed in the blue box, e.g., \"yellow, mushy substance\" for \"mashed banana\"." }, { "figure_ref": [], "heading": "2)", "publication_ref": [], "table_ref": [], "text": "Step-by-step Observing. At each observation step i, the image representation v^{(i)} and the text representation of the current observation prompt t^{(i)}_obs are fed into the learnable CA module. The CA module updates the image representation to produce the refined image representation \hat{v}^{(i)}:\n\hat{V}^{(i)} = CA(V^{(i)}, T^{(i)}_obs, T^{(i)}_obs),  v^{(i+1)} = \hat{v}^{(i)} = FC(\hat{V}^{(i)}),  (7)\nwhere \hat{v}^{(i)} incorporates information from the current observation prompt and is then utilized in the next observation step to calculate the similarity between the image and the next observation prompt t^{(i+1)}_obs. This process is repeated for all observation steps, iteratively updating the image representation with information from each observation prompt. The result of each observation is S(v^{(i)}, t^{(i)}_obs). Since each observation is conditioned on the previous step, the probability of each step is given by:\np(c^{(i)} | I, c^{(1)}, ..., c^{(i-1)}) = π(S(v^{(i)}, t^{(i)}_obs), τ).  (8)\nFurthermore, to facilitate the model in learning semantic representations suitable for CZSL without being solely influenced by the hard prompts, we also incorporate the similarity between the original image representation v and the text representation t_c of the learnable composition soft prompts to aid in prediction. The probability is calculated as follows:\np_Soft(c|I) = π(S(v, t_c), τ).  (9)" }, { "figure_ref": [], "heading": "Training Objectives and Inference", "publication_ref": [ "b13" ], "table_ref": [], "text": "PLO-VLM. During training, we employ four main losses to optimize the model: the multi-label loss L_obs for the pre-observing classifier and the cross-entropy losses L_s, L_o, and L_c for state, object, and composition, respectively. The combined loss is formulated as:\nL^VLM_PLO = L_obs + L_s + L_o + L_c.  (10)\nThe multi-label loss optimizes the pre-observing classifier, which is responsible for predicting whether to initially observe the state or the object primitive in the input image. It is calculated using the binary cross-entropy loss for each target label:\nL_obs = -[y_obs log(σ(F^pre_obs)) + (1 - y_obs) log(1 - σ(F^pre_obs))],  (11)\nwhere σ(·) denotes the sigmoid function and y_obs = [y_s, y_o] is the ground-truth multi-label target for the pre-observing classifier, with y_s and y_o being binary values of the ground-truth state and object categories, respectively. The optimization of classification involves three independent cross-entropy losses: L_s for state, L_o for object, and L_c for composition. Taking the state classification as an illustrative example, its loss L_s is defined as:\nL_s = -Σ y_s log(p(s|I)).  (12)\nDuring inference, we pick the composition with the highest score p(c|I) as the predicted label for the input image.\nPLO-LLM. In the training of PLO-LLM, the loss includes the cross-entropy loss L_step for each observation step and the cross-entropy loss L_c for composition:\nL^LLM_PLO = L_step + L_c,  (13)\nwhere L_c is the same as in PLO-VLM, and L_step is calculated by:\nL_step = -Σ_{i=1}^{n} Σ_{j∈C} y^{(i)}_c log(p(c^{(i)}_j | I, c^{(1)}_j, ..., c^{(i-1)}_j)),  (14)\nwhere y^{(i)}_c is the ground-truth one-hot encoded composition label for the i-th observation prompt at step i.\nSince PLO-LLM includes both the step-by-step hard-prompt observations and the soft prompt during training, the inference process predicts the composition label ŷ_c by calculating the probabilities of the composition categories using the following equations:\np(c|I) = p_Soft(c|I) + p_Hard(c|I),  p_Hard(c|I) = Σ_{i=1}^{n} p(c^{(i)} | I, c^{(1)}, ..., c^{(i-1)}),  (15)\nwhere p(c|I) is the probability of composition category c, p_Soft(c|I) is the probability based on the soft prompts, and p_Hard(c|I) is based on the hard prompts at each observing step. The composition category with the highest overall probability is chosen as the final predicted composition label ŷ_c.\nThe training and inference procedures of PLO-VLM and PLO-LLM are shown in the Appendix.
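As a companion to the equations above, the sketch below illustrates one way the multi-step hard-prompt observation (Eqs. 7-8) and the inference-time fusion with the soft-prompt probability (Eq. 15) could be implemented. It is a hedged approximation rather than the released code: a plain residual multi-head-attention layer stands in for the learnable CA module, every tensor (image patch tokens, per-composition observation-prompt tokens and pooled features) is a random placeholder, the shapes n_steps, n_comp, n_tok, and d are assumed, and p_soft and p_hard are added with equal weights, whereas the appendix reports fusion weights of 0.7 and 0.3.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d, n_steps, n_comp, n_tok = 512, 4, 10, 8             # assumed sizes (e.g., 4 observation prompts)

mha = nn.MultiheadAttention(d, 8, batch_first=True)   # stand-in for the learnable CA module
fc = nn.Linear(d, d)

def plo_llm_hard_probs(patch_tokens, obs_tokens, obs_feats, tau=0.07):
    # patch_tokens: (1, 1+N, d) image tokens with [CLS] first.
    # obs_tokens: (C, n_steps, n_tok, d) token sequences of each composition's i-th prompt.
    # obs_feats:  (C, n_steps, d) pooled text features of those prompts.
    C = obs_feats.shape[0]
    tokens = patch_tokens.expand(C, -1, -1)           # each composition keeps its own refined copy
    p_hard = torch.zeros(C)
    for i in range(n_steps):
        attended, _ = mha(tokens, obs_tokens[:, i], obs_tokens[:, i])
        tokens = tokens + attended                    # residual update of the image tokens (Eq. 7)
        v_i = F.normalize(fc(tokens[:, 0]), dim=-1)   # projected [CLS] token per composition
        scores = (v_i * F.normalize(obs_feats[:, i], dim=-1)).sum(-1)
        p_hard = p_hard + F.softmax(scores / tau, dim=-1)   # step-wise probability (Eq. 8), accumulated as in Eq. 15
    return p_hard

patch_tokens = torch.randn(1, 1 + 49, d)
obs_tokens = torch.randn(n_comp, n_steps, n_tok, d)
obs_feats = torch.randn(n_comp, n_steps, d)
p_soft = F.softmax(torch.randn(n_comp), dim=-1)       # from the learnable composition soft prompts (Eq. 9)
p_final = p_soft + plo_llm_hard_probs(patch_tokens, obs_tokens, obs_feats)
print("predicted composition index:", int(p_final.argmax()))

During training, the same loop would provide the per-step probabilities consumed by L_step in Eq. (14), while L_c is computed from the soft-prompt branch.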
" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experiment Settings", "publication_ref": [ "b11", "b32", "b38" ], "table_ref": [], "text": "Datasets. We evaluate performance on three challenging benchmarks: 1) MIT-States [12]: It comprises 53,753 natural images, with 115 states and 245 objects. In the closed-world setting, the search space includes 1,262 seen compositions and 300/400 unseen compositions for validation/testing. 2) UT-Zappos [29]: It consists of 50,025 images of shoes, with 16 states and 12 objects. In the closed-world experiments, we considered 83 seen compositions and 15/18 (validation/test) unseen compositions following the constraints defined in [33]. 3) C-GQA [39]: It contains 453 states and 870 objects, comprising a total of 39,298 images. The dataset is divided into 5,592 seen compositions for training, and 1,040/923 unseen compositions for validation/testing, respectively. In the open-world setting, these datasets contain 28,175, 192, and 278,362 compositions, respectively.\nMetrics. We followed the established CZSL evaluation protocol [25] and assessed all results using four metrics in both closed-world and open-world scenarios: 1) Seen (S): This metric measures the accuracy specifically for seen compositions. 2) Unseen (U): It evaluates the accuracy exclusively for unseen compositions. 3) Harmonic Mean (HM): The best harmonic mean between the seen and unseen accuracy is calculated, providing a comprehensive performance measure. 4) Area Under the Curve (AUC): This metric computes the area under the seen-unseen accuracy curve, capturing the overall performance characteristics over a wide range of operating points from -∞ to +∞.\nImplementation Details. Please refer to the appendix." }, { "figure_ref": [], "heading": "Comparison with the State-of-the-Arts", "publication_ref": [], "table_ref": [], "text": "Setting. We compare PLO with state-of-the-art CZSL methods under both closed-world and open-world settings; the results on the three benchmarks are summarized in Table 1.\nQuantitative Analysis. In the open-world setting, PLO-VLM attains the highest AUC across all three datasets, with scores of 7.4%, 33.1%, and 3.9%. These numerical results substantiate our motivation of empowering models with observing capabilities to understand visual compositions in a simple to complex manner, rather than modeling each composition separately." }, { "figure_ref": [], "heading": "Cross-Domain Evaluation", "publication_ref": [], "table_ref": [], "text": "Setting. For a more comprehensive evaluation, we trained DFSP(i2t) [22], PLO-VLM, and PLO-LLM on the MIT-States dataset and tested cross-domain performance on the C-GQA dataset, selecting categories that correspond to the states and objects in MIT-States for a consistent assessment.\nResults. As shown in Table 2, PLO-LLM exhibits superior performance on the C-GQA dataset, particularly in the accuracy of unseen categories (32.1% vs. 28.7%), and consistently outperforms in the HM and AUC scores. This enhancement stems from the GPT-generated sequence of observations in PLO-LLM, which operates independently of the visual modality, thereby bolstering cross-domain robustness." }, { "figure_ref": [ "fig_2", "fig_4" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_2", "tab_4" ], "text": "In this section, we conducted ablation studies to analyze the effectiveness of each component in PLO-VLM, the impact of the observation order in PLO-VLM, the number of observation prompts in PLO-LLM, and the network architecture of the CLIP backbone. All experiments were conducted on the MIT-States dataset under the closed-world setting.\nEffectiveness of Each Component in PLO-VLM. The contribution of the Parameter Efficient Fine-Tuning (PEFT) and Dynamic Observation (DO) strategies to PLO-VLM was assessed, as summarized in Table 3. Our analysis yields the following insights: 1) PEFT (Fine-Tuning in En_v): Incorporating PEFT into the image encoder En_v marginally improves the HM metric, with an increase from 36.9% to 38.1%. This improvement underscores the subtle yet impactful role of fine-tuning the encoder's lightweight layers.
Extra experimental results of PEFT are left in the appendix. 2) DO: Implementing DO in isolation enhances the model's ability to recognize both seen and unseen compositions, as reflected in a boost of the seen metric from 47.6% to 48.5% and unseen metric from 51.2% to 52.0%. This highlights the critical impact of dynamically determining observations in understanding visual compositions. 3) Synergistic Impact of PEFT and DO: When PEFT and DO are synergistically applied, they collectively achieve the highest AUC of 22.2%. This composite application underlines the effectiveness of integrating both strategies, leading to optimal model performance in recognizing compositions.\nObservation Order. We explored the influence of different observation orders in PLO-VLM by using various strategies for determining the first observation: predicting the state primitive first, predicting the object primitive first, and dynamically deciding the observation order. From Table 4, we can observe: 1) The multi-step observation leads to significant performance gains against the baseline across all the metrics. 2) Dynamically deciding the observation order based on the input image yields the highest overall performance. These findings demonstrated the effectiveness of adaptively choosing the most informative observation step based on the content of the input image, which is of impor- tance due to the conditioned variance nature of CZSL. Figure 5 reveals the dynamic observation strategy of PLO-VLM, which selects states or objects first based on their visual saliency in a composition 3 . The \"cooked pasta\" often undergoes object-first observations owing to its distinct visuals, whereas \"broken bottle\" typically requires state-first recognition due to the impactful visual change presented by the state \"broken\". PLO-VLM's context-aware approach mirrors human cognition in compositional zero-shot learning, leading to more precise recognition. Furthermore, we visualized top-1 predictions of PLO-VLM with and without dynamically observing in Figure 6. In a sense, the pre-observed primitive can represent a more prominent feature in the image (e.g., \"mashed object\" and \"burnt object\"). By virtue of pre-observed primitive, PLO-VLM consistently achieved accurate predictions.\n3 More visualization results are left in the appendix.\nNumber of Observation Prompts. We investigated the impact of the number of observation prompts in PLO-LLM. We varied the number of prompts for each composition category and evaluated the model's performance accordingly. The results, depicted in Figure 3, reveal a nuanced relationship between the number of prompts and the model's accuracy. As expected, increasing the number of observation prompts can generally lead to improved performance.\nEffect of Network Architectures. We further examined the influence of replacing CLIP backbones in our PLO-VLM and PLO-LLM. All results are reported in Table 5. Those consistent performance gains compared to the previous SOTA reaffirmed the efficacy and robustness of the proposed methodology. We chose ViT-L/14 by default.\nQualitative Results. We provided top-K predictions of PLO-VLM in the open-world settings in Figure 4. Our progressive observation strategy enables a thorough grasp of compositions, which is particularly effective in recognizing unseen compositions by bridging the gap between base and novel categories. 
Notably, even when our model's top-1 prediction is not exactly matched, it still accurately predicts the state primitive, and the object primitive also appears to be reasonable (e.g., muddy stream). These results underscore PLO's effectiveness in capturing pertinent visual cues and enabling comprehensive composition understanding." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b3", "b15", "b40" ], "table_ref": [], "text": "In this paper, we presented PLO, a novel solution for addressing the challenges of conditioned variance in CZSL. Unlike existing methods modeling each composition separately, PLO effectively captures the interactions between state and object primitives by dynamically determining the observation order, leading to a holistic understanding of visual concepts for each composition. The two variants, PLO-VLM and PLO-LLM, harness the power of VLMs and LLMs to refine image representations in a simple to complex manner. Experimental results on three gold-standard CZSL benchmarks demonstrated their superiority over existing frameworks. PLO opens a new avenue for CZSL from the perspective of endowing models with observing capabilities, and we hope it will pave the way for future research. outlines some limitations or constraints on the output generated by the LLMs. • Example: The provided instance (i.e., the example of \"mashed banana\") functions as a guiding paradigm for the model to generate analogous output in the manner of in-context learning [4,21]. • Question:\nThe question (i.e., \"How can you identify...\") instructs the model to devise observation prompts tailored to the specific composition category under consideration. In addition, this process adheres to the Chain-of-Thought concept [16,41], generating observations step by step from easy to hard through the utilization of a special prompt \"Let's observe it step by step!\"." }, { "figure_ref": [], "heading": "C. Implementation Details", "publication_ref": [ "b29" ], "table_ref": [], "text": "Our PLO models were trained and evaluated on one NVIDIA A100 GPU using PyTorch [32]. The GPT-3.5turbo, a variant of the GPT model, known for its impressive performance, was employed as the LLM. For the CLIP, we utilized OpenAI's resources, opting for the Vision Transformer with a base configuration of ViT-L/14. In PLO-VLM, we assigned the weight factors for L obs , L s , L o , and L c to 1.0, 0.01, 0.01, and 1.0, respectively, as also presented in [22]. In PLO-LLM, the default number of observation prompts was set to 4. Further, fusion weights during the aggregation of probabilities for p Sof t (c|I) and p Hard (c|I) were set to 0.7 and 0.3, respectively. Moreover, the weight factors assigned to L step and L c were both set to 1.0. Following [22], we used the Adam optimizer for 20 epochs on all datasets, with a learning rate of 2.5e-4 and a batch size of 64. In the open-world evaluation, we adhered to the posttraining calibration method [30] to filter out compositions deemed infeasible. Our code will be released." }, { "figure_ref": [], "heading": "D. Extra Quantitative Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "D.1. Ablation Study on PEFT", "publication_ref": [ "b6" ], "table_ref": [ "tab_6" ], "text": "Setting. To optimize the performance of PLO-VLM for CZSL tasks, we focused on enhancing CLIP's image encoder using two distinct Parameter Efficient Fine-Tuning (PEFT) strategies: Adapter [6] and LoRA [7], the results are shown in Table 6. 
Below, we detail their integration into the image encoder of the CLIP.\n1) Adapter: Adapters are particularly effective for our purpose as they introduce minimal parameters to the model, allowing us to fine-tune CLIP's image encoder specifically for the CZSL tasks while avoiding overfitting. The Adapter layers are inserted between the transformer blocks of the image encoder, and they work as follows:\nAda(h) = h + U(ReLU(D(h))),(16)\nwhere D(•) and U(•) are the downsampling and upsampling projection functions applied to the hidden states h, with ReLU activation promoting non-linearity. The Adapter layers are optimized during training to adapt the pre-trained representations for better handling of the compositionality in CZSL.\n2) LoRA: LoRA is applied to the self-attention modules within the CLIP's image encoder to achieve a fine-tuning of attention mechanisms tailored to novel CZSL tasks. For each attention module, we modify its weight matrix W using low-rank updates as follows:\nLoRA(h) = h × (W + δW ),(17)\nwhere δW = AB T is the low-rank update with learnable matrices A and B. These updates are applied during finetuning, enabling the encoder to modify its attention patterns without altering the entire pre-trained weight structure. These PEFT strategies are incorporated into the image encoder En v (•) by inserting learnable layers into its transformer blocks. This approach allows for fine-tuning on AUC metrics, while Adapter demonstrates slightly better performance in HM metric. On the UT-Zappos dataset, LoRA significantly exceeds Adapter, notably in seen and HM metrics. For the C-GQA dataset, the performance of both strategies is closely matched, with Adapter slightly leading in the seen and HM metrics." }, { "figure_ref": [], "heading": "D.2. Observation Order in PLO-LLM", "publication_ref": [], "table_ref": [], "text": "The bar chart in Figure 7 offers an extensive visualization of the frequency distribution for states and objects observed first within various composition categories on the MIT-States dataset. By presenting such a comprehensive statistic, we aim to provide an insightful reference that can aid future research endeavors in compositional learning and related fields." }, { "figure_ref": [ "fig_8" ], "heading": "E. Extra Qualitative Results", "publication_ref": [], "table_ref": [], "text": "Success and Failure Cases. Figure 8 displays the top-K predictions from our PLO-VLM on the MIT-States, UT-Zappos, and C-GQA datasets. Successful cases, highlighted in green, demonstrate precise model predictions, exemplified by instances such as \"sliced chicken\". Notably, some failures, marked in red, such as \"gray elephant\", do not align with the ground truth (GT), yet still repre-sent existing compositions within the image. These results underscore our model's capability to identify stateobject compositions, affirming its effectiveness in interpreting complex visual scenes despite occasional misalignments with the GT.\nVisualization results of PLO-LLM. In Figure 9, we presented visualizations of the top-1 predictions obtained by PLO-LLM, in cases where observations (obs) were utilized or not. All results were from the MIT-States dataset under the closed-world setting. Concurrently, we displayed the sequence of step-by-step observation prompts on the right side of each image. With our meticulously designed prompt, the LLM effectively generated a progression of observation prompts that transitioned from easy to hard. 
Through the aid of these well-organized observation prompts, we empowered the model with observing capabilities, leading to better holistic understanding and considerable performance gains." }, { "figure_ref": [], "heading": "F. Potential Negative Societal Impacts.", "publication_ref": [], "table_ref": [], "text": "While the PLO-VLM and PLO-LLM models represent advancements in visual and linguistic AI applications, they may also have unintended societal impacts. The dependence of PLO on language models and vision-language pre-training raises concerns about perpetuating incorrect or biased interpretations of compositions. This could occur if the underlying data or language prompts that inform these models are biased or flawed, leading to misrepresentations and potentially reinforcing stereotypes or inaccuracies." }, { "figure_ref": [], "heading": "G. Limitations", "publication_ref": [], "table_ref": [], "text": "Despite the promising results demonstrated by PLO-VLM and PLO-LLM in compositional zero-shot learning, we acknowledge two principal limitations: 1) Scope of Zero-Shot Learning: Our approach primarily addresses the recognition of novel compositions involving already seen states and objects. It does not extend to the zero-shot recognition of entirely novel state and object categories. This limitation marks a boundary in our model's applicability, underscoring the need for future developments that can generalize to entirely unseen states and objects. 2) Dependence on External Language Model APIs: The efficacy of PLO-LLM is partly reliant on external language model APIs, such as those from OpenAI. This reliance introduces practical constraints, especially concerning the costs associated with API usage, which can escalate with the increase in the number of unique composition categories." }, { "figure_ref": [], "heading": "A. The procedures of Training and Inference", "publication_ref": [], "table_ref": [], "text": "To provide a clearer understanding of our training and inference processes, we detail the specific operational steps of PLO-VLM and PLO-LLM in Algorithm 1 and Algorithm 2, respectively." }, { "figure_ref": [], "heading": "B. Observation Prompt Generation", "publication_ref": [], "table_ref": [], "text": "We present the prompt designed to generate a series of observation prompts that fed into the text encoder of CLIP within the framework of PLO-LLM. The prompt's structure is as follows:\nSetting: In compositional classification, we use the hostile prompt \"a photo of [state] [object]\" and compute similarities between images and prompts to determine composition category: [state] [ object]. Q: How can you identify a photo of the composition \"mashed banana\" ? Please provide step-by-step observation prompts from easy to hard, where each step builds upon the previous one. Note that the last observation prompt is \"a photo of mashed banana \". A: Let's observe it step by step! Four observation prompts: -a photo of yellow, mushy substance -a photo of a fruit that has been mashed into a paste -a photo of a soft and creamy mixture made from bananas " } ]
Compositional zero-shot learning aims to recognize unseen state-object compositions by leveraging known primitives (state and object) during training. However, effectively modeling interactions between primitives and generalizing knowledge to novel compositions remains a perennial challenge. There are two key factors: object-conditioned and state-conditioned variance, i.e., the appearance of states (or objects) can vary significantly when combined with different objects (or states). For instance, the state "old" can signify a vintage design for a "car" or an advanced age for a "cat". In this paper, we argue that these variances can be mitigated by predicting composition categories based on pre-observed primitive. To this end, we propose Progressive Language-based Observations (PLO), which can dynamically determine a better observation order of primitives. These observations comprise a series of concepts or languages that allow the model to understand image content in a step-by-step manner. Specifically, PLO adopts pre-trained vision-language models (VLMs) to empower the model with observation capabilities. We further devise two variants: 1) PLO-VLM: a two-step method, where a pre-observing classifier dynamically determines the observation order of two primitives. 2) PLO-LLM: a multi-step scheme, which utilizes large language models (LLMs) to craft compositionspecific prompts for step-by-step observing. Extensive ablations on three challenging datasets demonstrate the superiority of PLO compared with state-of-the-art methods, affirming its abilities in compositional recognition.
Compositional Zero-shot Learning via Progressive Language-based Observations
[ { "figure_caption": "Figure 1 .1Figure 1. Illustrations of challenges in CZSL and the proposed PLO method. (a) The challenge of object/state-conditioned variation: A perceptible variance emerges in the visual appearance of state/object primitives when juxtaposed in different compositions. (b) PLO-VLM/LLM: A two/multi-step observation approach dynamically controls the observation order for effective recognition.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2. (a) PLO-VLM: A two-step approach using a pre-observing classifier to dynamically determine the first observation. (b) PLO-LLM: A multi-step approach that observes composition-specific prompts from LLMs step-by-step.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Ablation study on the different number of observation prompts in PLO-LLM.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Top-K predictions on for randomly selected cases from MIT-States. The top and bottom rows show the results of open-world settings, respectively. Correct and incorrect predictions are highlighted in green and red, respectively.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .Figure 6 .56Figure 5. Distribution of samples where state or object is first observed in PLO-VLM on the test set of MIT-States.", "figure_data": "", "figure_id": "fig_5", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "17Predict label: ŷc = argmax p(c|I); classification...\") establishes a specific context and roles for the Large Language Models (LLMs) to operate within.• Constraint: The constraint (i.e., \"Note that...\")", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 .Figure 8 .78Figure 7. Distribution of samples where state or object is first observed in PLO-VLM on the test set of MIT-States.", "figure_data": "", "figure_id": "fig_7", "figure_label": "78", "figure_type": "figure" }, { "figure_caption": "➢➢➢Figure 9 .9Figure 9. Top-1 predictions of PLO-LLM with and without multi-step observations under the closed-world setting on the MIT-States dataset. The corresponding generated observation prompts are presented to the right of each image. 
Correct and incorrect predictions are highlighted in green and red, respectively.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": ": It comprises 53,753 natural", "figure_data": "SettingMethodSMIT-States U HM AUCSUT-Zappos U HM AUCSC-GQA U HM AUCCLIP [34] ICML'2130.2 46.0 26.1 11.0 15.8 49.1 15.65.07.5 25.0 8.61.4CoOp [43] IJCV'2234.4 47.6 29.8 13.5 52.1 49.3 34.6 18.8 20.5 26.8 17.14.4Co-CGE [26] TPAMI'22 46.7 45.9 33.1 17.0 63.4 71.3 49.7 36.3 34.1 21.2 18.95.7ProDA [24] CVPR'22 37.4 51.7 32.7 16.1 63.7 60.7 47.6 32.7----Closed-WorldPromptCompVL [24] arXiv'22 CSP [30] ICLR'23 DFSP(i2t) [22] CVPR'23 47.4 52.4 37.2 20.7 64.2 66.4 45.1 32.1 35.6 29.3 24.3 48.5 47.2 35.3 18.3 64.4 64.0 46.1 32.2 ---46.6 49.9 36.3 19.4 64.2 66.2 46.6 33.0 28.8 26.8 20.5 DFSP(BiF) [22] CVPR'23 47.1 52.8 37.7 20.8 63.3 69.2 47.1 33.5 36.5 32.0 26.2 DFSP(t2i) [22] CVPR'23 46.9 52.0 37.3 20.6 66.7 71.7 47.2 36.0 38.2 32.0 27.1 10.5 -6.2 8.7 9.9Troika [10] arXiv'2349.0 53.0 39.3 22.1 66.8 73.8 54.6 41.7 41.0 35.7 29.4 12.4PLID [3] arXiv'2349.7 52.4 39.0 22.1 67.3 68.8 52.4 38.7 38.8 33.0 27.9 11.0GIPCOL [38] WACV'24 48.5 49.6 36.6 19.9 65.0 68.5 48.8 36.2 31.9 28.4 22.57.1PLO-VLM (Ours)49.6 52.7 39.0 22.2 67.8 75.6 53.1 42.0 43.9 38.2 32.2 14.5PLO-LLM (Ours)49.6 53.2 39.0 21.9 68.3 73.0 54.8 41.6 44.3 37.9 31.2 14.3CLIP [34] ICML'2130.1 14.3 12.83.015.7 20.6 11.22.27.54.64.00.3CoOp [43] IJCV'2234.6 9.3 12.32.852.1 31.5 28.9 13.2 21.0 4.65.50.7Co-CGE [26] TPAMI'22 38.1 20.0 17.75.659.9 56.2 45.3 28.4 33.2 3.95.30.9ProDA [24] CVPR'22 37.5 18.3 17.35.163.9 34.6 34.3 18.4----Open-WorldPromptCompVL [24] arXiv'22 CSP [30] ICLR'23 DFSP(i2t) [22] CVPR'23 47.2 18.2 19.1 48.5 16.0 17.7 46.3 15.7 17.4 DFSP(BiF) [22] CVPR'23 47.1 18.1 19.2 DFSP(t2i) [22] CVPR'23 47.5 18.5 19.36.1 5.7 6.7 6.7 6.864.6 44.0 37.1 21.6 64.1 44.1 38.9 22.7 28.7 5.2 --64.3 53.8 41.2 26.4 35.6 6.5 63.5 57.2 42.7 27.6 36.4 7.6 10.6 -6.9 9.0 66.8 60.0 44.0 30.3 38.3 7.2 10.4-1.2 2.0 2.4 2.4Troika [10] arXiv'2348.8 18.7 20.17.266.4 61.2 47.8 33.0 40.8 7.9 10.92.7PLID [3] arXiv'2349.1 18.7 20.07.367.6 55.5 46.6 30.8 39.1 7.5 10.62.5GIPCOL [38] WACV'24 48.5 16.0 17.96.365.0 45.0 40.1 23.5 31.6 5.57.31.3PLO-VLM (Ours)49.5 18.7 20.57.468.0 63.5 47.8 33.1 43.9 10.4 13.93.9", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation study on each component in PLO-VLM. PEFT: It adopts the parameter efficient fine-tuning in the image encoder. DO: It utilizes the dynamic observation strategy.", "figure_data": "Quantitative Analysis. It can be seen that PLO out-performs nearly all competitors by clear margins in bothclosed-world and open-world settings across three preva-lent benchmarks. In the closed-world setting: 1) PLO-VLMsurpasses the leading competitor, Troika, by 0.1%, 0.3%,and 2.1% in AUC (the core metric) on the MIT-States, UT-Zappos, and C-GQA datasets, respectively. 2) PLO-VLMand PLO-LLM achieve dominant results on C-GQA (themost challenging dataset), with HM scores of 32.2% and31.2%, compared to Troika's 29.4%. 
In the open-world", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation study on observation order in PLO-VLM.", "figure_data": "", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study on different CLIP backbones.", "figure_data": "Backbone MethodSMIT-States U HM AUCDFSP [22] 36.7 43.4 29.4 13.2ViT-B/32PLO-VLM 41.1 44.2 31.3 14.8PLO-LLM 44.2 47.9 34.3 17.4DFSP [22] 39.6 46.5 31.5 15.1ViT-B/16PLO-VLM 43.9 46.6 34.1 17.0PLO-LLM 41.2 44.7 31.4 14.8DFSP [22] 46.9 52.0 37.3 20.6ViT-L/14PLO-VLM 49.6 52.7 39.0 22.2PLO-LLM 49.6 53.2 39.0 21.9", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Use the pre-observing detector F pre obs (v, t s , t o ) to determine observation order; L obs + L s + L o + L c ; Hard (c|I) using hard prompts;", "figure_data": "Algorithm 2: PLO-LLM Training and InferenceData: Training data DS = {(I, c)}Result: Optimal parameters for PLO-LLM1 Initialization: Initialize weights;2 Observation Prompts Generation:3 for each composition category c ∈ C do4Generate observation prompts P c using LLM;// Training Process5 while not converged do6Sample a batch from DS with images {I k } n k=1 and their corresponding labels {c k } n k=1 ;7for each observation step i do8Extract v (i) and t(i) obs ;9Compute refined v (i) via CA and updatev (i+1) ;10Compute probability:p(c (i) |I, c (1) , . . . , c (i-1) );11Calculate total loss: L LLM P LO = L step + L c ;12Update weights using L LLM P LO ;// Inference Process13 for each test image I do14Compute p Sof t (c|I);15for each observation step i do18Calculate total loss:L V LM P LO = 19 Update weights using L V LM P LO ;// Inference Process20 for each test image I do21Compute p(c|I);22Predict label: ŷc = argmax p(c|I);-a photo of mashed bananaQ: How can you identify a photo of thecomposition \"{STATE CLASS} {OBJECTCLASS}\" ? Please provide step-by-step observation prompts from easyto hard, where each step builds uponthe previous one. Note that thelast observation prompt is \"a photoof {STATE CLASS} {OBJECT CLASS}\".A: Let's observe it step by step!Four observation prompts:The prompt is divided into four individual parts: setting,constraint, example, and question:• Setting: The setting text (i.e., \"In compositional", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Ablation study on different PEFT strategies in closed-world settings.", "figure_data": "StrategySMIT-States U HM AUCSUT-Zappos U HM AUCSC-GQA U HM AUCAdapter [6]49.6 52.7 39.022.267.8 75.6 53.142.043.9 38.2 32.214.5LoRA [7]50.2 52.9 38.622.369.1 73.8 55.643.242.8 38.3 31.814.2", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" } ]
Lin Li; Guikun Chen; Jun Xiao; Long Chen
[ { "authors": "Jean-Baptiste Alayrac; Jeff Donahue; Pauline Luc; Antoine Miech; Iain Barr; Yana Hasson; Karel Lenc; Arthur Mensch; Katherine Millican; Malcolm Reynolds", "journal": "", "ref_id": "b0", "title": "Flamingo: a visual language model for few-shot learning", "year": "2022" }, { "authors": "Muhammad Umer Anwaar; Zhihui Pan; Martin Kleinsteuber", "journal": "", "ref_id": "b1", "title": "On leveraging variational graph embeddings for open world compositional zero-shot learning", "year": "2022" }, { "authors": "Wentao Bao; Lichang Chen; Heng Huang; Yu Kong", "journal": "", "ref_id": "b2", "title": "Prompting language-informed distribution for compositional zero-shot learning", "year": "2023" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Shaozhe Hao; Kai Han; Kwan-Yee K Wong", "journal": "", "ref_id": "b4", "title": "Learning attention as disentangler for compositional zero-shot learning", "year": "2023" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "PMLR", "ref_id": "b5", "title": "Parameter-efficient transfer learning for nlp", "year": "2019" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "ICLR", "ref_id": "b6", "title": "LoRA: Low-rank adaptation of large language models", "year": "2022" }, { "authors": "Xiaoming Hu; Zilei Wang", "journal": "", "ref_id": "b7", "title": "Leveraging sub-class discimination for compositional zero-shot learning", "year": "2023" }, { "authors": "Yushi Hu; Hang Hua; Zhengyuan Yang; Weijia Shi; Noah A Smith; Jiebo Luo", "journal": "", "ref_id": "b8", "title": "Promptcap: Prompt-guided taskaware image captioning", "year": "2022" }, { "authors": "Siteng Huang; Biao Gong; Yutong Feng; Yiliang Lv; Donglin Wang", "journal": "", "ref_id": "b9", "title": "Troika: Multi-path cross-modal traction for compositional zero-shot learning", "year": "2023" }, { "authors": "Fushuo Huo; Wenchao Xu; Song Guo; Jingcai Guo; Haozhao Wang; Ziming Liu", "journal": "", "ref_id": "b10", "title": "Procc: Progressive cross-primitive consistency for open-world compositional zero-shot learning", "year": "2022" }, { "authors": "Phillip Isola; Joseph J Lim; Edward H Adelson", "journal": "", "ref_id": "b11", "title": "Discovering states and transformations in image collections", "year": "2015" }, { "authors": "Shyamgopal Karthik; Massimiliano Mancini; Zeynep Akata", "journal": "", "ref_id": "b12", "title": "Kg-sp: Knowledge guided simple primitives for open world compositional zero-shot learning", "year": "2022" }, { "authors": "Muhammad Gul; Zain Ali Khan; Muhammad Ferjad Naeem; Luc Van Gool; Alain Pagani; Didier Stricker; Muhammad Zeshan; Afzal ", "journal": "", "ref_id": "b13", "title": "Learning attention propagation for compositional zero-shot learning", "year": "2023" }, { "authors": "Hanjae Kim; Jiyoung Lee; Seongheon Park; Kwanghoon Sohn", "journal": "", "ref_id": "b14", "title": "Hierarchical visual primitive experts for compositional zero-shot learning", "year": "2023" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "NeurIPS", "ref_id": "b15", "title": "Large language models 
are zero-shot reasoners", "year": "2022" }, { "authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "PMLR", "ref_id": "b16", "title": "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation", "year": "2022" }, { "authors": "Lin Li; Jun Xiao; Guikun Chen; Jian Shao; Yueting Zhuang; Long Chen", "journal": "", "ref_id": "b17", "title": "Zero-shot visual relation detection via composite visual cues from large language models", "year": "2023" }, { "authors": "Xiangyu Li; Xu Yang; Kun Wei; Cheng Deng; Muli Yang", "journal": "", "ref_id": "b18", "title": "Siamese contrastive embedding network for compositional zero-shot learning", "year": "2022" }, { "authors": "Yong-Lu Li; Yue Xu; Xiaohan Mao; Cewu Lu", "journal": "", "ref_id": "b19", "title": "Symmetry and group in attribute-object compositions", "year": "2020" }, { "authors": "Jiachang Liu; Dinghan Shen; Yizhe Zhang; Bill Dolan; Lawrence Carin; Weizhu Chen", "journal": "", "ref_id": "b20", "title": "What makes good in-context examples for gpt-3?", "year": "2021" }, { "authors": "Xiaocheng Lu; Song Guo; Ziming Liu; Jingcai Guo", "journal": "", "ref_id": "b21", "title": "Decomposed soft prompt guided fusion enhancing for compositional zero-shot learning", "year": "2007" }, { "authors": "Xiaocheng Lu; Ziming Liu; Song Guo; Jingcai Guo; Fushuo Huo; Sikai Bai; Tao Han", "journal": "", "ref_id": "b22", "title": "Drpt: Disentangled and recurrent prompt tuning for compositional zero-shot learning", "year": "2023" }, { "authors": "Yuning Lu; Jianzhuang Liu; Yonggang Zhang; Yajing Liu; Xinmei Tian", "journal": "", "ref_id": "b23", "title": "Prompt distribution learning", "year": "2022" }, { "authors": "Massimiliano Mancini; Muhammad Ferjad Naeem; Yongqin Xian; Zeynep Akata", "journal": "", "ref_id": "b24", "title": "Open world compositional zeroshot learning", "year": "2021" }, { "authors": "Massimiliano Mancini; Muhammad Ferjad Naeem; Yongqin Xian; Zeynep Akata", "journal": "TPAMI", "ref_id": "b25", "title": "Learning graph embeddings for open world compositional zero-shot learning", "year": "2022" }, { "authors": "Sachit Menon; Carl Vondrick", "journal": "ICLR", "ref_id": "b26", "title": "Visual classification via description from large language models", "year": "2023" }, { "authors": "Ishan Misra; Abhinav Gupta; Martial Hebert", "journal": "", "ref_id": "b27", "title": "From red wine to red tomato: Composition with context", "year": "2017" }, { "authors": "Muhammad Ferjad Naeem; Yongqin Xian; Federico Tombari; Zeynep Akata", "journal": "", "ref_id": "b28", "title": "Learning graph embeddings for compositional zero-shot learning", "year": "2021" }, { "authors": "Peilin Nihal V Nayak; Stephen H Yu; Bach", "journal": "ICLR", "ref_id": "b29", "title": "Learning to compose soft prompts for compositional zero-shot learning", "year": "2023" }, { "authors": "Zachary Novack; Julian Mcauley; Zachary Chase Lipton; Saurabh Garg", "journal": "PMLR", "ref_id": "b30", "title": "Chils: Zero-shot image classification with hierarchical label sets", "year": "2023" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Kopf; Edward Yang; Zachary Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "", "ref_id": "b31", "title": "Pytorch: An imperative style, high-performance deep learning library", 
"year": "" }, { "authors": "Senthil Purushwalkam; Maximilian Nickel; Abhinav Gupta; Marc'aurelio Ranzato", "journal": "", "ref_id": "b32", "title": "Task-driven modular networks for zero-shot compositional learning", "year": "2019" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b33", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Maria Tsimpoukelli; Jacob L Menick; Serkan Cabi; Oriol Eslami; Felix Vinyals; Hill", "journal": "", "ref_id": "b34", "title": "Multimodal few-shot learning with frozen language models", "year": "2021" }, { "authors": "Qingsheng Wang; Lingqiao Liu; Chenchen Jing; Hao Chen; Guoqiang Liang; Peng Wang; Chunhua Shen", "journal": "", "ref_id": "b35", "title": "Learning conditional attributes for compositional zero-shot learning", "year": "2023" }, { "authors": "Guangyue Xu; Parisa Kordjamshidi; Joyce Chai", "journal": "", "ref_id": "b36", "title": "Prompting large pre-trained vision-language models for compositional concept learning", "year": "2022" }, { "authors": "Guangyue Xu; Joyce Chai; Parisa Kordjamshidi", "journal": "WACV", "ref_id": "b37", "title": "Gipcol: Graph-injected soft prompting for compositional zero-shot learning", "year": "2023" }, { "authors": "Aron Yu; Kristen Grauman", "journal": "", "ref_id": "b38", "title": "Fine-grained visual comparisons with local learning", "year": "2014" }, { "authors": "Tian Zhang; Kongming Liang; Ruoyi Du; Xian Sun; Zhanyu Ma; Jun Guo", "journal": "Springer", "ref_id": "b39", "title": "Learning invariant visual representations for compositional zero-shot learning", "year": "2022" }, { "authors": "Zhuosheng Zhang; Aston Zhang; Mu Li; Alex Smola", "journal": "", "ref_id": "b40", "title": "Automatic chain of thought prompting in large language models", "year": "2022" }, { "authors": "Kaiyang Zhou; Jingkang Yang; Chen Change Loy; Ziwei Liu", "journal": "", "ref_id": "b41", "title": "Conditional prompt learning for vision-language models", "year": "2022" }, { "authors": "Kaiyang Zhou; Jingkang Yang; Chen Change Loy; Ziwei Liu", "journal": "IJCV", "ref_id": "b42", "title": "Learning to prompt for vision-language models", "year": "2022" }, { "authors": "Deyao Zhu; Jun Chen; Xiaoqian Shen; Xiang Li; Mohamed Elhoseiny", "journal": "", "ref_id": "b43", "title": "Minigpt-4: Enhancing vision-language understanding with advanced large language models", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 364, 570.91, 181.11, 39.57 ], "formula_id": "formula_0", "formula_text": "P s = [x 0 ; x 1 ; . . . ; x M ; x s ; o], P o = [x 0 ; x 1 ; . . . ; x M ; x o ], P c = [x 0 ; x 1 ; . . . ; x M ; x s ; x o ],(1)" }, { "formula_coordinates": [ 4, 89.73, 624.2, 196.63, 13.91 ], "formula_id": "formula_1", "formula_text": "F pre obs (v, t s , t o ) = S(v, t s ) ⊕ S(v, t o ),(2)" }, { "formula_coordinates": [ 4, 313.84, 381.08, 231.27, 8.99 ], "formula_id": "formula_2", "formula_text": "CA(q, K, V) = q+FFN(LN(q+MHA(q, K, V))),(3)" }, { "formula_coordinates": [ 4, 351.17, 460.05, 193.94, 13.91 ], "formula_id": "formula_3", "formula_text": "V = CA(V, T pre obs , T pre obs ), v = FC( V),(4)" }, { "formula_coordinates": [ 4, 342.07, 555.74, 203.04, 30.56 ], "formula_id": "formula_4", "formula_text": "p(s|I) = π(S( v, t post obs ), τ ), if t post obs ← t s , p(o|I) = π(S( v, t post obs ), τ ), if t post obs ← t o .(5)" }, { "formula_coordinates": [ 4, 377.78, 680.81, 167.34, 9.68 ], "formula_id": "formula_5", "formula_text": "p(c|I) = π(S( v, t c ), τ ),(6)" }, { "formula_coordinates": [ 5, 146.24, 207.11, 44.79, 12.78 ], "formula_id": "formula_6", "formula_text": "P c = {P (i)" }, { "formula_coordinates": [ 5, 106.34, 346.8, 180.03, 29.38 ], "formula_id": "formula_7", "formula_text": "V (i) = CA(V (i) , T (i) obs , T (i) obs ), v (i+1) = v (i) = FC( V (i) ),(7)" }, { "formula_coordinates": [ 5, 217.4, 457.44, 18.06, 14.3 ], "formula_id": "formula_8", "formula_text": "(i) obs )." }, { "formula_coordinates": [ 5, 67.04, 499.89, 219.32, 14.3 ], "formula_id": "formula_9", "formula_text": "p(c (i) |I, c (1) , . . . , c (i-1) ) = π(S(v (i) , t (i) obs ), τ ).(8)" }, { "formula_coordinates": [ 5, 110.34, 594.6, 115.8, 11.72 ], "formula_id": "formula_10", "formula_text": "p Sof t (c|I) = π(S(v, t c ), τ )." }, { "formula_coordinates": [ 5, 102.18, 702.12, 184.18, 12.69 ], "formula_id": "formula_11", "formula_text": "L V LM P LO = L obs + L s + L o + L c .(10)" }, { "formula_coordinates": [ 5, 347.39, 124.78, 197.72, 28.85 ], "formula_id": "formula_12", "formula_text": "L obs = -y obs log(σ(F pre obs )) + (1 -y obs ) log(1 -σ(F pre obs )),(11)" }, { "formula_coordinates": [ 5, 486.76, 159.51, 58.36, 9.65 ], "formula_id": "formula_13", "formula_text": "y obs = [y s , y o ]" }, { "formula_coordinates": [ 5, 372.32, 260.8, 168.65, 11.15 ], "formula_id": "formula_14", "formula_text": "L s = -S y s log(p(s|I)). (12" }, { "formula_coordinates": [ 5, 540.96, 261.12, 4.15, 8.64 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 5, 383.07, 341.09, 162.04, 12.69 ], "formula_id": "formula_16", "formula_text": "L LLM P LO = L step + L c ,(13)" }, { "formula_coordinates": [ 5, 310.67, 374.96, 201.71, 30.47 ], "formula_id": "formula_17", "formula_text": "L step = - n i=1 j∈C y (i) c log(p(c (i) j |I, c(1) j , . . . , c (i-1) j" }, { "formula_coordinates": [ 5, 340.74, 412.88, 9.05, 6.12 ], "formula_id": "formula_18", "formula_text": "(i)" }, { "formula_coordinates": [ 5, 330.82, 503.43, 214.29, 46.01 ], "formula_id": "formula_19", "formula_text": "p(c|I) = p Sof t (c|I) + p Hard (c|I), p Hard (c|I) = n i=1 p(c (i) |I, c (1) , . . . , c (i-1) ),(15)" }, { "formula_coordinates": [ 12, 358.56, 453.65, 186.56, 8.99 ], "formula_id": "formula_20", "formula_text": "Ada(h) = h + U(ReLU(D(h))),(16)" }, { "formula_coordinates": [ 12, 364.95, 612.69, 180.17, 8.99 ], "formula_id": "formula_21", "formula_text": "LoRA(h) = h × (W + δW ),(17)" } ]
2023-12-01
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b5", "b12", "b41", "b42", "b45", "b10", "b23", "b34", "b35", "b43", "b46", "b2", "b11", "b33", "b5", "b16" ], "table_ref": [], "text": "Pre-trained models lack the ability to extract fine-grained information of specific datasets. Direct fine-tuning leads to forgetfulness and overfitting, whereas our approach aims to keep and learn valuable information (better viewed in color).\nThe success of deep neural networks in the visual area is well documented, and one of the reasons is the support of a huge amount of training data. Nevertheless, it is an intractable condition to fulfill in the real-world scenario, e.g., there are very few or even no samples of rare species. To solve this problem, Zero-Shot Learning (ZSL) has gained increasing attention in recent years with its ded-ication to bridging the gap between language and vision, which takes inspiration from the logical reasoning ability of human beings. By virtue of the shared visual information and the semantic descriptions of unseen classes, ZSL endows the model with out-of-distribution recognition capability. Another more challenging task is Generalized Zero-Shot Learning (GZSL), which requires simultaneous recognition of both seen and unseen categories [2,22].\nMainstream studies strive to search and locate local visual attributes so as to construct one-to-one visual-semantic mappings [5,6,13,33,42,43,46], or implicitly learn the class-wise visual-semantic relation to simulate unseen distributions in a generative manner [11,19,24,35,36,40,44,47]. Despite the promising results achieved by these approaches, the visual-semantic matchiness is significantly restricted by the incompleteness of the visual features. Such deficiency stems from the domain bias between the pretrained dataset, e.g., ImageNet [10], and the downstream tasks/datasets, which simply states that the extracted features are insufficient to provide fine-grained knowledge to build complete and reliable visual-semantic pairs [3].\nTo address such issues, a straightforward solution is to fine-tune them on downstream tasks, which may inevitably introduce catastrophic forgetting and overfitting problems (Fig. 1) [16,23]. Concretely, deep neural networks usually excel at seeking shortcuts from observed data [12], favoring partial features that benefit the seen classes during the fine-tuning process, while conversely, features that are critical for the unseen classes may be filtered. Meanwhile, newly learned features may contain various noise, e.g., backgrounds, irrelevant attributes, etc., rendering the model to create pseudo-visual-semantic associations. Worse still, identical issues also exist in the realm of continual learning [9] and spawn substantial mature schemes. However, they are not applicable to ZSL/GZSL, which has no further training or adjustment phase for novel categories.\nGrounded on the above discussions, we lock our research interests in the following two aspects: 1) How to keep the valuable knowledge in the raw features to ensure the unseen distributional generalizability of the model, and 2) How to guide the learning process of new knowledge to reduce the interference of noisy factors. The crux of these issues is to determine which features are worth being kept and learned. In this paper, we argue that attributes, i.e., textual descriptions or semantic embeddings, are the only shared supervised signals to distinguish seen and unseen classes. 
Thus, the kept and learned features need to be directed by them.\nHereby, we present the Attribute-Aware Representation Rectification framework for GZSL, i.e., (AR) 2 , to adaptively rectify the feature extractor to learn novel features while keeping original valuable features. Concretely, our (AR) 2 consists of two key components including Unseen-Aware Distillation (UAD) and Attribute-Guided Learning (AGL). The objective of UAD is to identify those in the raw features that are beneficial to both seen and unseen classes. By virtue of the attention mechanism of the class activation map [34] and the attribute-region classifier [6,17], specific features are identified and localized. Specifically, on the one hand, we use the pre-trained network as the teacher model and attribute labels of similar classes as supervisory information to obtain class activation maps of unseen classes by gradient propagation. Meanwhile, the score of the attributeregion classifier is utilized to restrict the scope of attention maps, from which valuable features are filtered out. In parallel, the student model is prompted to retain this fraction of features by means of feature distillation. On the other hand, AGL aims to encourage the model to refine features that are most relevant to attributes and reduce noisy interference. We first leverage features extracted by the teacher model to initialize visual prototypes of each sub-attribute, which form an attribute pool. Class prototypes can then be obtained by selecting and assembling various sub-attributes from the pool. The attribute pool is updated by each batch of data during the training period, with increasing the semantic distance between prototypes of each category as the learning goal. In this way, the model is implicitly motivated to learn features that are associated with attributes and are discriminative. Finally, the teacher model is updated by the exponential moving average method with the student model.\nOur contributions are summarized as follows:\n• To alleviate the issue of mismatch between the pre-trained model and downstream data/tasks, we present a novel method named (AR) 2 to adaptively rectify the feature representations. (AR) 2 steers the learning process with the supervision of attributes, continuously keeping and learning the most valuable knowledge. • (AR) 2 consists of two main components, wherein, UAD assists the model in reviewing old knowledge to prevent catastrophic forgetting and AGL guides the model in refining features to avoid overfitting on noisy information. • We conduct extensive experiments and analysis on three benchmark datasets, and the results show that the proposed method is effective to improve the performance of the model." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Bridge Vision and Attribute", "publication_ref": [ "b0", "b19", "b12", "b16", "b26", "b37", "b41", "b42", "b44", "b45", "b47", "b5", "b17", "b10", "b23", "b34", "b35", "b43", "b46" ], "table_ref": [], "text": "ZSL/GZSL aims to learn shared attributes (semantics) from accessible training data, thereby obtaining the ability to infer on the unknown domain [2,22]. Attribute descriptions, i.e., text embeddings, category prototypes, etc., are the only prior information to access unseen categories. Therefore, how to connect the visual features of the seen classes with the attributes is a central issue in ZSL/GZSL. 
Numerous studies choose the most direct way, i.e., mapping visual features to the attribute space [32]. In this direction, the accuracy of the mapping is the main challenge to attack. Existing approaches include generating hallucinatory categories [1], reconstructing visual features [20], and modeling region-attribute relationships [5, 13,17,26,27,38,42,43,45,46], to name a few. In contrast, some studies opt to map attributes to the visual space and adjust the mapping function by maintaining semantic relations among categories [48]. In addition to this, some studies combine the characteristics of the above two methods by mapping visual features and attributes to a common space [6,18,28,33], thus mitigating the discrepancies between different modal data. Compared to modeling the visual-attribute relationship explicitly, generative approaches [11,19,24,35,36,40,44,47] provide an alternative perspective, i.e., learning the relationship implicitly by means of the distributional alignment capability of GANs or VAEs. Although these approaches achieve promising results, they are limited by the domain bias problem between the pre-trained model and the downstream dataset, i.e., the pre-trained model struggles to capture the fine-grained features of the specific dataset, which is detrimental to the establishment of accurate visual-attribute associations." }, { "figure_ref": [], "heading": "Domain Bias", "publication_ref": [ "b36", "b40", "b2" ], "table_ref": [], "text": "A common practice in ZSL/GZSL is to utilize the Ima-geNet [10] pre-trained model to extract features and then develop links between visual features and attributes. However, inherent domain differences exist between datasets, i.e., domain bias. For instance, ImageNet [10] lacks finegrained annotations for birds, which are needed to discriminate samples in the CUB dataset [37]. Xian et al. [41] achieve performance improvement by fine-tuning the feature extractor, demonstrating the existence of domain bias. However, they do not conduct further research to mitigate the forgetting and overfitting problems [16,23]." }, { "figure_ref": [], "heading": "Representation Rectification", "publication_ref": [ "b24", "b6", "b13" ], "table_ref": [], "text": "Domain bias leads to features extracted by the pre-trained model on the downstream dataset being incomplete, i.e., lacking fine-grained, targeted information. To this end, some studies attempt to rectify the extracted features (raw features). Li et al. [25] and Chen et al. [7] argue that the raw features contain both class-relevant and class-irrelevant parts, which are then stripped away by means of disentanglement. Chen et al.\n[3] strive to refine the raw features in order to reduce the redundancy of the information and to enhance the discriminability of the features. Han et al. [14], on the other hand, utilize contrastive learning to bring the same-class representations closer and push the dissimilarity of representations farther away. Kong et al. [21] then resort to enhancing the intra-class compactness. Although their methods mitigate the domain bias to some extent, they are unable to learn new knowledge on downstream datasets." }, { "figure_ref": [ "fig_1" ], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Preliminary. In the ZSL/GZSL tasks, generally, the training data consists of complete samples of seen classes and attribute vectors of unseen classes. 
Suppose D s = {x s , y s , a s } denotes the seen class sample set, where x s , y s and a s denote the images, labels and attributes, respectively. Let D u = {a u } denote attributes of unseen classes, and a = a s ∪ a u is the whole attribute set. Meanwhile, we also use the semantic embeddings of each sub-attribute learned by GloVe, which are indicated by v = {v 1 , v 2 , ..., v n }, where n is the number of sub-attributes. Let K denote the number of categories, including seen and unseen categories. Our approach consists of a teacher model T = {E t , W t } and a student model S = {E s , W s } that have the same network structure, where\nW t = [W t 1 , W t 2 ] and W s = [W s 1 , W s 2 ]. Each model includes a feature extractor E and a classifier W = [W 1 , W 2 ]. Let w 1 , w 2 denote the learnable param- eters of W 1 , W 2 , respectively. Then w t\n1 , w t 2 stand for the teacher model's parameters, and w s 1 , w s 2 represent the student model's. AGL module has a classifier W p , where w p denotes the parameters. Overview. Our approach is depicted in Fig. 2. The teacher model is responsible for retaining historical features to prevent the student model from forgetting old knowledge. UAD employs a dual filtering mechanism, i.e., class activation map and attribute scoring, to identify and locate the valuable parts of the features extracted by the teacher model. The output of UAD is a weight map, which measures the value of each region. After that, the teacher's knowledge is passed on to the students via distillation. AGL utilizes the original and learnable features to maintain a pool of attributes that contain prototypes of each sub-attribute. Different class prototypes are available by selecting and assembling various sub-attributes. By maximizing the distance between the prototypes of each class, AGL implicitly facilitates the correlation between learned features and attributes." }, { "figure_ref": [], "heading": "Attribute-Region Classifier", "publication_ref": [ "b5", "b16" ], "table_ref": [], "text": "Attributes are the only shared prior information between seen and unseen classes, which are crucial for identifying unknown classes. However, attribute descriptions are typically coarse-grained class-wise annotations, hence many studies resort to learning fine-grained attribute-region mapping relations. Recent works [6,17] have achieved promising results by incorporating attention mechanisms into classifiers. Since our approach requires localizing attributerelated features (regions), we employ the same classifier.\nAssume f = E(x), f ∈ R CHW denotes the feature of an input image extracted by the feature extractor E, where C, H, W denote the channel, height, and width. Then f is divided into r = HW regions, and each region is represented by a C-dimensional vector. The degree of association between each attribute and each region can be scored, which is formulated as:\np(i, j|w 1 ) = exp (v i w 1 f j ) r q=1 exp (v i w 1 f q ) ,(1)\nwhere p(i, j) denotes the association score of attribute i and region j. Then p is used to weight the final output and optimize it with cross-entropy loss. The loss function is defined as:\nL CE (a, f, p|w 2 ) = -log exp (a(vw 2 f )p) K k=1 exp (a k (vw 2 f )p) ,(2)" }, { "figure_ref": [], "heading": "Unseen-Aware Distillation", "publication_ref": [ "b33" ], "table_ref": [], "text": "Models gradually forget some important features as they learn downstream data. 
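For concreteness, the attribute-region classifier described above can be sketched in a few lines; the PyTorch fragment below is our reading of Eqs. (1)-(2), and the tensor shapes and the order of the score-weighted pooling in Eq. (2) are assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def attribute_region_scores(V, W1, feat):
    """Eq. (1): attribute-region association scores.
    V: (n_attr, d_v) GloVe embeddings of the sub-attributes,
    W1: (d_v, C) learnable projection, feat: (C, r) region features (r = H*W).
    Returns p of shape (n_attr, r), softmax-normalised over regions."""
    affinity = V @ W1 @ feat                 # entry (i, j) = v_i W1 f_j
    return F.softmax(affinity, dim=1)

def attribute_ce_loss(A, V, W2, feat, p, label):
    """Eq. (2): score-weighted attribute pooling followed by cross-entropy.
    A: (K, n_attr) class attribute vectors, label: ground-truth class index."""
    region_logits = V @ W2 @ feat                  # (n_attr, r)
    attr_visual = (region_logits * p).sum(dim=1)   # pool regions with the scores p
    class_logits = A @ attr_visual                 # (K,) compatibility with each class
    return F.cross_entropy(class_logits.unsqueeze(0), torch.tensor([label]))
```

Both the scores p and this loss are reused below: UAD differentiates the loss with respect to the teacher features to locate the regions worth keeping.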
In order to help the model memorize these features, we design the UAD module to review the old valuable knowledge. A key issue is how to recognize which features are worth being retained. We are inspired by the class activation map [34], which indicates the correlation between regional features and classes via gradient responses. So we use the class activation maps created by the teacher model to represent the importance of each region. Let f t = E t (x) denote the feature extracted by the teacher extractor, and o t = W t (f t ) denotes the output of the teacher classifier. Then, if we want to know which regional features are strongly correlated with category c, we just need to activate the gradient for the corresponding category and the gradient map is represented as:\ng(k) = τ ( ∂L CE (a c , f t , p t |w t 2 ) ∂f t ),(3)\nwhere a c is the attribute of class c and p t (i, j) = p(i, j|w t 1 ) and τ (•) is Min-Max Normalization. Unseen-Aware Attention. A simple idea is to activate the category corresponding to the training sample to get the activation map. However, such an approach would be inclined to focus on attributes that favor the seen classes, leading to overfitting, which is not conducive to generalization to the unseen classes. Therefore, we need to know which attributes the unseen classes are interested in. In the end, we activate similar unseen classes at the same time. Specifically, we first compute the similarity between the attributes of the unseen and seen classes, which is measured by Euclidean distance. Then we select the m most similar seen classes for each unseen class, corresponding to the fact that there will exist an unseen class similarity set U k for each seen class k. After that, we get the new activation map:\ng = τ (g(k) + 1 d ku∈U k g(k u )),(4)\nwhere d denotes the size of set U k . Attribute-Aware Attention. Despite the fact that the class activation map implies connections between regions and classes, those regions may be the ones that contain noise, such as backgrounds. For example, if the unseen class possesses the attribute crown while the corresponding seen class does not, then the activated region is inaccurate. To suppress the effect of this part of the regions, we reweight the activation map with the score of the attribute-region classifier. Assuming p denotes the score map of the training sample computed by Eq. 1, the final attention map is defined as:\ng = g • p,(5)\nwhere (•) denotes the dot product. Knowledge Distillation. We use feature distillation to transfer knowledge to the student model. Suppose f s = E s (x) denotes the feature extracted by the student extractor. The distillation loss is:\nL U AD = ||f s -f t || 2 • g,(6)\nwhere || • || means Mean-Square Error loss function." }, { "figure_ref": [], "heading": "Attribute-Guided Learning", "publication_ref": [ "b11" ], "table_ref": [], "text": "Another problem with the application of the model to downstream tasks is overfitting. Due to the characteristic of neural networks that are skilled at finding shortcuts [12], some noisy features, irrelevant attributes, etc. in the seen classes receive more attention, which prevents the model from generalizing to the unseen classes. The goal of AGL is to guide the model to learn features that are relevant to attributes. Our motivation is to reorganize learned features into class prototypes guided by attributes, and then implicitly enhance the connection between learned features and attributes by increasing the distinguishability of class prototypes. 
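Before introducing the attribute pool, the UAD branch described above can be summarised as the following sketch of Eqs. (3)-(6), reusing attribute_ce_loss from the earlier fragment. The paper writes Eq. (5) as a dot product between the activation map and the score map; collapsing the attribute dimension of p by a max is our assumption, as is the mean reduction of the weighted MSE in Eq. (6).

```python
import torch

def minmax(x):
    """τ(·): min-max normalisation used in Eqs. (3)-(4)."""
    return (x - x.min()) / (x.max() - x.min() + 1e-8)

def uad_loss(V, W2_t, A, f_t, p_t, f_s, label, similar_unseen):
    """Eqs. (3)-(6): unseen-aware, attribute-weighted feature distillation.
    V: (n_attr, d_v) sub-attribute embeddings, W2_t: teacher classifier weight,
    A: (K, n_attr) attribute vectors of all seen and unseen classes,
    f_t / f_s: (C, r) teacher / student region features (f_t must require grad),
    p_t: (n_attr, r) teacher scores from Eq. (1),
    similar_unseen: the unseen-class set U_k associated with `label`."""
    def grad_map(cls):                                        # Eq. (3)
        loss = attribute_ce_loss(A, V, W2_t, f_t, p_t, cls)   # from the earlier sketch
        return minmax(torch.autograd.grad(loss, f_t, retain_graph=True)[0])

    g = grad_map(label)
    g = minmax(g + sum(grad_map(u) for u in similar_unseen)
               / max(len(similar_unseen), 1))                 # Eq. (4)
    region_weight = p_t.max(dim=0).values                     # collapse attributes -> per-region score
    g = g * region_weight                                     # Eq. (5), one reading of the dot product
    return ((f_s - f_t.detach()) ** 2 * g).mean()             # Eq. (6): g-weighted MSE
```

The additional backward passes for the classes in U_k are the main extra cost of UAD over plain feature distillation.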
Initialize Attribute Pool. We firstly create an attribute pool h = {h 1 , h 2 , ..., h n }, h i ∈ R C , where n is the number of sub-attribute and h i denotes the prototypical feature of the i-th sub-attribute. The attribute pool is initialized by the features extracted by the teacher model. Specifically, we compute the prototypes using the region in each sample that has the highest correlation with the attribute. According to Eq. 1, we can obtain the association score map of attributes and regions. Then the prototype of attribute i is formulated as:\nh i = N j=1 pj i f j i N j=1 pj i ,(7)\nwhere pj i denotes the max score of attribute i in sample j and f j i is the corresponding region feature of pj i . N is the size of the whole training dataset. Note that the initialization is performed only once during the entire training process, and we set h learnable. Update Attribute Pool. During the training phase, we update the attribute pool with the features extracted from the student model. For a batch of features f s extracted by the student extractor, the prototype of attribute i is:\nhi = 1 B B b=1 r j=1 p b (i, j)f b j p b (i, j) ,(8)\nwhere hi denotes the prototype of attribute i computed by current batch and B denotes the batch size. Here p(i, j) = p(i, j|w s 1 ). Then the attribute pool is updated by:\nh = h × λ + h × (1.0 -λ),(9)\nwhere λ is a balanced parameter and we set it learnable. Optimization Objective. We hope that the updated attribute pool can facilitate the recognition of both seen and unseen classes. Specifically, with the help of attribute vectors, class prototypes are obtained by adaptively selecting and assembling sub-attributes. Then we increase the semantic distance between the class prototypes to enhance the correlation between the learned features and attributes. The loss function is:\nL AGL = -log exp (ahw p ) K k=1 exp (a k hw p ) .(10)" }, { "figure_ref": [], "heading": "Overall Objective", "publication_ref": [], "table_ref": [], "text": "In the pre-training stage, only L CE is used for training because the performance of the attribute-region classifier is too weak to localize valuable features. When the model is stabilized, all loss functions are used to train together. The optimization objective is:\nL AR = L CE + βL U AD + γL AGL ,(11)\nwhere β and γ are hyper-parameters. The teacher model does not participate in training. At the end of each epoch, an update is performed by the exponential moving average method. Let Θ t and Θ s denote the parameters of the teacher model and the student model. The teacher model is updated by:\nΘ t = Θ t × δ + Θ s × (1.0 -δ), (12\n)\nwhere δ is a constant and is set to 0.9995." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b36", "b30", "b14", "b5" ], "table_ref": [], "text": "Datasets. We perform experiments on three benchmark datasets including CUB (Caltech UCSD Birds 200) [37], SUN (SUN Attribute) [31], and AWA2 (Animals with Attributes 2) [39]. We split the seen and unseen classes according to the criteria described in [39]. CUB is a finegrained bird dataset containing 11,788 images with 150 seen classes and 50 unseen classes, and the attribute dimension is 312. SUN is a scene dataset containing 14,340 images with 645 seen classes and 72 unseen classes, and the attribute dimension is 102. AWA2 is a coarse-grained animal dataset containing 37,322 images, including 40 seen classes and 10 unseen classes, and the attribute dimension is 85. Evaluation Protocols. 
For the ZSL setting, we evaluate the top-1 accuracy on unseen classes and denote it as T. For the GZSL setting, we record the top-1 accuracies on seen and unseen classes and denote them as S and U, respectively. Meanwhile, we report their harmonic mean H, i.e., H = 2SU S+U , to evaluate the performance of GZSL. Implementation Details. We adopt the feature extractor of ResNet101 [15] pre-trained on ImageNet [10] and the classifier of MSDN [6] to form our network architecture. The batch size is set to 32 for CUB and 50 for SUN and AWA2. We set the learning rate to 5e-6 and employ the RMSProp optimizer with the momentum set as 0.9 and weight decay set as 1e-4. For hyperparameters, we set m to 5 for AWA2 and 10 for CUB and SUN. For β and γ, we set them to {10, 0.1} for CUB and AWA2 and {15, 0.1} for SUN." }, { "figure_ref": [], "heading": "Comparision with State-of-the-Arts", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "We compare our (AR) 2 with state-of-the-art methods of various types, including GEN (Generative Method), OWM (One-Way Mapping, i.e., visual feature maps to attribute space or attribute maps to feature space), CS (Common Space), RR (Representation Rectification). The experimental results are shown in Table 1. As can be seen from the table, our method yields competitive results, with the highest H-scores and ZSL accuracies on both the CUB and SUN datasets. Specifically, our H-score on CUB precedes the second place by 1.7%, and the recognition rate of unseen classes (74.1%) precedes the second place by 4.5%. On the SUN dataset, we achieve an H-score of 43.7% and a T-score of 66.2% for the first and third places, respectively. Five of our metrics achieve the best scores, one second and one third. Notably, our scheme is also far ahead of its peers. The experimental results show that the features extracted from the pre-trained model have great room for improvement and that representation rectification is one of the effective schemes. Meanwhile, our solution significantly contributes to the recognition performance with dual constraints of keeping and learning valuable knowledge. " }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We perform a series of ablation experiments to analyze the functionality of each component. We use MSDN as the baseline and compare our method with direct fine-tuning. The results of the experiments are shown in Table 2, where it can be seen that fine-tuning brings some boost on the CUB and SUN datasets, but does not significantly promote the effect on AWA2. Our method, instead, achieves the best results on all three datasets, and each component plays a positive role. It demonstrates the soundness of the design of each component and the effectiveness of our method in mitigating the forgetting and overfitting problems." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Analysis of Hyperparameters", "publication_ref": [ "b0", "b14", "b19" ], "table_ref": [ "tab_2", "tab_2" ], "text": "Sensitivity of β and γ. We conduct a number of experiments to analyze the sensitivity of the hyperparameters β and γ. The experimental results are shown in Fig. 4. We set β to [1,5,10,15,20] and γ to [0.1, 0.2, 0.4, 0.6, 0.8, 1.0], respectively. Fig. 4 (a) and (b) show the performance plots of the H-score on CUB and SUN, respectively. As can be seen from the figure, changes in β and γ do not lead to large fluctuations in performance. 
And, we recommend setting β to [10, 15] and γ below 0.5 for optimal performance. Effect of m and λ. We further analyze the impact of the parameter m, i.e., how many of the most similar seen classes are appropriate to choose among the UDA module. The experimental results are shown in Table 3. We set m to [1, 5, 10] for comparison, and the experimental results on CUB show that setting it to 10 is best, but setting it to 5 is better on AWA2. The reason is that CUB has a total of 200 classes with 150 seen classes, while AWA2 has a total of only 50 classes with 40 seen classes. Moreover, AWA2 is a coarsegrained dataset with less similarity between categories. In addition, we investigate the effect of the parameter λ in Eq. 9. The experimental results are also shown in Table 3. We conduct experiments on the CUB and SUN datasets by fixing it to [0.9, 0.5]. The results show that the fixed values are not as effective as the learnable ones. " }, { "figure_ref": [ "fig_2", "fig_2", "fig_2", "fig_2" ], "heading": "Stability Analysis", "publication_ref": [], "table_ref": [], "text": "We conduct a plethora of experiments to analyze the stability of our method and compare it with fine-tuning. We study the effect of training time as well as learning rate on performance to analyze the contribution of our method in From the figures, it can be seen that our method is relatively more stable, especially the results in Fig. 3 (a) and Fig. 3 (b), which show that our method can effectively suppress the forgetting problem of the model. In Fig. 3 (c) and Fig. 3 (d), there are fluctuations caused by the excessive learning rate, but our method can adjust quickly and obtain higher performance than fine-tuning. It indicates that our AGL module effectively captures the attribute-related features and boosts the performance." }, { "figure_ref": [ "fig_4" ], "heading": "Visualization of Attribute-Region Attention", "publication_ref": [], "table_ref": [], "text": "In order to investigate the features that our scheme keeps and learns during the learning process, we perform a visualization analysis. As shown in Fig. 5, the first row represents the baseline method, and the second row is our method. It can be seen that MSDN endeavors to learn as much as it can about the correspondence between attributes and visual features, but it still construct some wrong relational pairs. Our method preserves the features that correspond cor-rectly, e.g., black wing, black forehead, grey bill. Meanwhile, features that are not originally learned are successfully enhanced or captured by our scheme, e.g., black eye, solid breast, solid belly, black underparts. It illustrates that our method effectively maintains the original valuable knowledge and guides the model to mine more attributerelated features." }, { "figure_ref": [ "fig_5", "fig_5", "fig_5" ], "heading": "t-SNE Analysis to Features", "publication_ref": [], "table_ref": [], "text": "We utilize the t-SNE map on CUB to study the kept and learned features. The experimental results are shown in Fig. 6, where 10 classes are randomly selected. From the image, we can observe that our method is effective for both seen and unseen classes. Specifically, the features extracted by our method possess more obvious differentiation, e.g., green, purple, yellowgreen in Fig. 6 (b) and yellow, cyan, pink in Fig. 6 (d). 
It demonstrates that the proposed AGL module effectively captures the connection between features and attributes, and thus learns the attribute-related knowledge and transfers it to the unseen domain." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we analyze that existing ZSL/GZSL methods are limited by the incompleteness of the extracted features. Such incompleteness stems from the problem of domain bias between the pre-trained model and the downstream tasks. Fine-tuning serves as a simple approach to address this problem, while can introduce catastrophic forgetting and seen class biased overfitting. To address these issues, we present a novel Attribute-Aware Representation Rectification framework, dubbed (AR) 2 , to refine the learned features while, at the same time, maintaining the original valuable features. Our approach consists of two main modules, i.e., Unseen-Aware Distillation (UAD) and Attribute-Guided Learning (AGL), which dominate the work of keeping old knowledge and learning effective new knowledge, respectively. Through extensive experimental analysis, we show that our method can effectively improve the model's recognition performance in ZSL/GZSL tasks." } ]
Generalized Zero-shot Learning (GZSL) has yielded remarkable performance by designing a series of unbiased visual-semantics mappings, wherein, the precision relies heavily on the completeness of extracted visual features from both seen and unseen classes. However, as a common practice in GZSL, the pre-trained feature extractor may easily exhibit difficulty in capturing domain-specific traits of the downstream tasks/datasets to provide finegrained discriminative features, i.e., domain bias, which hinders the overall recognition performance, especially for unseen classes. Recent studies partially address this issue by fine-tuning feature extractors, while may inevitably incur catastrophic forgetting and overfitting issues. In this paper, we propose a simple yet effective Attribute-Aware Representation Rectification framework for GZSL, dubbed (AR) 2 , to adaptively rectify the feature extractor to learn novel features while keeping original valuable features. Specifically, our method consists of two key components, i.e., Unseen-Aware Distillation (UAD) and Attribute-Guided Learning (AGL). During training, UAD exploits the prior knowledge of attribute texts that are shared by both seen/unseen classes with attention mechanisms to detect and maintain unseen class-sensitive visual features in a targeted manner, and meanwhile, AGL aims to steer the model to focus on valuable features and suppress them to fit noisy elements in the seen classes by attribute-guided representation learning. Extensive experiments on various benchmark datasets demonstrate the effectiveness of our method 1 .
Attribute-Aware Representation Rectification for Generalized Zero-Shot Learning
[ { "figure_caption": "Figure 1 .1Figure 1. Pink indicates features that are beneficial to the downstream data. Purple indicates newly learned valuable features.Pre-trained models lack the ability to extract fine-grained information of specific datasets. Direct fine-tuning leads to forgetfulness and overfitting, whereas our approach aims to keep and learn valuable information (better viewed in color).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. The overall framework of our method. 'S' denotes the student model and 'T' denotes the teacher model. UAD identifies and localizes the features extracted by the teacher model to filter the valuable parts. AGL utilizes the features extracted by the student model to update the attribute pool to facilitate the associations between the learned features and attributes (better viewed in color).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Comparison of the stability of our method with finetune. H-O: H score of our method. H-F: H score of Finetune. T-O: T score of our method. T-F: T score of Finetune (better viewed in color).", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. The analysis of sensitivity to hyperparameters β and γ (better viewed in color).", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Visualization of the attention heat map. The first row represents the heat map of MSDN and the second row denotes our method (better viewed in color).", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. The t-SNE visualization of our method and MSDN. (a-b): seen class features; (c-d): unseen class features. (a, c): MSDN, (b, d): Our method (better viewed in color).", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "The experimental results(%) of CUB, SUN, and AWA2 on ZSL and GZSL settings. Method types are listed in the BRANCH column. GEN: Generative Method; OWM: One-Way Mapping; CS: Common Space; DA: Data Augmentation; and RR: Representation Rectification. The best, second-best, and third-best results are highlighted in red, blue, and underlined, respectively.", "figure_data": "CUBSUNAWA2", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The results(%) of ablation study. '*' denotes MSDN[6] as the baseline. '✓' denotes adding the module. The best results are marked in bold.", "figure_data": "CUBSUNAWA2METHOD UAD AGLTHTHTHBaseline*76.1 68.1 65.8 41.3 70.1 67.7Finetune78.5 71.8 63.8 42.0 68.4 67.3Ours-1✓79.7 73.0 65.4 42.9 69.5 68.4Ours-2✓79.1 72.2 64.5 42.1 68.3 67.5Ours✓✓80.2 73.5 66.2 43.7 70.9 70.0", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The effect of the number m of similar seen classes and the parameter λ in Eq. (9).", "figure_data": "CUBAWA2SETTINGTHTHm = 179.6 72.1 69.6 69.0m = 579.5 73.0 70.9 70.0m = 1080.2 73.5 69.1 68.7CUBSUNSETTINGTHTHλ = 0.978.4 72.5 64.9 43.0λ = 0.579.6 72.9 64.5 42.8λ = learnable 80.2 73.5 66.2 43.7", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
Zhijie Rao; Jingcai Guo; Xiaocheng Lu; Qihua Zhou; Jie Zhang; Wei Kang; Chenxin Li; Song Guo
[ { "authors": "Soravit Changpinyo; Wei-Lun Chao; Boqing Gong; Fei Sha", "journal": "", "ref_id": "b0", "title": "Synthesized classifiers for zero-shot learning", "year": "2016" }, { "authors": "Wei-Lun Chao; Soravit Changpinyo; Boqing Gong; Fei Sha", "journal": "Springer", "ref_id": "b1", "title": "An empirical study and analysis of generalized zeroshot learning for object recognition in the wild", "year": "2016" }, { "authors": "Shiming Chen; Wenjie Wang; Beihao Xia; Qinmu Peng; Xinge You; Feng Zheng; Ling Shao", "journal": "", "ref_id": "b2", "title": "Free: Feature refinement for generalized zero-shot learning", "year": "2021" }, { "authors": "Shiming Chen; Guosen Xie; Yang Liu; Qinmu Peng; Baigui Sun; Hao Li; Xinge You; Ling Shao", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b3", "title": "Hsva: Hierarchical semantic-visual adaptation for zero-shot learning", "year": "2021" }, { "authors": "Shiming Chen; Ziming Hong; Yang Liu; Guo-Sen Xie; Baigui Sun; Hao Li; Qinmu Peng; Ke Lu; Xinge You", "journal": "", "ref_id": "b4", "title": "Transzero: Attribute-guided transformer for zero-shot learning", "year": "2022" }, { "authors": "Shiming Chen; Ziming Hong; Guo-Sen Xie; Wenhan Yang; Qinmu Peng; Kai Wang; Jian Zhao; Xinge You", "journal": "", "ref_id": "b5", "title": "Msdn: Mutually semantic distillation network for zero-shot learning", "year": "2022" }, { "authors": "Zhi Chen; Yadan Luo; Ruihong Qiu; Sen Wang; Zi Huang; Jingjing Li; Zheng Zhang", "journal": "", "ref_id": "b6", "title": "Semantics disentangling for generalized zero-shot learning", "year": "2021" }, { "authors": "Zhi Chen; Pengfei Zhang; Jingjing Li; Sen Wang; Zi Huang", "journal": "", "ref_id": "b7", "title": "Zero-shot learning by harnessing adversarial samples", "year": "2023" }, { "authors": "Matthias De Lange; Rahaf Aljundi; Marc Masana; Sarah Parisot; Xu Jia; Aleš Leonardis; Gregory Slabaugh; Tinne Tuytelaars", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b8", "title": "A continual learning survey: Defying forgetting in classification tasks", "year": "2021" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "Ieee", "ref_id": "b9", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Rafael Felix; Ian Reid; Gustavo Carneiro", "journal": "", "ref_id": "b10", "title": "Multi-modal cycle-consistent generalized zero-shot learning", "year": "2018" }, { "authors": "Robert Geirhos; Jörn-Henrik Jacobsen; Claudio Michaelis; Richard Zemel; Wieland Brendel; Matthias Bethge; Felix A Wichmann", "journal": "Nature Machine Intelligence", "ref_id": "b11", "title": "Shortcut learning in deep neural networks", "year": "2020" }, { "authors": "Jingcai Guo; Song Guo; Qihua Zhou; Ziming Liu; Xiaocheng Lu; Fushuo Huo", "journal": "", "ref_id": "b12", "title": "Graph knows unknowns: Reformulate zero-shot learning as sample-level graph recognition", "year": "2023" }, { "authors": "Zongyan Han; Zhenyong Fu; Shuo Chen; Jian Yang", "journal": "", "ref_id": "b13", "title": "Contrastive embedding for generalized zero-shot learning", "year": "2021" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b14", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Dan Hendrycks; Kimin Lee; Mantas Mazeika", "journal": "PMLR", "ref_id": "b15", "title": "Using pre-training can improve model robustness and uncertainty", 
"year": "2019" }, { "authors": "Dat Huynh; Ehsan Elhamifar", "journal": "", "ref_id": "b16", "title": "Fine-grained generalized zero-shot learning via dense attribute-based attention", "year": "2020" }, { "authors": "Huajie Jiang; Ruiping Wang; Shiguang Shan; Xilin Chen", "journal": "", "ref_id": "b17", "title": "Transferable contrastive network for generalized zeroshot learning", "year": "2019" }, { "authors": "Rohit Keshari; Richa Singh; Mayank Vatsa", "journal": "", "ref_id": "b18", "title": "Generalized zero-shot learning via over-complete distribution", "year": "2020" }, { "authors": "Elyor Kodirov; Tao Xiang; Shaogang Gong", "journal": "", "ref_id": "b19", "title": "Semantic autoencoder for zero-shot learning", "year": "2017" }, { "authors": "Xia Kong; Zuodong Gao; Xiaofan Li; Ming Hong; Jun Liu; Chengjie Wang; Yuan Xie; Yanyun Qu", "journal": "", "ref_id": "b20", "title": "En-compactness: Self-distillation embedding & contrastive generation for generalized zero-shot learning", "year": "2022" }, { "authors": "Hannes Christoph H Lampert; Stefan Nickisch; Harmeling", "journal": "IEEE", "ref_id": "b21", "title": "Learning to detect unseen object classes by betweenclass attribute transfer", "year": "2009" }, { "authors": "Hao Li; Pratik Chaudhari; Hao Yang; Michael Lam; Avinash Ravichandran; Rahul Bhotika; Stefano Soatto", "journal": "", "ref_id": "b22", "title": "Rethinking the hyperparameters for fine-tuning", "year": "2019" }, { "authors": "Jingjing Li; Mengmeng Jing; Ke Lu; Zhengming Ding; Lei Zhu; Zi Huang", "journal": "", "ref_id": "b23", "title": "Leveraging the invariant side of generative zero-shot learning", "year": "2019" }, { "authors": "Xiangyu Li; Zhe Xu; Kun Wei; Cheng Deng", "journal": "", "ref_id": "b24", "title": "Generalized zero-shot learning via disentangled representation", "year": "2021" }, { "authors": "Xiaofan Li; Yachao Zhang; Shiran Bian; Yanyun Qu; Yuan Xie; Zhongchao Shi; Jianping Fan", "journal": "", "ref_id": "b25", "title": "Vs-boost: Boosting visual-semantic association for generalized zero-shot learning", "year": "2023" }, { "authors": "Lu Liu; Tianyi Zhou; Guodong Long; Jing Jiang; Chengqi Zhang", "journal": "", "ref_id": "b26", "title": "Attribute propagation network for graph zero-shot learning", "year": "2020" }, { "authors": "Shichen Liu; Mingsheng Long; Jianmin Wang; Michael I Jordan", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "Generalized zero-shot learning with deep calibration network", "year": "2018" }, { "authors": "Yang Liu; Jishun Guo; Deng Cai; Xiaofei He", "journal": "", "ref_id": "b28", "title": "Attribute attention for semantic disambiguation in zero-shot learning", "year": "2019" }, { "authors": "Sanath Narayan; Akshita Gupta; Fahad Shahbaz Khan; G M Cees; Ling Snoek; Shao", "journal": "Springer", "ref_id": "b29", "title": "Latent embedding feedback and discriminative features for zero-shot classification", "year": "2020" }, { "authors": "Genevieve Patterson; James Hays", "journal": "IEEE", "ref_id": "b30", "title": "Sun attribute database: Discovering, annotating, and recognizing scene attributes", "year": "2012" }, { "authors": "Bernardino Romera; - Paredes; Philip Torr", "journal": "PMLR", "ref_id": "b31", "title": "An embarrassingly simple approach to zero-shot learning", "year": "2015" }, { "authors": "Edgar Schonfeld; Sayna Ebrahimi; Samarth Sinha; Trevor Darrell; Zeynep Akata", "journal": "", "ref_id": "b32", "title": "Generalized zero-and few-shot learning via aligned variational 
autoencoders", "year": "2019" }, { "authors": "Michael Ramprasaath R Selvaraju; Abhishek Cogswell; Ramakrishna Das; Devi Vedantam; Dhruv Parikh; Batra", "journal": "", "ref_id": "b33", "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "year": "2017" }, { "authors": "Kumar Vinay; Gundeep Verma; Ashish Arora; Piyush Mishra; Rai", "journal": "", "ref_id": "b34", "title": "Generalized zero-shot learning via synthesized examples", "year": "2018" }, { "authors": "R Maunil; Hemanth Vyas; Sethuraman Venkateswara; Panchanathan", "journal": "Springer", "ref_id": "b35", "title": "Leveraging seen and unseen semantic relationships for generative zero-shot learning", "year": "2020" }, { "authors": "Catherine Wah; Steve Branson; Peter Welinder; Pietro Perona; Serge Belongie", "journal": "", "ref_id": "b36", "title": "The caltech-ucsd birds-200-2011 dataset", "year": "2011" }, { "authors": "Chaoqun Wang; Shaobo Min; Xuejin Chen; Xiaoyan Sun; Houqiang Li", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b37", "title": "Dual progressive prototype network for generalized zero-shot learning", "year": "2021" }, { "authors": "Yongqin Xian; Bernt Schiele; Zeynep Akata", "journal": "", "ref_id": "b38", "title": "Zero-shot learning-the good, the bad and the ugly", "year": "2017" }, { "authors": "Yongqin Xian; Tobias Lorenz; Bernt Schiele; Zeynep Akata", "journal": "", "ref_id": "b39", "title": "Feature generating networks for zero-shot learning", "year": "2018" }, { "authors": "Yongqin Xian; Saurabh Sharma; Bernt Schiele; Zeynep Akata", "journal": "", "ref_id": "b40", "title": "f-vaegan-d2: A feature generating framework for any-shot learning", "year": "2019" }, { "authors": "Guo-Sen Xie; Li Liu; Xiaobo Jin; Fan Zhu; Zheng Zhang; Jie Qin; Yazhou Yao; Ling Shao", "journal": "", "ref_id": "b41", "title": "Attentive region embedding network for zero-shot learning", "year": "2019" }, { "authors": "Guo-Sen Xie; Li Liu; Fan Zhu; Fang Zhao; Zheng Zhang; Yazhou Yao; Jie Qin; Ling Shao", "journal": "Springer", "ref_id": "b42", "title": "Region graph embedding network for zero-shot learning", "year": "2020" }, { "authors": "Guo-Sen Xie; Xu-Yao Zhang; Tian-Zhu Xiang; Fang Zhao; Zheng Zhang; Ling Shao; Xuelong Li", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b43", "title": "Leveraging balanced semantic embedding for generative zero-shot learning", "year": "2022" }, { "authors": "Wenjia Xu; Yongqin Xian; Jiuniu Wang; Bernt Schiele; Zeynep Akata", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b44", "title": "Attribute prototype network for zero-shot learning", "year": "2020" }, { "authors": "Wenjia Xu; Yongqin Xian; Jiuniu Wang; Bernt Schiele; Zeynep Akata", "journal": "", "ref_id": "b45", "title": "Vgse: Visually-grounded semantic embeddings for zero-shot learning", "year": "2022" }, { "authors": "Yunlong Yu; Zhong Ji; Jungong Han; Zhongfei Zhang", "journal": "", "ref_id": "b46", "title": "Episode-based prototype generating network for zero-shot learning", "year": "2020" }, { "authors": "Li Zhang; Tao Xiang; Shaogang Gong", "journal": "", "ref_id": "b47", "title": "Learning a deep embedding model for zero-shot learning", "year": "2017" }, { "authors": "Yizhe Zhu; Jianwen Xie; Zhiqiang Tang; Xi Peng; Ahmed Elgammal", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b48", "title": "Semantic-guided multi-attention localization for zero-shot learning", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 308.86, 271.04, 236.25, 47.09 ], "formula_id": "formula_0", "formula_text": "W t = [W t 1 , W t 2 ] and W s = [W s 1 , W s 2 ]. Each model includes a feature extractor E and a classifier W = [W 1 , W 2 ]. Let w 1 , w 2 denote the learnable param- eters of W 1 , W 2 , respectively. Then w t" }, { "formula_coordinates": [ 4, 99.03, 421.29, 187.34, 24.8 ], "formula_id": "formula_1", "formula_text": "p(i, j|w 1 ) = exp (v i w 1 f j ) r q=1 exp (v i w 1 f q ) ,(1)" }, { "formula_coordinates": [ 4, 59.28, 508.04, 227.08, 26.56 ], "formula_id": "formula_2", "formula_text": "L CE (a, f, p|w 2 ) = -log exp (a(vw 2 f )p) K k=1 exp (a k (vw 2 f )p) ,(2)" }, { "formula_coordinates": [ 4, 360.55, 398.94, 184.57, 24.8 ], "formula_id": "formula_3", "formula_text": "g(k) = τ ( ∂L CE (a c , f t , p t |w t 2 ) ∂f t ),(3)" }, { "formula_coordinates": [ 4, 365.4, 631.04, 179.71, 27.55 ], "formula_id": "formula_4", "formula_text": "g = τ (g(k) + 1 d ku∈U k g(k u )),(4)" }, { "formula_coordinates": [ 5, 149, 172.55, 137.36, 8.96 ], "formula_id": "formula_5", "formula_text": "g = g • p,(5)" }, { "formula_coordinates": [ 5, 118.31, 261.65, 168.05, 11.72 ], "formula_id": "formula_6", "formula_text": "L U AD = ||f s -f t || 2 • g,(6)" }, { "formula_coordinates": [ 5, 131.58, 576.36, 154.78, 31.02 ], "formula_id": "formula_7", "formula_text": "h i = N j=1 pj i f j i N j=1 pj i ,(7)" }, { "formula_coordinates": [ 5, 371.61, 96.62, 173.5, 30.55 ], "formula_id": "formula_8", "formula_text": "hi = 1 B B b=1 r j=1 p b (i, j)f b j p b (i, j) ,(8)" }, { "formula_coordinates": [ 5, 369.95, 184.67, 175.16, 11.59 ], "formula_id": "formula_9", "formula_text": "h = h × λ + h × (1.0 -λ),(9)" }, { "formula_coordinates": [ 5, 353.56, 324.76, 191.55, 26.56 ], "formula_id": "formula_10", "formula_text": "L AGL = -log exp (ahw p ) K k=1 exp (a k hw p ) .(10)" }, { "formula_coordinates": [ 5, 355.6, 450.87, 189.51, 9.65 ], "formula_id": "formula_11", "formula_text": "L AR = L CE + βL U AD + γL AGL ,(11)" }, { "formula_coordinates": [ 5, 362.3, 543.81, 178.66, 11.03 ], "formula_id": "formula_12", "formula_text": "Θ t = Θ t × δ + Θ s × (1.0 -δ), (12" }, { "formula_coordinates": [ 5, 540.96, 546.2, 4.15, 8.64 ], "formula_id": "formula_13", "formula_text": ")" } ]
2023-11-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b32", "b8", "b15", "b24", "b26", "b15", "b26", "b25", "b7", "b38", "b15", "b24", "b39", "b24", "b15" ], "table_ref": [], "text": "Deep neural networks (DNNs) deployed in the open world often encounter a diverse range of inputs from unknown classes, commonly referred to as out-of-distribution (OOD) data. However, DNNs' tendency to be overly confident, yet inaccurate about such inputs makes them less reliable, particularly in safety-critical applications such as autonomous driving [11] and healthcare [35]. Therefore, a DNN should be able to identify and avoid making predictions on these OOD inputs that differ from its training data. For instance, in an autonomous vehicle, the driving system must promptly alert the driver and transfer control when it detects unfamiliar scenes or objects that were not encountered during its training [33]. Accordingly, addressing the challenge of OOD detection has gained significant attention in recent studies [48].\nAmong various OOD detection methods, post-hoc inference techniques [1, 8, 12, 14, 16, 20, 22, 25-27, 34, 36-40, 45, 50] stand out as they can be applied to any pretrained model, making them versatile and applicable to a wide range of models without the need for modifications during the training phase. These techniques extract crucial information from intermediate [9,20,36] or output [14, 16,25,27,34] layers of DNNs, establishing an OOD score to distinguish between in-distribution (ID) and OOD samples. Since the output layer (i.e., normalised or unnormalised probabilities) in a DNN adeptly captures high-level semantics (i.e., objects, scenes etc.), researchers have increasingly focused on employing its features for OOD detection. For example, Hendrycks et al. [14] proposed the maximum softmax probability (MSP) as an initial baseline for OOD detection. Subsequently, Hendrycks et al. [16] and Liu et al. [27] further harnessed more extreme information from the output layer for OOD detection, utilising the maximum logit and energy (i.e., a smooth approximation for the maximum logit), respectively. Moreover, recent studies have incorporated advanced post-processing techniques [26,38,39] to enhance the performance exhibited by MSP and energy scores. While all these methods focus on extreme information, some studies have adopted a broader perspective, considering information spanned across ID classes or training samples [16,25,34,40], which we refer to here as collective information. For example, Lee et al. [25] proposed a method to fit a class-conditional Gaussian distribution on the penultimate layer features using the training samples and derive an OOD score with Mahalanobis distance.\nIn this paper, we introduce a novel metric ExCeL, designed to enhance OOD detection by incorporating both extreme and collective information at the output layer. While the logit of the top predicted class, commonly referred to as max logit [16] captures the extreme information, we show that the likelihood of other classes in subsequent ranks yields the collective information required to improve the distinguishability of OOD samples. Our approach is motivated by the observation that, during inference, when an input is predicted as a specific ID class, the rankings of the remaining classes are more consistently predictable for ID data compared to OOD data. 
Therefore, each ID class can be characterised by a unique class rank signature, that can be represented as a class likelihood matrix (CLM) with rows corresponding to predicted ID classes and columns to their ranks. Each matrix element signifies the likelihood of a particular ID class occurring at a specific rank. This likelihood is computed by analysing predicted class rankings across training samples.\nIn Figure 1, we show the consistent performance exhibited by ExCeL, compared to existing post-hoc baselines in terms of the mean overall rank. To summarise, we make the following contributions. The rest of the paper is organised as follows. In Section 2, we present the related work, while Section 3 provides the background and the preliminaries related to OOD detection. We provide an overview of our methodology in Section 4, followed by a detailed explanation of our experiment setups in Section 5. Next, we present the results of our experiments in Section 6, together with an analysis of the findings and outcomes. Finally, Section 7 concludes the paper." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b51" ], "table_ref": [], "text": "A plethora of recent work attempted to address the challenge of OOD detection [48]. These techniques can be broadly categorised into three main groups: post-hoc inference methods, training methods without outlier data, and training methods with outlier data [52]." }, { "figure_ref": [], "heading": "Post-hoc inference methods", "publication_ref": [ "b8", "b15", "b24", "b26", "b15", "b26", "b25", "b7", "b38", "b38", "b15", "b24", "b39", "b15", "b39" ], "table_ref": [], "text": "Post-hoc inference methods [1, 8, 12, 14, 16, 20, 22, 25-27, 34, 36-40, 45, 50] utilise post-processors applied to the base classifier. These works formulate an OOD score, which, in turn, is employed to produce a binary ID/OOD prediction through thresholding. These methods are active during the inference phase and generally assume that the classifier has been trained using the standard cross-entropy loss. They extract information from either intermediate layers [9,20,36] or the output layer [14, 16,25,27,34] of a DNN to establish the OOD score. Given that highlevel semantics are more effectively captured in the output layer, much attention has been directed towards exploiting output layer features for OOD detection. Early work by Hendrycks et al. [14] proposed the maximum softmax probability (MSP) as a reliable baseline for detecting OOD inputs. Building upon this, later studies adopted a similar approach, leveraging more extreme information from the output layer. For instance, subsequent work by Hendricks et al. [16] used the maximum logit directly, whereas Liu et al. [27] employed the energy score, which is a smooth approximation for the maximum logit.\nExpanding on these strategies, some studies incorporated advanced post-processing techniques [26,38,39] to elevate the performance exhibited by MSP and energy scores. For example, Sun et al. [39] introduced ReAct by rectifying activations at an upper limit, obtaining a modified logit vector with improved OOD separability. While these methods focus on extreme information, others explored a more comprehensive view of the information provided by the output layer [16,25,34,40]. More specifically, Hendrycks et al. [16] employed KL divergence between the softmax prediction vector and a reference vector to define an OOD score, considering predictions for all classes. In contrast, Sun et al. 
[40] leveraged information across training samples, defining an OOD score based on the distance to the k th nearest neighbor. However, none of the prior works utilised collective information embedded within the output layer across all classes and training samples." }, { "figure_ref": [], "heading": "Training methods without outlier data", "publication_ref": [ "b45", "b18", "b16", "b45", "b18", "b16", "b20" ], "table_ref": [], "text": "These methods incorporate regularisation techniques during training without relying on auxiliary OOD data, often referred to as outliers [3, 7, 10, 17-19, 31, 41, 42, 46]. They include a diverse range of approaches, such as constraining vector norms [46], modifying the decision boundary [19], and applying sophisticated learning methods [17]. For instance, Wei et al. [46] enforced a constant vector norm on the logits to prevent their continuous increase throughout the model training. Furthermore, Huang et al. [19] proposed to simplify the decision boundary between ID and OOD by decomposing the large semantic space into smaller groups with similar concepts. While the majority of these techniques followed a supervised learning approach, other work [17,21] adopted self-supervised learning for OOD detection." }, { "figure_ref": [], "heading": "Training methods with outlier data", "publication_ref": [ "b14", "b46", "b50", "b53" ], "table_ref": [], "text": "In contrast to the methods discussed in Section 2.2, these techniques harness the knowledge derived from auxiliary OOD data during model training [15,47,49,51]. This allows OOD detectors to generalise well to unseen data and detect OOD inputs more effectively at test time. Within these approaches, some merely incorporate a set of outliers, while others attempt to mine of the most informative outliers, a process known as outlier mining [4,30]. Generally, these methods outperform post-hoc and training-based approaches without outlier data, as they expose the model to OOD characteristics to some extent during the training phase. Nonetheless, they have limitations in generalisation since the model gets exposed to only certain types of OODs.\nOverall, post-hoc inference methods emerge as the standout choice for OOD detection, owing to their ease of implementation and competitive performance. While existing post-hoc detectors predominantly concentrate on either extreme or collective information, we propose ExCeL that combines both aspects available within the output layer." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "In classification tasks, the problem of out-of-distribution detection can be defined using the following setup. Let X = R d be the input space and Y = {1, 2, ..., C} be the output space. Assume a deep neural network f : X → R |Y| is trained on a set of data D = {(x i , y i )} N i=1 drawn from a distribution P defined on X × Y. The network outputs a logit vector which is used to predict the label of an input sample. Furthermore, let D in denote the marginal distribution of P for X , which represents the distribution of ID data. At test time, the model may encounter inputs from other distributions, denoted as D out , that differ from D in , and are recognised as out-of-distribution. 
Thus, the goal of OOD detection is to define a decision function g such that for a given test input x ∈ X :\ng(x; f ) = 1 if x ∼ D in 0 if x ∼ D out(1)\nPost-hoc detectors modify the OOD detection problem in Equation 1 by leveraging a scoring function S(x) and make the decision via a threshold (λ) comparison as follows.\ng λ (x) = in if S(x; f ) ≥ λ out if S(x; f ) < λ (2)" }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we first explain the intuition behind our proposed method. Following that, we outline the ExCeL score computation algorithm. Lastly, we analytically justify our motivation. " }, { "figure_ref": [ "fig_0" ], "heading": "Intuition", "publication_ref": [], "table_ref": [], "text": "As discussed in Section 1, our idea is motivated by the observation that, during inference, when an input is predicted as a specific ID class, the rankings of the subsequent classes are more deterministic for ID data compared to OOD data.\nTo illustrate this more clearly, we depict the likelihood of a subset of ID classes ranking among the top ten for four base classes of the CIFAR100 dataset in Figure 2. It is important to note that the first rank is always assigned to the respective base class. We notice that when a test input is predicted as either an elephant or a camel, there is a strong likelihood of the class bear appearing within the top five predictions, given its semantic proximity to these classes. Similarly, in the case of an input predicted as a chair or a table, the class bed is highly likely to be among the top five predictions. We refer to these distinct patterns associated with each class as class rank signatures. Notably, such trends are absent in OOD data, enabling us to leverage this information to design ExCeL based on the predicted class ranking for efficient OOD detection." }, { "figure_ref": [], "heading": "ExCeL score computation", "publication_ref": [], "table_ref": [], "text": "The ExCeL score computation comprises four steps; two steps involving pre-computation using training samples and two steps executed at test time. Firstly, we calculate the class likelihood matrix for each ID class by leveraging correctly classified training samples specific to that class. Following this, the class likelihood matrix undergoes a smooth-ing process to amplify the influence of frequently occurring classes while penalising less prevalent ones. During test time, based on the top predicted class of an input, a rank score is computed using the relevant class likelihood matrix. This score captures collective information from the ranking of predicted classes. Finally, the rank score is linearly combined with the max logit to compute the final ExCeL score for OOD detection. These steps are further discussed in the following sections." }, { "figure_ref": [], "heading": "Generating the Class Likelihood Matrix", "publication_ref": [], "table_ref": [], "text": "The \np c C1 p c C2 . . . p c CC      , p c ij = n c ij N c(3)\nHere, n c ij represents the number of occurrences where class i appears at rank j among correctly classified samples in class c. Furthermore, N c denotes the total number of correctly classified samples in class c, and C represents the total number of ID classes. It is important to highlight that when calculating the likelihood matrix for any class c, we exclusively take into account the correctly classified training samples belonging to that class. 
Therefore, the top rank in the likelihood matrix is invariably occupied by the class c itself.\np c i1 = 1 if i = c 0 otherwise .(4)" }, { "figure_ref": [], "heading": "Smoothing the CLM", "publication_ref": [], "table_ref": [], "text": "DNNs tend to exhibit some degree of overfitting to the training set. Therefore, some samples may lose the correlation between similar classes during training. This will induce noise in the CLM computed in Section 4.2.1. Hence, to extract high-level information from the CLM, we incorporate a smoothing step, employing a piecewise function based on the following criteria:\n• For classes frequently occurring in a specific rank, a fixed high reward is assigned. • If the likelihood of a class, though not highly significant, surpasses that of a random prediction, a small reward is given. • If the likelihood of a class is worse than a random prediction but not zero, a small penalty is imposed. • Classes that do not appear in a specific rank receive a fixed high penalty. Specifically, let Pc be the smoothed likelihood matrix of P c . Each element pc ij in Pc is determined based on the corresponding value p c ij in P c based on Equation 5.\npc ij =          a C-1 if p c ij ≥ b C-1 1 C-1 if 1 C-1 ≤ p c ij < b C-1 -1 C-1 if 0 < p c ij < 1 C-1 -a C-1 if p c ij = 0(5)\nHere, a corresponds to the reward, while b denotes the high likelihood threshold. We use the validation set to determine these hyperparameters. The smoothed class likelihood matrix is then employed to calculate the rank score that provides a measure of how closely a prediction aligns with the distinct class rank signature." }, { "figure_ref": [], "heading": "Computing the rank score", "publication_ref": [], "table_ref": [], "text": "For a given test image x, let the predicted class ranking be [c 1 , c 2 , c 3 ,..., c C ], where c 1 and c C are the classes with the highest and the lowest logit values respectively. Since the top predicted class is c 1 , we use the smoothed class likelihood matrix of class c 1 , denoted by Pc1 for the rank score calculation. Thus, we compute the rank score (RS) of x as,\nRS(x) = C i=1 pc1 cii(6)\nSince we consider the CLM associated with the top-ranked class when computing the rank score, it is worth noting that the first term (i.e., pc1 c11 ) consistently yields 1 (i.e., analogous to a C-1 in Pc ) for all inputs, in accordance with Equation 4. Consequently, the presence of the first term in Equation 6 merely introduces a constant shift to the score, without actively contributing to the discrimination between ID and OOD. However, for completeness, we retain the first term in rank score computation.\nMoreover, the rank score can also be computed efficiently via matrix operations. In order to achieve this, the predicted ranking is represented as a one-hot encoded matrix (ρ) with the predicted classes as the rows and ranks as columns. For example, in a four-class classification problem, if the predicted class ranking for an input x is [1, 4, 2, 3], ρ x would be as follows.\nρ x =     1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0     (7)\nAccordingly, for an input x, if the one-hot encoded predicted class ranking matrix is ρ x , we can compute the rank score as,\nRS(x) = tr[( Pc ) T ρ x ](8)\nA higher rank score is indicative of accumulating more rewards as per Equation 5. This suggests a strong alignment between the predicted class ranking and the rank patterns observed in training samples, indicating that the input is highly likely to be ID. 
In this way, the rank score encompasses the collective information spanning all ID classes and training samples, which is then utilised to improve OOD detection." }, { "figure_ref": [], "heading": "Combining with the maximum logit", "publication_ref": [ "b15" ], "table_ref": [], "text": "The rank score draws on collective information from subsequent ranks, excluding the top rank, which itself holds valuable information inherent to ID data. Therefore, as the final step, we combine the rank score with the logit value of the top predicted class, referred to as MaxLogit by Hendrycks et al. [16], to compute the ExCeL score for OOD detection. Since MaxLogit contains extreme information within the output layer, the final ExCeL score incorporates both extreme and collective information embedded in the output layer. We define the ExCeL score as a linear combination of the rank score and the MaxLogit as per Equation 9.\nExCeL(x) = α • RS(x) + (1 -α) • MaxLogit (9)\nHere, α balances the trade-off between using collective and extreme information for OOD detection. We fine-tune α using a validation set following the same approach used in Section 4.2.2 to fine-tune a and b." }, { "figure_ref": [], "heading": "Analytical justification", "publication_ref": [], "table_ref": [], "text": "We next analytically justify the existence of distinct patterns in the class ranking that can be exploited to improve OOD detection. Suppose for any predicted ID class, the remaining C -1 classes occur uniformly distributed across the subsequent ranks. Then, for any class c, the likelihood matrix, denoted as Equation 3, takes the following form:\np c ij =     1\nif (i = c and j = 1) 0 if (i ̸ = c and j = 1) or (i = c and j ̸ = 1)\n1 C-1 otherwise . (10\n)\nThus, for any predicted class ranking of an input x, the rank score would be,\nRS(x) = C i=1 pc cii = a C -1 + 1 C -1 + ... + 1 C -1 = a C -1 + 1 = k (constant) .(11)\nSubsequently, we can compute the ExCeL score as,\nExCeL(x) = α • RS(x) + (1 -α) • MaxLogit = α • k + (1 -α) • MaxLogit .(12)\nAs shown in Equation 12, when classes appear uniformly at random, the rank score remains constant, leading the ExCeL score to correspond to a linearly transformed MaxLogit. In this case, the OOD detection performance of ExCeL score would be identical to that of the MaxLogit, since a linear transformation would not impact the separability between ID and OOD. Hence, if ExCeL demonstrates enhanced OOD detection performance compared to MaxLogit, it would affirm the presence of unique class rank signatures within ID classes." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b51" ], "table_ref": [], "text": "We evaluate ExCeL over common OOD detection benchmarks. To ensure a fair comparison with various baselines, we use the OpenOOD 1 library by Zhang et al. [52]. We have implemented ExCeL in the OpenOOD environment and the code will be made publicly available upon the acceptance of the manuscript." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b22", "b23", "b22", "b56", "b54", "b52", "b42", "b54", "b44" ], "table_ref": [], "text": "We use CIFAR100 [23] and ImageNet-200 (a.k.a., Tiny-ImageNet) [24] as ID data in our experiments. Each ID dataset is evaluated against near-OOD and far-OOD datasets. As near-OOD data follow a closer distribution to ID compared to far-OOD, near-OOD detection is more challenging than far-OOD detection. 
For CIFAR100, CI-FAR10 [23] and TinyImageNet datasets serve as near-OOD, while MNIST [6], SVHN [32], Textures [5], and Places365 [53] are considered as far-OOD. Similarly, for TinyImageNet, SSB-hard [44] and NINCO [2] datasets are used as near-OOD, while iNaturalist [43], Textures [5], and OpenImage-O [45] datasets are used as far-OOD. For consistency, we adopt the same train, validation, and test splits used by OpenOOD developers in implementing our method.\n1 https://github.com/Jingkang50/OpenOOD." }, { "figure_ref": [], "heading": "Models", "publication_ref": [ "b12", "b28" ], "table_ref": [], "text": "For both CIFAR-100 and TinyImageNet datasets, we use ResNet-18 [13] as the base model. Each model is trained for 100 epochs using the standard cross-entropy loss. We use the SGD optimiser with a momentum of 0.9, a learning rate of 0.1, and a cosine annealing decay schedule [29]. Furthermore, we incorporate a weight decay of 0.0005, and employ batch sizes of 128 and 256 for CIFAR100 and ImageNet-200, respectively." }, { "figure_ref": [], "heading": "Comparison with baselines", "publication_ref": [ "b24", "b0", "b49", "b7" ], "table_ref": [], "text": "We compare ExCeL with twenty-one existing post-hoc inference methods provided by the OpenOOD library. These baselines include early OOD detection methods such as maximum softmax probability (MSP) [14], Mahalanobis distance (MDS) [25], and OpenMax [1], as well as state-ofthe-art approaches like SHE [50], ASH [8], and DICE [38]." }, { "figure_ref": [], "heading": "Evaluation metrics", "publication_ref": [], "table_ref": [], "text": "We employ two metrics to evaluate the OOD detection performance: i) FPR95, which measures the false positive rate of OOD samples when the true positive rate of ID samples is at 95%; ii) AUROC, representing the area under the receiver operating curve. An effective OOD detector will exhibit a low FPR95 alongside a high AUROC. We also measure the overall performance of each method by computing the mean of near and far-OOD performances. Finally, to compare the performance of a method with other baselines across ID datasets, we define the mean overall rank which is computed as the average of overall ranks for AUROC and FPR95 as per Equation 13.\nMean Overall Rank = R AUROC overall + R FPR95 overall 2(13)\nMoreover, we report the mean and the standard deviation of the above metrics computed over three independent runs in Section 6." }, { "figure_ref": [], "heading": "Hyperparameter tuning", "publication_ref": [ "b14", "b26", "b39" ], "table_ref": [], "text": "Fine-tuning hyperparameters on a validation set is widely adopted in prior OOD detection work [15,27,40]. Following a similar approach, we determine the three parameters associated with ExCeL using the validation set. By performing a grid search on a, b, and α, we discovered the best hyperparameter combination for both CIFAR100 and ImageNet-200 datasets, is a = 10, b = 5, and α = 0.8." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_1", "tab_2" ], "text": "We present our main results in Table 1 and Table 2. Note that due to space constraints, we present the average performance for both near-OOD and far-OOD across all OOD all FPR95, and has the equal best mean overall rank with RMDS. Similarly, in ImageNet-200, ExCeL is ranked fifth in overall AUROC and first in overall FPR95. 
Again, Ex-CeL is ranked equal first in mean overall rank, but this time, sharing it with a different method, GEN. We can also observe from the results that most of the other baselines exhibit strong performance in specific cases but demonstrate only moderate or poor performance in others. For example, RMDS excels in the CIFAR100 ranking, sharing equal best mean overall rank with ExCeL. However, its performance lags behind in ImageNet-200. Similarly, GEN performs exceptionally well in ImageNet-200, but falls behind in CIFAR100. In contrast, ExCeL delivers consistent results in both datasets, achieving a mean overall rank of 1.5 in CIFAR100 and 3.0 in ImageNet-200.\nFinally, the results also show that ExCeL performs slightly better in far-ODD detection than near-OOD detection. For example, in CIFAR100, ExCeL is ranked second and fourth in terms of FPR95 for far and near ODD, respectively. Similarly, in ImageNet-200, ExCeL is ranked fourth and fifth in terms of FPR95 for far and near ODD. This can be explained using the characteristics of likelihood matrices in different datasets." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Class likelihood matrix and Maxlogit", "publication_ref": [], "table_ref": [ "tab_1", "tab_2" ], "text": "As analysed in Section 4.3, an improvement in the OOD detection in ExCeL compared to MaxLogit confirms the existence of unique class rank patterns in ID classes. First, to validate this, we show the FPR95 comparison of ExCeL and MaxLogit in Figure 3 on CIFAR100 and ImageNet-200 datasets. We observe two key behaviours in the comparison. For some OOD datasets ExCeL achieves a significant improvement in FPR95 compared to MaxLogit, while in others, the performance of ExCeL remains closely aligned with MaxLogit. For example, when CIFAR100 serves as ID, Ex-CeL significantly improves the OOD detection in SVHN and Textures, while the performance on Places365 is similar to MaxLogit.\nThe reason for this can be explained using the class likelihood matrices of ID and OOD data as shown in Figure 4. Here, we show how the subsequent classes are ranked in ID and OOD samples for a selected class in CIFAR100. Specifically, for CIFAR100, we see clear clusters of classes occurring mainly within the top ranks for ID data, indicating a unique class rank signature. In contrast, the class occurrence in Textures data looks random and sparse which allows the ExCeL score to separate the two datasets effectively. On the other hand, the occurrence of classes in Places365 is close to a uniformly random distribution, making the separation difficult for ExCeL. Consequently, ExCeL performs better against OOD data whose predicted class rankings are sparse and random. In general this happens more in far-OOD than near-OOD, as indicated by our results.\nAs can be seen from Table 1 and Table 2, the difference in performance between ExCeL and MaxLogit is more significant in far-OOD detection. More precisely, compared to MaxLogit, ExCeL shows 4.5% and 5.6% reduction in mean FPR95 for far-OOD detection in CIFAR100 and ImageNet-200, respectively. For near-OOD detection, the corresponding improvements are lower than that. They are 0.3% and 1.9%, respectively. This can be attributed to the relatively high semantic similarity between ID and near-OOD samples. Consequently, the class rankings in near-OOD samples tend to be more aligned with ID class rank signatures compared to far-OOD instances that are more sparse and random, rendering ExCeL more informative for differentiating between ID and far-OOD." 
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "In this paper, we proposed a novel OOD score, ExCeL that combines extreme information and collective information within the output layer for enhanced OOD detection. We utilised the MaxLogit as extreme information and proposed a novel class rank score that captures information embedded across all ID classes and training samples. We demonstrated that each ID class has a unique signature, de-termined by the predicted classes in the subsequent ranks, which becomes less pronounced in OOD data. Experiments conducted on CIFAR100 and ImageNet-200 showed that ExCeL consistently ranks among the five top-performing methods out of twenty-one existing baselines. Furthermore, ExCeL showed the equal best performance with RMDS and GEN methods on CIFAR-100 and ImageNet-200 datasets, respectively, in terms of the overall mean rank. With regard to the overall consistent performance across datasets, ExCeL surpasses all the other post-hoc baselines.\nWith regard to AUROC, when CIFAR100 is considered as ID, ExCeL ranks among the top five methods in five cases (i.e., against ImageNet-200, SVHN, Textures, Places365, and Far-OOD), being the most consistent method out of twenty-one baselines, as shown in Table 4. When ImageNet-200 serves as ID, ExCeL drops slightly short, ranking among the top five methods only in three out of the seven scenarios according to Table 6. Finally, ExCeL outperforms all the other baselines in terms of both FPR95 and AUROC, achieving the highest consistency (i.e., the most number of values in bold in a table) in three out of four results tables, when overall consistency is considered. This is further validated in Table 1 and Table 2 (cf. Section 6 in the main text) since ExCeL exhibits the best performance across both datasets in terms of the mean overall rank." }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "This appendix presents the per-dataset performance for near and far-OOD detection. Table 3 and Table 4 demonstrate the FPR95 and AUROC results, respectively, when CIFAR100 serves as ID. Similarly, Table 5 and Table 6 report the FPR95 and AUROC results, respectively, when ImageNet-200 is considered as ID.\nAs can be seen from Table 3, in terms of FPR95, ExCeL exhibits the most consistency, ranking among the top five performing methods in five cases (i.e., against ImageNet-200, near-OOD, SVHN, Textures, and far-OOD), which is the highest for any method when CIFAR100 is considered as ID. Similarly, when ImageNet-200 serves as ID, ExCeL ranks among the top five methods in all seven scenarios, as shown in Table 5. " } ]
Deep learning models often exhibit overconfidence in predicting out-of-distribution (OOD) data, underscoring the crucial role of OOD detection in ensuring reliable predictions. Among various OOD detection approaches, post-hoc detectors have gained significant popularity, primarily due to their ease of use and implementation. However, the effectiveness of most post-hoc OOD detectors has been constrained as they rely solely on either extreme information, such as the maximum logit, or collective information (i.e., information spanned across classes or training samples) embedded within the output layer. In this paper, we propose ExCeL, which combines both extreme and collective information within the output layer for enhanced accuracy in OOD detection. We leverage the logit of the top predicted class as the extreme information (i.e., the maximum logit), while the collective information is derived in a novel approach that involves assessing the likelihood of other classes appearing in subsequent ranks across various training samples. Our idea is motivated by the observation that, for in-distribution (ID) data, the ranking of classes beyond the predicted class is more deterministic than it is for OOD data. Experiments conducted on the CIFAR100 and ImageNet-200 datasets demonstrate that ExCeL is consistently among the five top-performing methods out of twenty-one existing post-hoc baselines when the joint performance on near-OOD and far-OOD is considered (i.e., in terms of AUROC and FPR95). Furthermore, ExCeL shows the best overall performance across both datasets, unlike other baselines that work best on one dataset but suffer a performance drop on the other.
Figure 1. The performance comparison of 22 post-hoc OOD detection algorithms based on mean overall rank. The x-axis represents the mean overall rank when CIFAR100 serves as ID, while the y-axis represents the mean overall rank when ImageNet-200 is considered as ID. ExCeL shares the equal best performance on the CIFAR100 and ImageNet-200 datasets with the RMDS and GEN methods, respectively.
ExCeL: Combined Extreme and Collective Logit Information for Enhancing Out-of-Distribution Detection
[ { "figure_caption": "Figure 2 .2Figure 2. Class rank signatures for the top ten ranks for four base ID classes in CIFAR100. Specifically, for the base classes Camel and Elephant, there is a high likelihood of class Bear appearing among the top five ranks. Similarly, for the base classes Chair and Table, class Bed is observed to have a high likelihood of ranking within the top five positions. This is the central concept employed in ExCeL for OOD detection.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "objective of the class likelihood matrix (CLM) is to model the probability mass function (PMF) across all ID classes within each rank. To achieve this, we start by filtering correctly classified training samples for each specific class, and rank the remaining C -1 classes based on their corresponding logit values. Within each rank, we then calculate the likelihood of a particular class across the training samples. In Figure2, when an input is predicted as a chair, the probability of class bed appearing in one of the next four ranks (i.e., ranks 2-5) is approximately 0.08. Similarly, when an input is predicted as elephant, class bear appears in ranks 2-4, with a probability of approximately 0.08, in each position.Likewise, for each ID class c, an element p c ij in the class likelihood matrix (P c ∈ R C×C ) indicates the probability of class i occurring in the j th rank when an input is predicted as class c as shown in Equation 3.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. FPR95 comparison between MaxLogit and ExCeL.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Class likelihood matrices computed for ID and OOD samples predicted as a selected ID class in CIFAR100. We use Textures and Places365 as OOD samples. We see clear clusters of classes occurring mainly within the top ranks for ID data. In Textures, the predicted class rankings show a random and sparse behaviour, while in Places365, the occurrence of classes is close to a uniformly random distribution.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "• We show that the collective information spanning all classes and training samples, embedded in the output layer of trained DNNs through predicted class ranks, can be effectively used to improve OOD detection. Consequently, we emphasise the existence of a class rank signature for each ID class, frequently evident in ID data but not in OOD data.• We represent the class rank signature as a twodimensional class likelihood matrix and propose a novel post-hoc OOD detection score named ExCeL, that combines the extreme information provided by the max logit and the collective information provided by the class likelihood matrix.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of post-hoc OOD detectors for CIFAR100 (ID). The performance rank of each method is indicated within brackets. Top five values are marked in bold. Results on CIFAR100 indicate that the ExCeL and RMDS methods share the best performance, with a mean overall rank of 1.5. 
The top five ranks also include ReAct, GEN, and MaxLogit.", "figure_data": "Post-processorNear-OODAUROC (%) ↑ Far-OODOverallNear-OODFPR95 (%) ↓ Far-OODOverallMean Overall RankOpenMax [1]76.41 ± 0.25 (15) 79.48 ± 0.41 (11) 77.95 (14)56.58 ± 0.73 (9)54.50 ± 0.68 (6)55.54 (4)9.0MSP [14]80.27 ± 0.11 (7)77.76 ± 0.44 (14) 79.02 (12)54.80 ± 0.33 (3)58.70 ± 1.06 (12) 56.75 (10)11.0TempScale [12]80.90 ± 0.07 (4)78.74 ± 0.51 (13)79.82 (8)54.49 ± 0.48 (2)57.94 ± 1.14 (11)56.22 (8)8.0ODIN [26]79.90 ± 0.11 (10) 79.28 ± 0.21 (12) 79.59 (10) 57.91 ± 0.51 (10) 58.86 ± 0.79 (13) 58.39 (13)11.5MDS [25]58.69 ± 0.09 (20) 69.39 ± 1.39 (18) 64.04 (20) 83.53 ± 0.60 (19) 72.26 ± 1.56 (21) 77.90 (19)19.5MDSEns [25]46.31 ± 0.24 (22) 66.00 ± 0.69 (22) 56.16 (22) 95.88 ± 0.04 (22) 66.74 ± 1.04 (17) 81.31 (21)21.5RMDS [34]80.15 ± 0.11 (9)82.92 ± 0.42 (1)81.54 (1)55.46 ± 0.41 (5)52.81 ± 0.63 (3)54.14 (2)1.5Gram [36]51.66 ± 0.77 (21) 73.36 ± 1.08 (17) 62.51 (21) 92.28 ± 0.29 (21) 64.44 ± 2.37 (16) 78.36 (20)20.5EBO [27]80.91 ± 0.08 (3)79.77 ± 0.61 (8)80.34 (7)55.62 ± 0.61 (7)56.59 ± 1.38 (8)56.11 (7)7.0OpenGAN [22]65.98 ± 1.26 (18) 67.88 ± 7.16 (20) 66.93 (18) 76.52 ± 2.59 (16) 70.49 ± 7.38 (19) 73.51 (16)17.0GradNorm [20]70.13 ± 0.47 (17) 69.14 ± 1.05 (19) 69.64 (17) 85.58 ± 0.46 (20) 83.68 ± 1.92 (22) 84.63 (22)19.5ReAct [39]80.77 ± 0.05 (5)80.39 ± 0.49 (6)80.58 (4)56.39 ± 0.34 (8)54.20 ± 1.56 (5)55.30 (3)3.5KLM [16]76.56 ± 0.25 (14) 76.24 ± 0.52 (16) 76.40 (16) 77.92 ± 1.31 (17) 71.65 ± 2.01 (20) 74.79 (17)16.5VIM [45]74.98 ± 0.13 (16)81.70 ± 0.62 (4)78.34 (13) 62.63 ± 0.27 (14)50.74 ± 1.00 (1)56.69 (9)11.0KNN [40]80.18 ± 0.15 (8)82.40 ± 0.17 (2)81.29 (3)61.22 ± 0.14 (13)53.65 ± 0.28 (4)57.44 (12)7.5DICE [38]79.38 ± 0.23 (11)80.01 ± 0.18 (7)79.70 (9)57.95 ± 0.53 (11)56.25 ± 0.60 (7)57.10 (11)10.0RankFeat [37]61.88 ± 1.28 (19) 67.10 ± 1.42 (21) 64.49 (19) 80.59 ± 1.10 (18) 69.45 ± 1.01 (18) 75.02 (18)18.5ASH [8]78.20 ± 0.15 (13)80.58 ± 0.66 (5)79.39 (11) 65.71 ± 0.24 (15) 59.20 ± 2.46 (14) 62.46 (15)13.0SHE [50]78.95 ± 0.18 (12) 76.92 ± 1.16 (15) 77.94 (15) 59.07 ± 0.25 (12) 64.12 ± 2.70 (15) 61.60 (14)14.5GEN [28]81.31 ± 0.08 (1)79.68 ± 0.75 (9)80.50 (5)54.42 ± 0.33 (1)56.71 ± 1.59 (9)55.57 (5)5.0MaxLogit [16]81.05 ± 0.07 (2)79.67 ± 0.57 (10)80.36 (6)55.47 ± 0.66 (6)56.73 ± 1.33 (10)56.10 (6)6.0ExCeL (Ours)80.70 ± 0.06 (6)82.04 ± 0.90 (3)81.37 (2)55.21 ± 0.56 (4)52.24 ± 1.90 (2)53.73 (1)1.5", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of post-hoc OOD detectors for ImageNet-200 (ID). The performance rank of each method is indicated within brackets. Top five values are marked in bold. Results on ImageNet-200 indicate that the ExCeL and GEN methods share the best performance, with a mean overall rank of 3.0. 
The top five ranks also include KNN, ASH, and TempScale.", "figure_data": "Post-processorNear-OODAUROC (%) ↑ Far-OODOverallNear-OODFPR95 (%) ↓ Far-OODOverallMean Overall RankOpenMax [1]80.27 ± 0.10 (13) 90.20 ± 0.17 (12) 85.24 (13) 63.48 ± 0.25 (12)33.12 ± 0.66 (8)48.30 (12)12.5MSP [14]83.34 ± 0.06 (3)90.13 ± 0.09 (13)86.74 (8)54.82 ± 0.35 (2)35.43 ± 0.38 (13)45.13 (7)7.5TempScale [12]83.69 ± 0.04 (1)90.82 ± 0.09 (10)87.26 (4)54.82 ± 0.23 (2)34.00 ± 0.37 (9)44.41 (6)5.0ODIN [26]80.27 ± 0.08 (13)91.71 ± 0.19 (5)85.99 (11) 66.76 ± 0.26 (14) 34.23 ± 1.05 (11) 50.50 (14)12.5MDS [25]61.93 ± 0.51 (19) 74.72 ± 0.26 (18) 68.33 (19) 79.11 ± 0.31 (17) 61.66 ± 0.27 (17) 70.39 (17)18.0MDSEns [25]54.32 ± 0.24 (22) 69.27 ± 0.57 (21) 61.80 (21) 91.75 ± 0.10 (21) 80.96 ± 0.38 (20) 86.36 (21)21.0RMDS [34]82.57 ± 0.25 (5)88.06 ± 0.34 (16) 85.32 (12)54.02 ± 0.58 (1)32.45 ± 0.79 (7)43.24 (3)7.5Gram [36]67.67 ± 1.07 (18) 71.19 ± 0.24 (20) 69.43 (18) 86.40 ± 1.21 (20) 84.36 ± 0.78 (21) 85.38 (20)19.0EBO [27]82.50 ± 0.05 (6)90.86 ± 0.21 (9)86.68 (9)60.24 ± 0.57 (9)34.86 ± 1.30 (12) 47.55 (11)10.0OpenGAN [22]59.79 ± 3.39 (20) 73.15 ± 4.07 (19) 66.47 (20) 84.15 ± 3.85 (19) 64.16 ± 9.33 (18) 74.16 (18)19.0GradNorm [20]72.75 ± 0.48 (17) 84.26 ± 0.87 (17) 78.51 (17) 82.67 ± 0.30 (18) 66.45 ± 0.22 (19) 74.56 (19)18.0ReAct [39]81.87 ± 0.98 (9)92.31 ± 0.56 (3)87.09 (6)62.49 ± 2.19 (11)28.50 ± 0.95 (5)45.50 (8)7.0KLM [16]80.76 ± 0.08 (12) 88.53 ± 0.11 (15) 84.65 (16) 70.26 ± 0.64 (16) 40.90 ± 1.08 (15) 55.58 (16)16.0VIM [45]78.68 ± 0.24 (16)91.26 ± 0.19 (7)84.97 (15)59.19 ± 0.71 (6)27.20 ± 0.30 (1)43.20 (2)8.5KNN [40]81.57 ± 0.17 (11)93.16 ± 0.22 (2)87.37 (3)60.18 ± 0.52 (8)27.27 ± 0.75 (2)43.73 (5)4.0DICE [38]81.78 ± 0.14 (10) 90.80 ± 0.31 (11) 86.29 (10) 61.88 ± 0.67 (10) 36.51 ± 1.18 (14) 49.20 (13)11.5RankFeat [37]56.92 ± 1.59 (21) 38.22 ± 3.85 (22) 47.57 (22) 92.06 ± 0.23 (22) 97.72 ± 0.75 (22) 94.89 (22)22.0ASH [8]82.38 ± 0.19 (8)93.90 ± 0.27 (1)88.14 (1)64.89 ± 0.90 (13)27.29 ± 1.12 (3)46.09 (9)5.0SHE [50]80.18 ± 0.25 (15) 89.81 ± 0.61 (14) 85.00 (14) 66.80 ± 0.74 (15) 42.17 ± 1.24 (16) 54.49 (15)14.5GEN [28]83.68 ± 0.06 (2)91.36 ± 0.10 (6)87.52 (2)55.20 ± 0.20 (4)32.10 ± 0.59 (6)43.65 (4)3.0MaxLogit [16]82.90 ± 0.04 (4)91.11 ± 0.19 (8)87.01 (7)59.76 ± 0.59 (7)34.03 ± 1.21 (10) 46.90 (10)8.5ExCeL (Ours)82.40 ± 0.04 (7)91.97 ± 0.27 (4)87.19 (5)57.90 ± 0.40 (5)28.45 ± 0.80 (4)43.18 (1)3.0benchmarks in each group (cf. Section 5.1), along withthe overall OOD performance. Per-dataset statistics canbe found in the appendix. According to the results, Ex-", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
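The FPR95 and AUROC values reported in the tables above could be computed from raw ID/OOD detection scores as in the short sketch below. The helper names and the higher-score-means-ID convention are assumptions, and ties in the rank-based AUROC estimate are ignored for brevity.

```python
import numpy as np

def fpr_at_95_tpr(id_scores, ood_scores):
    """False positive rate of OOD samples at the threshold where 95% of ID samples are accepted."""
    threshold = np.percentile(id_scores, 5)      # 95% of ID scores lie above this value
    return float(np.mean(ood_scores >= threshold))

def auroc(id_scores, ood_scores):
    """Area under the ROC curve with ID as the positive class (rank-sum / Mann-Whitney form)."""
    scores = np.concatenate([id_scores, ood_scores])
    ranks = scores.argsort().argsort() + 1       # 1-based ranks of every score
    n_id, n_ood = len(id_scores), len(ood_scores)
    return float((ranks[:n_id].sum() - n_id * (n_id + 1) / 2) / (n_id * n_ood))
```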
Naveen Karunanayake; Suranga Seneviratne; Sanjay Chawla
[ { "authors": "Abhijit Bendale; Terrance E Boult", "journal": "", "ref_id": "b0", "title": "Towards open set deep networks", "year": "2016" }, { "authors": "Julian Bitterwolf; Maximilian Müller; Matthias Hein", "journal": "", "ref_id": "b1", "title": "In or out? fixing imagenet out-of-distribution detection evaluation", "year": "2023" }, { "authors": "Guangyao Chen; Peixi Peng; Xiangqian Wang; Yonghong Tian", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b2", "title": "Adversarial reciprocal points learning for open set recognition", "year": "2021" }, { "authors": "Jiefeng Chen; Yixuan Li; Xi Wu; Yingyu Liang; Somesh Jha", "journal": "Springer", "ref_id": "b3", "title": "Atom: Robustifying out-of-distribution detection using outlier mining", "year": "2021" }, { "authors": "Mircea Cimpoi; Subhransu Maji; Iasonas Kokkinos; Sammy Mohamed; Andrea Vedaldi", "journal": "", "ref_id": "b4", "title": "Describing textures in the wild", "year": "2014" }, { "authors": "Li Deng", "journal": "IEEE signal processing magazine", "ref_id": "b5", "title": "The mnist database of handwritten digit images for machine learning research [best of the web", "year": "2012" }, { "authors": "Terrance Devries; Graham W Taylor", "journal": "", "ref_id": "b6", "title": "Learning confidence for out-of-distribution detection in neural networks", "year": "2018" }, { "authors": "Andrija Djurisic; Nebojsa Bozanic; Arjun Ashok; Rosanne Liu", "journal": "", "ref_id": "b7", "title": "Extremely simple activation shaping for outof-distribution detection", "year": "2022" }, { "authors": "Xin Dong; Junfeng Guo; Ang Li; Wei-Te Ting; Cong Liu; Kung", "journal": "", "ref_id": "b8", "title": "Neural mean discrepancy for efficient out-ofdistribution detection", "year": "2022" }, { "authors": "Xuefeng Du; Zhaoning Wang; Mu Cai; Yixuan Li", "journal": "", "ref_id": "b9", "title": "Vos: Learning what you don't know by virtual outlier synthesis", "year": "2022" }, { "authors": "Angelos Filos; Panagiotis Tigkas; Rowan Mcallister; Nicholas Rhinehart; Sergey Levine; Yarin Gal", "journal": "PMLR", "ref_id": "b10", "title": "Can autonomous vehicles identify, recover from, and adapt to distribution shifts", "year": "2020" }, { "authors": "Chuan Guo; Geoff Pleiss; Yu Sun; Kilian Q Weinberger", "journal": "PMLR", "ref_id": "b11", "title": "On calibration of modern neural networks", "year": "2017" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b12", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Dan Hendrycks; Kevin Gimpel", "journal": "", "ref_id": "b13", "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "year": "2016" }, { "authors": "Dan Hendrycks; Mantas Mazeika; Thomas Dietterich", "journal": "", "ref_id": "b14", "title": "Deep anomaly detection with outlier exposure", "year": "2018" }, { "authors": "Dan Hendrycks; Steven Basart; Mantas Mazeika; Andy Zou; Joe Kwon; Mohammadreza Mostajabi; Jacob Steinhardt; Dawn Song", "journal": "", "ref_id": "b15", "title": "Scaling out-of-distribution detection for realworld settings", "year": "2019" }, { "authors": "Dan Hendrycks; Mantas Mazeika; Saurav Kadavath; Dawn Song", "journal": "Advances in neural information processing systems", "ref_id": "b16", "title": "Using self-supervised learning can improve model robustness and uncertainty", "year": "2019" }, { "authors": "Yen-Chang Hsu; Yilin Shen; Hongxia Jin; Zsolt Kira", 
"journal": "", "ref_id": "b17", "title": "Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data", "year": "2020" }, { "authors": "Rui Huang; Yixuan Li", "journal": "", "ref_id": "b18", "title": "Mos: Towards scaling out-ofdistribution detection for large semantic space", "year": "2021" }, { "authors": "Rui Huang; Andrew Geng; Yixuan Li", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b19", "title": "On the importance of gradients for detecting distributional shifts in the wild", "year": "2021" }, { "authors": "Umar Khalid; Ashkan Esmaeili; Nazmul Karim; Nazanin Rahnavard", "journal": "IEEE", "ref_id": "b20", "title": "Rodd: A self-supervised approach for robust out-of-distribution detection", "year": "2022" }, { "authors": "Shu Kong; Deva Ramanan", "journal": "", "ref_id": "b21", "title": "Opengan: Open-set recognition via open data generation", "year": "2021" }, { "authors": "Alex Krizhevsky; Geoffrey Hinton", "journal": "", "ref_id": "b22", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Ya Le; Xuan Yang", "journal": "CS 231N", "ref_id": "b23", "title": "Tiny imagenet visual recognition challenge", "year": "2015" }, { "authors": "Kimin Lee; Kibok Lee; Honglak Lee; Jinwoo Shin", "journal": "Advances in neural information processing systems", "ref_id": "b24", "title": "A simple unified framework for detecting out-of-distribution samples and adversarial attacks", "year": "2018" }, { "authors": "Shiyu Liang; Yixuan Li; Rayadurgam Srikant", "journal": "", "ref_id": "b25", "title": "Enhancing the reliability of out-of-distribution image detection in neural networks", "year": "2017" }, { "authors": "Weitang Liu; Xiaoyun Wang; John Owens; Yixuan Li", "journal": "Advances in neural information processing systems", "ref_id": "b26", "title": "Energy-based out-of-distribution detection", "year": "2020" }, { "authors": "Xixi Liu; Yaroslava Lochman; Christopher Zach", "journal": "", "ref_id": "b27", "title": "Gen: Pushing the limits of softmax-based out-of-distribution detection", "year": "2023" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b28", "title": "Sgdr: Stochastic gradient descent with warm restarts", "year": "2016" }, { "authors": "Yifei Ming; Ying Fan; Yixuan Li", "journal": "PMLR", "ref_id": "b29", "title": "POEM: Out-ofdistribution detection with posterior sampling", "year": "2022" }, { "authors": "Yifei Ming; Yiyou Sun; Ousmane Dia; Yixuan Li", "journal": "", "ref_id": "b30", "title": "How to exploit hyperspherical embeddings for out-of-distribution detection?", "year": "2022" }, { "authors": "Yuval Netzer; Tao Wang; Adam Coates; Alessandro Bissacco; Bo Wu; Andrew Y Ng", "journal": "", "ref_id": "b31", "title": "Reading digits in natural images with unsupervised feature learning", "year": "2011" }, { "authors": "Julia Nitsch; Masha Itkina; Ransalu Senanayake; Juan Nieto; Max Schmidt; Roland Siegwart; J Mykel; Cesar Kochenderfer; Cadena", "journal": "IEEE", "ref_id": "b32", "title": "Out-of-distribution detection for automotive perception", "year": "2021" }, { "authors": "Jie Ren; Stanislav Fort; Jeremiah Liu; Abhijit Guha Roy; Shreyas Padhy; Balaji Lakshminarayanan", "journal": "", "ref_id": "b33", "title": "A simple fix to mahalanobis distance for improving near-ood detection", "year": "2021" }, { "authors": "Abhijit Guha; Roy ; Jie Ren; Shekoofeh Azizi; Aaron Loh; Vivek Natarajan; Basil Mustafa; Nick Pawlowski; Jan Freyberg; Yuan Liu; 
Zach Beaver", "journal": "Medical Image Analysis", "ref_id": "b34", "title": "Does your dermatology classifier know what it doesn't know? detecting the long-tail of unseen conditions", "year": "2022" }, { "authors": "Shama Chandramouli; Sageev Sastry; Oore", "journal": "PMLR", "ref_id": "b35", "title": "Detecting out-of-distribution examples with gram matrices", "year": "2020" }, { "authors": "Yue Song; Nicu Sebe; Wei Wang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b36", "title": "Rankfeat: Rank-1 feature removal for out-of-distribution detection", "year": "2022" }, { "authors": "Yiyou Sun; Yixuan Li", "journal": "Springer", "ref_id": "b37", "title": "Dice: Leveraging sparsification for out-of-distribution detection", "year": "2022" }, { "authors": "Yiyou Sun; Chuan Guo; Yixuan Li", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b38", "title": "React: Out-ofdistribution detection with rectified activations", "year": "2021" }, { "authors": "Yiyou Sun; Yifei Ming; Xiaojin Zhu; Yixuan Li", "journal": "PMLR", "ref_id": "b39", "title": "Outof-distribution detection with deep nearest neighbors", "year": "2022" }, { "authors": "Jihoon Tack; Sangwoo Mo; Jongheon Jeong; Jinwoo Shin", "journal": "Advances in neural information processing systems", "ref_id": "b40", "title": "Csi: Novelty detection via contrastive learning on distributionally shifted instances", "year": "2020" }, { "authors": "Leitian Tao; Xuefeng Du; Xiaojin Zhu; Yixuan Li", "journal": "", "ref_id": "b41", "title": "Non-parametric outlier synthesis", "year": "" }, { "authors": "Grant Van Horn; Oisin Mac Aodha; Yang Song; Yin Cui; Chen Sun; Alex Shepard; Hartwig Adam; Pietro Perona; Serge Belongie", "journal": "", "ref_id": "b42", "title": "The inaturalist species classification and detection dataset", "year": "2018" }, { "authors": "Sagar Vaze; Kai Han; Andrea Vedaldi; Andrew Zisserman", "journal": "", "ref_id": "b43", "title": "Open-set recognition: A good closed-set classifier is all you need?", "year": "2021" }, { "authors": "Haoqi Wang; Zhizhong Li; Litong Feng; Wayne Zhang", "journal": "", "ref_id": "b44", "title": "Vim: Out-of-distribution with virtual-logit matching", "year": "2022" }, { "authors": "Hongxin Wei; Renchunzi Xie; Hao Cheng; Lei Feng; Bo An; Yixuan Li", "journal": "PMLR", "ref_id": "b45", "title": "Mitigating neural network overconfidence with logit normalization", "year": "2022" }, { "authors": "Jingkang Yang; Haoqi Wang; Litong Feng; Xiaopeng Yan; Huabin Zheng; Wayne Zhang; Ziwei Liu", "journal": "", "ref_id": "b46", "title": "Semantically coherent out-of-distribution detection", "year": "2021" }, { "authors": "Jingkang Yang; Kaiyang Zhou; Yixuan Li; Ziwei Liu", "journal": "", "ref_id": "b47", "title": "Generalized out-of-distribution detection: A survey", "year": "2021" }, { "authors": "Qing Yu; Kiyoharu Aizawa", "journal": "", "ref_id": "b48", "title": "Unsupervised out-ofdistribution detection by maximum classifier discrepancy", "year": "2019" }, { "authors": "Jinsong Zhang; Qiang Fu; Xu Chen; Lun Du; Zelin Li; Gang Wang; Shi Han; Dongmei Zhang", "journal": "", "ref_id": "b49", "title": "Out-of-distribution detection based on in-distribution data patterns memorization with modern hopfield energy", "year": "2022" }, { "authors": "Jingyang Zhang; Nathan Inkawhich; Randolph Linderman; Yiran Chen; Hai Li", "journal": "", "ref_id": "b50", "title": "Mixture outlier exposure: Towards out-of-distribution detection in fine-grained environments", "year": 
"2023" }, { "authors": "Jingyang Zhang; Jingkang Yang; Pengyun Wang; Haoqi Wang; Yueqian Lin; Haoran Zhang; Yiyou Sun; Xuefeng Du; Kaiyang Zhou; Wayne Zhang", "journal": "", "ref_id": "b51", "title": "Openood v1. 5: Enhanced benchmark for out-of-distribution detection", "year": "2023" }, { "authors": "Bolei Zhou; Agata Lapedriza; Aditya Khosla; Aude Oliva; Antonio Torralba", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b52", "title": "Places: A 10 million image database for scene recognition", "year": "2017" }, { "authors": "", "journal": "Far-OOD OpenMax", "ref_id": "b53", "title": "The performance rank of each method is indicated within brackets. Top five values are marked in bold. Post-processor CIFAR-10 ImageNet-200 Near-OOD MNIST SVHN Textures Places365", "year": "" }, { "authors": "", "journal": "", "ref_id": "b54", "title": "FPR95 comparison of post-hoc OOD detectors for ImageNet-200", "year": "" }, { "authors": "", "journal": "OOD OpenMax", "ref_id": "b55", "title": "Post-processor SSB-hard NINCO Near-OOD iNaturalist Textures OpenImage-O Far", "year": "" }, { "authors": "", "journal": "", "ref_id": "b56", "title": "AUROC comparison of post-hoc OOD detectors for ImageNet-200", "year": "" }, { "authors": "", "journal": "OOD OpenMax", "ref_id": "b57", "title": "Post-processor SSB-hard NINCO Near-OOD iNaturalist Textures OpenImage-O Far", "year": "" } ]
[ { "formula_coordinates": [ 3, 371.47, 529.02, 173.64, 24.15 ], "formula_id": "formula_0", "formula_text": "g(x; f ) = 1 if x ∼ D in 0 if x ∼ D out(1)" }, { "formula_coordinates": [ 3, 363.59, 611.87, 181.52, 23.3 ], "formula_id": "formula_1", "formula_text": "g λ (x) = in if S(x; f ) ≥ λ out if S(x; f ) < λ (2)" }, { "formula_coordinates": [ 4, 358.44, 431.91, 186.67, 54.61 ], "formula_id": "formula_2", "formula_text": "p c C1 p c C2 . . . p c CC      , p c ij = n c ij N c(3)" }, { "formula_coordinates": [ 4, 380.53, 617.77, 164.58, 23.3 ], "formula_id": "formula_3", "formula_text": "p c i1 = 1 if i = c 0 otherwise .(4)" }, { "formula_coordinates": [ 5, 90.77, 274.53, 195.59, 56.51 ], "formula_id": "formula_4", "formula_text": "pc ij =          a C-1 if p c ij ≥ b C-1 1 C-1 if 1 C-1 ≤ p c ij < b C-1 -1 C-1 if 0 < p c ij < 1 C-1 -a C-1 if p c ij = 0(5)" }, { "formula_coordinates": [ 5, 131.71, 520.71, 154.65, 30.32 ], "formula_id": "formula_5", "formula_text": "RS(x) = C i=1 pc1 cii(6)" }, { "formula_coordinates": [ 5, 381.64, 116.04, 163.47, 46.17 ], "formula_id": "formula_6", "formula_text": "ρ x =     1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0     (7)" }, { "formula_coordinates": [ 5, 382.03, 205.43, 163.08, 12.17 ], "formula_id": "formula_7", "formula_text": "RS(x) = tr[( Pc ) T ρ x ](8)" }, { "formula_coordinates": [ 5, 333.11, 491.38, 212.01, 8.96 ], "formula_id": "formula_8", "formula_text": "ExCeL(x) = α • RS(x) + (1 -α) • MaxLogit (9)" }, { "formula_coordinates": [ 5, 310.7, 660.37, 39.15, 39.25 ], "formula_id": "formula_9", "formula_text": "p c ij =     1" }, { "formula_coordinates": [ 5, 346.07, 677.38, 197.2, 35.78 ], "formula_id": "formula_10", "formula_text": "1 C-1 otherwise . (10" }, { "formula_coordinates": [ 5, 540.96, 704.51, 4.15, 8.64 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 6, 74.09, 110.2, 212.27, 78.56 ], "formula_id": "formula_12", "formula_text": "RS(x) = C i=1 pc cii = a C -1 + 1 C -1 + ... + 1 C -1 = a C -1 + 1 = k (constant) .(11)" }, { "formula_coordinates": [ 6, 64.68, 225.94, 221.69, 23.9 ], "formula_id": "formula_13", "formula_text": "ExCeL(x) = α • RS(x) + (1 -α) • MaxLogit = α • k + (1 -α) • MaxLogit .(12)" }, { "formula_coordinates": [ 6, 343.34, 458.56, 201.77, 23.79 ], "formula_id": "formula_14", "formula_text": "Mean Overall Rank = R AUROC overall + R FPR95 overall 2(13)" } ]
2023-11-23
[ { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b43", "b44", "b13", "b31", "b4", "b12", "b20", "b44", "b44", "b44", "b44", "b44", "b44" ], "table_ref": [], "text": "Image super-resolution (SR) aims to reconstruct a highresolution image from a given low-resolution (LR) counterpart [44]. Recently, diffusion models, known for their effectiveness in modeling complex distributions, have gained † Work done as an intern at Shanghai AI Laboratory. A comparison between the most recent SOTA method ResShift [45] for the acceleration of diffusion-based SR and the proposed method. We achieve on-par or even superior perceptual quality using only one inference step. (\"-N\" behind the method name represents the number of inference steps, and the value in the bracket is the quantitative result measured by MUSIQ↑ [14].) widespread adoption and demonstrated remarkable performance in SR tasks, particularly in terms of perceptual quality.\nSpecifically, current strategies for employing diffusion models can be broadly categorized into two streams: concatenating the LR image to the input of the denoiser in the diffusion models [31,32], and adjusting the inverse process of a pre-trained diffusion model [4,5,13]. Despite achieving promising results, both strategies encounter computational efficiency issues. Notably, the initial state of these conditional diffusion models is a pure Gaussian noise without using the prior knowledge from the LR image. Consequently, a substantial number of inference steps are required to achieve satisfactory performance, significantly hindering the practical applications of diffusion-based SR techniques.\nEfforts have been made to enhance the sampling efficiency of diffusion models, leading to various techniques proposed [21,27,36]. However, in the realm of low-level vision where maintaining high fidelity is critical, these techniques often fall short as they achieve acceleration at the cost of performance. More recently, innovative techniques have emerged to reformulate the diffusion process in image restoration tasks, focusing on improving the signal-to-noise ratio of the initial diffusion state and thereby shorten the Markov chain. For instance, [42] initiates the denoising diffusion process with the input noisy image, while in the SR task, [45] models the initial step as a combination of the LR image and random noise. Nonetheless, even in these most recent works [42,45], limitations persist. For instance, while [42] shows promising results within just three inference steps, it requires a clear formulation of the image degradation process. Besides, [45] still necessitates 15 inference steps and exhibits degraded performance with noticeable artifacts if the number of inference steps is further reduced.\nTo address these challenges, we introduce a novel approach that can generate high-resolution (HR) images in only one sampling step, without compromising the diversity and perceptual quality of the diffusion model, as shown in Fig. 1 and Fig. 2. Specifically, we propose to directly learn a well-paired bi-directional deterministic mapping between the input random noise and the generated HR image from a teacher diffusion model. To accelerate the generation of wellmatched training data, we first derive a deterministic sampling strategy from the most recent state-of-the-art work [45], designed for accelerating diffusion-based SR, from its original stochastic formulation. 
Additionally, we propose a novel consistency-preserving loss to leverage ground-truth images, further enhancing the perceptual quality of the generated HR images by minimizing the error between ground-truth (GT) images and those generated from the predicted initial state. Experimental results demonstrate that our method achieves comparable or even better performance compared to SOTA methods and the teacher diffusion model [45], while greatly reducing the number of inference steps from 15 to 1, resulting in up to a ×10 speedup in inference.\nOur main contributions are summarized as follows: • We accelerate the diffusion-based SR model to a single inference step with comparable or even superior performance for the first time. Instead of shortening the Markov chain of the generation process, we propose a simple yet effective approach that directly distills a deterministic generation function into a student network. • To further fasten training, we derive a deterministic sampling strategy from the recent SOTA method [45] on accelerating the SR task, enabling efficient generation of well-matched training pairs. • We propose a novel consistency-preserving loss that can utilize the ground-truth images during training, preventing the student model from only focusing on fitting the deterministic mapping of the teacher diffusion model, therefore leading to better performance. • Extensive experiments on both synthetic and real-world datasets show that our proposed method can achieve comparable or even superior performance compared to SOTA methods and the teacher diffusion model, while greatly reducing the number of inference steps from 15 to 1." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Image Super-Resolution", "publication_ref": [ "b7", "b43", "b0", "b1", "b14", "b42", "b15", "b25", "b32", "b5", "b27", "b21", "b40", "b11", "b15", "b25", "b32" ], "table_ref": [], "text": "With the rise of deep learning, deep learning-based techniques gradually become the mainstream of the SR task [8,44]. One prevalent approach of early works is to train a regression model using paired training data [1,2,15,43].\nWhile the expectation of the posterior distribution can be well modeled, they inevitably suffer from the over-smooth problem [16,26,33]. To improve the perceptual quality of the generated HR images, generative-based SR models attract increasing attention, e.g., autoregressive-based models [6,25,28,29]. While significant improvements are achieved, the computational cost of autoregressive models is usually large. Subsequently, normalizing flows [22,41] are demonstrated to have good perceptual quality under an efficient inference process, while its network design is restricted by the requirements of the invertibility and ease of calculation. Besides, GAN-based methods also achieve great success in terms of perceptual quality [9, 12,16,26,33 " }, { "figure_ref": [ "fig_4" ], "heading": "Acceleration of Diffusion Models", "publication_ref": [ "b20", "b36", "b31", "b31", "b44", "b44", "b2", "b33", "b18", "b19", "b44" ], "table_ref": [], "text": "Recently, the acceleration of diffusion models has attracted more and more attention. Several algorithms are proposed for general diffusion models [21,27,36,37] (a) The inference of SR3 [32] starts from a pure noise, which requires a large number of inference steps (T=100 after using DDIM [36]). Figure 3. 
A comparison between the vanilla diffusion-based SR method [32], a most recent method for acceleration of the diffusionbased SR [45], and the proposed one-step SR. Different from recent works that shorten the Markov chain to speed up the inference process [42,45], the proposed method directly learns the deterministic generation process and the details can be found in Fig. 4.\nthe ordinary differential equation (ODE) of the inference process makes this scheme less attractive on a large-scale dataset [23]. To alleviate the training overhead, progressive distillation strategies are usually adopted [24,34]. Meanwhile, instead of simply simulating the behavior of a teacher diffusion model through distillation, better inference paths are explored in an iterative manner [19,20]. While progressive distillation effectively decreases the training overhead, the error accumulates at the same time, leading to an obvious performance loss in SR. Most recently, targeting the image restoration task, some works reformulate the diffusion process by either using the knowledge of degradation process [42] or a pre-defined distribution of the initial state [45], yielding a shortened Markov chain of the generation process and better performance than directly applying DDIM [36] in low-level tasks. However, they either require a clear formulation of the degradation or still require a relatively large number of inference steps." }, { "figure_ref": [ "fig_6" ], "heading": "Motivation", "publication_ref": [ "b19", "b31", "b19", "b33", "b44", "b44" ], "table_ref": [ "tab_7", "tab_8" ], "text": "Preliminary. Given an LR image y and its corresponding HR image x 0 , existing diffusion-based SR methods aim to model the conditional distribution q(x 0 |y) through a Markov chain where a forward process is usually defined as q(x t |x t-1 ) = N (x t ; √ 1 -β t x t-1 , β t I) with an initial state x T ∼ N (0, I). The role of the diffusion model can be regarded as transferring the input domain (standard Gaussian noise) to the HR image domain conditioned on the LR image. Since the matching relationship between x T and x 0 is unknown, usually a diffusion model [10,20,32] through an iterative manner is required to learn/infer from an unknown mapping between x T and x 0 . Our method is grounded in the idea that having an SR model that effectively captures the conditional distribution q(x 0 |y) and establishes a deterministic mapping between x T and x0 given an LR image y, we can streamline the inference process to a single step by employing another network, denoted as f θ , to learn the correspondence between x0 and x T , as illustrated in Fig. 3. Distillation for diffusion SR models: less is more. While the concept of distilling the mapping between x T and x0 to a student network has been previously explored [20], its application to SR introduces several challenges: • The training overhead becomes substantial for one-step distillation due to a large number of inference steps of previous models, e.g., LDM [31] still need 100 steps after using DDIM [36] for inference to generate high-quality pairs (x 0 , x T , y) as the training data of the student model. • The performance degradation is attributed to the introduction of a more intricate distillation strategy involving iteration. For example, to reduce the training overhead, an iterative distillation strategy [34] is adopted which gradually decreases the number of inference steps during training. 
However, despite achieving satisfactory results in generation tasks, the cumulative error significantly impacts the fidelity of the SR results, as SR tasks are relatively more sensitive to image quality.
To address the aforementioned two challenges, we propose to distill the diffusion SR process into a single step in a simple but effective way, based on the following observations. More details of these observations can be found in Sec. 5.3. • We demonstrate that the most recent SOTA method for accelerating diffusion-based SR [45], which achieves performance in 15 steps comparable to LDM [31] with 100 DDIM steps, admits a deterministic mapping between $x_T$ and $x_0$. Besides, the greatly reduced number of inference steps and the existence of this deterministic mapping make the training of a single-step distillation possible, as shown in Fig. 6 and Table 4. • Learning the mapping between $x_T$ and $\hat{x}_0$ is found to be easier than denoising $x_t$ under different noise levels, as shown in Table 5. Therefore, it is feasible to directly learn the mapping between $x_T$ and $\hat{x}_0$ so that the error accumulated by iterative distillation can be avoided. • Due to the accumulated error, a more sophisticated (iterative) distillation strategy does not contribute to further improvement in our setting, as shown in Table 6.
The remainder of this section is organized as follows: we first demonstrate in Sec. 4.1 that ResShift [45], whose inference process is originally stochastic, can be converted into a deterministic model without retraining, and then present the proposed consistency-preserving distillation in Sec. 4.2." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Deterministic Sampling", "publication_ref": [ "b44", "b44", "b0", "b44", "b44", "b44" ], "table_ref": [], "text": "A core difference between ResShift [45] and LDM [31] is the formulation of the initial state $x_T$. Specifically, in ResShift [45], the information from the LR image $y$ is integrated into the diffusion step $x_t$ as follows:
$q(x_t|x_0, y) = \mathcal{N}\big(x_t;\, x_0 + \eta_t (y - x_0),\, \kappa^2 \eta_t \mathbf{I}\big)$, (1)
where $\eta_t$ is a series of hyper-parameters that monotonically increases with the timestep $t$ and obeys $\eta_T \to 1$ and $\eta_0 \to 0$. As such, the inverse of the diffusion process starts from an initial state with rich information from the LR image $y$: $x_T = y + \kappa\sqrt{\eta_T}\,\epsilon$, where $\epsilon \sim \mathcal{N}(0, \mathbf{I})$. To generate an HR image $x$ from a given image $y$, the original inverse process of [45] is as follows:
$p_{\theta}(x_{t-1}|x_t, y) = \mathcal{N}\big(x_{t-1};\, \mu_{\theta}(x_t, y, t),\, \kappa^2 \tfrac{\eta_{t-1}}{\eta_t} \alpha_t \mathbf{I}\big)$, (2)
where $\mu_{\theta}(x_t, y, t)$ is reparameterized by a deep network. As shown in Eq. 2, given an initial state $x_T = y + \kappa\sqrt{\eta_T}\,\epsilon$, the generated image is stochastic due to the random noise injected when sampling from $p_{\theta}(x_{t-1}|x_t, y)$.
Inspired by DDIM sampling [36], we find that a non-Markovian reverse process $q(x_{t-1}|x_t, x_0, y)$ exists which keeps the marginal distribution $q(x_t|x_0, y)$ unchanged, so that it can be directly adopted to a pre-trained model. (For ease of presentation, the LR image $y$ is pre-upsampled to the same spatial resolution as the HR image $x$. Besides, similar to [31,45], the diffusion is conducted in the latent space.)
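Before presenting the reformulated reverse process, the forward marginal in Eq. (1) and the stochastic initial state can be made concrete with a minimal sketch. This is not the official ResShift code: the `eta_t`/`kappa` values and tensor shapes are placeholder assumptions, and the latent-space encoding used by [31,45] is omitted.

```python
import torch

def sample_xt(x0: torch.Tensor, y: torch.Tensor, eta_t: float, kappa: float) -> torch.Tensor:
    """Draw x_t ~ q(x_t | x_0, y) = N(x_0 + eta_t * (y - x_0), kappa^2 * eta_t * I), as in Eq. (1)."""
    mean = x0 + eta_t * (y - x0)
    std = kappa * eta_t ** 0.5
    return mean + std * torch.randn_like(x0)

def initial_state(y: torch.Tensor, eta_T: float, kappa: float) -> torch.Tensor:
    """x_T = y + kappa * sqrt(eta_T) * eps with eps ~ N(0, I); since eta_T -> 1, x_T is a noisy LR image."""
    return y + kappa * eta_T ** 0.5 * torch.randn_like(y)
```

Because fresh noise is also drawn at every reverse step of Eq. (2), the original sampler maps the same pair $(y, \epsilon)$ to different outputs; this is exactly the stochasticity that the deterministic reformulation below removes.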
The reformulated deterministic reverse process is as follows:
$q(x_{t-1}|x_t, x_0, y) = \delta\big(k_t x_0 + m_t x_t + j_t y\big)$, (3)
where $\delta$ is the unit impulse, and $k_t$, $m_t$, $j_t$ are given by
$m_t = \sqrt{\tfrac{\eta_{t-1}}{\eta_t}}, \qquad j_t = \eta_{t-1} - \sqrt{\eta_{t-1}\eta_t}, \qquad k_t = 1 - \eta_{t-1} + \sqrt{\eta_{t-1}\eta_t} - \sqrt{\tfrac{\eta_{t-1}}{\eta_t}}$. (4)
The details of the derivation can be found in the supplementary material. As a consequence, for inference, the reverse process conditioned on $y$ is reformulated as
$x_{t-1} = k_t \hat{x}_0 + m_t x_t + j_t y = k_t f_{\theta}(x_t, y, t) + m_t x_t + j_t y$, (5)
where $f_{\theta}(x_t, y, t)$ is the HR image predicted by a pre-trained ResShift [45] model. By sampling from the reformulated process in Eq. 5, a deterministic mapping between $x_T$ (or $\epsilon$) and $\hat{x}_0$ can be obtained, which we denote as $F_{\theta}(x_T, y)$." }, { "figure_ref": [ "fig_4" ], "heading": "Consistency Preserving Distillation", "publication_ref": [ "b44" ], "table_ref": [], "text": "Vanilla distillation. We propose utilizing a student network $f_{\hat{\theta}}$ to learn the deterministic mapping $F_{\theta}$ between the randomly initialized state $x_T$ and its deterministic output $F_{\theta}(x_T, y)$ from a teacher diffusion model. The vanilla distillation loss is defined as
$\mathcal{L}_{distill} = \mathcal{L}_{MSE}\big(f_{\hat{\theta}}(x_T, y, T),\, F_{\theta}(x_T, y)\big)$, (6)
where $f_{\hat{\theta}}(x_T, y, T)$ is the student network that directly predicts the HR image in only one step, and $F_{\theta}$ represents the proposed deterministic inference process of ResShift [45] in Sec. 4.1, carried out in an iterative manner using a pre-trained network parameterized by $\theta$. We observe that the student model trained solely with the distillation loss in Eq. 6 already achieves promising results in just one inference step, as indicated by \"(distill only)\" in the result tables.
Regularization by the ground-truth image. A limitation of the aforementioned vanilla distillation strategy is that the GT image is not utilized during training, thereby restricting the upper performance bound of the student model. To further enhance the student's performance, we propose a novel strategy that incorporates a learned inversion of the HR image to provide additional regularization from the ground-truth images. In addition to the vanilla distillation loss, the student network concurrently learns the inverse mapping during training by minimizing the following loss:
$\mathcal{L}_{inverse} = \mathcal{L}_{MSE}\big(f_{\hat{\theta}}(F_{\theta}(x_T, y), y, 0),\, x_T\big)$, (7)
where the last argument of $f_{\hat{\theta}}$ is set to 0 instead of the $T$ used in Eq. 6, indicating that the model predicts the inversion instead of $\hat{x}_0$. Then the GT image $x_0$ can be employed to regularize the output SR image given its predicted inversion $\hat{x}_T$ as follows:
$\hat{x}_T = \mathrm{detach}\big(f_{\hat{\theta}}(x_0, y, 0)\big), \qquad \mathcal{L}_{gt} = \mathcal{L}_{MSE}\big(f_{\hat{\theta}}(\hat{x}_T, y, T),\, x_0\big)$, (8)
where $\mathcal{L}_{gt}$ is the proposed consistency-preserving loss. By reusing $f_{\hat{\theta}}$ to learn both $f_{\hat{\theta}}(\cdot, \cdot, T)$ and $f_{\hat{\theta}}(\cdot, \cdot, 0)$ simultaneously, we can initialize the parameters $\hat{\theta}$ of the student model from the teacher's $\theta$ to speed up the training.
The overall training objective. The student network is trained to minimize the three losses above at the same time:
$\hat{\theta} = \arg\min_{\hat{\theta}} \; \mathbb{E}_{y, x_0, x_T}\big[\mathcal{L}_{distill} + \mathcal{L}_{inverse} + \mathcal{L}_{gt}\big]$, (9)
where the losses are defined in Eqs. 6, 7, and 8, respectively. We assign equal weight to each loss term; ablation studies are in the supplementary material. The overall procedure of the proposed method is summarized in Algorithm 1 and Fig. 4."
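As a companion to Eqs. (3)-(5), the sketch below illustrates one way the deterministic sampler $F_{\theta}(x_T, y)$ could be implemented. Here `teacher` stands for the pre-trained ResShift predictor of $\hat{x}_0$ and `eta` for its noise schedule; both are assumed interfaces for illustration rather than the official implementation, and the latent-space details are again omitted.

```python
import torch

def coefficients(eta_prev: float, eta_t: float):
    """k_t, m_t, j_t from Eq. (4)."""
    m_t = (eta_prev / eta_t) ** 0.5
    j_t = eta_prev - (eta_prev * eta_t) ** 0.5
    k_t = 1.0 - eta_prev + (eta_prev * eta_t) ** 0.5 - m_t
    return k_t, m_t, j_t

@torch.no_grad()
def deterministic_sampling(teacher, y, eta, kappa):
    """Map one fixed noise draw to a unique HR estimate F_theta(x_T, y) via Eq. (5)."""
    T = len(eta) - 1                                        # eta[0] ~ 0, ..., eta[T] ~ 1
    x_t = y + kappa * eta[T] ** 0.5 * torch.randn_like(y)   # initial state x_T
    for t in range(T, 1, -1):                               # t = T, T-1, ..., 2
        x0_hat = teacher(x_t, y, t)                         # teacher's prediction of x_0
        k_t, m_t, j_t = coefficients(eta[t - 1], eta[t])
        x_t = k_t * x0_hat + m_t * x_t + j_t * y            # deterministic step to x_{t-1}
    return teacher(x_t, y, 1)                               # final prediction at t = 1
```

Running this loop with a fixed noise draw yields the pair $(x_T, F_{\theta}(x_T, y))$ that the distillation loss in Eq. (6) is trained on.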
}, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental setup", "publication_ref": [ "b44", "b44", "b44", "b44", "b39", "b10", "b38", "b45", "b39", "b17", "b44", "b44", "b19", "b46", "b37", "b34", "b13" ], "table_ref": [], "text": "Training Details. For a fair comparison, we follow the same experimental setup and backbone design as that in [45]. Specifically, the main difference is that we finetuned the model for 30K iterations instead of training from scratch for 500K in [45]. We find that the student model can converge quickly so that even sample ϵ ∼ N (0, κ 2 η T I)\n5:\nx T = y + ϵ 6:\nfor t = T, T -1, ..., 1 do 7:\nif t = 1 then 8: x0 = f θ (x 1 , y, 1) 9:\nelse 10: \nx t-1 = k t f θ (x t ,\nL distill = L M SE (f θ (x T , y, T ), x0 )\n14:\nL inverse = L M SE (f θ (x 0 , y, 0), x T ) 15:\nxT = f θ (x 0 , y, 0), 16:\nL gt = L M SE (f θ (detach(x T ), y, T ), x 0 ) 17: L = L distill + L inverse + L gt 18:\nPerform a gradient descent step on ∇ θ L 19: end while 20: return The student model f θ . from scratch following [45]. We train the models on the training set of ImageNet [7] following the same pipeline with ResShift [45] where the degradation model is adopted from RealESRGAN [40]. Compared methods. We compare our method with several representative SR models, including RealSR-JPEG [11], ESRGAN [39], BSRGAN [46], SwinIR [17], RealESR-GAN [40], DASR [18], LDM [31], and ResShift [45]. For a comprehensive comparison, we further evaluate the performance of diffusion-based models LDM [31] and ResShift [45] with a reduced number of sampling steps. Besides, we compare the proposed method with Rectified-Flow [20], a SOTA method that can compress the generation process into a single step, in Table 6. Metrics. For the evaluation of the proposed method on the synthetic testing dataset with reference images, we utilize PSNR, SSIM, and LPIPS [47] to measure the fidelity performance. Besides, two recent SOTA non-reference metrics are used to justify the realism of all the images, i.e., CLIP-IQA [38] which leverages a CLIP model [30] pre-trained on a large-scale dataset (Laion400M [35]) and MUSIQ [14]." }, { "figure_ref": [ "fig_5" ], "heading": "Experimental Results", "publication_ref": [ "b44", "b44", "b44", "b44", "b37", "b44" ], "table_ref": [ "tab_4", "tab_5", "tab_6", "tab_6" ], "text": "Evaluation on real-world datasets. RealSR [3] and Re-alSet65 [45] are adopted to evaluate the generalization ability of the model on unseen real-world data. Specifically, in Re-alSR [3], there are 100 real images captured by two different cameras in different scenarios. Besides, RealSet65 [45] in- cludes 65 LR images in total, collected from widely used datasets and the internet. The results on these two datasets are reported in Table 1. As shown in the table, the proposed method with only one inference step can outperform the teacher model that we used by a large margin. Besides, for the latest metric CLIPIQA, the proposed method archives the best performance among all the competitors. Some visual comparisons are shown in Fig. 5, in which the proposed method achieves promising results using only one step.\nEvaluation on synthetic datasets. We further evaluate the performance of different methods on the synthetic dataset ImageNet-Test following the setting in [45]. Specifically, 3000 high-resolution images are first randomly selected from the validation set of ImageNet [7]. 
The corresponding LR images are obtained by using the provided script in [45]. As shown in Table 2, while reducing the inference step from 15 to only 1 slightly decreases PSNR and SSIM, the proposed method achieves the best perceptual quality measured by LPIPS, a more recent full-reference image quality assessment (IQA) metric than SSIM. Besides, the proposed method also achieves the best performance among all the methods measured on the most recent SOTA metric CLIP-IQA [38], demonstrating that the proposed 1-step model is on par with or even slightly better than the teacher model with 15 inference steps in terms of perceptual qualities. Evaluation of the efficiency. We assess the computational efficiency of the proposed method in comparison to SOTA approaches. As shown in Table 3, the proposed method demonstrates superior performance with only one inference step, outperforming ResShift [45]-the adopted teacher model, which had already significantly reduced the inference time compared to LDM [31]. It is worth noting that all methods presented in Table 3 run in latent space, and the computational cost of VQ-VAE is counted." }, { "figure_ref": [ "fig_6" ], "heading": "Analysis", "publication_ref": [ "b44", "b44", "b18", "b44", "b44" ], "table_ref": [ "tab_7", "tab_8" ], "text": "How important is the deterministic sampling? We evaluate the performance of the model trained on generated paired samples from the proposed deterministic sampling and the default stochastic sampling strategy (x T , Fθ (x T , y)) in [45]. Due to the randomness of the generated samples x ∼ Fθ (x T , y), given a random noise ϵ, the prediction is an expectation of its conditional distribution. The comparison in Fig. 6 further verifies that the results trained w/o deterministic teacher model exhibit blurred details. Besides, as shown in Table 4, there is a significant performance degradation when we replace the proposed deterministic sampling with the default one in [45], demonstrating the effectiveness and necessity of involving the proposed deterministic sampling.\nWhy does a single-step distillation work? Previous studies suggest that directly learning the mapping between x T and x 0 is typically challenging due to the non-causal properties of the generation process [19]. However, our empirical findings indicate that the matching between x T and x 0 in the SR task is relatively easier to learn than denoising under different noise levels, as diffusion models do. Specifically, the capacity of the student network f θ is sufficient to effectively capture the ODE process F θ using only one step. To verify our assumption, we evaluate the performance of smaller models trained under different strategies. Specifically, one model is trained following the experimental settings of [45] [45] and the proposed deterministic sampling in Eq. 5 using only distillation loss. We evaluate their performance on the RealSet65 testing set.\nwhile the number of parameters decreases from 118.6M to 24.3M. Another model uses the same backbone as the aforementioned small model while directly learning the mapping relationship between x T and x0 from the standard-size teacher diffusion model. A comparison between these two small models is reported in Table 5. As demonstrated by the results, the model trained for denoising under different noise levels suffers from a serious performance drop compared with the model that directly learns the deterministic mapping between. 
This strongly supports our assumption that directly learning the deterministic mapping is relatively easier. " }, { "figure_ref": [], "heading": "Is a more sophisticated distillation strategy necessary?", "publication_ref": [ "b19", "b19", "b44", "b44" ], "table_ref": [], "text": "To explore the necessity of more advanced techniques for learning the mapping between $x_T$ and $x_0$, we evaluate the performance of Rectified Flow [20], a recent method that compresses the mapping into a single step in an iterative manner. Specifically, Reflow operations are first conducted to avoid crossing generation paths, followed by distilling the rectified generation process into a single step. However, as shown in Table 6, the involved iterative distillation degrades the performance of the final model due to the accumulated error, as discussed by the authors [20]. Besides, as verified in the previous section, the deterministic mapping between $x_T$ and $x_0$ is easy to learn in the SR task, so the benefit of a more sophisticated distillation strategy is not obvious.
Learned inversion. As the core of the consistency-preserving loss, the learned inversion is compared against the DDIM inversion [36] in Fig. 7.
Table 6. A comparison between models accelerated by the proposed method and [45], which includes a reflow and a distillation operation. The models are evaluated on ImageNet-Test [45]. " }, { "figure_ref": [], "heading": "LR image DDIM inversion Learned inversion GT image", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a novel strategy to accelerate diffusion-based SR models to a single inference step. Specifically, a one-step bi-directional distillation is proposed to learn the deterministic mapping between the input noise and the generated high-resolution image, and vice versa, from a teacher diffusion model with our derived deterministic sampling. Meanwhile, a novel consistency-preserving loss is optimized at the same time during the distillation, so that the student model not only uses the information from the pre-trained teacher diffusion model but also learns directly from ground-truth images. Experimental results demonstrate that the proposed method can achieve on-par or even better performance than the teacher model in only one step." } ]
While super-resolution (SR) methods based on diffusion models exhibit promising results, their practical application is hindered by the substantial number of required inference steps. Recent methods utilize the degraded images in the initial state, thereby shortening the Markov chain. Nevertheless, these solutions either rely on a precise formulation of the degradation process or still necessitate a relatively lengthy generation path (e.g., 15 iterations). To enhance inference speed, we propose a simple yet effective method for achieving single-step SR generation, named SinSR. Specifically, we first derive a deterministic sampling process from the most recent state-of-the-art (SOTA) method for accelerating diffusion-based SR. This allows the mapping between the input random noise and the generated high-resolution image to be obtained in a reduced and acceptable number of inference steps during training. We show that this deterministic mapping can be distilled into a student model that performs SR within only one inference step. Additionally, we propose a novel consistency-preserving loss to simultaneously leverage the ground-truth image during the distillation process, ensuring that the performance of the student model is not solely bound by the feature manifold of the teacher model, resulting in further performance improvement. Extensive experiments conducted on synthetic and real-world datasets demonstrate that the proposed method can achieve comparable or even superior performance compared to both previous SOTA methods and the teacher model, in just one sampling step, resulting in a remarkable up to ×10 speedup for inference.
SinSR: Diffusion-Based Image Super-Resolution in a Single Step
[ { "figure_caption": "Figure 1 .1Figure1. A comparison between the most recent SOTA method ResShift[45] for the acceleration of diffusion-based SR and the proposed method. We achieve on-par or even superior perceptual quality using only one inference step. (\"-N\" behind the method name represents the number of inference steps, and the value in the bracket is the quantitative result measured by MUSIQ↑[14].)", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. An illustration of the generative ability of the proposed method in only one step. Given the same LR image (Fig. (a) and (b)), by using different noise added to the input, HR images (Fig. (c)-(e)) with different details are generated, e.g., eyes of different shapes and colors. Best zoom in for details.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "stoc … 𝑥 ! ∼ 𝒩(𝑦, 𝜅 \" 𝐼) 𝑥 * # ~p$ (x|x % , y)The degraded image 𝑦 is embedded into the initial state 𝑥 ! 𝜇 ! (𝑥 \" , 𝑦, 𝑡) 𝜖~𝛴 ! (𝑥 \" , 𝑦, 𝑡) The recent SOTA method ResShift[45] shortens the Markov chain to speed up the inference process by incorporating the information of the LR image y to the initial state x T (T=15).", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "stoc 𝑥 ! ∼ 𝒩(𝑦, 𝜅 \" 𝐼) 𝑥 * # = F $ (x % , y) 𝑥 !\"# = 𝑓 $ 𝑥 ! , 𝑦, 𝑡 Deterministic sampling … Deterministic sampling Deterministic sampling Student network 𝑓 $ % ℒ &'(!')) = 𝐿 *+, ((𝑓 $ % 𝑥 -, 𝑦, 𝑇 , 𝐹 $ 𝑥 -, 𝑦 )Pretrained teacher model 𝑓 $ (c) A simplified pipeline of the proposed method SinSR (distill only). It directly learns the deterministic mapping between x T and x 0 , therefore the inference process can be further compressed into only one step (T=1).", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. The overall framework of the proposed method. By minimizing L distill and Linverse, the student network f θ learns the deterministic bi-directional mapping between xT and x0 obtained from a pre-trained teacher diffusion model in one step. Meanwhile, the proposed consistency preserving loss Lgt is optimized during training to utilize the information from the GT images to pursue better perceptual quality instead of simply fitting the deterministic mappings from the teacher model. Specifically, the GT image is first converted to its latent code xT = f θ (x0, y, 0), and then converted back to calculate its reconstruction loss LMSE(f θ (xT , y, T ), x0).", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Visual comparison on real-world samples. Please zoom in for more details.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. A comparison between the model trained with the default stochastic sampling process in ResShift[45] and the proposed deterministic sampling in Eq. 5. Best zoom in for more details.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. A comparison between HR images generated from DDIM inversion and the proposed learned inversion. 
Zoom in for details.serving loss, a comparison with the DDIM inversion [36] is shown in Fig.7, where the proposed method achieves better fidelity performance. It indicates that the proposed method can obtain a more accurate estimation of x T . Besides, more analyses regarding the consistency preserving loss are in the supplementary material. Training overhead. While the proposed method involves solving ODEs during training, benefiting from a shortened inference process and initializing the student model from the pre-trained teacher model, the training cost of finetuning using the proposed training paradigm is still lower than that of retraining the diffusion model from scratch. Specifically, the training cost is shown in Table7.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Training Require: Pre-trained teacher diffusion model f θ Require: Paired training set (X, Y ) 1: Init f θ from the pre-trained model, i.e., θ ← θ.", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "y, t) + m t x t + j t y", "figure_data": "11:end if12:end for13:", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Quantitative results of models on two real-world datasets. The best and second best results are highlighted in bold and underline.", "figure_data": "DatasetsMethodsRealSRRealSet65CLIPIQA↑MUSIQ↑CLIPIQA↑MUSIQ↑ESRGAN [39]0.236229.0480.373942.369RealSR-JPEG [11]0.361536.0760.528250.539BSRGAN [46]0.543963.5860.616365.582SwinIR [17]0.465459.6360.578263.822RealESRGAN [40]0.489859.6780.599563.220DASR [18]0.362945.8250.496555.708LDM-15 [31]0.383649.3170.427447.488ResShift-15 [45]0.595859.8730.653761.330SinSR-1 (distill only)0.611957.1180.682261.267SinSR-10.688761.5820.715062.169MethodsPSNR↑SSIM↑Metrics LPIPS↓CLIPIQA↑MUSIQ↑ESRGAN [39]20.670.4480.4850.45143.615RealSR-JPEG [11]23.110.5910.3260.53746.981BSRGAN [46]24.420.6590.2590.58154.697SwinIR [17]23.990.6670.2380.56453.790RealESRGAN [40]24.040.6650.2540.52352.538DASR [18]24.750.6750.2500.53648.337LDM-30 [31]24.490.6510.2480.57250.895LDM-15 [31]24.890.6700.2690.51246.419ResShift-15 [45]24.900.6730.2280.60353.897SinSR-1 (distill only)24.690.6640.2220.60753.316SinSR-124.560.6570.2210.61153.357", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative results of models on ImageNet-Test. The best and second best results are highlighted in bold and underline.", "figure_data": "", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Efficiency and performance comparisons with SOTA methods on ImageNet-Test. \"-N\" represents the number of sampling steps the model used. The running time per image is tested on a Tesla A100 GPU on the x4 (64→ 256) task averaged over the batch size (bs).", "figure_data": "Metrics", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "A comparison between the model trained with the default stochastic sampling process in ResShift", "figure_data": "", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "A comparison of the models trained with different strategies on RealSet65. 
The model trained with the diffusion loss, i.e., ResShift, is more sensitive to the model size than directly learning the deterministic mapping between xT and x0, indicating that the deterministic mapping is relatively easier to learn.", "figure_data": "MethodsCLIPIQA ↑ MUSIQ ↑ResShift [45] (24.32M)0.536552.71ResShift [45] (118.59M)0.653761.33SinSR (distill only) (24.32M)0.649958.71", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Num of Iters s/Iter Training TimeResShift [45]500k1.32s∼7.64 daysSinSR (Ours)30k7.41s∼2.57 days", "figure_id": "tab_10", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "A comparison of the training cost on an NVIDIA A100.", "figure_data": "", "figure_id": "tab_11", "figure_label": "7", "figure_type": "table" } ]
Yufei Wang; Wenhan Yang; Xinyuan Chen; Yaohui Wang; Lanqing Guo; Lap-Pui Chau; Ziwei Liu; Yu Qiao; Alex C Kot; Bihan Wen
[ { "authors": "Byungkon Namhyuk Ahn; Kyung-Ah Kang; Sohn", "journal": "", "ref_id": "b0", "title": "Image super-resolution via progressive cascading residual network", "year": "2018" }, { "authors": "Firstname Alpher", "journal": "Frobnication. IEEE TPAMI", "ref_id": "b1", "title": "", "year": "2002" }, { "authors": "Jianrui Cai; Hui Zeng; Hongwei Yong; Zisheng Cao; Lei Zhang", "journal": "", "ref_id": "b2", "title": "Toward real-world single image super-resolution: A new benchmark and a new model", "year": "2019" }, { "authors": "Jooyoung Choi; Sungwon Kim; Yonghyun Jeong; Youngjune Gwon; Sungroh Yoon", "journal": "", "ref_id": "b3", "title": "Ilvr: Conditioning method for denoising diffusion probabilistic models", "year": "2021" }, { "authors": "Hyungjin Chung; Byeongsu Sim; Jong Chul; Ye ", "journal": "", "ref_id": "b4", "title": "Comecloser-diffuse-faster: Accelerating conditional diffusion models for problems through stochastic contraction", "year": "2022" }, { "authors": "Ryan Dahl; Mohammad Norouzi; Jonathon Shlens", "journal": "", "ref_id": "b5", "title": "Pixel recursive super resolution", "year": "2017" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "Ieee", "ref_id": "b6", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Chao Dong; Chen Change Loy; Kaiming He; Xiaoou Tang", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b7", "title": "Image super-resolution using deep convolutional networks", "year": "2015" }, { "authors": "Baisong Guo; Xiaoyun Zhang; Haoning Wu; Yu Wang; Ya Zhang; Yan-Feng Wang", "journal": "", "ref_id": "b8", "title": "Lar-sr: A local autoregressive model for image super-resolution", "year": "2022" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "", "ref_id": "b9", "title": "Denoising diffusion probabilistic models", "year": "" }, { "authors": "Xiaozhong Ji; Yun Cao; Ying Tai; Chengjie Wang; Jilin Li; Feiyue Huang", "journal": "", "ref_id": "b10", "title": "Real-world super-resolution via kernel estimation and noise injection", "year": "2020" }, { "authors": "Tero Karras; Timo Aila; Samuli Laine; Jaakko Lehtinen", "journal": "", "ref_id": "b11", "title": "Progressive growing of gans for improved quality, stability, and variation", "year": "2018" }, { "authors": "Bahjat Kawar; Michael Elad; Stefano Ermon; Jiaming Song", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Denoising diffusion restoration models", "year": "2022" }, { "authors": "Junjie Ke; Qifei Wang; Yilin Wang; Peyman Milanfar; Feng Yang", "journal": "", "ref_id": "b13", "title": "Musiq: Multi-scale image quality transformer", "year": "2021" }, { "authors": "Jiwon Kim; Jung Kwon Lee; Kyoung Mu; Lee ", "journal": "", "ref_id": "b14", "title": "Accurate image super-resolution using very deep convolutional networks", "year": "2016" }, { "authors": "Christian Ledig; Lucas Theis; Ferenc Huszár; Jose Caballero; Andrew Cunningham; Alejandro Acosta; Andrew Aitken; Alykhan Tejani; Johannes Totz; Zehan Wang", "journal": "", "ref_id": "b15", "title": "Photorealistic single image super-resolution using a generative adversarial network", "year": "2017" }, { "authors": "Jingyun Liang; Jiezhang Cao; Guolei Sun; Kai Zhang; Luc Van Gool; Radu Timofte", "journal": "", "ref_id": "b16", "title": "Swinir: Image restoration using swin transformer", "year": "2021" }, { "authors": "Jie Liang; Hui Zeng; Lei Zhang", "journal": 
"Springer", "ref_id": "b17", "title": "Efficient and degradation-adaptive network for real-world image superresolution", "year": "2022" }, { "authors": "Yaron Lipman; Ricky T Q Chen; Heli Ben-Hamu; Maximilian Nickel; Matthew Le", "journal": "", "ref_id": "b18", "title": "Flow matching for generative modeling", "year": "2023" }, { "authors": "Xingchao Liu; Chengyue Gong", "journal": "", "ref_id": "b19", "title": "Flow straight and fast: Learning to generate and transfer data with rectified flow", "year": "2022" }, { "authors": "Cheng Lu; Yuhao Zhou; Fan Bao; Jianfei Chen; Chongxuan Li; Jun Zhu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b20", "title": "Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps", "year": "2022" }, { "authors": "Andreas Lugmayr; Martin Danelljan; Luc Van Gool; Radu Timofte", "journal": "Springer", "ref_id": "b21", "title": "Srflow: Learning the super-resolution space with normalizing flow", "year": "2020" }, { "authors": "Eric Luhman; Troy Luhman", "journal": "", "ref_id": "b22", "title": "Knowledge distillation in iterative generative models for improved sampling speed", "year": "2021" }, { "authors": "Chenlin Meng; Robin Rombach; Ruiqi Gao; Diederik Kingma; Stefano Ermon; Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b23", "title": "On distillation of guided diffusion models", "year": "2023" }, { "authors": "Jacob Menick; Nal Kalchbrenner", "journal": "", "ref_id": "b24", "title": "Generating high fidelity images with subscale pixel networks and multidimensional upscaling", "year": "2018" }, { "authors": "Sachit Menon; Alexandru Damian; Shijia Hu; Nikhil Ravi; Cynthia Rudin", "journal": "", "ref_id": "b25", "title": "Pulse: Self-supervised photo upsampling via latent space exploration of generative models", "year": "2020" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal", "journal": "PMLR", "ref_id": "b26", "title": "Improved denoising diffusion probabilistic models", "year": "2021" }, { "authors": "Aaronvanden Oord; Nal Kalchbrenner; Oriol Vinyals; Lasse Espeholt; Alex Graves; Koray Kavukcuoglu", "journal": "arXiv: Computer Vision and Pattern Recognition,arXiv: Computer Vision and Pattern Recognition", "ref_id": "b27", "title": "Conditional image generation with pixelcnn decoders", "year": "2016" }, { "authors": "Niki Parmar; Ashish Vaswani; Jakob Uszkoreit; Łukasz Kaiser; Noam Shazeer; Alexander Ku; Dustin Tran", "journal": "Computer Vision and Pattern Recognition", "ref_id": "b28", "title": "Image transformer. 
arXiv: Computer Vision and Pattern Recognition", "year": "2018" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b29", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b30", "title": "High-resolution image synthesis with latent diffusion models", "year": "2007" }, { "authors": "Chitwan Saharia; Jonathan Ho; William Chan; Tim Salimans; David J Fleet; Mohammad Norouzi", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b31", "title": "Image superresolution via iterative refinement", "year": "2022" }, { "authors": "S M Mehdi; Bernhard Sajjadi; Michael Scholkopf; Hirsch", "journal": "", "ref_id": "b32", "title": "Enhancenet: Single image super-resolution through automated texture synthesis", "year": "2017" }, { "authors": "Tim Salimans; Jonathan Ho", "journal": "", "ref_id": "b33", "title": "Progressive distillation for fast sampling of diffusion models", "year": "2021" }, { "authors": "Christoph Schuhmann; Richard Vencu; Romain Beaumont; Robert Kaczmarczyk; Clayton Mullis; Aarush Katta; Theo Coombes; Jenia Jitsev; Aran Komatsuzaki", "journal": "", "ref_id": "b34", "title": "Laion-400m: Open dataset of clip-filtered 400 million image-text pairs", "year": "2021" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b35", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": "Yang Song; Prafulla Dhariwal; Mark Chen; Ilya Sutskever", "journal": "", "ref_id": "b36", "title": "Consistency models", "year": "2023" }, { "authors": "Jianyi Wang; Kelvin Ck Chan; Chen Change Loy", "journal": "", "ref_id": "b37", "title": "Exploring clip for assessing the look and feel of images", "year": "2023" }, { "authors": "Xintao Wang; Ke Yu; Shixiang Wu; Jinjin Gu; Yihao Liu; Chao Dong; Yu Qiao; Chen Change Loy", "journal": "", "ref_id": "b38", "title": "Esrgan: Enhanced super-resolution generative adversarial networks", "year": "2018" }, { "authors": "Xintao Wang; Liangbin Xie; Chao Dong; Ying Shan", "journal": "", "ref_id": "b39", "title": "Real-esrgan: Training real-world blind super-resolution with pure synthetic data", "year": "2021" }, { "authors": "Yufei Wang; Renjie Wan; Wenhan Yang; Haoliang Li; Lap-Pui Chau; Alex Kot", "journal": "", "ref_id": "b40", "title": "Low-light image enhancement with normalizing flow", "year": "2022" }, { "authors": "Yufei Wang; Yi Yu; Wenhan Yang; Lanqing Guo; Lap-Pui Chau; Alex C Kot; Bihan Wen", "journal": "", "ref_id": "b41", "title": "Exposurediffusion: Learning to expose for low-light image enhancement", "year": "2023" }, { "authors": "Zhaowen Wang; Ding Liu; Jianchao Yang; Wei Han; Thomass Huang", "journal": "", "ref_id": "b42", "title": "Deep networks for image super-resolution with sparse prior", "year": "2015" }, { "authors": "Zhihao Wang; Jian Chen; Steven Ch Hoi", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b43", "title": "Deep learning for image super-resolution: A survey", "year": "2020" }, { "authors": "Zongsheng Yue; Jianyi Wang; Chen Change Loy", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b44", "title": "Resshift: Efficient diffusion model for image super-resolution by residual 
shifting", "year": "2023" }, { "authors": "Kai Zhang; Jingyun Liang; Luc Van Gool; Radu Timofte", "journal": "", "ref_id": "b45", "title": "Designing a practical degradation model for deep blind image super-resolution", "year": "2021" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b46", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" } ]
[ { "formula_coordinates": [ 4, 58.44, 571.62, 228.59, 23.23 ], "formula_id": "formula_0", "formula_text": "p θ (x t-1 |x t , y) = N (x t-1 |µ θ (x t , y, t), κ 2 η t-1 η t α t I),(2)" }, { "formula_coordinates": [ 4, 207.96, 611.92, 79.65, 15.86 ], "formula_id": "formula_1", "formula_text": "x T = y + κ √ η T ϵ," }, { "formula_coordinates": [ 4, 339.83, 347.92, 205.95, 9.65 ], "formula_id": "formula_2", "formula_text": "q(x t-1 |x t , x 0 , y) = δ(k t x 0 + m t x t + j t y),(3)" }, { "formula_coordinates": [ 4, 347.53, 388.46, 198.25, 46.45 ], "formula_id": "formula_3", "formula_text": "       m t = ηt-1 ηt j t = η t-1 - √ η t-1 η t k t = 1 -η t-1 + √ η t-1 η t -ηt-1 ηt .(4)" }, { "formula_coordinates": [ 4, 353.57, 492.12, 192.21, 24.6 ], "formula_id": "formula_4", "formula_text": "x t-1 = k t x0 + m t x t + j t y = k t f θ (x t , y, t) + m t x t + j t y,(5)" }, { "formula_coordinates": [ 4, 339.43, 670.97, 206.35, 9.65 ], "formula_id": "formula_5", "formula_text": "L distill = L M SE (f θ (x T , y, T ), F θ (x T , y)),(6)" }, { "formula_coordinates": [ 5, 79.03, 276.31, 208, 9.65 ], "formula_id": "formula_6", "formula_text": "L inverse = L M SE (f θ (F θ (x T , y), y, 0), x T ),(7)" }, { "formula_coordinates": [ 5, 103.02, 373.21, 184.01, 16.7 ], "formula_id": "formula_7", "formula_text": "L gt = L M SE (f θ (x T , y, T ), x 0 ),(8)" }, { "formula_coordinates": [ 5, 64.99, 494.28, 222.04, 17.29 ], "formula_id": "formula_8", "formula_text": "θ = arg min θ E y,x0,x T [L distill + L reverse + L gt ],(9)" }, { "formula_coordinates": [ 5, 317.11, 184.96, 110.89, 32.51 ], "formula_id": "formula_9", "formula_text": "if t = 1 then 8: x0 = f θ (x 1 , y, 1) 9:" }, { "formula_coordinates": [ 5, 358.18, 220.89, 66.61, 9.65 ], "formula_id": "formula_10", "formula_text": "x t-1 = k t f θ (x t ," }, { "formula_coordinates": [ 5, 338.25, 256.76, 142.2, 9.65 ], "formula_id": "formula_11", "formula_text": "L distill = L M SE (f θ (x T , y, T ), x0 )" }, { "formula_coordinates": [ 5, 313.12, 268.72, 170.65, 20.48 ], "formula_id": "formula_12", "formula_text": "L inverse = L M SE (f θ (x 0 , y, 0), x T ) 15:" }, { "formula_coordinates": [ 5, 313.12, 292.63, 189.29, 32.44 ], "formula_id": "formula_13", "formula_text": "L gt = L M SE (f θ (detach(x T ), y, T ), x 0 ) 17: L = L distill + L inverse + L gt 18:" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b10", "b26", "b24", "b0", "b29", "b26", "b19" ], "table_ref": [], "text": "Climate change is increasing the likelihood of extreme weather events, such as large storms [6], which can be destructive, especially for humans and boats in stormy waters. Quick and accurate rescue efforts are crucial in such situations as death can occur in as little as 50 minutes [31]. Au-arXiv:2311.14764v1 [cs.CV] 24 Nov 2023 tomated disaster response in the ocean is a growing area that uses advanced technology to help search and rescue in extreme weather events. Leveraging autonomous systems, remote sensing technologies, artificial intelligence, and realtime data analysis, automated disaster response aims to minimise risks, and accelerate response times. However, detecting entities in need of rescue during severe conditions in the ocean is often challenging. Therefore, there is a need for automated disaster responses that can operate effectively in these harsh conditions and can precisely locate objects, vessels, and people requiring assistance.\nDeveloping robust models to automatically detect objects in the sea during extreme weather conditions is a challenging task. The challenge primarily arises from the difficulty of obtaining high-quality data in such chaotic and unpredictable situations. Adverse weather conditions, particularly during storms, pose a safety risk, preventing the deployment of cameras in critical areas for data collection. Furthermore, the wild nature of disaster scenes often stretches already limited resources, including personnel, equipment, and communication channels, thus constraining the capacity for data acquisition [11]. Due to the lack of datasets for training models to effectively detect maritime objects in the ocean during severe weather conditions, generating synthetic datasets to train and test object detection models can be highly beneficial.\nBy harnessing recent generative models such as Stable Diffusion (SD) [27], and DALL-E [25], it is now possible to generate realistic images based on textual descriptions. However, generating synthetic images solely based on textual descriptions experiences limitations in specifying object location, size, and type within the ocean scene, restricting the replication of realistic scenarios. To overcome this challenge, a more practical approach is to utilize real sea images with marine objects and performing transformations on the sea background.\nUnfortunately, when applying the Stable Diffusion model to edit the real sea image's backgrounds, our experiments reveal a significant limitation in using text prompts to precisely control the desired Sea State. The edited background randomly appears not according to the description, so it would require checking manually to ensure the image's quality. Consequently, this process becomes timeconsuming as it requires manual control to achieve a visually satisfactory Sea State. Furthermore, we found that existing image editing methods often replace or excessively modify the objects in the edited images, making the images unusable. Therefore, relying solely on synthetic image generation techniques to create a dataset with diverse Sea State Levels proves to be challenging, as it demands a substantial amount of time for generation and manual verification.\nTo tackle this issue, we propose a method for editing images of calm ocean scenes into stormy ocean scenes, focusing on UAV-view images. 
We replace the original calm ocean with an ocean environment corresponding to standard definitions from a Bureau of Meteorology as depicted in Table 1, all while attempting to retain the maritime objects in the images 1 . This method shows as a proof-of-concept to develop a simple-yet-effective approach, capable of automatically screening out poor-quality edited images with overly edited preserved objects.\nOur work takes an input image and use an image generation method to perform background transformation. Recent work proposed the Blended Latent Diffusion [1] which demonstrates significantly improved results over the Stable Diffusion model in preserving foreground objects while also translating the background. Therefore, we employed Blended Latent Diffusion to modify images with object masking. Due to the stochastic nature of diffusion [30], we find that prompts alone can be unreliable for producing images according to the sea state definitions. Thus we apply the Sea State Classifier which classifies the transformed image into one of the sea state definitions. Then, the Object Preservation Checker is applied to evaluate whether the transformed image still preserve the objects from the input image.\nWe leverage the SeaDroneSee dataset [34] for experimentation and validation. To quantitatively evaluate the effectiveness of our approach, we compare it against other image editing methods, such as Stable Diffusion Inpainting [27]. Furthermore, we conducted object detection tests using YoloV5 model pre-trained on calm sea state images [20] to assess the impact of various sea states on object recognisability in edited images. Our findings reveal that the pre-trained object detection model struggles to identify objects in increasingly stormy conditions. This suggests that the model's performance is less effective when it encounters previously unseen stormy sea backgrounds, making it more challenging to detect objects in rough sea surface conditions. Contributions -We list our contributions as follows: 1. We propose a simple-yet-effective method for modifying the sea state level of the ocean in maritime images while preserving the objects of interest, enhancing their utility for various applications; 2. We construct and propose a synthetic dataset, the Safe-Sea dataset, enabling the training of models to accommodate diverse weather conditions; 3. We conduct an evaluation of the SafeSea dataset with YoloV5 to assess the model's performance under varying weather conditions. We continue our paper as follows. Section 2 presents an overview of prior work including maritime datasets and diffusion models. In Section 3, we define our problem and the goal we want to achieve before Section 4 outlines our proposed SafeSea method designed to accomplish this goal. The description of the SafeSea dataset are presented in Section 5. Section 6 presents our experiment and discussion. We conclude the paper and discuss about limitations and future work in Sections 7 and Section 8." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b20", "b27", "b22", "b25", "b8", "b21", "b15", "b6", "b6", "b1", "b16", "b9", "b17", "b14", "b31", "b28", "b7", "b15", "b6", "b0", "b7", "b11", "b26", "b28", "b31" ], "table_ref": [], "text": "Sea Datasets -Numerous public maritime datasets facilitate the training of marine object detection models. Notable examples include the VAIS dataset [35], offering over 1,000 RGB and infrared image pairs, showcasing various ship types. 
The IPATCH dataset [21] records realistic maritime piracy scenarios, while the SeaShips dataset [28] features ship images from inland waterways. The 'Singapore Maritime Dataset' [23] captures marine objects using onshore platforms and vessels, providing diverse perspectives. Additionally, the Seagull dataset [26] and Maritime SATellite Imagery (MASATI) dataset [9] offer aerial images. The SeaDronesSee dataset [34] contains over 54,000 frames with around 400,000 instances. Despite their strengths, these datasets often focus on specific objects like ships and boats, and often the objects of interest are relatively large compared to the image size in certain datasets. Furthermore, most of the images are taken under good weather conditions. Synthetic Datasets -Extensive research explores methodologies employing synthetic images for training object detection models. Noteworthy contributions include Peng et al. [22] emphasizing the refinement of synthetic object backgrounds for improved detection reliability. The use of powerful game engines like Unity [16] and Unreal Engine [7] is prominent, as demonstrated by Becktor et al. [3] in the context of spacecraft guidance and control. Unreal Engine 4 [7] has proven valuable in autonomous driving, Dosovitskiy et al. [5], and maritime image generation, as utilized by Becktor et al. [2]. Kiefer et al. [17] analyzed maritime and terrestrial images, incorporating real and synthetic data from the Grand Theft Auto V (GTAV) simulation platform [10]. Xiaomin et al. [18] introduced See-DroneSim, utilizing the Blender game engine [4] for UAVbased maritime images. Airbus [15] released a 2018 dataset of 40,000 satellite images designed for ship detection using synthetic aperture radar (SAR) technology. While existing datasets focus on synthetic objects of interest, our work concentrates on generating diverse environmental conditions based on real data. Diffusion Models -Diffusion models have been widely employed for image transformation in various contexts. Trabucco, Doherty et al. [32] employed pre-trained textto-image diffusion models for semantic-based image modification. Shin et al. [29] utilized Stable Diffusion for image generation using the Textual Inversion [8] Presently available real-image maritime datasets have limitations, including fixed camera positions and a restricted variety of marine objects. Furthermore, they often lack diversity in representing various weather conditions reflecting on the sea background. This lack of diversity can restrain the development of a high-quality dataset for training deep-learning models. To address this challenge, numerous studies have been conducted to generate high-quality synthetic images capable of replicating realworld scenarios. These synthetic images can be produced by leveraging game engines such as Unity [16] and Unreal Engine [7] or by employing diffusion models to modify real images [1,8,12,27,29,32]. While using game engines can be resource-intensive, editing images with diffusion models offers a simpler approach. Unfortunately, the editing process via diffusion models is still time consuming and labour intensive due to imprecise control that specifies which part of the image that needs to be edited. This works proposes a proof-of-concept to enable automation in the editing process which significantly reduces the time and labour whilst maintaining the quality of the edited images." 
}, { "figure_ref": [], "heading": "Problem Definition", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Here we technically define our problem. Starting from an image X containing a set of objects, B = {b i } N i=1 , where N is the number of objects, and b i is the i-th object bounding box. We first define the foreground f x of image X as all areas of B, subsequently, we define the background bg x as the inverse of all areas of B in X. Initially, bg x is a calm ocean environment. We wish to replace bg x such that it corresponds to a particular sea state, SS, where SS ranges from 1 to 4 and are defined in Table 1, with examples shown in Figure 6. We only aim to replace bg x , and aim to retain the number of N as was present in X. We refer to the resulting image as Y . Specifically, let f y and bg y as the foreground, and the background of Y , respectively. The goal is to have bg y ̸ = bg x , and f y ≈ f x ." }, { "figure_ref": [], "heading": "Proposed SafeSea method", "publication_ref": [], "table_ref": [], "text": "The overall diagram of the propose method is depicted in Figure 1 and the pseudo code is presented in Algorithm 1. The method has three main components: (1) the image generation module which transform the background of an input image; (2) Sea State Level classifier; and (3) the Object Preservation Checker. Once an image is generated, its quality is automatically assessed by using ( 2) and (3). Specifically, the Sea State Level classifier determines the sea state level of the generated image, and the Object Preservation Checker ensures the generated image still contains the objects from the input image.\nThe following section first discusses the image generation module which is powered by a diffusion model. Then, the Sea State Level classifier and the Object Preservation Checker modules are presented." }, { "figure_ref": [], "heading": "Image Generation Module", "publication_ref": [ "b12", "b23", "b0", "b0" ], "table_ref": [], "text": "Diffusion models [13] are capable of being trained with guiding input channels, enabling them to perform conditional image generation such as creating synthesizing visual content based on textual descriptions [24]. Beyond synthesis, diffusion models are versatile image editors, allowing for targeted modifications. In this process, noise is added to the original image and subsequently denoised in response to an optional prompt, which describes the new image. Using masking techniques, these models can carry out semantic editing, selectively altering specific regions within the images while leaving others intact. Various approaches exist for utilizing Stable Diffusion in image editing, which can be broadly categorized into two main groups: those that employ pre-specified masks and those that mask images based on provided text descriptions.\nIn this work, we applied Blended Latent Diffusion method [1] to edit original sea images. Blended Latent Diffusion provides a versatile approach to image editing, utilizing masks alongside guided text prompts. In a nutshell, it provides a versatile approach to image editing, utilizing masks alongside guided text prompts. To alter an image, a mask of identical dimension to that image is applied to designate the region to be modified, while guided text instructions define how the edited area should appear. The intended outcome is an image in which the masked region undergoes alterations while the unmasked portion remains unaltered. 
In the transformation process, the original image is encoded into a latent space, introducing a controlled level ALGORITHM 1 Pseudocode for generating synthetic images of the sea with marine objects from real images as proposed in SafeSea method. The real image has its sea background edited using Blended Latent Diffusion [1], then its Sea State level is provided by the Sea State Classifier. Finally, the original objects are checked if they are retained in the edited image, which lead to the decision of saving the image if at least one object is preserved.\nInput: Real sea images with marine objects (Rs), matching mask images (M s) with ground true bounding boxes of marine objects are masked as black, text description of background (P )\nOutput: Transformed images with edited background reflected different sea condition and the marine objects are preserved 1: for each X in Rs do Cs ← E R ▷ Crop objects from the edited image 7:\nfor each C in Cs do end for 13: end for of noise. Additionally, the mask is downsampled within this latent space. At each stage of this process, the noisy latent image is subjected to denoising and is regarded as the foreground to be blended with the background elements. We observe that the method demonstrates promise in generating images with different ocean backgrounds whilst maintaining a considerable level of object preservation, in which the object can be recognized visually after the background edition process." }, { "figure_ref": [], "heading": "Sea State Classifier", "publication_ref": [ "b13" ], "table_ref": [ "tab_0" ], "text": "To provide information about Sea State level in the edited image's background, we employed a Sea State classifier. We selected four distinct Sea States for our study, namely Sea States 1, 2, 3, and 4, as these are the only Sea States for which datasets are publicly available. The Sea State definitions are shown in Table 1 .It is important to note that without recorded weather conditions at the time of image capture, visually classifying sea images into different Sea States is a challenging task. Leveraging the Manzoor-Umair Sea State Image Dataset (MU-SSiD) [33], we trained a DenseNet [14] to categorize images into the four Sea State categories. The trained model achieved an accuracy rate of 71% against the testing dataset." }, { "figure_ref": [ "fig_0" ], "heading": "Object Preservation", "publication_ref": [ "b13" ], "table_ref": [], "text": "To evaluate whether an object remains preserved following image transformation, we developed an object preservation checker. This checker's primary function is to identify objects that are no longer recognizable within the provided ground truth bounding box. To do this, we train a binary classifier on a dataset consiting of two classes: \"boat\" and \"not boat.\" The \"boat\" class contains images of cropped boats sourced from the ground truth SeaDrone-See dataset [34], complemented by their augmented versions, which include flipping and blurring. In contrast, the 'not boat' class comprises images from the negative class extracted from the Boat-MNIST dataset [34], as well as crops from randomly selected backgrounds from synthetic images. Additionally, it includes crops intentionally containing small portions of boat objects from the edited images.These crops are generated by following the object ground truth bounding box, ensuring that they contain only one-fourth of the bounding box's area with the rest outside. 
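The quarter-overlap negative crops described above can be produced in several ways; the sketch below shows one simple variant in which a box-sized window is shifted by half the box width and height, so roughly one quarter of the object remains inside the crop. The shift direction and boundary clamping are illustrative assumptions rather than the exact procedure used to build our training set.

```python
from PIL import Image

def quarter_overlap_crop(image: Image.Image, box) -> Image.Image:
    """Cut a box-sized window shifted by half the box size, so about 1/4 of the object stays inside."""
    x_min, y_min, x_max, y_max = box
    w, h = x_max - x_min, y_max - y_min
    # Shift right/down by half the box size and clamp the window to the image bounds.
    nx = max(0, min(x_min + w // 2, image.width - w))
    ny = max(0, min(y_min + h // 2, image.height - h))
    return image.crop((nx, ny, nx + w, ny + h))
```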
Using the dataset with only horizontal flip data augmentation for the boat class. We trained a DenseNet [14] model with a training batch size of 32, a fixed learning rate of 1e-5 without decay, and utilized the Adam optimization algorithm during the training process. Overall, the model achieves the accuracy of 74.86% against testing dataset, including crops of boat objects in real images and non-boat crops from real and edited images.\nSubsequently, based on the given ground truth bounding box information, we extracted boat objects from the edited images for evaluation using the trained checker. We conducted random visual checks on the preserved boat objects to evaluate the model's performance. While there are occasional misclassifications of objects as boats that do not visually resemble boats, the model generally achieves satisfactory accuracy in detecting non-boat objects, in which it can pick up cropped objects that resemble boats and filter out non-boat crops according to the visual checks with the accuracy of 69.45%. Examples of the cropped boat objects are illustrated in Figure 2." }, { "figure_ref": [ "fig_0" ], "heading": "SafeSea dataset", "publication_ref": [], "table_ref": [], "text": "The SafeSea dataset is created using the SafeSea method, involving the transformation of 300 calm ocean background images originally sourced from the 'SeaDroneSee' dataset [34]. All edited images were resized to match the dimensions of their respective originals. The SafeSea method produces 69,694 images images. These are then classified into one of the sea state levels. The distribution of these im-Figure 2. Examples of boat object crops from edited images. The crops are taken based on the ground truth bounding boxes provided from the source images in the 'SeaDroneSea' dataset [34]. " }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "This section is divided into two parts. The first part evaluates the proposed SafeSea method efficacy in filtering compared with the other methods such as the vanilla Stable Diffusion Inpainting. Then, we study the performance of YoloV5 detection model on the proposed SafeSea dataset." }, { "figure_ref": [], "heading": "SafeSea method evaluation", "publication_ref": [], "table_ref": [], "text": "We first describe the experiment setup and then present the results afterwards." }, { "figure_ref": [], "heading": "Experiment setup", "publication_ref": [ "b26", "b0", "b26", "b26", "b19" ], "table_ref": [], "text": "Evaluation protocol -We generate 100 images from each evaluated method. Then, each image is manually checked by humans and categorized into good quality of bad quality group. A good quality image is defined as the following rules:\n• The background should contain either island, ocean or cloud. For instance if the background contains unexpected objects such as a fridge, then the image is deemed low quality. • The background should look realistic • All the objects should look visually acceptable. At least one boat is preserved in the image. Each method is then compared by looking at its percentage of generating good quality images. Methods -The proposed SafeSea method is compared against two baselines: (1) the vanilla Stable Diffusion inpainting [27]; and (2) the vanilla Blended Latent Diffusion [1]. Details for each baseline parameters is presented as follow. Stable Diffusion inpainting (SD Inpainting) -We use Stable Diffusion v1-4 [27] with batch size of one. 
The masked image is derived from the ground truth bounding boxes. The method is then fed with the following prompt to generate the images: "Canon EOS R3, Nikon d850 400mm, Canon DSLR, lens 300mm, 4K, HD". Blended Latent Diffusion (BLD) - We use Stable Diffusion v2-1-base [27] with a batch size of ten. Similar to the method above, we use the mask image derived from the ground truth bounding boxes. We observe that when editing the sea background of an image towards various Sea State levels, relying on prompts alone is insufficient. Therefore, to simplify prompt design, we only use one prompt in our experiments, which is "Aerial image of sea's surface. Canon EOS R3, Nikon d850 400mm, Canon DSLR, lens 300mm, 4K, HD". This prompt allows the generation of images with varying Sea State levels under different generation seeds. SafeSea (proposed) - We utilize Blended Latent Diffusion with the same parameters as above to edit the original images. Then we use the Sea State level classifier to determine the Sea State level of the generated images. The Object Preservation Checker verifies whether objects are preserved, and a generated image is kept if it preserves at least one object. As with the other methods, the mask image is derived from the ground truth bounding boxes provided by the SeaDronesSee dataset [20]. Note that this dataset contains several classes, but in this experiment we only aim to preserve boats, as they have a much larger object size." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b0" ], "table_ref": [], "text": "Table 2 presents the comparison results. The results suggest that the proposed SafeSea outperforms the baselines. As expected, BLD produces better image quality than SD Inpainting, as also shown in [1]. It is clear that SafeSea outperforms the other methods, as it generates more good images with a realistic sea background and maintains object integrity more effectively." }, { "figure_ref": [ "fig_2" ], "heading": "Good images (in %)", "publication_ref": [ "b26", "b0" ], "table_ref": [ "tab_4" ], "text": "SD Inpainting [27]: 26.5% ± 5.94
BLD [1]: 52.94% ± 8.25
SafeSea (proposed): 63.59% ± 2.76
Table 2. Comparison between SD Inpainting, BLD and SafeSea when we manually check the quality of the 100 generated images from each method. The good-image rate is the percentage of images that pass the manual check by humans. A higher good-image rate suggests the method works better for the task.
While the results of Stable Diffusion Inpainting appear realistic, many objects are blended into the background or overly edited. Additionally, there are other inherent issues, such as extraneous objects being inserted into the edited image. It appears as though the objects of interest may be inadvertently duplicated and dispersed into the background, as shown in Figure 4. Although some of the edited backgrounds are of considerable quality, many of the objects of interest are not retained. Moreover, the edited images may not be suitable for use due to the introduction of irrelevant modified objects and unexpected backgrounds.\nWhen employing Blended Latent Diffusion, the issue of objects of interest being uncontrollably spread is mitigated, as demonstrated in Figure 5. Notably, the boat objects are effectively preserved, and the edited background exhibits acceptable quality. Nevertheless, it is worth mentioning that the generated sea background does not possess the same vividness observed in Stable Diffusion Inpainting.
This disparity may stem from the different trained diffusion models used by these approaches. Future work will investigate other image generation methods that have similar preservation properties to Blended Latent Diffusion whilst producing more vivid backgrounds.\nTo further confirm the above result, we present the percentage of images produced by SD Inpainting and BLD that are retained after applying our SafeSea filter. The results in Table 4, summarized below, suggest that SD Inpainting produces much lower quality than BLD, as a significantly larger share of its images is filtered out.
Table 4 (excerpt): Image passing rate (in %) — SD Inpainting [27]: 8.16%; BLD [1]: 71.85%. " }, { "figure_ref": [], "heading": "Experiment setup", "publication_ref": [ "b26", "b0" ], "table_ref": [], "text": "We run YoloV5 with default parameters on the SafeSea dataset images. The Mean Average Precision (mAP) for the four Sea State levels is then calculated." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "Figure 7 presents the results. Notably, there is a noticeable decrease in mAP values from Sea State 1 to Sea State 4, both for an IoU of 0.5 and for the range of 0.5 to 0.95. Given that the Sea State level of the background is not entirely controlled within the transformation process, the distribution of images among the Sea State categories is determined by the Sea State Classifier. The results indicate a significantly higher number of images classified as Sea State 3 (45,066), while Sea State 1 comprises the lowest number (2,087 images). It is important to note that each image originally contains a varying number of objects, both before and after the editing process. Additionally, object size plays a considerable role in the editing process; we have observed that larger objects tend to be better preserved in comparison to smaller ones. This, in turn, has a direct impact on the object detection confidence scores, which subsequently affect the mAP scores. In general, we observe that the pre-trained model tends to struggle more when detecting objects in images classified with higher Sea State levels. However, it is crucial to acknowledge that several other factors also influence the results, such as the number of objects being preserved and the sizes of the original objects. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b0" ], "table_ref": [], "text": "In this work, we introduce SafeSea, a proof-of-concept for generating synthetic maritime images by modifying real images, where the original sea background is transformed to simulate various sea surface conditions, corresponding to different Sea States. Our method capitalizes on the capabilities of Blended Latent Diffusion [1] to manipulate images. Subsequently, these modified images are categorized into distinct Sea State levels, ranging from 1 to 4. Moreover, the original objects within these images are scrutinized to ensure their preservation throughout the editing process. Employing this technique, we have created the SafeSea dataset, which includes maritime images featuring marine objects set against diverse Sea State backgrounds, built upon the 'SeaDronesSee' dataset [34]. Additionally, we have observed that stormy sea backgrounds can impact the performance of the YoloV5 object detection model." }, { "figure_ref": [], "heading": "Limitation and Future Work", "publication_ref": [ "b0" ], "table_ref": [], "text": "The current SafeSea method, as proposed, exhibits certain limitations that necessitate attention in our future work.
Primarily, it lacks control over the generated sea background during the editing process, limiting the diversity of realistic backgrounds. Furthermore, the diffusion model employed by BLD [1] has constraints in generating realistic wave and whitecap patterns. Additionally, smaller objects such as swimmers are ignored in the image quality evaluation. Our forthcoming efforts will concentrate on optimizing the image editing process to elevate the overall quality of generated images. Exploring alternative image editing methods is also on the agenda to enhance the image generation module. Additionally, addressing the control over the insertion of irrelevant objects during editing is crucial, as such unexpected objects can significantly impact object detection models; future work will specifically tackle their introduction and mitigate their potential impact. Simultaneously, improvements are planned for both the Sea State Classifier and the Object Preservation Checker to elevate their performance. Furthermore, we aim to implement an additional filter to exclude generated images that do not align with the desired Sea State level criteria. These enhancements collectively constitute our roadmap for refining the SafeSea method in subsequent stages of development. Future work will also delve into the scalability of SafeSea on larger datasets and in real-world scenarios beyond the SeaDronesSee dataset, including an exploration of how the object detector performs against the unedited SeaDronesSee dataset." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This work has been supported by Sentient Vision Systems. Sentient Vision Systems is one of the leading Australian developers of computer vision and artificial intelligence software solutions for defence and civilian applications." } ]
High-quality training data is essential for enhancing the robustness of object detection models. Within the maritime domain, obtaining a diverse real image dataset is particularly challenging due to the difficulty of capturing sea images with the presence of maritime objects, especially in stormy conditions. These challenges arise due to resource limitations, in addition to the unpredictable appearance of maritime objects. Nevertheless, acquiring data from stormy conditions is essential for training effective maritime detection models, particularly for search and rescue, where real-world conditions can be unpredictable. In this work, we introduce SafeSea, which is a stepping stone towards transforming actual sea images with various Sea State backgrounds while retaining maritime objects. Compared to existing generative methods such as Stable Diffusion Inpainting [27], this approach reduces the time and effort required to create synthetic datasets for training maritime object detection models. The proposed method uses two automated filters to only pass generated images that meet the criteria. In particular, these filters first classify the sea condition according to its Sea State level and then check whether the objects from the input image are still preserved. This method enabled the creation of the SafeSea dataset, offering diverse weather condition backgrounds to supplement the training of maritime models. Lastly, we observed that a maritime object detection model faced challenges in detecting objects in stormy sea backgrounds, emphasizing the impact of weather conditions on detection accuracy. The code and dataset are available at https://github.com/martin-3240/SafeSea.
Figure 1. Diagram summarizing our method (SafeSea). Original maritime images' sea background is edited with a mask and text description using Blended Latent Diffusion [1]. The edited images are then classified into 4 Sea State categories before their marine objects (boats) are cropped according to the ground truth bounding box and checked for preservation.
SafeSea: Synthetic Data Generation for Adverse & Low Probability Maritime Conditions
[ { "figure_caption": "2 :M2← Corresponding mask image from M s 3: Y ← Blended Latent Diffusion(X, M, P ) ▷ Edited image 4: SS E ← Sea State Classifier(Y ) ▷ Find the edited image's Sea State 5: E R ← Y ▷ Resize the edited image 6:", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Failure edited images using SD Inpainting. It suggests that this method can edit the image's sea background, however, the objects are not retained well, and several irrelevant objects are introduced unintentionally.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "method. Ron et al. proposed the Null-text Inversion method [19], em-Sea States descriptions as defined by the ABS ploying prompt-to-prompt text editing for image denoising.", "figure_data": "Sea State DefinitionThe water exhibits a gentle ripple, devoid1of breaking waves, featuring a low swell ofshort to average length occasionally.2Slight waves breaking, with smooth waves on the water surfaceMildly increased waves, leading to some3rock buoys and causing minor disturbancesfor small craft4The sea takes on a furrowed appearance, characterized by moderate waves", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Number of images generated before and after applying object preservation checker. 97,000 images are generated from 300 sourced images and then filter out the ones that do not retain any object. Example of good edited images using Blended Latent Diffusion. The image's sea background is edited while the objects are preserved. Figure 6. Examples of edited images belong to different classified sea state levels. The Sea State level information is provided to these images after the editing process. The sea surface becomes more dynamic with waves and whitecaps as the Sea State Level increases.", "figure_data": "6.2. Applying YoloV5 on the SafeSea datasetWe employed the SafeSea dataset for assessing a pre-trainedYoloV5 object detection model's ability in detecting 'boat'objects across various Sea State levels.", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison between SD Inpainting and BLD when we apply our SafeSea filters. The Image passing rate computes the percentage of images passed by our filters. High passing rate suggests better quality images according to our filters.", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Martin Tran; Jordan Shipard; Hermawan Mulyono; Arnold Wiliem; Clinton Fookes
[ { "authors": "Omri Avrahami; Ohad Fried; Dani Lischinski", "journal": "ACM Transactions on Graphics", "ref_id": "b0", "title": "Blended latent diffusion", "year": "2023" }, { "authors": "Jonathan Binner Becktor; Frederik Emil Thorsson Saabye; Evangelos Schöller; Mogens Boukas; Lazaros Blanke; Nalpantidis", "journal": "Elsevier", "ref_id": "b1", "title": "Bolstering maritime object detection with synthetic data", "year": "2022" }, { "authors": "Jonathan Becktor; William Seto; Aditya Deole; Saptarshi Bandyopadhyay; Niyousha Rahimi; Shahriar Talebi; Mehran Mesbahi; Amir Rahmani", "journal": "", "ref_id": "b2", "title": "Robust vision-based multispacecraft guidance navigation and control using cnn-based pose estimation", "year": "2022" }, { "authors": "", "journal": "Stichting Blender Foundation", "ref_id": "b3", "title": "Blender -a 3D modelling and rendering package", "year": "2018" }, { "authors": "Alexey Dosovitskiy; German Ros; Felipe Codevilla; Antonio Lopez; Vladlen Koltun", "journal": "ICML. PMLR", "ref_id": "b4", "title": "CARLA: An open urban driving simulator", "year": "2017" }, { "authors": "", "journal": "", "ref_id": "b5", "title": "Climate change indicators: Weather and climate", "year": "2023-06" }, { "authors": "", "journal": "Epic Games. Unreal engine", "ref_id": "b6", "title": "", "year": "" }, { "authors": "Rinon Gal; Yuval Alaluf; Yuval Atzmon; Or Patashnik; H Amit; Gal Bermano; Daniel Chechik; Cohen-Or", "journal": "ICLR", "ref_id": "b7", "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion", "year": "2022" }, { "authors": "Antonio-Javier Gallego; Antonio Pertusa; Pablo Gil", "journal": "Remote Sensing", "ref_id": "b8", "title": "Automatic ship classification from optical aerial images with convolutional neural networks", "year": "2018" }, { "authors": "", "journal": "Grand theft auto", "ref_id": "b9", "title": "", "year": "" }, { "authors": "Emily Heaslip", "journal": "", "ref_id": "b10", "title": "Why marine weather forecasts are so inaccurate -and how to improve them", "year": "" }, { "authors": "Amir Hertz; Ron Mokady; Jay Tenenbaum; Kfir Aberman; Yael Pritch; Daniel Cohen-Or", "journal": "ICLR", "ref_id": "b11", "title": "Prompt-to-prompt image editing with cross-attention control", "year": "2023" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "NeurIPS", "ref_id": "b12", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Gao Huang; Zhuang Liu; Laurens Van Der Maaten; Kilian Q Weinberger", "journal": "", "ref_id": "b13", "title": "Densely connected convolutional networks", "year": "2017" }, { "authors": "Jeff Inversion; Martin Faudi", "journal": "Kaggle", "ref_id": "b14", "title": "Airbus ship detection challenge", "year": "2018" }, { "authors": "Arthur Juliani; Vincent-Pierre Berges; Ervin Teng; Andrew Cohen; Jonathan Harper; Chris Elion; Chris Goy; Yuan Gao; Hunter Henry; Marwan Mattar; Danny Lange", "journal": "", "ref_id": "b15", "title": "Unity: A general platform for intelligent agents", "year": "2020" }, { "authors": "Benjamin Kiefer; David Ott; Andreas Zell", "journal": "", "ref_id": "b16", "title": "Leveraging synthetic data in object detection on unmanned aerial vehicles", "year": "2022" }, { "authors": "Xiaomin Lin; Cheng Liu; Allen Pattillo; Miao Yu; Yiannis Aloimonous", "journal": "", "ref_id": "b17", "title": "Seadronesim: Simulation of aerial images for detection of objects above water", "year": "2023" }, { "authors": "Ron Mokady; Amir Hertz; Kfir 
Aberman; Yael Pritch; Daniel Cohen-Or", "journal": "", "ref_id": "b18", "title": "Null-text inversion for editing real images using guided diffusion models", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b19", "title": "ntnu. seadronessee-odv2-rebalanced 2x2-train-val-1x1-test dataset", "year": "2006" }, { "authors": "Luis Patino; Tom Cane; Alain Vallee; James Ferryman", "journal": "CVPRW", "ref_id": "b20", "title": "Pets 2016: Dataset and challenge", "year": "2016" }, { "authors": "Xingchao Peng; Baochen Sun; Karim Ali; Kate Saenko", "journal": "", "ref_id": "b21", "title": "Learning deep object detectors from 3d models", "year": "2015" }, { "authors": "K Dilip; Deepu Prasad; Lily Rajan; Eshan Rachmawati; Chai Rajabally; Quek", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b22", "title": "Video processing from electrooptical sensors for object detection and tracking in a maritime environment: A survey", "year": "" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b23", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Soh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "", "ref_id": "b24", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "Ricardo Ribeiro; Jorge Gonc ¸alo Cruz; Alexandre Matos; Bernardino", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b25", "title": "A data set for airborne maritime surveillance environments", "year": "2019" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b26", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Zhenfeng Shao; Wenjing Wu; Zhongyuan Wang; Wan Du; Chengyuan Li", "journal": "IEEE Transactions on Multimedia", "ref_id": "b27", "title": "Seaships: A large-scale precisely annotated dataset for ship detection", "year": "2018" }, { "authors": "Joonghyuk Shin; Minguk Kang; Jaesik Park", "journal": "", "ref_id": "b28", "title": "Fill-up: Balancing long-tailed data with generative models", "year": "2023" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "", "ref_id": "b29", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Michael Tipton; Elizabeth Mccormack; Graham Elliott; Monica Cisternelli; Arthur Allen; Arden C Turner", "journal": "Journal of Thermal Biology", "ref_id": "b30", "title": "Survival time and search time in water: Past, present and future", "year": "2022" }, { "authors": "Brandon Trabucco; Kyle Doherty; Max Gurinas; Ruslan Salakhutdinov", "journal": "", "ref_id": "b31", "title": "Effective data augmentation with diffusion models", "year": "2023" }, { "authors": "Muhammad Umair; Ahmed Manzoor; Syed Hashmani; Sajjad Hussain; Hasmi Rizvi; Mohd Taib; Mehak Nasir Abdullah; Maqbool Memon", "journal": "Symmetry", "ref_id": "b32", "title": "A novel deep learning model for sea state classification using visual-range sea images", "year": "2022" }, { "authors": "Amadeus Leon; Benjamin Varga; Martin Kiefer; Andreas Messmer; Zell", "journal": "WACV", "ref_id": "b33", "title": "Seadronessee: A maritime benchmark for detecting humans in open water", "year": "2022" }, { "authors": "Mabel M Zhang; Jean Choi; 
Kostas Daniilidis; Michael T Wolf; Christopher Kanan", "journal": "", "ref_id": "b34", "title": "Vais: A dataset for recognizing maritime imagery in the visible and infrared spectrums", "year": "2015" } ]
[]
2023-11-24
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b6", "b16", "b40", "b41", "b45", "b20", "b46", "b3", "b9", "b0", "b23" ], "table_ref": [], "text": "As a common neurodegenerative disorder, epilepsy affects approximately 1% population worldwide [27,35]. A reliable epileptic seizure onset detection system can benefit patients significantly, because such a system equipped with accurate algorithms can promptly alert a seizure onset. Ac- cording to previous studies [17,32,34], they predominantly focused on designing seizure detection algorithms based on electroencephalograms (EEGs). Although EEG can sense subtle brain changes just after a seizure begins, the use of scalp or implantable EEG devices often causes discomfort to patients and restricts them to hospital epilepsy monitoring units (EMUs). Consequently, there is a growing interest in developing an accurate video-based seizure detection system that could alleviate the discomfort associated with EEG helmets and facilitate remote monitoring of epileptic patients in residential settings [41]. However, the development of video-based seizure detection is hindered by several challenges as follows:\n(1) Lack of datasets. Seizure-related video data collection is substantially time-consuming and requires doctoral expertise to annotate the different seizure-related periods. And, surveillance video recordings from the hospital EMUs contain highly sensitive information, they often capture the faces of patients, their families, and healthcare staff. This sensitivity raises privacy concerns and complicates the data collection process. Based on these reasons, there is no public video data intended for seizure study yet, which hinders the development of video-based seizure detection research.\n(2) Lack of effective analytic tools. Fundamentally, video-based seizure detection is an action recognition task, requiring the analysis of complex seizure-related actions in epileptic patients. Prior research indicates that existing methods struggle with achieving short detection latency and are often limited to nocturnal seizures when using raw RGB frames [42,46]. While skeleton-based approaches powered by graph convolutional networks (GCNs) offer several advantages over RGB-based methods [9, 21,47], there also encounter significant limitations in the context of seizure detection. Firstly, pre-trained pose estimation models [2,4,10,26,37] cannot be applied to epileptic patients' action recognition directly due to the complexity of epileptic patient scenarios. These models suffer from unusual clothing, varied camera angles, and intricate behaviors of patients, leading to difficulties in accurately tracking patient skeletons. Secondly, as shown in Fig. 1, there are inherent difficulties in recognizing certain seizure-related actions using traditional skeleton-based approaches. For example, When a patient turns head, the tracking of the nose joint across frames can create ambiguity, making it unclear whether the patient is turning head or simply shifting it. Similarly, eye movements or wrist/ankle twists can only be tracked as static points instead of any changes. Furthermore, finger/toe movement also cannot be tracked by the model with the mainstream 17-keypoint template [1,24].\n(3) Detection latency and false detection rate. Detection latency is a critical metric in a seizure detection system, representing the time gap between the real onset of a seizure and the detected onset. 
A shorter detection latency is highly desirable since it enables the system to alert caregivers before the onset of severe tonic-clonic symptoms, allowing timely interventions to prevent secondary injuries from a seizure attack. The false detection rate (FDR) is also important in determining the quality of a seizure detection system [36,40]. For EEG time-series, the characteristics of EEG signals during the ictal period differ significantly from those in a healthy status, particularly in their frequency content. In video-based detection, however, many normal patient behaviors closely resemble seizure-related actions. This similarity presents a challenge in accurately distinguishing between them, especially when decisions are based on a single video clip.\nIn this study, we propose a novel skeleton-based spatiotemporal vision graph neural network (STViG), designed for efficient, accurate, and timely REal-time Automated Detection of epileptic Seizures from surveillance Videos (READS-V). We address the aforementioned limitations by offering the following contributions: (1) We acquire a dataset of epileptic patient data from hospital EMUs and train a custom patient pose estimation model from manual pose annotations; (2) As shown in Fig. 1, instead of using skeleton-based coordinates, our STViG model utilizes skeleton-based patch embeddings as inputs to address the remaining challenges. We also introduce a partitioning strategy in STViG to learn spatiotemporal inter-partition and intra-partition representations; (3) We generate probabilities instead of binary classifications as outputs in this application, and make use of an accumulative probability strategy to obtain shorter detection latency and lower FDR in real-time scenarios." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b5", "b12", "b41", "b45", "b10", "b11", "b17", "b38", "b4", "b29", "b32", "b44", "b14", "b48" ], "table_ref": [], "text": "Video-based seizure detection. Several efforts have been devoted to video-based seizure detection or jerk detection since 2012 [6,13,19,42], but they neither utilized DL approaches to achieve accurate performance nor supervised patients 24/7 by video monitoring. Since 2021, two recent works [20,46] have made use of DL models to improve video-based seizure detection performance; however, one cannot provide all-day video monitoring with short latency, and the other did not utilize a common RGB camera for video monitoring. Our proposed STViG framework can provide accurate and timely 24/7 video monitoring.\nAction recognition. There are two mainstream strategies for action recognition tasks: 3D-CNNs for RGB-based action recognition and skeleton-based action recognition. Many 3D-CNN architectures [11,12,18,38,39] have been proven to be effective tools for learning spatiotemporal representations of RGB-based video streams, and they are widely applied to video understanding and action recognition tasks. Compared to RGB-based approaches with a large number of trainable parameters, skeleton-based approaches perform better in both accuracy and efficiency because they only focus on the skeleton information, which is more relevant to the actions, and can alleviate contextual nuisances from raw RGB frames. Several GCN-based approaches [5,23,30,33,45] were proposed to achieve great performance in skeleton-based action recognition.
[9] proposed PoseConv3D to combine a 3D-CNN architecture with skeleton-based heatmaps, and proposed its variant RGBPoseConv3D to additionally fuse RGB frames for better performance. However, all the aforementioned action recognition models show weaknesses in analyzing seizure-related actions, which include subtle behavioral changes when seizures begin. In this work, we propose the STViG model to address this challenge.\nVision graph neural network. [14] first proposed vision graph neural networks (ViG) as an efficient and effective alternative backbone for image recognition. Since then, many ViG variants [15,28,43,49] have been proposed to handle various applications. Inspired by ViG, we propose STViG, the first extension of ViG to a skeleton-based action recognition task on videos.
Figure 2. Proposed skeleton-based STViG framework for the READS-V system. Starting from raw RGB frames, we extract skeleton-based patches around each joint by fusing the RGB frame and the pose heatmap. These patches are then transformed into feature vectors through a patch embedding layer. We utilize a partitioning strategy to divide the skeleton-based embeddings into several partitions. The STViG model, comprising both spatial and temporal modeling modules, is designed to learn inter- and intra-partition representations, thereby facilitating a better understanding of seizure-related actions. The bottom three figures express how the partitioning strategy works. After generating probability outputs, an accumulative probability strategy is used to enhance the performance of seizure onset decisions." }, { "figure_ref": [], "heading": "READS-V Framework", "publication_ref": [], "table_ref": [], "text": "Fig. 2 presents an overview of the skeleton-based STViG framework designed for READS-V. The process begins with a video clip composed of consecutive RGB frames: our custom pose estimation model generates joint heatmaps, which are then fused with the raw frames to construct skeleton-based patches. Prior to being processed by the STViG model, these patches are transformed into feature vectors through a trainable patch embedding layer. Additionally, the joints are grouped into several partitions, enhancing our model's ability to recognize seizure-related actions. During the STViG processing phase, we conduct spatial and temporal modeling respectively, and utilize the partition structure to learn inter-partition and intra-partition representations. The STViG model outputs probabilities instead of binary classes for video clips; a decision-making rule that integrates these probabilities through an accumulative function then achieves shorter detection latency and lower FDR. The subsequent subsections provide detailed explanations of each step in this process." }, { "figure_ref": [], "heading": "Custom pose estimation", "publication_ref": [], "table_ref": [], "text": "The STViG framework first relies on a pose estimation algorithm to extract joint heatmaps. However, when we tested several mainstream pre-trained pose estimation models on our patient video data, the results were unsatisfactory: these models failed to accurately track the locations of patients and their joints. Therefore, we had to train our own custom pose estimation model.\nWe manually annotated 580 frames containing various behaviors during both ictal periods and healthy status across all patients, and then used the lightweight-openpose [29] model as the basis for training, owing to its efficient deployment.\nIn this way, we obtained a custom pose estimation model for all patients."
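A high-level sketch of the resulting real-time loop, with placeholder component names (the pose model, patch extractor, alarm hook and window sizes below are illustrative assumptions rather than the released implementation), could look like this:

```python
from collections import deque


def reads_v_loop(clip_stream, pose_model, stvig_model,
                 window_clips=6, decision_threshold=2.0):
    """clip_stream yields short clips (e.g. 30 sampled frames covering 5 s each)."""
    recent = deque(maxlen=window_clips)        # probabilities inside the tau window
    for clip in clip_stream:
        heatmaps = pose_model(clip)            # per-joint heatmaps for every frame
        patches = extract_joint_patches(clip, heatmaps)   # placeholder: joint patches
        prob = float(stvig_model(patches))     # scalar seizure probability in [0, 1]
        recent.append(prob)
        if sum(recent) > decision_threshold:   # accumulative decision rule (AP_t > DT)
            trigger_alarm()                    # placeholder: notify caregivers
            recent.clear()
```

With the settings reported later in the paper (tau = 3 s, detection rate r = 0.5 s, DT = 2), roughly six clip probabilities fall inside the window, which is what the defaults above mimic.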
}, { "figure_ref": [], "heading": "Patch extraction and patch embedding", "publication_ref": [], "table_ref": [], "text": "In custom pose estimation algorithm, each joint is associated with a heatmap generated by a 2D Gaussian map [3]. We make use of this heatmap as filters to fuse the raw RGB frames, then extract a small patch around each joint from the fused image. However, determining the optimal size of the Gaussian maps to capture the regions of interest is a challenge during the training phase of the pose estimation model. To address this, we manually extract the patch p it of i-th (i = 1, 2..., N ) joint at frame t by generating a Gaussian map with adjustable σ for each joint to filter the raw RGB frame I t :\np it = exp(- (m -x it ) 2 + (n -y it ) 2 2σ 2 ) ⊗ I t (m, n) (1)\nwhere |m -\nx i | ≤ H/2, |n -y i | ≤ W/2. (H, W ), (m, n)\nand (x it , y it ) respectively stand for the size of patches, the coordinate of image pixels, and the location of i-th joint at frame t. The σ controls the shape of 2D Gaussian kernels.\nAs a result, we are able to extract a sequence of skeletonbased patches P ∈ R N ×T ×H×W ×3 from each video clip, then a patch embedding layer transforms patches into 1D feature vectors X ∈ R N ×T ×C ." }, { "figure_ref": [], "heading": "Graph construction with partition strategy", "publication_ref": [], "table_ref": [], "text": "Given a sequence of skeleton-based patch embeddings, we construct an undirected skeleton-based spatiotemporal vision graph as G = (V, E) on this skeleton sequence across N joints and T stacked frames. Each node in the node set\nV = {v it |i = 1, 2, ..., N, t = 1, 2..., T } is associated with feature vectors x it ∈ R C .\nIn this study, we proposed a partitioning strategy to construct graph edges between nodes. According to Openpose joint template, which contains 18 joints, we focus on 15 joints by excluding three (l ear, r ear, neck) as redundant. These 15 joints are divided into 5 partitions, each comprising three joints (head: nose, left/right eye; right arm: right wrist/elbow/shoulder; right leg: right hip/knee/ankle; left arm: left wrist/elbow/shoulder; left leg: left hip/knee/ankle). In our STViG, we conduct both spatial and temporal modeling for the graph. Spatial modeling involves constructing two subsets of edges based on different partition strategies: inter-partition edge set, denoted as E inter = {v it v jt |i ∈ Part p1 , j ∈ Part p2 , p1 ̸ = p2} and intra-partition edge set, denoted as E intra = {v it v jt |(i, j) ∈ Part p }, where p, p1, p2 = 1, ..., 5. As shown in the bottom left two schematic figures in Fig. 2, each node is connected to the nodes from all other partitions in the inter-partition step, and each node is only connected to the other nodes within the same partition in the intra-partition step. The bottom-left two figures of Fig. 2 visualize the graph construction with partitioning strategy.\nFor temporal modeling, we consider neighbors of each node within a K S × K T in both temporal and spatial dimensions. The temporal edge set is denoted as\nE T = v it v j(t+1) |d(v it , v jt ) ≤ K S /2, d(v it , v i(t+1) ) ≤ K T /2\n, where d is the distance between two nodes. To facilitate the partitioning strategy, we arrange the joints with their partitions in a 1D sequence. Thus, if K S = 3, as shown in the bottom-right figure of Fig. 2, the middle joint of a partition aggregates only from intra-partition nodes over K T frames, while the border joint of a partition aggregates from inter-partition nodes over K T frames." 
}, { "figure_ref": [], "heading": "Partitioning spatiotemporal graph modeling", "publication_ref": [ "b1", "b15", "b15", "b44" ], "table_ref": [], "text": "Spatial Modeling. The proposed skeleton-based STViG aims to process the input features X ∈ R N ×T ×C into the probabilities of seizure onset. Starting from input feature vectors, we first conduct spatial modeling to learn the spatial representations between nodes at each frame by interpartition and intra-partition graph convolution operation as\nG ′ = F intra (F inter (G, W inter ), W intra )\n, where F is graph convolution operation, specifically we adopt max-relative graph convolution [14,22] to aggregate and update the nodes with fully-connected (FC) layer and activation function:\nx ′ it = σ(Linear([x it , max(x jt -x it |j ∈ N t (x it ))]))\n(2) where N t (x it ) is neighbor nodes of x it at frame t, Linear is a FC layer with learnable weights and σ is activation function, e.g. ReLU. The only difference between F inter and F intra depends on N t (x it ) is from E inter or E intra . Given input feature vectors at frame t, denoted as X t ∈ R N ×C , the graph convolution processing as Eq. (2) can be denoted as X ′ t = M RGC(X t ), then the partitioning spatial graph processing module in STViG would be:\nX ′′ t = σ(M RGC(X t W in inter )W out inter + X t ) X ′ t = σ(M RGC(X ′′ t W in intra )W out intra + X t )(3)\nWe add original input X t as a residual component to both two partitioning spatial modeling, which is intended to avoid smoothing and keep diversity of features during the model training [7,16]. We keep output dimension of every learnable layer is same as input dimension, so that we achieve output feature X ′ ∈ R N ×T ×C when we stack T frames X ′ t ∈ R N ×C after spatial modeling. Temporal Modeling. Given the dimension of output features after spatial modeling is N × T × C, and we order the partitions (containing joints) in a 1D sequence, so that we can naturally consider the input for the temporal modeling as a 3D volume, denoted as X ∈ R 1×N ×T ×C . According to the construction of graph in temporal modeling G = (V, E T ), we can adopt convolution operation to aggregate and update nodes:\nx ′ i = 1 j K S j K T j w ij x i(4)\nInspired by [9, 16,45], we utilize residual 3D-CNN architecture, denoted as Conv, to conduct temporal graph modeling as Eq. ( 4) for its simplicity:\nX ′ = Conv(1)(Conv(1, K S , K T )(Conv(1)(X))) + X(5)\nwhere (1) and (1, K S , K T ) are kernel size for the 3D convolution operation." }, { "figure_ref": [ "fig_1" ], "heading": "STViG network architecture", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Fig. 3 is the detailed architecture of proposed STViG network including the size of features at each layer. We keep feature size in spatial modeling, and implement downsampling at the last layer of each residual temporal block. After four residual STViG stages, we make use of global average pooling to generate a 1D feature vector, then one FC 1.\nIn spatial modeling, E = 12 and E = 2 represent the number of edges/neighbors in inter-partition and intrapartition graph convolution, respectively. The spatial modeling phase always keeps the output channels unchanged. In temporal modeling, we set K S = 3 for simultaneously implementation of inter-partition and intra-partition temporal convolution network (TCN) operation, and set K T = 3 for considering only one frame before and after the current frame. 
The residual 3D-CNN block is used to expand the feature dimension and downsample the temporal frames.\nPositional embedding. In previous studies, either skeleton-based ST-GCNs, which take coordinate triplets as input, or RGB-/heatmap-based methods made use of specific joint coordinates or whole-body context with global positional information. In this work, however, the input patches arranged in a 1D sequence do not provide any positional information. Thus, we add a positional embedding to each node feature vector. Here we propose 3 different ways to inject positional information: (1) adding a learnable embedding, $x_{it} = x_{it} + e_{it}$; (2) concatenating the joint coordinates, $x_{it} = [x_{it}, (x_{it}, y_{it})]$; (3) adding an embedding of the joint coordinates, $x_{it} = x_{it} + \mathrm{Stem}(x_{it}, y_{it})$.\nDynamic partitions. Since the partitions are arranged in a 1D sequence, each partition is only connected to 1 or 2 adjacent partitions and cannot consider all other partitions, as spatial modeling does. Thus, we obtain dynamic partitions by conducting a partition shuffle before each temporal graph update. It is noted that we only shuffle the partitions; the order of joints within a partition is unchanged." }, { "figure_ref": [], "heading": "Seizure onset decision-making", "publication_ref": [], "table_ref": [], "text": "Inspired by [44], after the proposed skeleton-based STViG model generates an output probability $P_t$ for the video clip at time $t$, we accumulate the probabilities detected over a preceding period $\tau$, with detection rate $r$, into the accumulative probability $AP_t = \sum_{i=t-\tau}^{t} P_i$, and then make a seizure onset decision at time $t_{\mathrm{onset}}$ when the accumulative probability reaches a decision threshold $DT$, i.e., when $AP_t > DT$. We measure the distance between $t_{\mathrm{onset}}$ and the EEG onset as the latency of EEG onset $L_{EO}$, and the distance between $t_{\mathrm{onset}}$ and the clinical onset as the latency of clinical onset $L_{CO}$." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "Data acquisition. We acquire surveillance video data of epileptic patients from a hospital; this dataset includes 14 epileptic patients with 33 tonic-clonic seizures. Patients in the hospital EMUs are monitored 24/7 by a Bosch NDP-4502-Z12 camera with 1920 × 1080 resolution. We extract successive frames as video clips for real-time processing. Each video clip spans a duration of 5 s at the original rate of 30 FPS. To enhance the efficiency of our real-time analysis, we apply fixed-stride sampling to reduce the frame count from the original 150 frames to 30 frames.\nData labeling. How to label the video clips is a challenge in video-based seizure detection. Typically, EEG-based seizure detection is treated as a binary classification task, distinguishing between interictal (pre-seizure) and ictal (during-seizure) periods. Medical experts annotate the moment of transition from the interictal to the ictal period in EEG recordings as the EEG onset. However, in video analysis, the transition from normal to seizure behavior is not as immediate. For several seconds (ranging from 1 to 30 seconds) after the EEG onset, a patient's behavior may appear only slightly abnormal or nearly normal. We refer to this phase as the transition period. The moment when patients start exhibiting clearly abnormal actions, such as jerking, convulsion, or stiffening, is defined as the clinical onset. It is straightforward to label video clips as healthy status (0) before the EEG onset and as seizure status (1) after the clinical onset.
These labels (0 and 1) can be interpreted as the probability or risk of seizure onset. In terms of the transition period, it is hard to assign a precise probability to each clip. Clinically, it's observed in the transition period that patients are more likely to appear healthy closer to the EEG onset and more likely to exhibit abnormal behaviors as the time approaches the clinical onset. To address this, we use an exponential function to assign increasing probabilities to video clips during the transition period, ranging from 0 to 1. Other options for probability-increasing functions are explored in the appendix. It is noted that the probabilities of video clips depend on the end frame of video clips lying in which period. Consequently, we define this task as a probability regression task rather than a traditional classification task. Data splitting. For each seizure event, we categorize the video segments based on their timing relative to the EEG onset and clinical onset, which are annotated by medical experts. We define the period < 2min after clinical onset as the ictal period, and the period < 30min before EEG onset as the interictal period. Given that patients' behaviors often remain unchanged for extended periods, we strive to include a variety of actions in the extracted clips. The transition period usually ranges from 1s ∼ 20s after EEG onset for each seizure event. In our dataset, the average transition period is approximately 18.2s across 33 seizures. To address the imbalance in duration between interictal and other periods, we extract clips from interictal period without overlapping, and from both transition and ictal periods with 4s overlap- " }, { "figure_ref": [ "fig_3" ], "heading": "Seizure-related action recognition performance", "publication_ref": [ "b10", "b11" ], "table_ref": [ "tab_2" ], "text": "Our STViG-Base and STViG-Light obtain 5.9% and 6.1% errors across all patients. We reproduce several state-ofthe-art action recognition models, conducting a grid search to optimize the learning rate and weight decay for each model. We select one CNN-based approach, RGBPoseC-onv3d, and 5 state-of-the-art skeleton-based approaches for the comparison. According to Table 3, two STViG models both outperform previous state-of-the-arts. We also visualize model comparison of performance vs. efficiency in Fig. 5, which highlights the advantages of STViG models. Notably, STViG-Light demonstrates strong performance in both accuracy and efficiency. The RGBPoseConv3D model is a leading RGB-based action recognition model, it was inspired by several 3D CNN architectures [11,12,38] and fused RGB and skeleton as input features simultaneously. Our results reveal it performs worse than skeleton-based approaches even if it contains a large number of parameters, in our opinion, its reduced effectiveness is due to the complexities of the EMU environment, which adversely affect RGB-based methods. As for skeleton-based approaches, most perform around 10% error with lower FLOPs, and CTRGCN performs the best among all skeleton-based approaches. These results underscore the superiority of our skeleton-based STViG models over both RGB-based and other skeleton-based approaches in terms of accuracy and efficiency, making them highly suitable for real-time deployment." 
}, { "figure_ref": [], "heading": "Ablation study", "publication_ref": [], "table_ref": [ "tab_4", "tab_4" ], "text": "We conduct ablation study with STViG-Base to evaluate the effects of positional embedding way and dynamic partition strategy on the model. In Table 4, we can see that Stem positional embedding way with dynamic partition achieves the best performance.\nEffect of dynamic partition. Dynamic partition strategy shows better performance among all four types of positional embedding, which indicates that dynamic partition strategy can effectively enhance the model to learn relationships between different partitions, thereby understanding the seizure-related actions.\nEffect of positional embedding. According to the results in Table 4, positional information can bring better per- " }, { "figure_ref": [ "fig_5", "fig_5", "fig_5", "fig_5", "fig_5" ], "heading": "Model interpretability", "publication_ref": [], "table_ref": [], "text": "We provide interpretability with occlusion maps [48] to demonstrate the capabilities of our STViG model to learn representations of seizure-related actions between different partitions. We successively take 4 video clips of a patient from healthy status to seizure coming, as shown in Fig. 6, each image represents the end frame of corresponding clips, higher value (redder) partition means more salient to seizure-related actions, and lower value (bluer) partition means less relevant. We can see that, initially, when the patient is lying on the bed, all partitions are shown as less salient in Fig. 6a. As the seizure begins, the patient starts to exhibit abnormal movements, such as turning the head and moving the right arm. Correspondingly, these partitions appear redder in Fig. 6b and Fig. 6c, indicating their increased relevance to the seizure activity. Several seconds later, as the patient begins jerking, most partitions are shown as more salient in Fig. 6d. This form of model interpretability provides a visual tool for doctors to efficiently analyze the progression of a seizure attack and understand the types of seizure-related actions, thereby aiding in better diagnosis and treatment planning." }, { "figure_ref": [], "heading": "Make seizure onset decision", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "After obtaining probabilities from consecutive video clips by STViG-Base model, we set up τ = 3 s, r = 0.5 s as mentioned in Section 3.6 to calculate L EO , L CO and FDR across all seizures. We also evaluate the effect of accumulative strategy on the detected latency, as shown in Table 5. We can see that performance without accumulative strategy is satisfactory based on accurate STViG model, but accumulative strategy with DT = 2 can obtain much better performance where 5.1 s L EO , -13.1 s L CO , which means seizure can be detected only 5.1 s after seizure begins inside brain, and can alarm the seizure coming 13.1 s before patients tend to exhibit serious convulsions. And a FDR of 0 is achieved by all conditions." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a STViG model based on skeletonbased patch embeddings to recognize the epileptic patients' actions effectively, experiments indicate proposed STViG model outperforms state-of-the-art action recognition approaches. It provides an accurate, efficient, and timely solution for the READS-V system to fundamentally help epileptic patients. 
In the future, we will collect more patient video data with various types of seizures to train a more generalized model for wider epileptic patients." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "https://github.com/xuyankun/" }, { "figure_ref": [], "heading": "READS-V: Real-time Automated Detection of Epileptic Seizures from Surveillance Videos via Skeleton-based Spatiotemporal ViG", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Visualization", "publication_ref": [], "table_ref": [], "text": "First, we visualize the pose estimation performance comparison between our custom model and the public pretrained model, as shown in Fig. 7, thereby demonstrating the limitation of public pre-trained model for tracking patient skeletons.\nSecond, we present the visualization of predictive probabilities of video clips obtained by STViG model from two different seizures, which is shown as . It is important to note that these frames are not successive, we are intended to show the clips with different predictive probabilities in a seizure. We can see that our model can detect subtle motion changes during the seizure beginning.\nThese two visualization results built in .gif format are included in the supplementary package." }, { "figure_ref": [], "heading": "B. Chosen of increasing probability function", "publication_ref": [], "table_ref": [], "text": "In this study, we label the transition period from 0 to 1 in exponential function due to clinical phenomenon. We also choose two alternatives -linear and sigmoid functions to make a comparison with exponential function in STViG-Base model. We can see from " }, { "figure_ref": [], "heading": "E. Implementation details", "publication_ref": [], "table_ref": [], "text": "First, the patch size (32 × 32) and σ (0.3) value are experimentally determined based on our application scenarios, e.g. the resolution of raw frames and the distance between the camera and patient. The principle is to make the patch contain sufficient joint information. Second, we reproduced the state-of-the-art action recognition models based on [9], where model backbones are provided. We save all 18 keypoints for previous GCN-based models' training while STViG only requires 15 keypoints. In terms of RGBPoseConv3D, we extracted the required input format (skeleton heatmaps + RGB frames) for training. We have conducted a grid search for optimizing weight decay and learning rate in reproduced model training, and saved the best one of each model for comparisons." }, { "figure_ref": [], "heading": "F. Data usage and availability", "publication_ref": [], "table_ref": [], "text": "The data acquired from the hospital is approved by the Institutional Review Board between us and hospital. We will release the patient video dataset for open access when the sensitive information removal and confidential authorization are finished. Additionally, for the following researcher's convenience, associated with raw videos, we will also release our trained pose estimation model based on [29] for tracking patient poses in an openpose-based keypoint template. " } ]
An accurate and efficient epileptic seizure onset detection system can significantly benefit patients. Traditional diagnostic methods, primarily relying on electroencephalograms (EEGs), often result in cumbersome and non-portable solutions, making continuous patient monitoring challenging. A video-based seizure detection system is expected to free patients from the constraints of scalp or implanted EEG devices and enable remote monitoring in residential settings. Previous video-based methods neither enable all-day monitoring nor provide short detection latency, due to insufficient resources and ineffective patient action recognition techniques. Additionally, skeleton-based action recognition approaches still have limitations in identifying subtle seizure-related actions. To address these challenges, we propose a novel skeleton-based spatiotemporal vision graph neural network (STViG) for efficient, accurate, and timely REal-time Automated Detection of epileptic Seizures from surveillance Videos (READS-V). Our experimental results indicate that STViG outperforms previous state-of-the-art action recognition models on our collected patient video data, with higher accuracy (5.9% error) and lower FLOPs (0.4G). Furthermore, by integrating a decision-making rule that combines output probabilities and an accumulative function, our READS-V system achieves a 5.1 s EEG onset detection latency, a 13.1 s advance in clinical onset detection, and a zero false detection rate. The code is available at:
READS-V: Real-time Automated Detection of Epileptic Seizures from Surveillance Videos via Skeleton-based Spatiotemporal ViG
[ { "figure_caption": "Figure 1 .1Figure 1. Motivation of skeleton-based patch embedding for seizure-related action recognition. The left shows real seizure-related actions; The middle shows challenges of traditional skeleton-based approaches; The right shows our strategy to address challenges.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. STViG architecture. Begin with patches extracted from the target video clip, input and positional embedding layers transform patches into feature vectors. STViG consists of four residual stages with expansion and downsampling operations. Each stage contains several proposed spatiotemporal learning layers. The output probability with dimension 1 is generated from a Sigmoid function. N, T, C denotes the number of joints, frames and channels, and H, W stand for the height and width of extracted patches.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Data labeling of video-based seizure detection task. For each seizure, a video recording is categorized into 3 different periods: interictal (label:0), ictal (label:1), and transition (label:0 to 1 in exponential function according to clinical phenomenon).", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Comparison of performance vs. efficiency.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "(a) Healthy status. (b) Head turn (seizure begins).", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Occlusion maps for model interpretability. Each image represents the end frame of the video clip. Redder partitions means more salient to the seizure-related actions.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Details", "figure_data": "Layer nameOutput size TSTViG-Light/Base(Patches)30(1 × 15 × T × 32 × 32 × 3)Stem3032 × 32, 12/24(Features)30(1 × 15 × T × C)E = 12, 12/24 Stage130    E = 2, 12/24 1 × 1 2 , 12/24 1 × 3 2 , 12/24    × 21 × 1 2 , 24/48E = 12, 24/48 Stage215    E = 2, 24/48 1 × 1 2 , 24/48 1 × 3 2 , 24/48    × 21 × 1 2 , 48/96Stage38    E = 12, 48/96 E = 2, 48/96 1 × 1     × 6Stage44      × 2Head1GAP, 1-d FC, SigmoidFLOPS0.44G/1.76G", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Summary of dataset used in this work.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison with previous state-of-the-arts. We reproduce state-of-the-art approaches to make comparisons. 
Error is averaged RMSE (%) for each video clip.", "figure_data": "MethodInputBackbone Error FLOPs Param.DGSTGCN [8]SkeletonGCN16.70.50G1.4MRGBPoseConv3D [9] RGB+Skeleton 3D-CNN13.812.63G3.2MSTGCN [45]SkeletonGCN14.11.22G3.1MMSG3D [25]SkeletonGCN11.31.82G2.7MAAGCN [31]SkeletonGCN9.51.38G3.7MCTRGCN [5]SkeletonGCN8.30.62G1.4MSTViG-Light (Ours)PatchViG6.10.44G1.4MSTViG-Base (Ours)PatchViG5.91.76G5.4M", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Table 2 summarize the dataset we used in this study.", "figure_data": "4.2. Experimental settingWe manually set a size of 32 × 32 (H × W ) for extractedpatches based on the 1920× 1080 raw frame resolution, andσ of gaussian kernel is 0.3. All generated outputs are con-nected with a Sigmoid function to map the outputs as prob-abilities from 0 to 1, the activation function used anywhereelse is chosen as ReLU, and mean square error loss functionis used to train the model. During the training phase, epochand batch size are respectively set to 200 and 32, we chooseAdam optimizer with 1e-4 learning rate with 0.1 gammadecay every 40 epochs and 1e-6 weight decay. We save thebest model with the lowest error on the validation set.", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Ablation study. We utilize STViG-Base model to evaluate the effects of positional embedding way and dynamic partition. The shown values are averaged error (%) for each clip.", "figure_data": "PositionalDynamic Partitionembeddingw/ow.None16.9 9.1Cat14.8 8.9Learn13.4 8.6Stem8.45.9DTFDRLatencyAccumulative w/o w.0.50L EO L CO15.3s -2.9s10.9s -7.3s0.20L EO L CO11.4s -6.8s5.1s -13.1s", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Seizure detection performance. We evaluate the effect of different DTs and the proposed accumulative strategy for detected latency. DT : decision threshold, LEO: latency of EEG onset LCO: latency of clinical onset, FDR: false detection rate.", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" } ]
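As a concrete illustration of the skeleton-guided patch embedding summarized above (32 × 32 patches weighted by a Gaussian kernel centered at each joint, Eq. (1) of this paper), the following sketch shows one way the extraction could be implemented; the function name and the assumption that σ is expressed in normalized patch coordinates are ours, not the authors'.

```python
import numpy as np

def extract_joint_patch(frame, joint_xy, patch_hw=(32, 32), sigma=0.3):
    """Crop an H x W patch around one skeleton joint and weight it with a Gaussian
    kernel, as in p_it = exp(-((m - x_it)^2 + (n - y_it)^2) / (2 * sigma^2)) * I_t(m, n).
    `sigma` is assumed to be given in normalized patch coordinates (the paper reports 0.3);
    boundary handling is omitted for brevity."""
    H, W = patch_hw
    x, y = int(round(joint_xy[0])), int(round(joint_xy[1]))
    patch = frame[y - H // 2: y + H // 2, x - W // 2: x + W // 2].astype(np.float32)

    # Gaussian weight map centered on the joint, coordinates normalized to [-0.5, 0.5]
    m = (np.arange(H) - H / 2 + 0.5) / H
    n = (np.arange(W) - W / 2 + 0.5) / W
    mm, nn = np.meshgrid(m, n, indexing="ij")
    weight = np.exp(-(mm ** 2 + nn ** 2) / (2 * sigma ** 2))

    # Element-wise weighting: pixels near the joint dominate the patch embedding
    return patch * weight[..., None]
```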
Yankun Xu; Jie Yang; Wenjie Ming; Shuang Wang; Mohamad Sawan; Cenbrain
[ { "authors": "Mykhaylo Andriluka; Leonid Pishchulin; Peter Gehler; Bernt Schiele", "journal": "", "ref_id": "b0", "title": "2d human pose estimation: New benchmark and state of the art analysis", "year": "2014" }, { "authors": "Valentin Bazarevsky; Ivan Grishchenko; Karthik Raveendran; Tyler Zhu; Fan Zhang; Matthias Grundmann", "journal": "", "ref_id": "b1", "title": "Blazepose: On-device real-time body pose tracking", "year": "2020" }, { "authors": "Zhe Cao; Tomas Simon; Shih-En Wei; Yaser Sheikh", "journal": "", "ref_id": "b2", "title": "Realtime multi-person 2d pose estimation using part affinity fields", "year": "2017" }, { "authors": "Z Cao; G Hidalgo; T Martinez; S Simon; Y A Wei; Sheikh", "journal": "IEEE TPAMI", "ref_id": "b3", "title": "Openpose: Realtime multi-person 2d pose estimation using part affinity fields", "year": "2019" }, { "authors": "Yuxin Chen; Ziqi Zhang; Chunfeng Yuan; Bing Li; Ying Deng; Weiming Hu", "journal": "", "ref_id": "b4", "title": "Channel-wise topology refinement graph convolution for skeleton-based action recognition", "year": "2021" }, { "authors": "Kris Cuppens; Chih-Wei Chen; Kevin ; Bing-Yung Wong; Anouk Van De; Lieven Vel; Berten Lagae; Tinne Ceulemans; Sabine Tuytelaars; Bart Van Huffel; Hamid Vanrumste; Aghajan", "journal": "", "ref_id": "b5", "title": "Using spatio-temporal interest points (stip) for myoclonic jerk detection in nocturnal video", "year": "2012" }, { "authors": "Lingwei Dang; Yongwei Nie; Chengjiang Long; Qing Zhang; Guiqing Li", "journal": "", "ref_id": "b6", "title": "Msr-gcn: Multi-scale residual graph convolution networks for human motion prediction", "year": "2021" }, { "authors": "Jiaqi Haodong Duan; Kai Wang; Dahua Chen; Lin", "journal": "", "ref_id": "b7", "title": "Dg-stgcn: dynamic spatial-temporal modeling for skeletonbased action recognition", "year": "2022" }, { "authors": "Haodong Duan; Yue Zhao; Kai Chen; Dahua Lin; Bo Dai", "journal": "", "ref_id": "b8", "title": "Revisiting skeleton-based action recognition", "year": "2022" }, { "authors": "Jiefeng Hao-Shu Fang; Hongyang Li; Chao Tang; Haoyi Xu; Yuliang Zhu; Yong-Lu Xiu; Cewu Li; Lu", "journal": "IEEE TPAMI", "ref_id": "b9", "title": "Alphapose: Whole-body regional multi-person pose estimation and tracking in real-time", "year": "2022" }, { "authors": "Christoph Feichtenhofer", "journal": "", "ref_id": "b10", "title": "X3d: Expanding architectures for efficient video recognition", "year": "2020" }, { "authors": "Christoph Feichtenhofer; Haoqi Fan; Jitendra Malik; Kaiming He", "journal": "", "ref_id": "b11", "title": "Slowfast networks for video recognition", "year": "2019" }, { "authors": "Roland D Evelien E Geertsema; Therese Thijs; Ben Gutter; Johan B Vledder; Arends; Gerhard H Frans S Leijten; Visser; Stiliyan N Kalitzin", "journal": "Epilepsia", "ref_id": "b12", "title": "Automated video-based detection of nocturnal convulsive seizures in a residential care setting", "year": "2018" }, { "authors": "Kai Han; Yunhe Wang; Jianyuan Guo; Yehui Tang; Enhua Wu", "journal": "NeurIPS", "ref_id": "b13", "title": "Vision gnn: An image is worth graph of nodes", "year": "2022" }, { "authors": "Yan Han; Peihao Wang; Souvik Kundu; Ying Ding; Zhangyang Wang", "journal": "", "ref_id": "b14", "title": "Vision hgnn: An image is more than a graph of nodes", "year": "2023" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b15", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Ramy Hussein; Hamid 
Palangi; Rabab Ward; Jane Wang", "journal": "", "ref_id": "b16", "title": "Epileptic seizure detection: A deep learning approach", "year": "2018" }, { "authors": "Shuiwang Ji; Wei Xu; Ming Yang; Kai Yu", "journal": "IEEE TPAMI", "ref_id": "b17", "title": "3d convolutional neural networks for human action recognition", "year": "2012" }, { "authors": "Stiliyan Kalitzin; George Petkov; Demetrios Velis; Ben Vledder; Fernando Lopes Da Silva", "journal": "IEEE TBME", "ref_id": "b18", "title": "Automatic segmentation of episodes containing epileptic clonic seizures in video sequences", "year": "2012" }, { "authors": "Tamás Karácsony; Anna ; Mira Loesch-Biffar; Christian Vollmar; Jan Rémi; Soheyl Noachtar; João Paulo; Silva Cunha", "journal": "Scientific Reports", "ref_id": "b19", "title": "Novel 3d video action recognition deep learning approach for near real time epileptic seizure classification", "year": "2022" }, { "authors": "Yu Kong; Yun Fu", "journal": "International Journal of Computer Vision", "ref_id": "b20", "title": "Human action recognition and prediction: A survey", "year": "2022" }, { "authors": "Guohao Li; Matthias Muller; Ali Thabet; Bernard Ghanem", "journal": "", "ref_id": "b21", "title": "Deepgcns: Can gcns go as deep as cnns?", "year": "2019" }, { "authors": "Maosen Li; Siheng Chen; Xu Chen; Ya Zhang; Yanfeng Wang; Qi Tian", "journal": "", "ref_id": "b22", "title": "Actional-structural graph convolutional networks for skeleton-based action recognition", "year": "2019" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b23", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Ziyu Liu; Hongwen Zhang; Zhenghao Chen; Zhiyong Wang; Wanli Ouyang", "journal": "", "ref_id": "b24", "title": "Disentangling and unifying graph convolutions for skeleton-based action recognition", "year": "2020" }, { "authors": "Zhenguang Liu; Haoming Chen; Runyang Feng; Shuang Wu; Shouling Ji; Bailin Yang; Xun Wang", "journal": "", "ref_id": "b25", "title": "Deep dual consecutive network for human pose estimation", "year": "2021" }, { "authors": "Emilio Solomon L Moshé; Philippe Perucca; Torbjörn Ryvlin; Tomson", "journal": "The Lancet", "ref_id": "b26", "title": "Epilepsy: new advances", "year": "2015" }, { "authors": "Mustafa Munir; William Avery; Radu Marculescu", "journal": "", "ref_id": "b27", "title": "Mobilevig: Graph-based sparse attention for mobile vision applications", "year": "2023" }, { "authors": "Daniil Osokin", "journal": "", "ref_id": "b28", "title": "Real-time 2d multi-person pose estimation on cpu: Lightweight openpose", "year": "2018" }, { "authors": "Lei Shi; Yifan Zhang; Jian Cheng; Hanqing Lu", "journal": "", "ref_id": "b29", "title": "Twostream adaptive graph convolutional networks for skeletonbased action recognition", "year": "2019" }, { "authors": "Lei Shi; Yifan Zhang; Jian Cheng; Hanqing Lu", "journal": "IEEE TIP", "ref_id": "b30", "title": "Skeleton-based action recognition with multi-stream adaptive graph convolutional networks", "year": "2020" }, { "authors": "H Ali; John V Shoeb; Guttag", "journal": "", "ref_id": "b31", "title": "Application of machine learning to epileptic seizure detection", "year": "2010" }, { "authors": "Yi-Fan Song; Zhang Zhang; Caifeng Shan; Liang Wang", "journal": "", "ref_id": "b32", "title": "Stronger, faster and more explainable: A graph convolutional baseline for skeleton-based action 
recognition", "year": "2020" }, { "authors": "Siyi Tang; Jared A Dunnmon; Khaled Saab; Xuan Zhang; Qianying Huang; Florian Dubost; Christopher Daniel L Rubin; Lee-Messer", "journal": "", "ref_id": "b33", "title": "Self-supervised graph neural networks for improved electroencephalographic seizure analysis", "year": "2021" }, { "authors": "Rainer Roland D Thijs; Surges; J O' Terence; Josemir W Brien; Sander", "journal": "The Lancet", "ref_id": "b34", "title": "Epilepsy in adults", "year": "2019" }, { "authors": "Pierre Thodoroff; Joelle Pineau; Andrew Lim", "journal": "PMLR", "ref_id": "b35", "title": "Learning robust features using deep learning for automatic seizure detection", "year": "2016" }, { "authors": "Alexander Toshev; Christian Szegedy", "journal": "", "ref_id": "b36", "title": "Deeppose: Human pose estimation via deep neural networks", "year": "2014" }, { "authors": "Du Tran; Lubomir Bourdev; Rob Fergus; Lorenzo Torresani; Manohar Paluri", "journal": "", "ref_id": "b37", "title": "Learning spatiotemporal features with 3d convolutional networks", "year": "2015" }, { "authors": "Du Tran; Heng Wang; Lorenzo Torresani; Jamie Ray; Yann Lecun; Manohar Paluri", "journal": "", "ref_id": "b38", "title": "A closer look at spatiotemporal convolutions for action recognition", "year": "2018" }, { "authors": "Markos G Alexandros T Tzallas; Dimitrios G Tsipouras; Evaggelos C Tsalikakis; Loukas Karvounis; Spiros Astrakas; Margaret Konitsiotis; Tzaphlidou", "journal": "", "ref_id": "b39", "title": "Automated epileptic seizure detection methods: a review study", "year": "2012" }, { "authors": "Marije Van Der Lende; M E Fieke; Gerhard H Cox; Josemir W Visser; Roland D Sander; Thijs", "journal": "Epilepsia", "ref_id": "b40", "title": "Value of video monitoring for nocturnal seizure detection in a residential setting", "year": "2016" }, { "authors": "George Anouk Van Westrhenen; Petkov; Richard Stiliyan N Kalitzin; Roland D Hc Lazeron; Thijs", "journal": "Epilepsia", "ref_id": "b41", "title": "Automated video-based detection of nocturnal motor seizures in children", "year": "2020" }, { "authors": "Jiafu Wu; Jian Li; Jiangning Zhang; Boshen Zhang; Mingmin Chi; Yabiao Wang; Chengjie Wang", "journal": "", "ref_id": "b42", "title": "Pvg: Progressive vision graph for vision recognition", "year": "2023" }, { "authors": "Yankun Xu; Jie Yang; Wenjie Ming; Shuang Wang; Mohamad Sawan", "journal": "Expert Systems with Applications", "ref_id": "b43", "title": "Shorter latency of real-time epileptic seizure detection via probabilistic prediction", "year": "2024" }, { "authors": "Sijie Yan; Yuanjun Xiong; Dahua Lin", "journal": "", "ref_id": "b44", "title": "Spatial temporal graph convolutional networks for skeleton-based action recognition", "year": "2018" }, { "authors": "Yonghua Yang; Rani A Sarkis; Rima El Atrache; Tobias Loddenkemper; Christian Meisel", "journal": "IEEE JBHI", "ref_id": "b45", "title": "Video-based detection of generalized tonic-clonic seizures using deep learning", "year": "2021" }, { "authors": "Rujing Yue; Zhiqiang Tian; Shaoyi Du", "journal": "Neurocomputing", "ref_id": "b46", "title": "Action recognition based on rgb and skeleton data sets: A survey", "year": "2022" }, { "authors": "D Matthew; Rob Zeiler; Fergus", "journal": "", "ref_id": "b47", "title": "Visualizing and understanding convolutional networks", "year": "2013" }, { "authors": "Bo Zhang; Yunpeng Tan; Zheng Zhang; Wu Liu; Hui Gao; Zhijun Xi; Wendong Wang", "journal": "", "ref_id": "b48", "title": "Factorized omnidirectional 
representation based vision gnn for anisotropic 3d multimodal mr image segmentation", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 60.05, 262.83, 226.32, 23.88 ], "formula_id": "formula_0", "formula_text": "p it = exp(- (m -x it ) 2 + (n -y it ) 2 2σ 2 ) ⊗ I t (m, n) (1)" }, { "formula_coordinates": [ 4, 101.13, 295.92, 185.23, 9.65 ], "formula_id": "formula_1", "formula_text": "x i | ≤ H/2, |n -y i | ≤ W/2. (H, W ), (m, n)" }, { "formula_coordinates": [ 4, 50.11, 465.09, 236.25, 21.61 ], "formula_id": "formula_2", "formula_text": "V = {v it |i = 1, 2, ..., N, t = 1, 2..., T } is associated with feature vectors x it ∈ R C ." }, { "formula_coordinates": [ 4, 314.67, 123.42, 230.44, 21.91 ], "formula_id": "formula_3", "formula_text": "E T = v it v j(t+1) |d(v it , v jt ) ≤ K S /2, d(v it , v i(t+1) ) ≤ K T /2" }, { "formula_coordinates": [ 4, 308.86, 327.2, 162.53, 12.87 ], "formula_id": "formula_4", "formula_text": "G ′ = F intra (F inter (G, W inter ), W intra )" }, { "formula_coordinates": [ 4, 317.71, 397.32, 218.56, 14.34 ], "formula_id": "formula_5", "formula_text": "x ′ it = σ(Linear([x it , max(x jt -x it |j ∈ N t (x it ))]))" }, { "formula_coordinates": [ 4, 318.36, 528.81, 226.75, 31.8 ], "formula_id": "formula_6", "formula_text": "X ′′ t = σ(M RGC(X t W in inter )W out inter + X t ) X ′ t = σ(M RGC(X ′′ t W in intra )W out intra + X t )(3)" }, { "formula_coordinates": [ 5, 121.08, 478.58, 165.28, 30.44 ], "formula_id": "formula_7", "formula_text": "x ′ i = 1 j K S j K T j w ij x i(4)" }, { "formula_coordinates": [ 5, 55.54, 565.57, 230.82, 24.63 ], "formula_id": "formula_8", "formula_text": "X ′ = Conv(1)(Conv(1, K S , K T )(Conv(1)(X))) + X(5)" }, { "formula_coordinates": [ 6, 50.11, 170.77, 236.25, 21.64 ], "formula_id": "formula_9", "formula_text": "x it = x it + Stem(x it , y it );" } ]
10.1145/3447548.3467075
2023-11-24
[ { "figure_ref": [ "fig_0" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b10", "b11", "b12", "b16", "b17", "b18", "b19" ], "table_ref": [], "text": "Time series analysis is a fundamental problem [1] that has played an important role in many real-world applications [2], such as retail sales forecasting [3], [4], imputation of missing data for economic time series [5], anomaly detection for industrial maintenance [6], classification of time series from various domain [7], etc. Numerous statistical and machine All the authors are with the DAMO Academy, Alibaba Group. E-mail:{tian.zt,niupeisong.nps}@alibaba-inc.com . {xue.w,liang.sun,jinrong.jr}@alibaba-inc.com .\n* Equal contribution † Corresponding authors learning methods have been developed for time series analysis in the past. Recently, inspired by its great success in natural language processing and computer vision [8]- [11], transformer has been introduced to various time series tasks with promising results [12], especially for time series forecasting [13]- [17]. We have recently witnessed the rapid development of foundation models in NLP. The key idea is to pre-train a large language model from billions of tokens to facilitate model training for downstream tasks, particularly when we have a few, sometimes even zero, labeled instances. Another advantage of foundation models is that they provide a unified framework for handling diverse tasks, which contrasts conventional wisdom where each task requires a specially designed algorithm. However, so far, little progress has been made to exploit pretrained or foundation models for time series analysis. One main challenge is the lack of a large amount of data to train a time series foundation model. To the best of our knowledge, so far the largest data sets for time series analysis is less than 10GB [18], which is much smaller than that for NLP. To address this challenge, we propose to leverage pre-trained language models for general time series analysis. Our approach provides a unified framework for diverse time series tasks, such as imputation, classification, anomaly detection, forecasting, and few-shot or zero-shot learning. As shown in Figure 1, using the same backbone learned by the pre-trained language model (LM), our approach performs either on-par or better than the state-of-the-art methods in all main time series analysis tasks. Besides extensive empirical studies, we also investigate why a transformer model pre-trained from the language domain can be adapted to time series analysis with almost no change. Our analysis indicates that the self-attention modules in the pretrained transformer acquire the ability to perform certain nondata-dependent operations through training. These operations are closely linked to principal component analysis over the input patterns. We believe it is this generic function performed by the self-attention module that allows trained transformer models to be so-called universal compute engine [19] or general computation calculator [20]. We support our claims by conducting an empirical investigation of the resemblance in model behaviors when self-attention is substituted with PCA, and by providing a theoretical analysis of their correlation." 
}, { "figure_ref": [], "heading": "GPT2-adapter", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "GPT2-frozen", "publication_ref": [], "table_ref": [], "text": "TimesNet\nHere we summarize our key contributions as follows:\n1) We propose a unified framework that uses a frozen pretrained language model to achieve a SOTA or comparable performance in all major types of time series analysis tasks supported by thorough and extensive experiments, including time series classification, short/long-term forecasting, imputation, anomaly detection, few-shot and zero-sample forecasting. 2) We present four distinct adapters that greatly enhance the performance of most critical downstream tasks, including time series forecasting and anomaly detection. By incorporating efficient parameter tuning, our framework surpasses all SOTA methods. 3) We found, both theoretically and empirically, that selfattention performs a function similar to PCA, which helps explain the universality of transformer models. 4) We demonstrate the universality of our approach by exploring a pre-trained transformer model from another backbond model (BERT) or modality (computer vision) to power the time series forecasting.\nThe remainder of this paper is structured as follows. Section II briefly summarizes the related work. Section III presents the proposed detailed model structure. In Section IV, we conduct a thorough and extensive evaluation of the performance of cross-modality time series analysis using our proposed method in seven main time series analysis tasks compared to various SOTA baseline models. Section V provides the visualization results of different time series downstream tasks, and Section VI presents various ablation studies. Section VII demonstrates the universality of our proposed method using pre-trained models with another structure or pre-trained from another modality including BERT-frozen and BEiT-frozen. Section VIII analyzes the cost of model training and inference. In Section IX, we provide a theoretical explanation of the connection between self-attention and PCA. Finally, in Section X, we discuss our results and future directions. Due to space limit, more extensive discussion of related work, experimental results, and theoretical analysis are provided in the Appendix." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "In this section, we provide short reviews of literature in the areas of time series analysis, in-modality transfer learning, cross-modality knowledge transfer learning and parameterefficient fine-tuning." }, { "figure_ref": [], "heading": "A. Time Series Forecasting", "publication_ref": [ "b20", "b21", "b22", "b23", "b7", "b8", "b24", "b9", "b25", "b11", "b14", "b15", "b13", "b9", "b16", "b26" ], "table_ref": [], "text": "Time series forecasting models can be roughly divided into three categories, ranging from the classic ARIMA models to the most recent transformer models. The first generation of welldiscussed models can be dated back to auto-regressive family, such as ARIMA [21], [22] that follows the Markov process and recursively execute sequential forecasting. However, it is limited to stationary sequences while most time series is non-stationary. Additionally, with the bloom of deep neural networks, recurrent neural networks (RNNs), such as LSTM [23] and GRU [24], were designed for sequential tasks. 
Yet the recurrent model is not efficient for training and long-term dependencies are still under resolved.\nRecently, transformer models have achieve great progress in NLP [8], [9], [25] and CV [10], [26] tasks. Also, a large amount of transformer models are proposed for time series forecasting [12]. Next we briefly introduce several representative algorithms. Informer [15] proposes a probability sparse attention mechanism to deal with long-term dependencies. Autoformer [16] introduces a decomposition transformer architecture and replaces the attention module with an Auto-Correlation mechanism. FEDformer [14] uses Fourier enhanced structure to improve computational efficiency and achieves linear complexity. Similar to patching in ViT [10], PatchTST [17] employs segmentation of time series that divide a sequence into patches to increase input length and reduce information redundancy. Besides, a simple MLP-based model DLinear [27] outperforms most transformer models and it validates channelindependence works well in time series forecasting." }, { "figure_ref": [], "heading": "B. In-modality Transfer Learning through pre-trained models", "publication_ref": [ "b8", "b27", "b24", "b28", "b25" ], "table_ref": [], "text": "In recent years, a large number of research works have verified the effectiveness of the pre-trained model from NLP, CV to Vision-and-Language (VL). Latest studies for NLP focus on learning contextual word embeddings for downstream tasks. With the increase of computing power, the very deep transformer models have shown powerful representation ability in various language tasks. Among them, BERT [9] uses transformer encoders and employs masked language modeling task that aims to recover the random masked tokens within a text. OpenAI proposed GPT [28] that trains transformer decoders on a large language corpus and then fine-tunes on taskspecific data. GPT2 [25] is trained on larger datasets with much more parameters and can be transferred to various downstream tasks. Since transformer models can adapt to various inputs, the idea of pre-training can also be well adapted to visual tasks. DEiT [29] proposed a teacher-student strategy for transformers with convolution neural networks (CNNs) as the teacher model and achieves competitive performance. BEiT [26] converts images as visual tokens and successfully uses the BERT model in CV. However, because of the insufficient training sample, there is little research on pre-trained models on general time series analysis that cover all major tasks as CV or NLP domain." }, { "figure_ref": [], "heading": "C. Cross-Modality Knowledge Transfer", "publication_ref": [ "b29", "b18", "b30" ], "table_ref": [], "text": "Since transformers can handle different modal tasks through tokenizing the inputs to embeddings, it is also an interesting topic whether the transformers have universal representation ability and can be used for transferring between various domains. The VL pre-trained model VLMo [30] proposed a stagewise pre-training strategy that utilizes frozen attention blocks pre-trained by image-only data to train the language expert. One of the most related works which transfers knowledge from a pre-trained language model to other domains is [19], which studies the strong performance of a frozen pre-trained language model (LM) compared to an end-toend transformer alternative learned from other domains' data. 
Another related work to knowledge transfer to the time series is the Voice2series [31], which leverages a pre-trained speech processing model for time series classification and achieves superior performance. To the best of our knowledge, no previous research has investigated cross-modality knowledge transfer for the time series forecasting task, let alone general time series analysis." }, { "figure_ref": [], "heading": "D. Parameter-Efficient Fine-Tuning", "publication_ref": [ "b31", "b32", "b33", "b34" ], "table_ref": [], "text": "Parameter-efficient fine-tuning (PEFT) techniques are both proposed in NLP and CV for fine-tuning less parameters in various downstream tasks. Their goal is to minimize computation costs by fine-tuning a smaller number of parameters, while still achieving or even surpassing the performance of full fine-tuning. Specifically, adapter [32] inserts a small modules between transformer layers. Prefix tuning [33] adds some tunable prefix to the keys and values of the multi-head attention at every layer. Low-Rank Adaptation, or LoRA [34], injects trainable low-rank matrices into transformer layers to approximate the weight updates. [35] provides a unified view of previous PEFT methods." }, { "figure_ref": [], "heading": "III. METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Overview", "publication_ref": [], "table_ref": [], "text": "The overall architecture of our proposed model are shown in Figure 2. After undergoing instance normalization and patching, the primary tokens for transformers are mapped by the TS input embedding module. We then insert various time series specific adapters into the pre-trained language model. For each transformer block, we concatenate learnable prompts obtained through frequency adapter with the input tokens. Temporal adapters and channel adapters are inserted after the multi-head attention module. We utilize the reshape operation to convert the feature from temporal dimension to channel dimension. Finally, the representation of last block is applied into various downstream tasks." }, { "figure_ref": [], "heading": "B. Instance Normalization", "publication_ref": [ "b35" ], "table_ref": [], "text": "Data normalization is crucial for pre-trained models across various modalities. Thus, we incorporate a simple data normalization block, non-affine reverse instance norm [36], to further facilitate knowledge transfer. This normalization block simply normalizes the input time series using mean and variance, and then adds them back to the output. For each input univariate time series X ∈ R L with mean Exp and standard deviation V ar, we normalize it as: \nX = X -Exp √ V ar + ϵ .(1)" }, { "figure_ref": [], "heading": "C. Patching", "publication_ref": [ "b16" ], "table_ref": [], "text": "To extract local semantic information, we utilize patching [17] by aggregating adjacent time steps to form a single patch-based token. Patching enables a significant increase in the input historical time horizon while maintaining the same token length and reducing information redundancy for transformer models." }, { "figure_ref": [], "heading": "D. Frozen Pre-trained Block", "publication_ref": [ "b18", "b36" ], "table_ref": [], "text": "Our architecture retains the positional embedding layers and self-attention blocks from the pre-trained models. 
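As a minimal sketch of the instance normalization of Eq. (1) and the patching step described above (the patch length and stride below are illustrative, not necessarily the values used in our experiments):

```python
import torch

def instance_norm(x, eps=1e-5):
    """Non-affine instance normalization of Eq. (1): each univariate series is
    standardized by its own mean and variance; the statistics are returned so
    they can be added back to the model output (reverse instance norm)."""
    mean = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps), (mean, var)

def patchify(x, patch_len=16, stride=8):
    """Aggregate adjacent time steps into overlapping patch tokens:
    (batch, channels, length) -> (batch, channels, num_patches, patch_len)."""
    return x.unfold(dimension=-1, size=patch_len, step=stride)

# toy usage: 4 series with 7 channels and 336 time steps
x = torch.randn(4, 7, 336)
x_norm, stats = instance_norm(x)
tokens = patchify(x_norm)   # shape (4, 7, 41, 16)
```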
As the self-attention layers and FFNs (feedforward neural networks) contain the majority of the learned knowledge from pre-trained language models, we opt to freeze the self-attention blocks while fine-tuning. To enhance downstream tasks with minimal effort, we fine-tune the layer normalization layers, which is considered a standard practice [19], [37]." }, { "figure_ref": [ "fig_2" ], "heading": "E. Adapters", "publication_ref": [ "b37", "b16", "b26", "b13" ], "table_ref": [ "tab_0", "tab_1" ], "text": "PEFT methods, such as LoRA, Parallel Adapter [38] and Prefix Tuning, enable efficient adaptation of pre-trained models. We have conducted experiments with multiple adapters (Table I) and have identified the optimal approaches for fine-tuning and adapter design for time series data. We incorporate various aspects of time series information, including temporal, channel, and frequency information, and propose the corresponding adapters.
1) Temporal & Channel Adapters: Both PatchTST [17] and DLinear [27] assume channel-independence and focus more on modeling temporal correlation. Multiple approaches have tried to capture channel correlation, but all of them fail to significantly improve the performance of time series analysis. One reason for such a failure is that we often observe a large number of channels in time series, and directly modeling the pairwise correlation among channels tends to overfit the training data, leading to poor generalization performance.
To address this challenge, we introduce a \"compressor\"-like channel adapter with a bottleneck structure, as shown in Figure 3. It first maps the relatively high-dimensional channel information into a low-dimensional hidden space, runs it through a two-layer MLP, and then maps it back to the high-dimensional space. Using the bottleneck structure, we are able to capture the channel correlation through the hidden space without suffering from the overfitting problem. A similar structure is also used for the temporal adapter. For each transformer block, we duplicate the self-attention module and insert different adapters after the multi-head attention modules. To facilitate dimension conversion for processing, we simply reshape the feature dimension from M × T × D to T × M × D and reshape it back at the end of each block.
2) Frequency Adapters: FEDformer [14] combines Fourier analysis with the Transformer-based method to capture the overall characteristics of time series. Thus, to incorporate more global information, we design a frequency adapter. Figure 3(b) shows the structure of the frequency adapter.
For the l-th layer and patched input time series X_patched ∈ R^{P×N}, we initially convert it from the time domain within each patch to the frequency domain through the Fast Fourier Transform (FFT). Subsequently, we perform a projection with W_l ∈ R^{2N×2F} before applying the inverse FFT. This process can be represented as follows:
X^freq_l = iFFT(W_l · FFT(X_patched)),   (2)
P_l = Embedding(X^freq_l).   (3)
Then, the adaptation prompts P_l ∈ R^{F×D} can be obtained using an embedding module, where D is the hidden dimension of the transformer block. The adaptation prompts are concatenated with the tokens T_l ∈ R^{M×D} as [T_l; P_l] ∈ R^{(M+F)×D}.
3) Select Gate: Table II illustrates the varying effects of different adapters on different datasets. Therefore, the adaptive selection of adapters plays a pivotal role in enhancing overall performance. 
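Before describing the gating mechanism, the following sketch illustrates the two building blocks above: a bottleneck adapter (used for both the temporal and the channel adapter) and the frequency-prompt computation of Eqs. (2)-(3). The bottleneck width, the FFT axis conventions, and the class names are our assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project -> nonlinearity -> up-project with a residual connection.
    The same structure serves as the temporal adapter (tokens = patches) and the
    channel adapter (tokens = channels, after reshaping M x T x D to T x M x D)."""
    def __init__(self, dim, bottleneck=32):          # bottleneck width is illustrative
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x):                            # x: (batch, tokens, dim)
        return x + self.up(self.act(self.down(x)))


class FrequencyPrompt(nn.Module):
    """Sketch of Eqs. (2)-(3): rFFT the patched series, project the stacked
    real/imaginary spectrum with a real-valued matrix (W_l in R^{2N x 2F}),
    inverse-FFT, and embed the result into F prompt tokens that are concatenated
    with the layer's patch tokens."""
    def __init__(self, num_patches, patch_len, num_prompts, dim):
        super().__init__()
        self.proj = nn.Linear(2 * num_patches, 2 * num_prompts)
        self.embed = nn.Linear(patch_len, dim)

    def forward(self, x_patched):                    # (batch, num_patches, patch_len)
        spec = torch.fft.rfft(x_patched, dim=-1)                     # complex spectrum
        stacked = torch.cat([spec.real, spec.imag], dim=1)           # (B, 2N, n_freq)
        mixed = self.proj(stacked.transpose(1, 2)).transpose(1, 2)   # (B, 2F, n_freq)
        real, imag = mixed.chunk(2, dim=1)
        back = torch.fft.irfft(torch.complex(real, imag), n=x_patched.size(-1), dim=-1)
        return self.embed(back)                                      # (B, F, dim) prompts
```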
As Figure 4, we utilize a scaled sigmoid as the adaptive gating mechanism:\ngate(g) = σ(λg) = 1 1 + e -λg ,(4)\nwhere g is a learnable parameter and λ represents the scaling factor. " }, { "figure_ref": [ "fig_3" ], "heading": "F. Anomaly Adapter", "publication_ref": [ "b38", "b39", "b40", "b41", "b40" ], "table_ref": [], "text": "In our conference version of this work, we follow GPT-2(6) [39] and Timesnet [40], and performs anomaly detection based on the reconstruction loss between ground truth and predicted values. Although our empirical studies show it outperforms the existing reconstruction based approaches for anomaly detection, it is clear that this method has serious limitations and falls short of matching the performance of stateof-the-art methods that are specially designed for time series anomaly detection [41], [42]. As pointed out in [41], normal patterns in time series tend to repeat themselves over time, while anomaly signals do not exhibit themselves repeatedly, an property we refer to as the contrastive bias. In order to capture this property for more effective time series anomaly detection, we develop an anomaly adapter, as illustrated in Figure 5. In particular, we propose to capture the contrastive bias through the self-attention matrix: since normal patterns tend to repeat themselves over time, we expect to observe clear periodical patterns from the corresponding row of self-attention matrix, while such periodical patterns will be absent from the self-attention matrix for anomaly signals. By treating numbers in each row of the self-attention matrix as a distribution after appropriately normalization, we can measure the difference by the KL divergence.\nSpecifically, for the l th layer, we calculate the distribution difference between attention A l and the outputs of anomaly adapter A anomaly l : \nloss discrepancy l = KL( Âl ||A anomaly l )(5)\nÂl = 1 2 [A + A T -diag(A)](6)\nA anomaly li,j\n= [ 1 √ 2πσ i exp(- dis(i, j) 2σ 2 i )],(7)\nwhere σ i is learnable and dis(i, j) represents the distance between i th and j th tokens." }, { "figure_ref": [], "heading": "G. Model Variants", "publication_ref": [ "b24" ], "table_ref": [], "text": "In this paper, we introduce two model variants for analysis: the pre-trained model with all adapters and the pre-trained model without any adapter. In the following, we primarily use GPT2 [25] with K layers as the backbone network. Accordingly, we refer to the two variants as GPT2(K)-adapter and GPT2(K)-frozen respectively.\n1) GPT2(K)-adapter: The majority of the parameters in the model remain frozen, including the attention module and positional embedding module. All adapters are integrated into the GPT2(K)-adapter variant. Additionally, for anomaly detection, the anomaly adapter is employed to capture discrepancy information.\n2) GPT2(K)-frozen: For better understand how language knowledge is transferred to time series, we also propose GPT2(K)-frozen without any adapters. Since the distribution between time series and language differs significantly, the positional embedding module needs to be trained for reducing the distribution gap." }, { "figure_ref": [], "heading": "IV. MAIN TIME SERIES ANALYSIS TASKS", "publication_ref": [ "b39", "b39", "b39", "b26", "b15", "b13", "b42", "b43", "b44", "b43", "b16", "b45", "b46", "b40", "b41", "b47", "b48", "b49", "b50", "b51", "b52", "b53" ], "table_ref": [], "text": "Our proposed methods excel in various downstream time series analysis tasks through fine-tuning. 
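For reference in the anomaly detection experiments below, the discrepancy term of Eqs. (5)-(7) used by the anomaly adapter can be sketched as follows; the parameterization of the learnable σ_i and the module name are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnomalyAdapter(nn.Module):
    """Learnable Gaussian 'prior' attention whose i-th row concentrates mass on
    temporally nearby tokens (Eq. (7)), compared via KL divergence (Eq. (5)) with
    the symmetrized self-attention map of Eq. (6)."""
    def __init__(self, num_tokens):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.zeros(num_tokens))   # one sigma_i per position

    def prior(self):
        idx = torch.arange(self.log_sigma.numel(), dtype=torch.float32)
        dist = (idx[:, None] - idx[None, :]).abs()               # dis(i, j) = |i - j|
        sigma = self.log_sigma.exp()[:, None]
        g = torch.exp(-dist ** 2 / (2 * sigma ** 2)) / (sigma * (2 * torch.pi) ** 0.5)
        return g / g.sum(dim=-1, keepdim=True)                   # row-normalized distribution

    def discrepancy(self, attn):                                 # attn: (num_tokens, num_tokens)
        sym = 0.5 * (attn + attn.T - torch.diag(torch.diag(attn)))        # Eq. (6)
        sym = sym / sym.sum(dim=-1, keepdim=True).clamp_min(1e-8)         # rows as distributions
        return F.kl_div(self.prior().log(), sym, reduction="batchmean")   # KL(sym || prior)
```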
To demonstrate the effectiveness of our approach, we conduct extensive experiments on major types of downstream tasks, including time series classification, anomaly detection, imputation, short-/long-term forecasting and few-shot/zero-shot forecasting. To ensure a fair comparison, we adhere to the experimental settings of TimesNet [40].
Baselines: We select representative baselines and cite their results from [40], which includes the most recent and quite extensive empirical studies of time series. The baselines include CNN-based models: TimesNet [40]; MLP-based models: DLinear [27]; Transformer-based models: Autoformer [16], FEDformer [14], ETSformer [43], Non-stationary Transformer [44], Flowformer [45], PatchTST [17]. Besides, N-HiTS [46] and N-BEATS [47] are used for short-term forecasting. Anomaly Transformer [41], DCdetector [42], THOC [48], InterFusion [49], OmniAnomaly [50] and BeatGAN [51] are used for anomaly detection. Rocket [52], LSTNet [53] and TCN [54] are also used for classification." }, { "figure_ref": [ "fig_0" ], "heading": "A. Main Results", "publication_ref": [], "table_ref": [], "text": "Overall, as shown in Figure 1, GPT2-frozen outperforms other models in most tasks, including long/short-term forecasting, classification, anomaly detection, imputation, and few-shot/zero-shot forecasting. This confirms that time series tasks can also take advantage of cross-modality transferred knowledge. Also, GPT2-adapter achieves SOTA performance in all tasks, demonstrating that transferred knowledge is a powerful tool for time series analysis." }, { "figure_ref": [], "heading": "B. Time Series Anomaly Detection", "publication_ref": [ "b54", "b55", "b55", "b56", "b57", "b41", "b40", "b39", "b58", "b35", "b43" ], "table_ref": [ "tab_3", "tab_4", "tab_5" ], "text": "1) Setups: Detecting anomalies in time series is vital in industrial applications, ranging from health monitoring to space & earth exploration. We compare models on five commonly used datasets, including SMD [55], MSL [56], SMAP [56], SWaT [57] and PSM [58]. For GPT2(6)-frozen, only the classical reconstruction error is used.
2) Results: Table III demonstrates that GPT2(6)-adapter achieves the best performance with an averaged F1-score of 95.35%, surpassing the previous SOTA time series anomaly detection methods DCdetector [42] (95.01%) and Anomaly Transformer [41] (94.91%).
Also, among methods that rely on the reconstruction error only, GPT2(6)-frozen outperforms TimesNet by 1.7%. Thus, in addition to its proficiency in representing complete sequences for classification purposes, GPT2(6)-frozen demonstrates proficiency in detecting infrequent anomalies within time series and achieves comparable performance to top-notch reconstruction-based methods. However, anomaly detection extends beyond evaluating individual samples; it involves identifying anomalies relative to normal samples. To significantly enhance our detection performance, we introduce the anomaly adapter, which seamlessly empowers GPT2(6)-adapter with a contrastive bias through a small plugin kernel.
C. Time Series Long-term Forecasting 1) Setups: Eight popular real-world benchmark datasets [40], including Weather, Traffic, Electricity, ILI, and 4 ETT datasets (ETTh1, ETTh2, ETTm1, ETTm2), are used for long-term forecasting evaluation. 
(Dataset sources: Traffic — http://pems.dot.ca.gov; ILI — https://gis.cdc.gov/grasp/fluview/fluportaldashboard.html.)
2) Results: As shown in Table IV, GPT2(6)-adapter surpasses all other baselines and GPT2(6)-frozen achieves comparable performance to PatchTST. Notably, compared with the recently published SOTA method TimesNet, GPT2(6)-adapter yields a relative 19.6% average MSE reduction.
We aim to leverage both channel-wise/temporal information and frequency information bias through our carefully designed plugin adapters. While we have achieved state-of-the-art performance, it is worth noting that the advantage of our proposed method in this full-dataset training setting is relatively diminished compared to the few-shot learning setting. This discrepancy can be attributed to information saturation, as the majority of time series sequences inherently possess a low-rank representation. With an ample amount of data available, the model is capable of learning the representation end-to-end without relying on knowledge transfer. D. Time Series Short-term Forecasting 1) Setups: To fully evaluate different algorithms in forecasting tasks, we also conduct short-term forecasting (with a relatively short forecasting horizon) experiments on M4 [59], which contains marketing data of various frequencies.
2) Results: The results in Table V indicate that GPT2(6)-adapter achieves the lowest SMAPE of 11.713, outperforming the previous SOTA method TimesNet with a SMAPE of 11.829. Additionally, the performance of GPT2(6)-frozen is superior to advanced Transformer-based and MLP-based models, and comparable to TimesNet and N-BEATS.
In this context, we aim to delve deeper into the distinction between long-term and short-term forecasting tasks. Certain studies have observed a notable non-stationary distribution shift [36], [44] within long-term forecasting datasets, while periodic phenomena are more prominent in short-term forecasting tasks. Hence, the key difference lies in comparing the performance under two distinct scenarios: one with a significant distribution shift and the other without. It is important to note that time series sequences typically exhibit distribution shifts as the patterns change over time. Consequently, long-term forecasting tasks are more susceptible to this situation. In line with the empirical results, our proposed method demonstrates strong performance in both scenarios." }, { "figure_ref": [], "heading": "E. Time Series Imputation 1) Setups:", "publication_ref": [ "b14" ], "table_ref": [], "text": "We conduct experiments on six popular real-world datasets, including 4 ETT datasets [15] (ETTh1, ETTh2, ETTm1, ETTm2), Electricity and Weather, where missing data is common. Following the settings of TimesNet, different random mask ratios ({12.5%, 25%, 37.5%, 50%}) of time points are selected for the evaluation on various proportions of missing data.
2) Results: The results are shown in Table VI: our model achieves a clear MSE reduction on ETTh1 and a 10.0% MSE reduction on average over the six benchmark datasets. It verifies that the proposed method can also effectively mine the temporal patterns of incomplete time series." }, { "figure_ref": [], "heading": "F. Time Series Classification 1) Setups:", "publication_ref": [ "b59", "b16", "b39", "b51", "b53" ], "table_ref": [ "tab_7" ], "text": "To evaluate the model's capacity for high-level representation learning, we employ sequence-level classification. 
Specifically, we follow the same setting as TimesNet: For classification, 10 multivariate UEA classification datasets [60] are selected for evaluation, including gesture, action, audio recognition medical diagnosis and other practical tasks.\n2) Results: As shown in Table VII, GPT2(6)-frozen and GPT2(6)-adapter achieve an average accuracy of 74.0% and 74.1% respectively, both surpassing all baselines including TimesNet (73.60%). Specifically, compared to recent published patch-transformer-based models [17], GPT2(6)-adapter surpasses it by a large margin 9.0% which shows the prior NLP transfer knowledge can indeed help in time series representation.\nIt's worth mentioning that most recent advancements in time series classification predominantly utilize convolutional methods [40], [52], [54], leading to state-of-the-art (SOTA) results. This preference is attributed to their rapid training and inference times, making them highly suitable for practical and industrial applications. However, it does not imply that they are the optimal models for all classification tasks within the time series domain. Forecasting, anomaly detection, and classification all fall under the umbrella of time series representation learning tasks. In this context, transformer-based models hold significant potential for achieving superior classification performance. Transformers are widely recognized as one of the best, if not the best, architectures for representation learning." }, { "figure_ref": [], "heading": "G. Few-shot Forecasting", "publication_ref": [ "b60", "b61" ], "table_ref": [ "tab_8", "tab_9" ], "text": "The large language model (LLM) has demonstrated remarkable performance in both few-shot and zero-shot learning settings [61], [62]. It can be argued that few-shot and zeroshot learning also represent the ultimate tasks for a universal time series forecasting model. To extensively evaluate the representation power of the GPT2(6) for time series analysis, we conduct experiments under few-shot and zero-shot learning settings.\n1) Setups: Similar to traditional experimental settings, each time series is split into three parts: training data, validation data, and test data. For few-shot learning, only a certain percentage (10%, 5%) timesteps of training data are used.\n2) Results: The results of 10% few-shot learning are shown in Table VIII. Compared to TimesNet, DLinear, PatchTST and other methods, GPT2(6)-frozen and GPT2(6)-adapter both achieve better performance. Traditionally, CNN-based and single MLP-based models are considered more dataefficient for training and suitable for few-shot learning methods. In comparison to convolution-based TimesNet, MLP-based DLinear and transformer-based PatchTST, GPT2(6)-adapter demonstrates relative average MSE reductions of 34.7%, 12.1% and 5.7% respectively. The results of 5% few-shot learning are provided in the Appendix table XX. We also add a comparison with traditional algorithms (ETS, ARIMA, NaiveDrift) in the Appendix D as well, and GTP2(6)-adapter also surpasses all those traditional methods.\nH. Zero-shot Forecasting 1) Setups: This task is used to evaluate the cross datasets adaption ability of our proposed algorithm, i.e. how well a model is able to perform on dataset A (without any training data from A) when it is trained from dataset B. 2) Results: The results are summarized in Table IX. GPT2(6)-frozen and GPT2(6)-adapter model consistently outperforms all recent state-of-the-art transformer and MLP-based time series forecasting methods. 
Compared to recently published state-of-the-art MLP-based method Dlinear, convolutionbased method Timesnet, and transformer-based method Patchtst, GPT2(6)-adapter demonstrates a relative average metric reduction of 14.6%, 15.1% and 8.9%, respectively. Also, both GPT2(6)-frozen and GPT2(6)-adapter are comparable to N-BEATS without any meta-learning design and outperform N-BEATS in the ELECTR dataset.We attribute this to the knowledge transfer capability from the frozen pre-trained transformer model.\nZero-shot learning is undoubtedly a promising direction that warrants exploration. The concept of learning without retraining the model, known as zero-shot learning or in-context learning, has been presented to us by LLM (Language Model). We firmly believe that the time series domain is no exception and holds significant potential for exploration. Our proposed method represents a small but meaningful step towards this path.\nV. VISUALIZATION In order to clarify the representation ability more clearly, we provides showcases of imputation, forecasting (long-term and few-shot), anomaly detection and classification in this section." }, { "figure_ref": [ "fig_4" ], "heading": "A. Visualization of Imputation", "publication_ref": [], "table_ref": [], "text": "Figure 6 (a) shows the imputation results of GPT2(3), TimesNet and DLinear on ETTh1 12.5%. Notably, GPT2(3) demonstrates a remarkable ability to fit well at locations of steep data increase." }, { "figure_ref": [ "fig_4" ], "heading": "B. Visualization of Forecasting", "publication_ref": [], "table_ref": [], "text": "In Figure 6 (b) and (c), we present the long-term forecasting results on ETTm1 and few-shot forecasting results on ETTh2, respectively. GPT2(6) exhibits superior performance in aligning with future series. Moreover, in the few-shot learning results, GPT2(6) successfully captures the increasing trend of the data, whereas TimesNet and DLinear fail to do so. " }, { "figure_ref": [], "heading": "C. Visualization of Anomaly Detection", "publication_ref": [], "table_ref": [], "text": "Figure 7 shows various anomalies in SMD, including the point-wise anomalies and pattern-wise anomalies (segment, seasonal anomalies). It can be seen that GPT2(6) can robustly detect various anomalies from normal points." }, { "figure_ref": [], "heading": "D. Visualization of Classification", "publication_ref": [ "b5" ], "table_ref": [], "text": "Figure 8 reports the t-SNE visualization of the feature maps for GPT2 (6) and TimesNet on UWaveGestureLibrary (8 classes) and SelfRegulationSCP1 (2 classes). The visualization clearly indicates that the feature maps of each class in GPT2(6) are distinctly separated, especially for the red points in UWaveGestureLibrary, further enhancing its accuracy." }, { "figure_ref": [], "heading": "VI. ABLATIONS", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct several ablations on model selection and effectiveness of pre-training. We also perform experiments on 5% data in few-shot forecasting." }, { "figure_ref": [ "fig_6" ], "heading": "A. Experiment analysis of GPT2-frozen model", "publication_ref": [ "b13", "b15", "b16" ], "table_ref": [ "tab_12", "tab_11", "tab_12" ], "text": "We conduct experiments to analyze whether the self-attention frozen pre-trained model improves performance compared with overall fine-tuning and random initialization. 
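As an illustration of how the three variants compared in this ablation can be built on top of the HuggingFace GPT-2 implementation (a sketch under our naming; it omits the task-specific input embedding and output head, which are always trained):

```python
from transformers import GPT2Config, GPT2Model

def build_backbone(variant="frozen", n_layers=6):
    """'frozen': pre-trained weights, only LayerNorm and positional embeddings trainable;
    'no_freeze': pre-trained weights, everything trainable;
    'no_pretrain': randomly initialized weights, everything trainable."""
    if variant == "no_pretrain":
        gpt2 = GPT2Model(GPT2Config(n_layer=n_layers))
    else:
        gpt2 = GPT2Model.from_pretrained("gpt2")
        gpt2.h = gpt2.h[:n_layers]          # keep only the first K transformer blocks
        gpt2.config.n_layer = n_layers
    if variant == "frozen":
        for name, param in gpt2.named_parameters():
            # freeze self-attention and FFN weights; keep LayerNorm ('ln_*', 'ln_f')
            # and positional embeddings ('wpe') trainable
            param.requires_grad = ("ln" in name) or ("wpe" in name)
    return gpt2
```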
We mostly utilize GPT2-frozen for analysis to maintain simplicity and improve clarity.\nFirstly, we compare GPT2(6)-frozen with the same model without freezing (No Freeze) and random initial model (No Pre-train). For the end-to-end paradigm No Pre-train GPT2(6), we directly train all parameters of the model. We summarize the results in Table XIII. Then we analyze the performance of various layers to clarify our selection of GPT2( 6)-frozen. Results on 5% datasets are also provided in the Appendix table XX.\n1) Fine-tune More Parameters: Compared with fine-tuning all parameters, self-attention frozen pre-trained model GPT2(6)frozen achieves better performance on most datasets and yields an overall 11.5% relative MSE reduction on 10% data. It verifies that frozen pre-trained attention layers are effective for time series forecasting.\n2) Parameters Initialization: Compared with the random initial model, self-attention frozen pre-trained model GPT2(6)frozen achieves better performance on most datasets and yields an overall 14.3% relative MSE reduction on 10% data. It again suggests that a model pre-trained on cross-domain data can achieve significant performance improvement in time series forecasting.\n3) The Number of GPT2 Layers: For most transformer-based methods in time-series forecasting [14], [16], [17], no more than 3 encoder layers are included. However, most pre-trained models with at least 12 layers may suffer from overfitting in time series forecasting. To better balance performance and computational efficiency, we test using various numbers of layers on ETTh2. Additionally, we train a completely random initialized non-pretrained GPT2 as a comparison. The results are shown in Figure 9, for both 5% and 10% data, the pre-trained model is unable to do well with few layers but significantly outperforms non-pre-trained GPT2 with more attention blocks transferred from NLP. It indicates that pre-trained attention layers produce a great benefit in time series forecasting. Also, the pre-trained model achieves better performance between 3 and 9 layers. Thus GPT2 with 6 layers is chosen as our default architecture. XI shows that only input and output modules can not work and pre-trained knowledge play an importance part in time series tasks.\n5) Fine-Tuning Parameters Selection: In this section, we conduct ablation experiments to study which parameters are important to fine-tune. Since the input embedding and output layers are randomly initialized for adapting to a new domain, they must be trained. Then, we study adding layer normalization and positional embeddings to the list of fine-tuning parameters. Table XII shows the results that re-train parameters of layer normalization and positional embeddings can bring certain benefits, especially in longer prediction lengths. Thus, we follow the standard practice to re-train positional embeddings and layer normalization." }, { "figure_ref": [ "fig_12" ], "heading": "B. Analysis of Data Volume", "publication_ref": [], "table_ref": [], "text": "Results of few-shot learning show that GPT2(6)-frozen shows SOTA performance in few-shot learning tasks in which the model is trained on and 10% data. Plus, it has comparable performance with the SOTA baselines PatchTST and Dlinear on full sample forecasting setting as well. This phenomenon raises a question that how performance changes with an increase in data sample size.\nThus, we conduct experiments on various percentages P ∈ {5%, 10%, 20%, 50%, 80%, 100%} of ETTh2. 
Figure 10 shows that the performance improvement for GPT2(6)-frozen is almost flattened. These results illustrate that such a cross-domain frozen model is extremely efficient in few-shot time series forecasting and only requires a few fine-tuning samples to reach a SOTA performance. For more complete data, end-toend training models start to catch up, but still, a GPT2(6)-frozen model can be comparable to those SOTA end-to-end training algorithms." }, { "figure_ref": [], "heading": "C. Analysis of Adapters", "publication_ref": [], "table_ref": [ "tab_15" ], "text": "Here, we delve into the analysis of various adapters, including their functioning and the mechanism of the select gate.\n1) Ablation on Various Adapters: In Table XIV, we present a comparative analysis of performance among various adapters, full adapters (GPT2-adapter) and no adapter (GPT2-frozen). The results indicate that both adapters yield positive improvement, and different datasets are influenced by different adapters. The GPT2-adapter with full adapters achieves the best performance, highlighting the rationality of the adapter design.\n2) How Select Gate Works: Table XV shows the learned gate coefficients of each layer. For ETTh1-720, only 1 st and 4 th layer requires temporal adapters whereas for ETTh2-720 all layers except 2 nd layer necessitate temporal adapters. The results illustrate that the select gate mechanism effectively identifies the most suitable choice of adapters. " }, { "figure_ref": [], "heading": "VII. EXPLORING TRANSFER LEARNING FROM OTHERS: THE UNEXCEPTIONAL NATURE OF GPT2-BASED-FPT", "publication_ref": [ "b8", "b25" ], "table_ref": [], "text": "We also present experiments on BERT-frozen [9] model and the image-pretrained BEiT-frozen model [26] to illustrate the generality of pre-trained models for cross-domain knowledge transferring. The results in Table XVI demonstrate that the ability of knowledge transfer is not exclusive to GPT2-based pre-trained language models. Subsequently, our theoretical analysis will shed light on the universality of this phenomenon." }, { "figure_ref": [ "fig_13", "fig_13" ], "heading": "VIII. TRAINING/INFERENCING COST", "publication_ref": [ "b62", "b63" ], "table_ref": [], "text": "Analysis of computational cost is helpful for investigating the practicality of the LLM-based model. The results can be found in table XVII. Each baseline model comes in two variants, featuring model hidden dimensions of 32 and 768, which align with GPT-2's specifications. Furthermore, the majority of the baseline models consist of three layers. We assessed the computational cost using a batch from ETTh2 (with a batch The results indicate that GPT-2(3) has substantially enhanced time efficiency and reduced parameter quantity compared to baselines with the same model dimension. This was a surprise since we initially anticipated that this large language model might be slower. However, we surmise that the efficient optimization of huggingface's GPT model implementation primarily accounts for such a significant improvement in time costs. Furthermore, GPT-2(3) and GPT-2( 6) demonstrate a mere 6.12% and 4.60% proportion of learnable parameters among the overall parameter size, respectively. The observation, i.e. we can directly use a trained LM for time series forecasting without having to modify its model, makes us believe that the underlying model is doing something very generic and independent from texts despite it being trained from text data. 
Our analysis aims to show that part of this generic function can be related to PCA, as minimizing the gradient with respect to the self-attention layer seems to do something similar to PCA. In this section, we take the first step towards revealing the generality of self-attention by connecting the self-attention with principal component analysis (PCA). Moreover, when coming the question of why fine-tuning is restricted to the embedding layer and layer norm, following ETTh1 720\n[1,0,0,1,0,0] [1,1,1,0,1,1] [1,1,1,1,1,1] 0.432 ETTh2 96 [1,0,1,1,0,0] [1,1,0,0,1,1] [1,1,1,1,0,1] 0.269 ETTh2 720 [1,0,1,1,1,1] [1,1,1,1,1,1] [1,1,1,1,1,1] 0.392\nour hypothesis that the pre-trained LM as a whole performs something generic, partially fine-tuning any of its components may break the generic function and lead to relatively poor performance for time series analysis.\nFor each layer, we calculate and perform statistical analysis of the pairwise token similarity values. Specifically, we denote each output feature map with shape of (b, n, d), where b is the batch size, n is the number of tokens, and d is the dimension of each token feature. We calculate the cosine similarity, and the resulting pairwise similarity matrix of shape (b, n, n). Next we count the number of occurrences of similarity values within each interval as a simple statistical analysis.\nOur analysis is motivated by the observation that the within-layer token similarity increases with deeper layers in transformer. We report the layer-wise average token cosine similarity on ETTh2 dataset in Figure 11 (a,c), where we mix weights from pre-trained LM with weights randomly sampled from Gaussian distribution. Here we summarize our observations: a) in a randomly initialed GPT2 (6) model, the token similarity is low among all layers (0.1 -0.2); b) when gradually switched to the pretrained GPT2 model, the token similarity significantly increases in the deep layers and eventually reaches more than 0.9 in the last layer. One potential explanation for the increasing token similarity is that all the token vectors are projected into the low-dimensional top eigenvector space of input patterns. To verify this idea, we further conduct experiments where we replace the self-attention module with PCA and find token similarity patterns remain unchanged according to Figure 11 (b), which further justifies the potential connection between PCA and self-attention.\nTo build the theoretical connection between PCA and selfattention, we first analyze the gradient structure of self-attention. Let X = (x 1 , . . . , x N ) ⊤ ∈ R N ×D be the input pattern, and let f (X) = (f 1 (X), . . . , f N (x)) ⊤ : R N ×D → R N ×D be the func- \n(X) = softmax(XAX ⊤ )X where A = W Q W ⊤ K ∈ R D×D . Lemma IX.1. Let the Jacobian J = ∂fi(X) ∂xj N i,j=1\nrepresent the gradient f (X) w.r.t the input pattern, then we have\n|J| 2 ≤ |A| 2 N i=1 P i,i + 1 2 x i - N j=1 P i,j x j 2 + ∆\nwhere\n∆ = |A| 2 N i̸ =j P i,j x j - N k=1 P i,k x k 2 + |A|2 2 N j=1 |x i | 2 and P i,j = exp(x ⊤ i Axj ) N k=1 exp(x ⊤ i Axk) .\nThis lemma reveals an important gradient structure of J. The proof of essentially follows the analysis in [63], and we include it in Appendix H for completeness.\nUsing the gradient structure revealed in Lemma IX.1, we can connect self-attention with PCA. 
In order to minimize the norm of gradient |J| 2 , we essentially need to make \nN i=1 |x i - N j=1 P i,j x j | 2 small.\nWhen A is small and all the input patterns are centered at 0 (i.e.\nN i=1 x i = 0), we have N i=1 |x i -X ⊤ P i,: | 2 ≈ N i=1 |x i -X ⊤ XAx i | 2 .\nThe theorem below shows that A minimizing the objective\nN i=1 |x i -X ⊤ XAx i | 2 contains the largest m eigenvectors of X ⊤ X where m is the rank of A. Theorem 1. Let W Q and W K be matrices of size D × m. Let λ 1 ≥ λ 2 ≥ ... ≥ λ D be\nthe eigenvalues of X ⊤ X ranked in descending order, and let v i ∈ R D , i = 1, . . . , D be the corresponding eigenvectors. The optimal solution A * that minimizes\nN i=1 |x i -X ⊤ XAx i | 2 is given by A = m i=1 1 λi v i v ⊤\ni . The proof of Theorem 1 can be found in Appendix H. Following Theorem 1, through the training of pushing gradient to zero, self-attention learns to perform a function closely related to PCA.\nFurthermore, we believe that our findings are in line with a recent work [64]. Researchers have discovered that LLM (Language Model) functions as an exceptional compression machine. Notably, LLM(Chinchilla), trained solely on text data, achieves lower compression rates when applied to ImageNet patches and LibriSpeech samples, outperforming domainspecific compressors such as PNG or FLAC. This implies that a model with strong compression capabilities also possesses good generalization abilities. In this regard, PCA (Principal Component Analysis) can be readily adapted as a compression algorithm, making it suitable for cross-domain applications." }, { "figure_ref": [], "heading": "X. CONCLUSIONS", "publication_ref": [ "b64", "b65" ], "table_ref": [], "text": "In this paper, we developed a foundation model for time series analysis, based on pre-trained model from NLP or CV, that can (a) facilitate the model training for downstream tasks, and (b) provide unified framework for diverse time series analysis tasks. After extensive exploration of parameter-efficient tuning, our empirical studies conclusively demonstrate that the proposed method, equipped with the newly designed adapters, outperforms state-of-the-art approaches across almost all time series tasks. We also examine the universality of transformer by connecting self-attention with PCA, an important step towards understanding how generative models work in practice. On the other hand, we do recognize some limitations of our work: the zero-shot performance of our approach is still behind Nbeat on several datasets, and our analysis of the generality of transformer is still in the early stage. To better understand the universality of transformer, we also plan to examine it from the viewpoint of n-gram language model, an approach that is taken by [65], [66]. In Appendix H, we include our initial analysis along this direction." }, { "figure_ref": [], "heading": "APPENDIX", "publication_ref": [ "b20", "b21", "b22", "b23", "b7", "b8", "b24", "b9", "b25", "b11", "b14", "b15", "b13", "b9", "b16", "b26", "b39", "b14", "b16", "b26", "b66", "b5" ], "table_ref": [], "text": "We have presented a novel general time series analysis model in this paper, and to the best of our knowledge, there has been limited work on similar comprehensive methods for time series analysis. The most closely related field is time series forecasting, where transformer models have gained widespread popularity. 
Therefore, our focus in this related work will primarily be on introducing the end-to-end time series forecasting method.\nTime series forecasting models can be roughly divided into three categories, ranging from the classic ARIMA models to the most recent transformer models. The first generation of well-discussed models can be dated back to auto-regressive family, such as ARIMA [21], [22] that follows the Markov process and recursively execute sequential forecasting. However, it is limited to stationary sequences while most time series is non-stationary. Additionally, with the bloom of deep neural networks, recurrent neural networks (RNNs), such as LSTM [23] and GRU [24], were designed for sequential tasks. Yet the recurrent model is inefficient for training and long-term dependencies are still under resolved.\nRecently, transformer models have achieve great progress in NLP [8], [9], [25] and CV [10], [26] tasks. Also, a large amount of transformer models are proposed to apply to time series forecasting [12]. In the following, we briefly introduce several representative algorithms. Informer [15] proposes a probability sparse attention mechanism to deal with long-term dependencies. Autoformer [16] introduces a decomposition transformer architecture and replaces the attention module with an Auto-Correlation mechanism. FEDformer [14] uses Fourier enhanced structure to improve computational efficiency and achieves linear complexity. Similar to patching in ViT [10], PatchTST [17] employs segmentation of time series that divide a sequence into patches to increase input length and reduce information redundancy. Besides, a simple MLP-based model DLinear [27] outperforms most transformer models and it validates channel-independence works well in time series forecasting. Recently, TimesNet [40] has treated time series as a 2D signal and utilized a convolution-based inception net backbone to function as a comprehensive time series analysis model. This work is closely related to our tasks in this paper.\nIn this section, we separately summarize dataset details long/short-term forecasting and few-shot/zero-shot forecasting. a) Datasets of Long-term Forecasting and Few-shot Learning: The details of datasets are shown as follows: 1) ETT datasets [15] contain electricity load of various resolutions (ETTh & ETTm) from two electricity stations. 2) Weather contains 21 meteorological indicators of Germany within 1 year; 3) Illness contains the influenza-like illness patients in the United States; 4) Electricity dataset contains the electricity consumption; 5) Traffic dataset contains the occupation rate of freeway system across the State of California. Table XVIII summarizes details of feature statistics.\nSimilar to PatchTST [17], Exchange is not contained. [27] shows that simply repeating the last value in the look-back window can outperform or be comparable to the best results. Also, ILI is not used for few-shot learning for the limited quantity that is hard to follow the definition of few-shot. 
b) Datasets of Short-term Forecasting and Zero-shot Learning: The details of short-term forecasting and zero-shot learning datasets are shown as follows: 1) M4 is a large and diverse dataset that contains time series of various frequencies and fields, including business, financial and economic forecasting; 2) M3 is smaller than M4, but also contains time series from diverse domains and frequencies; 3) TOURISM is the dataset of tourism activities with different frequencies and contains a much higher fraction of erratic series compared with M4; 4) ELECTR represents the electricity usage monitoring of 370 customers over three years. Table ?? summarizes details of the datasets and zero-shot mapping between source and target.\nAll the deep learning networks are implemented in PyTorch and trained on NVIDIA V100 32GB GPUs. We use the pre-trained models from [67] for experiments. For few-shot learning, an early stopping counter is employed to stop the training process after three epochs if no loss degradation on the valid set is observed. Plus, we convert the multivariate data into univariate data. Specifically, we treat each feature of the sequence as a single time series. This is mainly for memory efficiency after patching of GPT2 (6) and previous works, DLinear and PatchTST, have proved the effectiveness of channel-independence." }, { "figure_ref": [], "heading": "A. Accuracy Metrics", "publication_ref": [], "table_ref": [], "text": "For long-term/short-term forecasting and few-shot forecasting, we use mean square error (MSE) and mean absolute error (MAE) as metrics. For zero-shot learning, mean absolute percentage error (MAPE) is used for TOURISM; symmetric MAPE (sMAPE) is used for M3 and M4; normalized deviation (ND) is used for ELECTR. All experiments are repeated 3 times and the mean of the metrics is used in the final results." }, { "figure_ref": [], "heading": "C. Mean and STD for Few-shot Learning", "publication_ref": [ "b5", "b5" ], "table_ref": [], "text": "Table XXI lists both mean and STD for GPT2 (6), DLinear and PatchTST with 3 runs on 5% ETTh2 and ETTm2. The results show a small variance in performance of GPT2(6) that represents the stability of GPT2 (6). " }, { "figure_ref": [], "heading": "E. Baselines with Instance Normalization", "publication_ref": [ "b35" ], "table_ref": [ "tab_22" ], "text": "Instance normalization [36] is a plug-in for time series for distribution shift. Most baselines, such as Autoformer and FEDformer are not equipped with instance normalization. Thus, for a fair comparison, we add the experiment, as in Table XXIII, for baselines w/o instance normalization and GPT(6) can also perform superior. " }, { "figure_ref": [ "fig_8", "fig_8", "fig_12", "fig_10", "fig_10" ], "heading": "F. Detailed Definition of Zero-shot Learning", "publication_ref": [ "b67", "b68", "b69", "b70", "b26", "b71", "b72", "b64", "b65", "b25", "b64", "b65", "b25" ], "table_ref": [], "text": "Task Definition Each experiment contains two distinct datasets, source, and target datasets. The source dataset is used to train the model and then forecasts without fine-tuning in the target dataset. The target dataset is split into non-overlapping historical and test sequences. We use the historical sequence as input to the model, and the obtained output is used to calculate errors with the test sequences. Besides meta-learning-based models like N-BEATS, evaluated models' parameters are not allowed any adjustment using the forecasting phase. 
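For reference, the accuracy metrics used throughout this appendix (MSE, MAE, MAPE, sMAPE, ND) can be computed as in the minimal NumPy sketch below. The percentage scaling for MAPE/sMAPE follows the benchmark conventions; in practice a small constant may be added to the denominators to avoid division by zero.

```python
import numpy as np


def mse(y, yhat):
    return np.mean((y - yhat) ** 2)


def mae(y, yhat):
    return np.mean(np.abs(y - yhat))


def mape(y, yhat):                       # reported in percent (TOURISM)
    return 100.0 * np.mean(np.abs(y - yhat) / np.abs(y))


def smape(y, yhat):                      # symmetric MAPE in percent (M3, M4)
    return 200.0 * np.mean(np.abs(y - yhat) / (np.abs(y) + np.abs(yhat)))


def nd(y, yhat):                         # normalized deviation (ELECTR)
    return np.sum(np.abs(y - yhat)) / np.sum(np.abs(y))
```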
Also, same as [68], each data set adopts a specific metric (M4: sMAPE; M3: sMAPE; TOURISM: MAPE; ELECTR: ND)\nIn our numerical experiments, we obtain two interesting observations. First, the token similarity within a sample is larger in pretrained LM. We report the layer-wise average token cosine similarity in ETTh2 experiment in Figure 13. In particular, Figure 13 (a) shows that in a fine-tuned random initialed GPT2(6) model, the token similarity is around 0.1-0.2 among different layers. When switching to the frozen pre-trained GPT2-FPT model, the token similarity significantly increases in the deep layers and eventually reaches more than 0.9 in the last layer. The ETTh2 dataset contains high volatility hourly information related to the electricity transformer temperature. In this situation, higher token similarity implies the high-frequency noise in the data is eased and only low-frequency information will be reserved. In other words, after going through the pretrained GPT2-FPT model, the signal-noise ratio is enhanced. We use the following theorem to characterize this behavior.\nG. Theorem A.1 Theorem A.1 (informal). We consider the self-attention for l-th query token. Let's assume the input token x i are bounded with mean µ for i = 1, 2, ..., n. Under mild conditions, with high probability, the output value token V l converges to µW v on the order of O(n -1/2 ), where W v is the parameter matrix to compute the value token.\nThe Theorem A.1 describes the self-attention structure can efficiently make output value token V l converge its mean value µW v . In the time series forecasting task, each token represents several adjacent points in a time series. When the time series has some periodical or translation invariant structures, by comparing a given token with other tokens, one could have a higher chance to figure out those invariant structures. This phenomenon is especially important in few-shot forecasting tasks. Without enough token noise distillation ability, the model will more likely tend to overfit due to insufficient training data.\nWe denote x i as i-th element of vector x, W ij as the element at i-th row and j-th column of matrix W , and W j as the j-th row of matrix W . Moreover, we denote x i as the i-th patch (token) of the inputs with x i = X i .\nBefore given the formal statement of the Theorem A.1, we first show the assumptions. 1) The token x i is the sub-gaussian random vector with mean µ i and variance (σ 2 /d)I for i = 1, 2, ..., n.\n2) µ follows a discrete distribution with finite values µ ∈ V. Moreover, there exist 0 < ν 1 , 0 < ν 2 < ν 4 such that a)\n∥µ i ∥ = ν 1 , and b) µ i W Q W T K µ i ∈ [ν 2 , ν 4 ] for all i and |µ i W Q W ⊤ K µ ⊤ j | ≤ ν 2 for all µ i ̸ = µ j ∈ V. 3) W V and W Q W ⊤\nK are element-wise bounded with ν 5 and ν 6 respectively, that is, |W\n(ij) V | ≤ ν 5 and |(W Q W ⊤ K ) (ij) | ≤ ν 6\n, for all i, j from 1 to d. In the above assumptions, we ensure that for a given query patch, the difference between the clustering center and noises are large enough to be distinguished.\nTheorem A.2 (formal statement of Theorem A.1). Let patch x i be σ 2 -subgaussian random variable with mean µ i and all n patches follow the same clustering center of query l. 
Per Assumptions aforementioned, when\n√ d ≥ 3(ψ(δ, d) + ν 2 + ν 4 ), then with probability 1 -5δ, we have n i=1 exp 1 √ d x l W Q W ⊤ k x i x i W V n j=1 exp 1 √ d x l W Q W ⊤ K x j -µ l W V ∞ ≤ 4 exp ψ(δ, d) √ d σν 5 2 dn log 2d δ + 7 exp ν 2 -ν 4 + ψ(δ, d) √ d -1 ∥µ l W V ∥ ∞ ,\nwhere ψ(δ, d) = 2σν 1 ν 6 2 log 1 δ + 2σ 2 ν 6 log d δ .\nProof. See the proof of Lemma 2 in [69] with\nk 1 = k = n. ■ H. Theorem A.4\nWe first give the formal statement of Theorem A.4.\nTheorem A.3 (formal statement of Theorem A.4). Let g i ∈ R d and y i ∈ R T be the feature map vector and forecasting targets for the sample i = 1, 2, ..., N respectively, and we assume 1 N N i=1 g i g ⊤ i ⪰ σI for some σ > 0. We want to learn a matrix W ∈ R d×T from the following optimization problem:\nW = arg min 1 2N N i=1 ∥W g i -y i ∥ 2 2 .(8)\nIf we apply stochastic gradient descent with diminishing step sizes η t = 1 σt at step t, we will need t = Õ(ϵ -1 σ -1 ) steps to reach\n1 t t j=1 1 2N N i=1 ∥W j g i -y i ∥ 2 2 - 1 2N N i=1 ∥W * g i -y i ∥ 2 2 ≤ ϵ,(9)\nwhere W * is the optimal solution and W j is the j step's solution and Õ we suppress the logarithmic dependence.\nProof. As we assume 1 N T i=1 g i g ⊤ i ⪰ σI, the hessian of optimization problem in ( 8) is also positive definite, which is equivalent to the optimization problem in ( 8) is strongly convex with parameter proportional to σ. Then via standard stochastic gradient decent analysis (e.g., section 3.1 in [70]), we obtain:\n1 t t j=1 1 2N N i=1 ∥W j g i -y i ∥ 2 2 - 1 2N N i=1 ∥W * g i -y i ∥ 2 2 ≤ O log t σt = Õ(σ -1 t -1 ). (10\n)\nReplace Ratio 100% Replace Ratio 80% Replace Ratio 60%\nReplace Ratio 40% Replace Ratio 20% Replace Ratio 0%\nReplace Ratio (%)\nFig. 12: The performance and token similarity within samples with respect to each layer with different random replace ratios. Pretrained parameters are replaced by random initial parameters according to certain proportions. Therefore, to reach ϵ optimization gap, we just need to set t = Õ(σ -1 ϵ -1 ). ■\nThe second observation is that for the pretrained GPT2-FPT model, the last transformer layer's outputs, i.e., feature maps, are spread widely throughout the feature space. We report the t-SNE visualization of the feature maps for GPT2-FPT and an end-to-end model PatchTST in Figure 14. In Figure 14 (a) and (b), we color the samples chunked from the one single time series into the same color and the same configuration of the T-SNE is applied. One may observe that the feature maps of GPT2-FPT has less concentration compared to PatchTST. It implies the GPT2-FPT's feature maps corresponding to different samples are more distinctive which eventually facilitates the learning ability of the last MLP layer. Researchers [71] have found that contrastive learning-based representation learning may result in a uniform distribution of training data, and such behavior plays an important role in its good downstream task performance. We use the following theorem to justify it.\nTheorem A.4 (informal). Let g i and y i be the feature map vector and forecasting targets for the sample i = 1, 2, ..., N respectively, and we assume 1 N N i=1 g i g ⊤ i ⪰ σI for some σ > 0. 
Under mild conditions, if we train an MLP layer that maps feature maps to forecasting targets via the stochastic gradient descent, the total step to reach some optimization tolerance is on the order of O(σ -1 ).\nThe Theorem A.4 considers the covariate matrix of feature maps being positive definite that indicates the set of all feature maps {g i } spans the whole feature spaces, and the higher spread level gives a larger σ. In this case, if we only want to learn an MLP layer, the problem reduces to a well-conditioned least-squared regression problem. Then the fast convergence rate is achieved.\nEfficiently learning the last MLP layer plays a very important role in time series forecasting and can substantially impact the prediction performance. In [27], the authors show that learning a single MLP layer can also bring very promising performance. In few-shot forecasting, the pre-trained GPT2 model may still preserve highly diverse feature maps than end-to-end type models and eventually leads to fast learning speed on the last MLP layer.\nAnother possible benefit of wide spared feature maps is enhancing the model memorization ability when using a multi-layer decoder structure. In the literature on network memorization ability (e.g., [72], [73]), the deep learning model tends to have better memorization ability when feature maps are well separated. In forecasting tasks, capturing extreme or rare behavior is very important. The pretrained GPT gains more capacity in the decoder to correctly forecast uncommon time series. Why does the proposed pretrained-frozen-model work so effectively? We have achieved state-of-the-art performance in time series analysis using a language model that is mostly trained on natural language data. The answer lies in the universality of the frozen structure, which includes attention layers and Feed Forward layers. We can represent images and time series forecasting tasks as an n-gram estimation problem, akin to text analysis, by employing a patching approach. This method treats subsequences of time series or image patches as individual tokens. Central to sequential prediction is the n-order Markov process, and a simple way to capture the n-order Markov process is n-gram language model. To predict next token w 0 , we need to compute p(w 0 |w 1 , . . . , w n-1 ), which can be further computed as p(w 0 w 1 . . . w n-1 )/p(w 1 . . . w n-1 ). Hence, the core of n-gram language model is to estimate the probability of observing a sequence of n tokens. When n is large, most of n token sequences will not be observed from data, leading to the sparse data problem, a common challenge faced by n-gram language model. As a result, a large body of research in n-gram language model is focused on how to effectively estimate probability of having n-token sequences even when they are NOT observed from data. We hypothesize that the transformer model pretrained by GPT-2 essentially allows us to estimate p(w 0 w 1 . . . w n-1 ) from observations of significantly shorter token sequences. In this section, we will show that the function of estimating probabilities of longer sequences from observation of shorter sequences is universal and is independent from domain as long as data exhibit a skew distribution (e.g., follows a power law). We note that our work is closely related to the discussion presented in [65], [66], where the authors also connect the function of transformer to compute of n-grams. 
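As a concrete, if simplistic, illustration of the n-gram estimation problem described above, the sketch below estimates the conditional probability of the next token given the previous n-1 tokens by counting; the function and variable names are illustrative, and quantized time-series patches would play the role of tokens. For large n, most contexts never occur in the data, which is exactly the sparse-data problem that the maximum-entropy argument below addresses.

```python
from collections import Counter


def ngram_conditional(tokens, n):
    """Count-based estimate of p(next | context) = count(context + next) / count(context),
    where context is the preceding n - 1 tokens."""
    ngrams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    contexts = Counter(tuple(tokens[i:i + n - 1]) for i in range(len(tokens) - n + 2))

    def prob(context, nxt):
        c = contexts[tuple(context)]
        # for large n most contexts are never observed: the sparse-data problem
        return ngrams[tuple(context) + (nxt,)] / c if c > 0 else 0.0

    return prob
```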
We however note that our key result is to show the universality in computing probability of longer sequences from observations of shorter sequences, which can't be found in any existing studies. Although the discussion is restricted to discrete tokens, it should be generalized to continuous signals as we can always quantize continuous signals into a finite number of discrete tokens, similar to what BEiT [26] did.\nTo gain a better understanding, let's start by examining a \"zero-layer\" Transformer model. This model operates by taking a token, embedding it, and transforming it back to produce logits that predict the subsequent token. Because it cannot transfer information from other tokens, it relies solely on the current token to predict the next one. Consequently, the optimal behavior of this model is to closely resemble the bigram log-likelihood.\nThen we move on to the so-called \"attention-only\" transformer, which doesn't have MLP layers. As discussed in a recent work [65], one-layer attention-only Transformers can be comprehended as a combination of a bigram model and multiple \"skip-trigram\" models (impacting the probabilities of sequences \"A. . . BC\"). This can be intuitively understood as each attention head having the ability to selectively attend from the current token (\"B\") to a previous token (\"A\") and transfer relevant information to fine-tune the probability of potential subsequent tokens (\"C\"). [66] further discusses a multi-layer transformer can do more complex n-gram estimation using an induction heads mechanism. To be more precise, induction heads employ a straightforward principle: the ' Building upon these discussions, we are now prepared to substantiate the following argument: For sequential data following a power law, there is a potentially universal solution to the final estimation of n-gram probabilities. That's the reason behind the universality of pretrained LM's performance in cross-domain tasks. For simplicity, we assume that n is so large that we are unable to observe any occurrence of n-gram from data, and we only observe the occurrence of n ′ -grams with n ′ < n. We denote by s n i the ith unique n-gram, and by the notation s n ′ j ∈ s n i if n ′ -gram s n ′ j appears in s n i , the ith n-gram. Let m n be the number of unique n-grams. According to the maximum entropy model, our estimation of n-gram probabilities can be cast into the following optimization problem: min mn i=1 p(s n i ) log p(s n i ) s. t. i:s n ′ j ∈s n i p(s n i ) = p(s n ′ j ) where p(s n ′ j ) represents the probability of observing pattern s n ′ j from the data and j ∈ [m n ′ ], n ′ ∈ [n -1].\nFor each constraint for p(s n ′ j ), we introduce a Lagrangian dual variable λ n ′ j , and rewrite the optimization problem as follows: min λ log mn i=1 exp (n ′ ,j):s n ′ j ∈s n i\nλ n ′ j - n-1 n ′ =1 m n ′ j=1 λ n ′ j p(s n ′ j ),\nwhere n-gram probability p(s n j ) is given as p(s n j ) = λ n ′ j ) In the case that all n-grams follow a power law, for each n ′ ∈ [n -1], we divide n ′ -gram into two groups: the group V n ′ includes the high frequency n ′ -gram and the group U n ′ including the low frequency of n ′ -gram. For simplicity, we assume that the probability for all the high frequency n ′ -grams are roughly α n ′ ∈ [0, 1] and the probability for all the low frequency n ′ -grams are roughly β n ′ ∈ [0, 1]. 
By assuming that all the patterns in V n ′ and U n ′ share similar appearance frequency, we simplify the optimization problem by only introducing two dual variables for each n ′ -gram, i.e. λ n ′ a for high-frequency patterns and λ n ′ b for low-frequency patterns as follow Using these notations, we have the optimization problem simplified as min λ log( mn i=1 exp(\nn-1 n ′ =1 j:s n ′ j ∈s n i λ n ′ a I(s n ′ j ∈ V n ′ ) +λ n ′ b I(s n ′ j ∈ U n ′ ))) - n-1 n ′ =1 λ n ′ a g n ′ + λ n ′ b h n ′\n, where g n ′ = s n ′ j ∈V n ′ p(s n ′ j ) and h n ′ = s n ′ j ∈U n ′ p(s n ′ j ). Furthermore, let q n ′ a be the probability to observe a high frequency n ′ -gram appearing in any n-gram, and q n ′ b be the probability to observe a low frequency n ′ -gram appearing in any n-gram, we have mn i=1 exp(\nn-1 n ′ =1 j:s n ′ j ∈s n i λ n ′ a I(s n ′ j ∈ V n ′ ) + λ n ′ b I(s n ′ j ∈ U n ′ )) = m n n-1 n ′ =1 (1 + q n ′ a exp(λ n ′ a ))(1 + q n ′ b exp(λ n ′ b )) + O √ m n .\nBy skipping the term O( √ m n ), we further simplify the optimization problem as = min λ log 1 + q n ′ b exp(λ) -λh ′ n .\nAs illustrated by the above analysis, dual variables λ n ′ a and λ n ′ b will only depend on statistics q n ′ a , q n ′ b , g n ′ and h n ′ . They are independent from the detailed statistics p(s n ′ j ) and how each n ′ -gram appears in different n-gram. Thus, this simple analysis does indicate, to some degree, that the solution obtained from the maximum entropy model can be universal, as long as n-grams follow skewed distributions like power law.\nWe informally demonstrate that transformer models utilize attention mechanisms to perform a sophisticated form of n-gram estimation, and the generation rule for such n-gram distributions could be universal. This is how universality is achieved in our proposed cross-domain knowledge transfer. However, we currently lack a concrete metric to evaluate the performance of knowledge transfer between different domains, which requires further investigation. Nonetheless, in our experimental study, we demonstrate that a transformer model (beit) [26] trained on images can perform well on cross-domain time series forecasting tasks. Understand the Gradient Structure of Self-Attention Let X = (x 1 , . . . , x N ) ⊤ ∈ R N ×D be the input pattern, and let f (X) = (f 1 (X), . . . , f N (x)) ⊤ : R N ×D → R N ×D be the function for self-attention, i.e.\nf i (X) = softmax(XAX ⊤ )X\nwhere A = W Q W ⊤ K ∈ R D×D . Let the Jacobian J = ∂fi(X) ∂xj N i,j=1\nrepresent the gradient f (X) with respect to input pattern.\nThe lemma below shows an important structure of J. Proof. According to the analysis from the work, we have the gradient J i,j = ∂fi(X) xj is given by J i,j = P i,j I + X ⊤ Q i XAδ i,j + E j,i XA ⊤\nwhere Q i = diag(P i,: ) -P i,: P ⊤ i,:\nHere P i,: ∈ R N + represents the i-th row of matrix P . We thus have \n|J| 2 ≤ N i,j=1 |J i,j | 2 ≤ N i,j=1 P i,j + N i=1 |X ⊤ Q i X| 2 |A| 2 + N i,j=1 |X ⊤ Q i E j," }, { "figure_ref": [], "heading": "■", "publication_ref": [], "table_ref": [], "text": "As indicated by Lemma 1, one of the key components in the upper bound of Jacobian is |x i -N j=1 P i,j x j | 2 . Thus, through the optimization, we like to reduce the size of the gradient and therefore may prefer to reduce the quantity to N i=1 |x i -N j=1 P i,j x j | 2 . Hence, it will be interesting to understand the choice of W Q and W K that leads to the minimization of where ρ is introduced to control the size of A." 
}, { "figure_ref": [], "heading": "Connection between Self-Attention and Principal Component Analysis", "publication_ref": [], "table_ref": [], "text": "Let consider the optimization problem in (H) when ρ is small, we can approximate P i,j as P i,j ≈ 1 N + 1 N x ⊤ i Ax j Define x = X ⊤ 1/N . We have\nN i=1 |x i -X ⊤ P i,: | 2 = N i=1 x i -x -X ⊤ XAx i\n2 By assuming that all the input patterns are zero centralized, we have x = 0 and N i=1 |x i -X ⊤ XAx i | 2 = tr (I -X ⊤ XA) 2 X ⊤ X The theorem below shows that A minimizing the objective N i=1 |x i -X ⊤ XAx i | 2 contains the largest m eigenvectors of X ⊤ X where m is the rank of A. Theorem 2. Let W Q and W K be matrices of size D × m. Let λ 1 ≥ λ 2 ≥ ... ≥ λ D be the eigenvalues of X ⊤ X ranked in descending order, and let v i ∈ R D , i = 1, . . . , D be the corresponding eigenvectors. The optimal solution A * that minimizes\nN i=1 |x i -X ⊤ XAx i | 2 is given by A = m i=1 1 λi v i v ⊤ i Proof. Since W Q , W K ∈ R D×m\nwhere m < D, we know that A is a matrix of rank m. Hence, we know min\nA N i=1 |x i -X ⊤ XAx i | 2 ≥ N k=m+1 λ k\nWe also know that by choosing A as A = m i=1\n1 λi v i v ⊤ i we have N i=1 |x i -X ⊤ XAx i | 2 = tr I - m i=1 v i v ⊤ i 2 X ⊤ X = D k=m+1\nλ k Hence, the solution A for minimizing N i=1 |x i -X ⊤ XAx i | 2 is essential a weighted combination of top eigenvectors of X ⊤ X. Since a small gradient will prefer a small quantity of N i=1 |x i -X ⊤ XAx i | 2 , by minimizing through the self-attention layer, we essentially choose weight matrix W Q and W K to be aligned with the principal directions of X ⊤ X. ■" }, { "figure_ref": [], "heading": "", "publication_ref": [ "b26", "b16" ], "table_ref": [], "text": "Task Definition Since [27] and [17] have verified that channel-independence works well for time series datasets, we treat each multivariate series as multiple independent univariate series. Similar to traditional experimental settings, each time series is split into three parts: training data, validation data, and test data. For the few-shot forecasting task, only a certain percentage (5%, 10%) timesteps of training data are used, and the other two parts remain unchanged. The evaluation metrics remain the same as for classic multivariate time series forecasting. We repeat this experiment 3 times and report the average metrics in the following experiments.\nDetail Experiment Tables for Few-shot Time-Series Forecasting 5% setting in Table XX " } ]
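The statement of Theorem 2 can be checked numerically: building A from the top-m eigenvectors of X^T X attains the minimum (the sum of the D - m smallest eigenvalues), while a generic rank-m W_Q W_K^T does not. The sketch below is illustrative only, with arbitrary sizes N, D, m chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, m = 256, 16, 4
X = rng.standard_normal((N, D))
X -= X.mean(axis=0)                              # zero-centred input patterns, as assumed

C = X.T @ X
eigval, eigvec = np.linalg.eigh(C)               # eigenvalues in ascending order
A_star = eigvec[:, -m:] @ np.diag(1.0 / eigval[-m:]) @ eigvec[:, -m:].T  # A = sum_i (1/lambda_i) v_i v_i^T


def objective(A):                                # sum_i || x_i - X^T X A x_i ||^2
    residual = X - X @ (C @ A).T
    return float(np.sum(residual ** 2))


A_rand = rng.standard_normal((D, m)) @ rng.standard_normal((D, m)).T     # arbitrary rank-m W_Q W_K^T
print(objective(A_star))                         # matches the theoretical minimum below
print(float(eigval[:-m].sum()))                  # sum of the D - m smallest eigenvalues
print(objective(A_rand))                         # noticeably larger for a generic choice
```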
Despite the impressive achievements of pre-trained models in the fields of natural language processing (NLP) and computer vision (CV), progress in the domain of time series analysis has been limited. In contrast to NLP and CV, where a single model can handle various tasks, time series analysis still relies heavily on task-specific methods for activities such as classification, anomaly detection, forecasting, and few-shot learning. The primary obstacle to developing a pre-trained model for time series analysis is the scarcity of sufficient training data. In our research, we overcome this obstacle by utilizing pre-trained models from language or CV, which have been trained on billions of data points, and apply them to time series analysis. We assess the effectiveness of the pre-trained transformer model in two ways. Initially, we maintain the original structure of the self-attention and feedforward layers in the residual blocks of the pre-trained language or image model, using the Frozen Pre-trained Transformer (FPT) for time series analysis with the addition of projection matrices for input and output. Additionally, we introduce four unique adapters, designed specifically for downstream tasks based on the pre-trained model, including forecasting and anomaly detection. These adapters are further enhanced with efficient parameter tuning, resulting in superior performance compared to all state-of-the-art methods. Our comprehensive experimental studies reveal that (a) the simple FPT achieves top-tier performance across various time series analysis tasks; and (b) fine-tuning the FPT with the custom-designed adapters can further elevate its performance, outshining specialized task-specific models. As presented in Figure 1, pre-trained models from natural language domains demonstrate remarkable performance, outstripping competitors in all key time series analysis tasks. Furthermore, both theoretical and empirical evidence suggests that the self-attention module behaves analogously to principal component analysis (PCA). This insight is instrumental in understanding how the transformer bridges the domain gap and is a vital step toward grasping the universality of a pre-trained transformer. The code is publicly available at https://github.com/PSacfc/GPT4TS_Adapter.
One Fits All: Universal Time Series Analysis by Pretrained LM and Specially Designed Adaptors
[ { "figure_caption": "Fig. 1 :1Fig. 1: Model performance comparison in various tasks.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :Fig. 3 :23Fig. 2: Model architecture. Pre-trained parameters are transferred to the time series tasks. Self-attention and Feedforward layers in the transformer blocks and positional embedding are frozen. Some adapters are insert into the pre-trained model.", "figure_data": "", "figure_id": "fig_1", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Select gate with various scaling factors.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: Illustration of attention and gaussian kernel anomaly adaptor for normal and abnormal points.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig. 6: Visualization of imputation, long-term forecasting and few-shot forecasting.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 :Fig. 8 :78Fig. 7: Visualization of anomaly detection on SMD.", "figure_data": "", "figure_id": "fig_5", "figure_label": "78", "figure_type": "figure" }, { "figure_caption": "Fig. 9 :9Fig. 9: Comparison of pre-trained and non-pre-trained GPT2 with various layers on ETTh2. Color represents various prediction length O ∈ {96, 192} and line style means different models .", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 11 :11Fig. 11: (a, c) The performance and token similarity within samples with respect to each layer with different random mixed ratio. Pre-trained parameters are mixed with random initial parameters according to certain proportions. (b) Token similarity within samples when replacing the attention with PCA.", "figure_data": "", "figure_id": "fig_7", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 13 :13Fig. 13: The token similarity within samples with respect to each layer. (a) GPT2-noPretrain-model; (b) GPT2-Pretrained-model; (c) Pretrained attention is replaced by PCA.", "figure_data": "", "figure_id": "fig_8", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Fig. 14 :14Fig. 14: The t-SNE visualization of sample feature maps for (a) GPT-backbone, (b) end-to-end-PatchTST-model. (c) The token similarity within samples within different continuous sequence lengths.", "figure_data": "", "figure_id": "fig_10", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "[A][B] ... [A] → [B]' rule, which elevates the likelihood of generating the subsequent token 'B' given the current token 'A' if there is a fuzzy match of the AB bigram in the historical context. This rule seems to largely decouple A and B, which means they do not memorize a fixed table of n-gram statistics. The rule [A][B] . . . 
[A] → [B] applies regardless of what A and B are, which can abstract to new patterns.", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "1 Z1(λ) exp (n ′ ,j):s n ′ j ∈s n i λ n ′ j and Z(λ) = mn i=1 exp( (n ′ ,j):s n ′ j ∈s n i", "figure_data": "", "figure_id": "fig_12", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "min λ n- 1 n 1 n11′ =1 log 1 + q n ′ a exp(λ n ′ a ) -+ n-′ =1 log 1 + q n ′ b exp(λ n ′ b -λ n ′ b h n ′ , which is equivalent to λ a n ′ = min λ log 1 + q n ′ a exp(λ) -λg ′ n λ b n ′", "figure_data": "", "figure_id": "fig_13", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Lemma A. 5 . 2 N|A| 2 N 2 + |A|2 2 N52222|J| 2 ≤ |A| i̸ =j P i,j x j -N k=1 P i,k x k j=1 |x i | 2 and P i,j = exp(x ⊤ i Axj ) N k=1 exp(x ⊤ i Ax k )", "figure_data": "", "figure_id": "fig_14", "figure_label": "52222", "figure_type": "figure" }, { "figure_caption": "Ni=1 |x i -N j=1 P i,j x j | 2 , i.e. the following optimization problem min|A| F ≤ρ N i=1 x i -N j=1 P i,j x j 2", "figure_data": "", "figure_id": "fig_15", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Various PEFT methods on ETTh1 and ETTm1.", "figure_data": "DatasetsOursMetric (MSE) Prefix Tuning Parallel Adapter LoRAETTh1 960.3660.3730.3710.374ETTh2 960.2690.2740.2780.281", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Different adapters for different datasets.", "figure_data": "DatasetsAdapters Temporal Channel FrequencyMSE✓--0.375ETTh1 96-✓-0.373--✓0.369✓--0.280ETTh2 96-✓-0.285--✓0.282", "figure_id": "tab_1", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "Anomaly detection task. We calculate the F1-score, Precision and Recall (as %) for each dataset. A higher F1 indicates better performance. Black: best, Underline: second best.", "figure_data": "MethodsGPT2(6)-adapter GPT2(6)-frozen DCdetector Anomaly * TimeNet ** THOC InterFusion OmniAnomaly BeatGANSM DP R F188.65 91.37 89.9988.89 84.98 86.8983.59 91.10 87.1889.40 95.45 92.3387.91 82.54 84.6179.76 90.95 84.9987.02 85.43 86.2283.68 86.82 85.2272.90 84.09 78.10M SLP R F193.26 93.90 93.6082.00 82.91 82.4593.69 99.69 96.6092.09 95.15 93.5989.54 75.36 81.8488.45 90.97 89.6981.28 92.70 86.6289.02 86.37 87.6789.75 85.42 87.53SM APP R F195.43 98.38 96.8890.60 60.95 72.8895.63 98.92 97.0294.13 99.40 96.6990.14 56.40 69.3992.06 89.34 90.6889.77 88.52 89.1492.49 81.99 86.9292.38 55.85 69.60SW aTP R F196.44 99.78 98.0892.20 96.34 94.2393.11 99.77 96.3391.55 96.73 94.0790.75 95.40 93.0283.94 86.36 85.1380.59 85.58 83.0181.42 84.30 82.8364.01 87.76 79.92P SMP R F199.06 97.39 98.2298.62 95.68 97.1397.14 98.74 97.9496.91 98.90 97.8998.51 96.20 97.3488.14 90.99 89.5483.61 83.45 83.5288.39 74.46 80.8390.30 93.84 92.83Avg. F 195.3586.7295.0194.9185.2488.0185.7084.6981.60", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "Long-term forecasting task. We use prediction length O ∈ {96, 192, 336, 720} for ILI and O ∈ {24, 36, 48, 60} for others. A lower MSE indicates better performance. 
Black: best, Underline: second best.", "figure_data": "Methods GPT2(6)-adapter GPT2(6)-frozenGPT2(0)DLinearPatchTSTTimesNetFEDformer AutoformerMetric MSEMAEMSE MAEMSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAEW eather96 0.144 192 0.188 336 0.239 720 0.308 Avg 0.2190.183 0.228 0.268 0.321 0.2500.162 0.212 0.181 0.232 0.176 0.237 0.149 0.198 0.172 0.220 0.217 0.296 0.266 0.336 0.204 0.248 0.222 0.266 0.220 0.282 0.194 0.241 0.219 0.261 0.276 0.336 0.307 0.367 0.254 0.286 0.270 0.299 0.265 0.319 0.245 0.282 0.280 0.306 0.339 0.380 0.359 0.395 0.326 0.337 0.338 0.345 0.333 0.362 0.314 0.334 0.365 0.359 0.403 0.428 0.419 0.428 0.237 0.270 0.252 0.285 0.248 0.300 0.225 0.264 0.259 0.287 0.309 0.360 0.338 0.382ET T h196 0.366 192 0.407 336 0.420 720 0.4320.394 0.420 0.439 0.4550.376 0.397 0.422 0.428 0.375 0.399 0.370 0.399 0.384 0.402 0.376 0.419 0.449 0.459 0.416 0.418 0.466 0.450 0.405 0.416 0.413 0.421 0.436 0.429 0.420 0.448 0.500 0.482 0.442 0.433 0.488 0.464 0.439 0.443 0.422 0.436 0.491 0.469 0.459 0.465 0.521 0.496 0.477 0.456 0.485 0.478 0.472 0.490 0.447 0.466 0.521 0.500 0.506 0.507 0.514 0.512Avg 0.4060.4270.427 0.426 0.465 0.455 0.422 0.437 0.413 0.430 0.458 0.450 0.440 0.460 0.496 0.487ET T h296 0.269 192 0.334 336 0.359 720 0.3920.331 0.379 0.398 0.4330.285 0.342 0.318 0.368 0.289 0.353 0.274 0.336 0.340 0.374 0.358 0.397 0.346 0.388 0.354 0.389 0.383 0.407 0.383 0.418 0.339 0.379 0.402 0.414 0.429 0.439 0.456 0.452 0.373 0.407 0.406 0.427 0.448 0.465 0.329 0.380 0.452 0.452 0.496 0.487 0.482 0.486 0.406 0.441 0.420 0.446 0.605 0.551 0.379 0.422 0.462 0.468 0.463 0.474 0.515 0.511Avg 0.3380.3850.354 0.394 0.381 0.412 0.431 0.446 0.330 0.379 0.414 0.427 0.437 0.449 0.450 0.459ET T m196 0.292 192 0.330 336 0.360 720 0.4130.339 0.363 0.379 0.4060.292 0.346 0.330 0.372 0.299 0.343 0.290 0.342 0.338 0.375 0.379 0.419 0.505 0.475 0.332 0.372 0.371 0.394 0.335 0.365 0.332 0.369 0.374 0.387 0.426 0.441 0.553 0.496 0.366 0.394 0.398 0.409 0.369 0.386 0.366 0.392 0.410 0.411 0.445 0.459 0.621 0.537 0.417 0.421 0.454 0.440 0.425 0.421 0.416 0.420 0.478 0.450 0.543 0.490 0.671 0.561Avg 0.3480.3710.352 0.383 0.388 0.403 0.357 0.378 0.351 0.380 0.400 0.406 0.448 0.452 0.588 0.517ET T m296 0.160 192 0.212 336 0.264 720 0.3550.247 0.287 0.319 0.3760.173 0.262 0.192 0.281 0.167 0.269 0.165 0.255 0.187 0.267 0.203 0.287 0.255 0.339 0.229 0.301 0.245 0.317 0.224 0.303 0.220 0.292 0.249 0.309 0.269 0.328 0.281 0.340 0.286 0.341 0.302 0.352 0.281 0.342 0.274 0.329 0.321 0.351 0.325 0.366 0.339 0.372 0.378 0.401 0.399 0.408 0.397 0.421 0.362 0.385 0.408 0.403 0.421 0.415 0.433 0.432Avg 0.2470.3070.266 0.326 0.284 0.339 0.267 0.333 0.255 0.315 0.291 0.333 0.305 0.349 0.327 0.37196 0.1310.2250.139 0.238 0.138 0.234 0.140 0.237 0.129 0.222 0.168 0.272 0.193 0.308 0.201 0.317ECL192 0.151 336 0.162 720 0.1920.245 0.254 0.2840.153 0.251 0.152 0.247 0.153 0.249 0.157 0.240 0.184 0.289 0.201 0.315 0.222 0.334 0.169 0.266 0.168 0.263 0.169 0.267 0.163 0.259 0.198 0.300 0.214 0.329 0.231 0.338 0.206 0.297 0.207 0.295 0.203 0.301 0.197 0.290 0.220 0.320 0.246 0.355 0.254 0.361Avg 0.1590.2520.167 0.263 0.166 0.259 0.166 0.263 0.161 0.252 0.192 0.295 0.214 0.327 0.227 0.338T raf f ic96 0.378 192 0.384 336 0.393 720 0.4340.250 0.248 0.255 0.2760.388 0.282 0.390 0.272 0.410 0.282 0.360 0.249 0.593 0.321 0.587 0.366 0.613 0.388 0.407 0.290 0.403 0.276 0.423 0.287 0.379 0.256 0.617 0.336 0.604 0.373 0.616 0.382 0.412 0.294 0.413 0.280 0.436 0.296 0.392 0.264 0.629 0.336 0.621 0.383 0.622 0.337 0.450 0.312 0.447 0.298 
0.466 0.315 0.432 0.286 0.640 0.350 0.626 0.382 0.660 0.408Avg 0.3970.2570.414 0.294 0.413 0.281 0.433 0.295 0.390 0.263 0.620 0.336 0.610 0.376 0.628 0.379Average 0.3020.3210.316 0.336 0.335 0.347 0.332 0.350 0.304 0.326 0.376 0.362 0.394 0.396 0.436 0.419", "figure_id": "tab_4", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "Short-term forecasting task on M4. The prediction lengths are in[6,48] and results are weighted averaged from several datasets under different sample intervals. A lower SMAPE value indicates better performance. Black: best, Underline: second best.", "figure_data": "MethodsGPT2(6)-adapter GPT2(6)-frozen TimesNet PatchTST N-HiTS N-BEATS DLinear FEDformer AutoformerY earlySMAPE MASE OWA13.288 3.005 0.78513.531 3.015 0.79313.387 2.996 0.78613.477 3.019 0.79213.418 3.045 0.79313.436 3.043 0.79416.965 4.283 1.05813.728 3.048 0.80313.974 3.134 0.822QuarterlySMAPE MASE OWA9.955 1.162 0.87610.177 1.194 0.89810.100 1.182 0.89010.380 1.233 0.92110.202 1.194 0.89910.124 1.169 0.88612.145 1.520 1.10610.792 1.283 0.95811.338 1.365 1.012M onthlySMAPE MASE OWA12.599 0.933 0.87612.894 0.956 0.89712.670 0.933 0.87812.959 0.970 0.90512.791 0.969 0.89912.677 1.053 0.88013.514 1.037 0.95614.260 1.102 1.01213.958 1.103 1.002OthersSMAPE MASE OWA4.420 3.101 0.9544.940 3.228 1.0294.891 3.302 1.0354.952 3.347 1.0495.061 3.216 1.0404.925 3.391 1.0536.709 4.953 1.4874.954 3.264 1.0365.485 3.865 1.187AverageSMAPE MASE OWA11.713 1.572 0.85811.991 1.600 0.87911.829 1.585 0.86712.059 1.623 0.89011.927 1.613 0.88111.851 1.599 0.87013.639 2.095 1.05112.840 1.701 0.95212.909 1.771 0.972", "figure_id": "tab_5", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "Imputation task. We randomly mask 12.5%, 25%, 37.5%, 50% time points of 96-length time series. A lower MSE indicates better performance. 
Black: best, Underline: second best.", "figure_data": "MethodsGPT2(3)-adapter GPT2(3)-frozen TimesNetPatchTSTDLinearFEDformerStationaryAutoformerMask Ratio MSEMAEMSE MAEMSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAEET T m112.5% 0.018 25% 0.022 37.5% 0.026 50% 0.0340.091 0.099 0.107 0.1230.017 0.085 0.023 0.101 0.041 0.130 0.080 0.193 0.052 0.166 0.032 0.119 0.046 0.144 0.022 0.096 0.023 0.101 0.044 0.135 0.080 0.193 0.052 0.166 0.032 0.119 0.046 0.144 0.029 0.111 0.029 0.111 0.049 0.143 0.103 0.219 0.069 0.191 0.039 0.131 0.057 0.161 0.040 0.128 0.036 0.124 0.055 0.151 0.132 0.248 0.089 0.218 0.047 0.145 0.067 0.174Avg 0.0250.1050.028 0.105 0.027 0.107 0.047 0.140 0.093 0.206 0.062 0.177 0.036 0.126 0.051 0.150ET T m212.5% 0.019 25% 0.021 37.5% 0.024 50% 0.0270.081 0.088 0.094 0.1020.017 0.076 0.018 0.080 0.026 0.094 0.062 0.166 0.056 0.159 0.021 0.088 0.023 0.092 0.020 0.080 0.020 0.085 0.028 0.099 0.085 0.196 0.080 0.195 0.024 0.096 0.026 0.101 0.022 0.087 0.023 0.091 0.030 0.104 0.106 0.222 0.110 0.231 0.027 0.103 0.030 0.108 0.025 0.095 0.026 0.098 0.034 0.110 0.131 0.247 0.156 0.276 0.030 0.108 0.035 0.119Avg 0.0220.0910.021 0.084 0.022 0.088 0.029 0.102 0.096 0.208 0.101 0.215 0.026 0.099 0.029 0.105ET T h112.5% 0.040 25% 0.056 37.5% 0.072 50% 0.1050.137 0.161 0.182 0.2190.043 0.140 0.057 0.159 0.093 0.201 0.151 0.267 0.070 0.190 0.060 0.165 0.074 0.182 0.054 0.156 0.069 0.178 0.107 0.217 0.180 0.292 0.106 0.236 0.080 0.189 0.090 0.203 0.072 0.180 0.084 0.196 0.120 0.230 0.215 0.318 0.124 0.258 0.102 0.212 0.109 0.222 0.107 0.216 0.102 0.215 0.141 0.248 0.257 0.347 0.165 0.299 0.133 0.240 0.137 0.248Avg 0.0680.1740.069 0.173 0.078 0.187 0.115 0.224 0.201 0.306 0.117 0.246 0.094 0.201 0.103 0.214ET T h212.5% 0.027 25% 0.041 37.5% 0.047 50% 0.0540.122 0.130 0.141 0.1520.039 0.125 0.040 0.130 0.057 0.152 0.100 0.216 0.095 0.212 0.042 0.133 0.044 0.138 0.044 0.135 0.046 0.141 0.061 0.158 0.127 0.247 0.137 0.258 0.049 0.147 0.050 0.149 0.051 0.147 0.052 0.151 0.067 0.166 0.158 0.276 0.187 0.304 0.056 0.158 0.060 0.163 0.059 0.158 0.060 0.162 0.073 0.174 0.183 0.299 0.232 0.341 0.065 0.170 0.068 0.173Avg 0.0450.1360.048 0.141 0.049 0.146 0.065 0.163 0.142 0.259 0.163 0.279 0.053 0.152 0.055 0.15612.5% 0.0660.1780.080 0.194 0.085 0.202 0.055 0.160 0.092 0.214 0.107 0.237 0.093 0.210 0.089 0.210ECL25% 0.075 37.5% 0.085 50% 0.9330.191 0.203 0.2120.087 0.203 0.089 0.206 0.065 0.175 0.118 0.247 0.120 0.251 0.097 0.214 0.096 0.220 0.094 0.211 0.094 0.213 0.076 0.189 0.144 0.276 0.136 0.266 0.102 0.220 0.104 0.229 0.101 0.220 0.100 0.221 0.091 0.208 0.175 0.305 0.158 0.284 0.108 0.228 0.113 0.239Avg 0.0800.1960.090 0.207 0.092 0.210 0.072 0.183 0.132 0.260 0.130 0.259 0.100 0.218 0.101 0.225W eather12.5% 0.026 25% 0.029 37.5% 0.031 50% 0.035 Avg 0.0300.046 0.052 0.057 0.064 0.0550.026 0.049 0.025 0.045 0.029 0.049 0.039 0.084 0.041 0.107 0.027 0.051 0.026 0.047 0.028 0.052 0.029 0.052 0.031 0.053 0.048 0.103 0.064 0.163 0.029 0.056 0.030 0.054 0.033 0.060 0.031 0.057 0.035 0.058 0.057 0.117 0.107 0.229 0.033 0.062 0.032 0.060 0.037 0.065 0.034 0.062 0.038 0.063 0.066 0.134 0.183 0.312 0.037 0.068 0.037 0.067 0.031 0.056 0.030 0.054 0.060 0.144 0.052 0.110 0.099 0.203 0.032 0.059 0.031 0.057Average0.0450.1260.048 0.128 0.050 0.132 0.064 0.159 0.119 0.224 0.112 0.229 0.056 0.142 0.061 0.1514) No Pre-training but Freezing: For comprehensivelyablation on pre-training and freezing strategies, we also addexperiment for random initialized GPT2(6) with freezing. 
Theresults in Table", "figure_id": "tab_6", "figure_label": "VI", "figure_type": "table" }, { "figure_caption": "Full results for the classification task. * . in the Transformers indicates the name of * former. A higher accuracy indicates better performance. Black: best, Underline: second best.", "figure_data": "MethodsClassical RNN TCN Rocket LSTNetTransformers Auto. Station. FED. ETS. Flow.DLinear TimesNet GPT2(6)-frozen GPT2(6)-adapterEthanolConcentration45.239.928.9 31.632.731.2 28.1 33.832.635.734.235.0FaceDetection64.765.752.8 68.468.066.0 66.3 67.668.068.669.269.7Handwriting58.825.853.3 36.731.628.0 32.5 33.827.032.132.732.2Heartbeat75.677.175.6 74.673.773.7 71.2 77.675.178.077.279.5JapaneseVowels96.298.198.9 96.299.298.4 95.9 98.996.298.498.698.1PEMS-SF75.186.786.1 82.787.380.9 86.0 83.875.189.687.986.1SelfRegulationSCP190.884.084.6 84.089.488.7 89.6 92.587.391.893.293.2SelfRegulationSCP253.352.855.6 50.657.254.4 55.0 56.150.557.259.460.6SpokenArabicDigits71.2100.0 95.6 100.0 100.0 100.0 100.0 98.881.499.099.298.7UWaveGestureLibrary94.487.888.4 85.987.585.3 85.0 86.682.185.388.187.5Average72.571.870.3 71.172.770.7 71.0 73.067.573.674.074.1", "figure_id": "tab_7", "figure_label": "VII", "figure_type": "table" }, { "figure_caption": "Few-shot learning results on 10% data. We use prediction length O ∈ {96, 192, 336, 720}. A lower MSE indicates better performance. Black: best, Underline: second best.", "figure_data": "Methods GPT2(6)-adapter GPT2(6)-frozenGPT2(0)DLinearPatchTSTTimesNetFEDformer AutoformerMetric MSEMAEMSE MAEMSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAEW eather96 0.155 192 0.202 336 0.251 720 0.320 Avg. 0.2320.201 0.245 0.281 0.329 0.2640.163 0.215 0.190 0.240 0.171 0.224 0.165 0.215 0.184 0.230 0.188 0.253 0.221 0.297 0.210 0.254 0.243 0.284 0.215 0.263 0.210 0.257 0.245 0.283 0.250 0.304 0.270 0.322 0.256 0.292 0.270 0.305 0.258 0.299 0.259 0.297 0.305 0.321 0.312 0.346 0.320 0.351 0.321 0.339 0.348 0.359 0.320 0.346 0.332 0.346 0.381 0.371 0.387 0.393 0.390 0.396 0.238 0.275 0.263 0.297 0.241 0.283 0.242 0.279 0.279 0.301 0.284 0.324 0.300 0.342ET T h196 0.439 192 0.556 336 0.601 720 0.7080.447 0.515 0.543 0.5810.458 0.456 0.601 0.536 0.492 0.495 0.516 0.485 0.861 0.628 0.512 0.499 0.613 0.552 0.570 0.516 0.709 0.587 0.565 0.538 0.598 0.524 0.797 0.593 0.624 0.555 0.722 0.598 0.608 0.535 0.801 0.635 0.721 0.622 0.657 0.550 0.941 0.648 0.691 0.574 0.750 0.619 0.725 0.591 1.385 0.831 0.986 0.743 0.762 0.610 0.877 0.641 0.728 0.614 0.721 0.616Avg. 0.5760.4290.590 0.525 0.874 0.647 0.691 0.600 0.633 0.542 0.869 0.628 0.639 0.561 0.702 0.596ET T h296 0.306 192 0.378 336 0.382 720 0.4350.357 0.399 0.417 0.4560.331 0.374 0.539 0.495 0.357 0.411 0.353 0.389 0.378 0.409 0.382 0.416 0.413 0.451 0.402 0.411 0.675 0.555 0.569 0.519 0.403 0.414 0.490 0.467 0.478 0.474 0.474 0.477 0.406 0.433 0.718 0.580 0.671 0.572 0.426 0.441 0.537 0.494 0.504 0.501 0.547 0.543 0.449 0.464 0.732 0.605 0.824 0.648 0.477 0.480 0.510 0.491 0.499 0.509 0.516 0.523Avg. 0.3750.4070.397 0.421 0.666 0.559 0.605 0.538 0.415 0.431 0.479 0.465 0.466 0.475 0.488 0.499ET T m196 0.370 192 0.419 336 0.466 720 0.5640.397 0.429 0.452 0.5100.390 0.404 0.610 0.508 0.352 0.392 0.410 0.419 0.583 0.501 0.578 0.518 0.774 0.614 0.429 0.423 0.666 0.540 0.382 0.412 0.437 0.434 0.630 0.528 0.617 0.546 0.754 0.592 0.469 0.439 0.895 0.615 0.419 0.434 0.476 0.454 0.725 0.568 0.998 0.775 0.869 0.677 0.569 0.498 0.916 0.646 0.490 0.477 0.681 0.556 0.769 0.549 0.693 0.579 0.810 0.630Avg. 
0.4540.4470.464 0.441 0.772 0.577 0.411 0.429 0.501 0.466 0.677 0.537 0.722 0.605 0.802 0.628ET T m296 0.186 192 0.250 336 0.302 720 0.4130.265 0.309 0.342 0.4080.188 0.269 0.283 0.344 0.213 0.303 0.191 0.274 0.212 0.285 0.291 0.399 0.352 0.454 0.251 0.309 0.353 0.384 0.278 0.345 0.252 0.317 0.270 0.323 0.307 0.379 0.694 0.691 0.307 0.346 0.420 0.422 0.338 0.385 0.306 0.353 0.323 0.353 0.543 0.559 2.408 1.407 0.426 0.417 0.553 0.491 0.436 0.440 0.433 0.427 0.474 0.449 0.712 0.614 1.913 1.166Avg. 0.2870.3310.293 0.335 0.402 0.410 0.316 0.368 0.296 0.343 0.320 0.353 0.463 0.488 1.342 0.93096 0.1390.2340.139 0.237 0.142 0.240 0.150 0.253 0.140 0.238 0.299 0.373 0.231 0.323 0.261 0.348ECL192 0.158 336 0.182 720 0.2470.251 0.277 0.3310.156 0.252 0.158 0.254 0.164 0.264 0.160 0.255 0.305 0.379 0.261 0.356 0.338 0.406 0.175 0.270 0.175 0.271 0.181 0.282 0.180 0.276 0.319 0.391 0.360 0.445 0.410 0.474 0.233 0.317 0.230 0.315 0.223 0.321 0.241 0.323 0.369 0.426 0.530 0.585 0.715 0.685Avg. 0.1810.2730.176 0.269 0.176 0.270 0.180 0.280 0.180 0.273 0.323 0.392 0.346 0.427 0.431 0.478T raf f ic96 0.416 192 0.424 336 0.432 720 0.4800.283 0.292 0.297 0.3350.414 0.297 0.478 0.368 0.419 0.298 0.403 0.289 0.719 0.416 0.639 0.400 0.672 0.405 0.426 0.301 0.481 0.363 0.434 0.305 0.415 0.296 0.748 0.428 0.637 0.416 0.727 0.424 0.434 0.303 0.488 0.365 0.449 0.313 0.426 0.304 0.853 0.471 0.655 0.427 0.749 0.454 0.487 0.337 0.537 0.386 0.484 0.336 0.474 0.331 1.485 0.825 0.722 0.456 0.847 0.499Avg. 0.4380.3010.440 0.310 0.496 0.371 0.447 0.313 0.430 0.305 0.951 0.535 0.663 0.425 0.749 0.446Average 0.3630.3500.371 0.367 0.521 0.447 0.413 0.401 0.385 0.376 0.556 0.458 0.511 0.472 0.687 0.559", "figure_id": "tab_8", "figure_label": "VIII", "figure_type": "table" }, { "figure_caption": "Zero-shot Results. Dataset-specific metrics aggregated over each dataset. A lower value indicates better performance. The source dataset of M3, Tourism, Electricity, and M4. For M4, the source data for N-BEATS is FRED, and M3 for other models. Black: best, Red: second best, Violet: third best. Y, Q, M and O are abbreviations for Yearly, Quarterly, Monthly and Others respectively.", "figure_data": "M4 (sMAPE)M3 (sMAPE)TOURISM (MAPE)ELECTRY (23k)Q (24k)M (48k)O (5k)Avg. (100k)Y (645)Q (756)M (1428)O (174)Avg. (3003)Y (518)Q (427)M (366)Avg. (1311)ND×100Avg.N-BEATS13.269.5912.67 4.6911.6715.079.0713.194.2912.3823.57 14.66 19.3218.8217.815.19DLinear14.19 18.85 14.76 9.1915.3317.439.7415.656.8114.0339.59 18.30 24.7628.5117.618.86TimesNet15.65 11.87 16.16 6.8614.5518.75 12.2614.016.8814.1735.59 19.22 30.5428.8419.318.96PatchTST13.96 10.92 14.66 7.0813.2215.999.6214.719.4413.3933.23 19.27 27.5727.1017.317.67FEDformer13.88 11.51 18.15 7.5215.0416.009.4815.128.9413.5343.41 19.88 28.3931.5518.419.63Autoformer14.55 17.34 25.06 9.6620.0216.18 13.9216.9114.6815.8751.19 34.95 31.4740.3933.927.54GPT(6) -frozen13.74 10.78 14.63 7.0813.1216.42 10.1314.104.8113.0627.17 16.21 21.9222.1417.216.38GPT2(6)--adapter13.64 10.65 14.64 6.9913.0715.749.5313.954.7612.5228.58 15.58 21.9522.4916.316.09", "figure_id": "tab_9", "figure_label": "IX", "figure_type": "table" }, { "figure_caption": "No Pretrain and No Freeze results on 10% data. 
We use prediction length O ∈ {96, 192, 336, 720} for ILI and O ∈ {24, 36, 48, 60} for others.", "figure_data": "MethodsGPT2(6)-frozenNo FreezeNo PretrainMetricMSEMAEMSEMAEMSEMAEW eather96 192 336 7200.163 0.210 0.256 0.3210.215 0.254 0.292 0.3390.168 0.221 0.175 0.229 0.238 0.286 0.244 0.287 0.289 0.318 0.301 0.325 0.398 0.383 0.390 0.378ET T h196 192 336 7200.458 0.570 0.608 0.7250.456 0.516 0.535 0.5910.605 0.532 0.680 0.560 0.713 0.579 0.738 0.602 0.747 0.586 0.893 0.641 0.945 0.688 2.994 1.169ET T h296 192 336 7200.331 0.402 0.406 0.4490.374 0.411 0.433 0.4640.369 0.394 0.422 0.433 0.464 0.455 0.482 0.466 0.420 0.439 0.540 0.496 0.535 0.515 0.564 0.519ET T m196 192 336 7200.390 0.429 0.469 0.5690.404 0.423 0.439 0.4980.429 0.430 0.463 0.446 0.510 0.470 0.506 0.455 0.385 0.401 0.426 0.421 0.780 0.591 0.576 0.505ET T m296 192 336 7200.188 0.251 0.307 0.4260.269 0.309 0.346 0.4170.243 0.311 0.244 0.315 0.307 0.352 0.318 0.363 0.337 0.364 0.409 0.412 0.471 0.440 0.473 0.450", "figure_id": "tab_10", "figure_label": "X", "figure_type": "table" }, { "figure_caption": "Ablation on random initialized model with freezing.", "figure_data": "Methods GPT2(6)-frozenNo FreezeNo PretrainNo Pretrain + FreezeMetricMSEMAEMSE MAE MSE MAE MSEMAEET T h296 192 0.418 0.3760.421 0.4410.440 0.449 0.465 0.457 0.540 0.503 0.478 0.614 0.536 0.7210.497 0.580size of 128) on a 32G V100 GPU.", "figure_id": "tab_11", "figure_label": "XI", "figure_type": "table" }, { "figure_caption": "Ablation by fixing positional embeddings or layer normalization on 5% ETTm1 and ETTm2. Parameters of GPT2(6)-frozen are successively added to the list of fine-tuned parameters.", "figure_data": "MethodsInput & Output+ LN+ POSMetricMSEMAEMSEMAEMSEMAEET T m196 192 336 7200.395 0.444 0.510 0.6070.410 0.438 0.472 0.5170.392 0.409 0.386 0.405 0.436 0.435 0.440 0.438 0.495 0.467 0.485 0.459 0.564 0.503 0.557 0.499ET T m296 192 336 7200.198 0.261 0.336 0.4730.282 0.324 0.377 0.4440.198 0.279 0.199 0.280 0.263 0.325 0.256 0.316 0.322 0.356 0.318 0.353 0.457 0.435 0.460 0.436Fig. 10: Results on various percentages of ETTh2. Linecolor represents different models and line style means variousprediction lengths O ∈ {96, 192}.", "figure_id": "tab_12", "figure_label": "XII", "figure_type": "table" }, { "figure_caption": "Ablation on adapters.", "figure_data": "DatasetsAdapters Temporal Channel FrequencyMSEMAE---0.376 0.397✓--0.375 0.400ETTh1 96-✓-0.373 0.398--✓0.369 0.397✓✓✓0.366 0.394---0.285 0.342✓--0.280 0.341ETTh2 96-✓-0.285 0.343--✓0.282 0.344✓✓✓0.269 0.331", "figure_id": "tab_14", "figure_label": "XIV", "figure_type": "table" }, { "figure_caption": "Ablation on select gate. 
The list displays the learned coefficients, which represent the gate coefficient for each layer.", "figure_data": "DatasetsTemporalLearned Coefficients ChannelFrequencyMSE", "figure_id": "tab_15", "figure_label": "XV", "figure_type": "table" }, { "figure_caption": "", "figure_data": "MethodsMetric96ETTh2 192 336 720 96ETTm2 192 336 720GPT2(6)-frozenMSE 0.376 0.421 0.408 -0.199 0.256 0.318 0.460 MAE 0.419 0.441 0.439 -0.280 0.316 0.353 0.436BERT(6)-frozenMSE 0.397 0.480 0.481 -0.222 0.281 0.331 0.441 MAE 0.418 0.465 0.472 -0.300 0.335 0.367 0.428BEiT(6)-frozenMSE 0.405 0.448 0.524 -0.208 0.272 0.331 0.452 MAE 0.418 0.446 0.500 -0.291 0.326 0.362 0.433DLinear [27]MSE 0.442 0.617 1.424 -0.236 0.306 0.380 0.674 MAE 0.456 0.542 0.849 -0.326 0.373 0.423 0.583PatchTST [17]MSE 0.401 0.452 0.464 -0.206 0.264 0.334 0.454 MAE 0.421 0.455 0.469 -0.288 0.324 0.367 0.483FEDformer [14]MSE 0.390 0.457 0.477 -0.299 0.290 0.378 0.523 MAE 0.424 0.465 0.483 -0.320 0.361 0.427 0.510Autoformer [16]MSE 0.428 0.496 0.486 -0.232 0.291 0.478 0.533 MAE 0.468 0.504 0.496 -0.322 0.357 0.517 0.538tion for self-attention, i.e., f i", "figure_id": "tab_17", "figure_label": "XVI", "figure_type": "table" }, { "figure_caption": "Training parameters and Training/Inference Cost Comparison", "figure_data": "ModelTraining Params Percentages Training(s) Inference(s)FEDformer-3243k1000.8890.170TimesNet-321.9M1000.7470.302PatchTST-32543k1000.0430.022FEDformer-76833M1000.2080.056TimesNet-76842M1005.7232.162PatchTST-76820M1000.4570.123GPT-2(3)-7684M6.120.0930.032GPT-2(6)-7684M4.60.1040.054GPT2(6)-frozen-7684M4.60.1040.054", "figure_id": "tab_18", "figure_label": "XVII", "figure_type": "table" }, { "figure_caption": "Dataset details of few-shot learning.", "figure_data": "DatasetLength Dimension FrequencyETTh1742071 hourETTm69680715 minWeather526962210 minILI96677 daysElectricity263043211 hourTraffic175448621 hour", "figure_id": "tab_19", "figure_label": "XVIII", "figure_type": "table" }, { "figure_caption": "A subset of results showing both Mean and STD on 5% datasets.Since deep learning methods are more advantageous than traditional methods when applied to large datasets. For few-shot learning, traditional methods should also consider. The results are shown in Table XXII that GPT2(6) also achieves best performance.", "figure_data": "MethodsGPT2-backbone(6 Layers)MetricMSEMAEET T h296 192 0.418 ± 0.0013 0.441 ± 0.0014 0.376 ± 0.0072 0.421 ± 0.0054 336 0.408 ± 0.0006 0.439 ± 0.0002 720 --ET T m296 192 0.256 ± 0.0030 0.316 ± 0.0017 0.199 ± 0.0040 0.280 ± 0.0042 336 0.318 ± 0.0046 0.353 ± 0.0032 720 0.460 ± 0.0132 0.436 ± 0.0066D. Comparison with Traditional Methods on Few-shot Learning", "figure_id": "tab_20", "figure_label": "XXI", "figure_type": "table" }, { "figure_caption": "Comparison with traditional methods.", "figure_data": "MethodsGPT2(6) 5%GPT2(6) 10%ETSARIMANaiveDriftMetricMSE MAE MSE MAEMSEMAE MSE MAE MSE MAEET T h296 192 0.418 0.441 0.402 0.411 0.376 0.421 0.331 0.3742.954 10.2260.742 0.481 0.443 0.764 0.561 1.212 0.585 0.495 1.560 0.785ET T m196 192 0.440 0.438 0.429 0.423 186.445 4.654 0.710 0.557 2.869 1.215 0.386 0.405 0.390 0.404 52.237 2.689 0.693 0.547 1.539 0.913", "figure_id": "tab_21", "figure_label": "XXII", "figure_type": "table" }, { "figure_caption": "Comparison on 5% data. 
Autoformer and FEDformer are equiped with instance normalization.", "figure_data": "MethodsGPT2(6)PatchTSTDLinearAutoformerAutoformer(Revin)FEDformerFEDformer(Revin)MetricMSE MAE MSE MAE MSE MAE MSE MAE MSEMAEMSE MAE MSEMAEET T m296 192 0.256 0.316 0.264 0.324 0.306 0.373 0.291 0.357 0.296 0.199 0.280 0.206 0.288 0.236 0.326 0.232 0.322 0.2240.300 0.3430.229 0.320 0.223 0.294 0.361 0.2880.298 0.336", "figure_id": "tab_22", "figure_label": "XXIII", "figure_type": "table" }, { "figure_caption": "i X| 2 |A| 2 ≤ N + |A| 2 P i,j x ⊤ i x j -X ⊤ P i,:", "figure_data": "N i=1N j=1 P i,j |x j | 2 -N j=1 P i,j x j2+ |A| 2N i,j=1 |X ⊤ Q i e j x ⊤ i |≤ N + |A| 2 i,j=1 ≤ |A| 2 N i=1 N j=1 P i,j x j -N k=1 P i,k x k 2 N + |A| 2 N i=1 P i,i + 1 2 x i -X ⊤ P i,: 2 + N + |A| 2 N i̸ =j P i,j x j -X ⊤ P i,:2 +|A| 2 2N j=1|x i | 2:=∆", "figure_id": "tab_23", "figure_label": "", "figure_type": "table" } ]
Tian Zhou; Peisong Niu; Xue Wang; Liang Sun; Rong Jin
[ { "authors": "R Hyndman; G Athanasopoulos", "journal": "OTexts", "ref_id": "b0", "title": "Forecasting: Principles and Practice", "year": "2021" }, { "authors": "Q Wen; L Yang; T Zhou; L Sun", "journal": "", "ref_id": "b1", "title": "Robust time series analysis and applications: An industrial perspective", "year": "2022" }, { "authors": "J.-H Böse", "journal": "Proceedings of the VLDB Endowment", "ref_id": "b2", "title": "Probabilistic demand forecasting at scale", "year": "2017" }, { "authors": "P Courty; H Li", "journal": "The Journal of Business", "ref_id": "b3", "title": "Timing of seasonal sales", "year": "1999" }, { "authors": "M Friedman", "journal": "J. Amer. Statist. Assoc", "ref_id": "b4", "title": "The interpolation of time series by related series", "year": "1962" }, { "authors": "J Gao; X Song; Q Wen; P Wang; L Sun; H Xu", "journal": "", "ref_id": "b5", "title": "RobustTAD: Robust time series anomaly detection via decomposition and convolutional neural networks", "year": "2020" }, { "authors": "H Ismail Fawaz; G Forestier; J Weber; L Idoumghar; P.-A Muller", "journal": "Data Mining and Knowledge Discovery", "ref_id": "b6", "title": "Deep learning for time series classification: a review", "year": "2019" }, { "authors": "A Vaswani", "journal": "", "ref_id": "b7", "title": "Attention is all you need", "year": "2017" }, { "authors": "J Devlin; M Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b8", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "A Dosovitskiy; Etc", "journal": "", "ref_id": "b9", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Y Rao; W Zhao; Z Zhu; J Lu; J Zhou", "journal": "Advances in Neural Information Processing Systems (NeurIPS)", "ref_id": "b10", "title": "Global filter networks for image classification", "year": "2021" }, { "authors": "Q Wen; T Zhou; C Zhang; W Chen; Z Ma; J Yan; L Sun", "journal": "", "ref_id": "b11", "title": "Transformers in time series: A survey", "year": "2023" }, { "authors": "B Lim; S Ö Arık; N Loeff; T Pfister", "journal": "International Journal of Forecasting", "ref_id": "b12", "title": "Temporal fusion transformers for interpretable multi-horizon time series forecasting", "year": "2021" }, { "authors": "T Zhou; Z Ma; Q Wen; X Wang; L Sun; R Jin", "journal": "", "ref_id": "b13", "title": "FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting", "year": "2022" }, { "authors": "H Zhou; S Zhang; J Peng; S Zhang; J Li; H Xiong; W Zhang", "journal": "", "ref_id": "b14", "title": "Informer: Beyond efficient transformer for long sequence time-series forecasting", "year": "2021" }, { "authors": "H Wu; J Xu; J Wang; M Long", "journal": "", "ref_id": "b15", "title": "Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting", "year": "2021" }, { "authors": "Y Nie; N H Nguyen; P Sinthong; J Kalagnanam", "journal": "", "ref_id": "b16", "title": "A time series is worth 64 words: Long-term forecasting with transformers", "year": "2022" }, { "authors": "R Godahewa; C Bergmeir; G I Webb; R J Hyndman; P Montero-Manso", "journal": "", "ref_id": "b17", "title": "Monash time series forecasting archive", "year": "2021" }, { "authors": "K Lu; A Grover; P Abbeel; I Mordatch", "journal": "", "ref_id": "b18", "title": "Frozen pretrained transformers as universal computation engines", "year": "2022-06" }, { "authors": "A Giannou; S 
Rajput; J -Y. Sohn; K Lee; J D Lee; D Papailiopoulos", "journal": "", "ref_id": "b19", "title": "Looped Transformers as Programmable Computers", "year": "2023-01" }, { "authors": "G E Box; G M Jenkins", "journal": "Journal of the Royal Statistical Society. Series C (Applied Statistics)", "ref_id": "b20", "title": "Some recent advances in forecasting and control", "year": "1968" }, { "authors": "G E Box; D A Pierce", "journal": "Journal of the American statistical Association", "ref_id": "b21", "title": "Distribution of residual autocorrelations in autoregressive-integrated moving average time series models", "year": "1970" }, { "authors": "S Hochreiter; J Schmidhuber", "journal": "Neural computation", "ref_id": "b22", "title": "Long short-term memory", "year": "1997" }, { "authors": "J Chung; C Gulcehre; K Cho; Y Bengio", "journal": "", "ref_id": "b23", "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "year": "2014" }, { "authors": "A Radford; J Wu; R Child; D Luan; D Amodei; I Sutskever", "journal": "", "ref_id": "b24", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "H Bao; L Dong; S Piao; F Wei", "journal": "", "ref_id": "b25", "title": "BEit: BERT pre-training of image transformers", "year": "2022" }, { "authors": "A Zeng; M Chen; L Zhang; Q Xu", "journal": "", "ref_id": "b26", "title": "Are transformers effective for time series forecasting?", "year": "2023" }, { "authors": "A Radford; K Narasimhan", "journal": "", "ref_id": "b27", "title": "Improving language understanding by generative pre-training", "year": "2018" }, { "authors": "H Touvron; M Cord; M Douze; F Massa; A Sablayrolles; H Jégou", "journal": "PMLR", "ref_id": "b28", "title": "Training data-efficient image transformers & distillation through attention", "year": "2021" }, { "authors": "H Bao; W Wang; L Dong; Q Liu; O K Mohammed; K Aggarwal; S Som; F Wei", "journal": "", "ref_id": "b29", "title": "Vlmo: Unified vision-language pre-training with mixture-of-modality-experts", "year": "2021" }, { "authors": "C.-H H Yang; Y.-Y Tsai; P.-Y Chen", "journal": "", "ref_id": "b30", "title": "Voice2series: Reprogramming acoustic models for time series classification", "year": "2021" }, { "authors": "N Houlsby; A Giurgiu; S Jastrzebski; B Morrone; Q De Laroussilhe; A Gesmundo; M Attariyan; S Gelly", "journal": "PMLR", "ref_id": "b31", "title": "Parameter-efficient transfer learning for NLP", "year": "2019-06-15" }, { "authors": "X L Li; P Liang", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Prefix-tuning: Optimizing continuous prompts for generation", "year": "2021-08" }, { "authors": "E J Hu; Y Shen; P Wallis; Z Allen-Zhu; Y Li; S Wang; L Wang; W Chen", "journal": "", "ref_id": "b33", "title": "LoRA: Low-rank adaptation of large language models", "year": "2022" }, { "authors": "J He; C Zhou; X Ma; T Berg-Kirkpatrick; G Neubig", "journal": "", "ref_id": "b34", "title": "Towards a unified view of parameter-efficient transfer learning", "year": "2022" }, { "authors": "T Kim; J Kim; Y Tae; C Park; J.-H Choi; J Choo", "journal": "", "ref_id": "b35", "title": "Reversible instance normalization for accurate time-series forecasting against distribution shift", "year": "2022" }, { "authors": "N Houlsby", "journal": "PMLR", "ref_id": "b36", "title": "Parameter-efficient transfer learning for nlp", "year": "2019" }, { "authors": "Y Zhu; J Feng; C Zhao; M Wang; L Li", "journal": "", "ref_id": "b37", "title": 
"Counter-interference adapter for multilingual machine translation", "year": "2021" }, { "authors": "T Zhou; P Niu; X Wang; L Sun; R Jin", "journal": "", "ref_id": "b38", "title": "One Fits All: Power general time series analysis by pretrained lm", "year": "2023" }, { "authors": "H Wu; T Hu; Y Liu; H Zhou; J Wang; M Long", "journal": "", "ref_id": "b39", "title": "Timesnet: Temporal 2d-variation modeling for general time series analysis", "year": "2023" }, { "authors": "J Xu; H Wu; J Wang; M Long", "journal": "", "ref_id": "b40", "title": "Anomaly transformer: Time series anomaly detection with association discrepancy", "year": "2021" }, { "authors": "Y Yang; C Zhang; T Zhou; Q Wen; L Sun", "journal": "", "ref_id": "b41", "title": "Dcdetector: Dual attention contrastive representation learning for time series anomaly detection", "year": "2023" }, { "authors": "G Woo; C Liu; D Sahoo; A Kumar; S Hoi", "journal": "", "ref_id": "b42", "title": "Etsformer: Exponential smoothing transformers for time-series forecasting", "year": "2022" }, { "authors": "Y Liu; H Wu; J Wang; M Long", "journal": "", "ref_id": "b43", "title": "Non-stationary transformers: Exploring the stationarity in time series forecasting", "year": "2022" }, { "authors": "Z Huang; X Shi; C Zhang; Q Wang; K C Cheung; H Qin; J Dai; H Li", "journal": "Springer", "ref_id": "b44", "title": "Flowformer: A transformer architecture for optical flow", "year": "2022" }, { "authors": "C Challu", "journal": "", "ref_id": "b45", "title": "N-hits: Neural hierarchical interpolation for time series forecasting", "year": "2022" }, { "authors": "B N Oreshkin; D Carpov; N Chapados; Y Bengio", "journal": "", "ref_id": "b46", "title": "N-beats: Neural basis expansion analysis for interpretable time series forecasting", "year": "2019" }, { "authors": "L Shen; Z Li; J Kwok", "journal": "Curran Associates, Inc", "ref_id": "b47", "title": "Timeseries anomaly detection using temporal hierarchical one-class network", "year": "2020" }, { "authors": "Z Li; Y Zhao; J Han; Y Su; R Jiao; X Wen; D Pei", "journal": "Association for Computing Machinery", "ref_id": "b48", "title": "Multivariate time series anomaly detection and interpretation using hierarchical inter-metric and temporal embedding", "year": "2021" }, { "authors": "Y Su; Y Zhao; C Niu; R Liu; W Sun; D Pei", "journal": "Association for Computing Machinery", "ref_id": "b49", "title": "Robust anomaly detection for multivariate time series through stochastic recurrent neural network", "year": "2019" }, { "authors": "B Zhou; S Liu; B Hooi; X Cheng; J Ye", "journal": "", "ref_id": "b50", "title": "Beatgan: Anomalous rhythm detection using adversarially generated time series", "year": "2019" }, { "authors": "A Dempster; F Petitjean; G I Webb", "journal": "Data Mining and Knowledge Discovery", "ref_id": "b51", "title": "ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels", "year": "2020" }, { "authors": "G Lai; W.-C Chang; Y Yang; H Liu", "journal": "", "ref_id": "b52", "title": "Modeling long-and short-term temporal patterns with deep neural networks", "year": "2018" }, { "authors": "J.-Y Franceschi; A Dieuleveut; M Jaggi", "journal": "Advances in neural information processing systems", "ref_id": "b53", "title": "Unsupervised scalable representation learning for multivariate time series", "year": "2019" }, { "authors": "Y Su; Y Zhao; C Niu; R Liu; W Sun; D Pei", "journal": "", "ref_id": "b54", "title": "Robust anomaly detection for multivariate time series 
through stochastic recurrent neural network", "year": "2019" }, { "authors": "K Hundman; V Constantinou; C Laporte; I Colwell; T Soderstrom", "journal": "", "ref_id": "b55", "title": "Detecting spacecraft anomalies using lstms and nonparametric dynamic thresholding", "year": "2018" }, { "authors": "A P Mathur; N O Tippenhauer", "journal": "IEEE", "ref_id": "b56", "title": "Swat: A water treatment testbed for research and training on ics security", "year": "2016" }, { "authors": "A Abdulaal; Z Liu; T Lancewicki", "journal": "", "ref_id": "b57", "title": "Practical approach to asynchronous multivariate time series anomaly detection and localization", "year": "2021" }, { "authors": "S Makridakis; E Spiliotis; V Assimakopoulos", "journal": "International Journal of Forecasting", "ref_id": "b58", "title": "The m4 competition: Results, findings, conclusion and way forward", "year": "2018" }, { "authors": "A Bagnall", "journal": "", "ref_id": "b59", "title": "The uea multivariate time series classification archive", "year": "2018" }, { "authors": "T B Brown", "journal": "", "ref_id": "b60", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": " Openai", "journal": "", "ref_id": "b61", "title": "Gpt-4 technical report", "year": "2023" }, { "authors": "H Kim; G Papamakarios; A Mnih", "journal": "PMLR", "ref_id": "b62", "title": "The lipschitz constant of selfattention", "year": "2021" }, { "authors": "G Delétang; A Ruoss; P.-A Duquenne; E Catt; T Genewein; C Mattern; J Grau-Moya; L K Wenliang; M Aitchison; L Orseau", "journal": "", "ref_id": "b63", "title": "Language modeling is compression", "year": "2023" }, { "authors": "N Elhage; Etc", "journal": "", "ref_id": "b64", "title": "A mathematical framework for transformer circuits", "year": "2021" }, { "authors": "C Olsson; Etc", "journal": "", "ref_id": "b65", "title": "In-context learning and induction heads", "year": "2022" }, { "authors": "T Wolf", "journal": "Association for Computational Linguistics", "ref_id": "b66", "title": "Transformers: State-of-the-art natural language processing", "year": "2020-10" }, { "authors": "B N Oreshkin; D Carpov; N Chapados; Y Bengio", "journal": "", "ref_id": "b67", "title": "Meta-learning framework with applications to zero-shot time-series forecasting", "year": "2021" }, { "authors": "P Wang; X Wang; F Wang; M Lin; S Chang; H Li; R Jin", "journal": "Springer", "ref_id": "b68", "title": "Kvt: k-nn attention for boosting vision transformers", "year": "2022" }, { "authors": "S Lacoste-Julien; M Schmidt; F Bach", "journal": "", "ref_id": "b69", "title": "A simpler approach to obtaining an o (1/t) convergence rate for the projected stochastic subgradient method", "year": "2012" }, { "authors": "T Wang; P Isola", "journal": "", "ref_id": "b70", "title": "Understanding contrastive representation learning through alignment and uniformity on the hypersphere", "year": "2020" }, { "authors": "G Vardi; G Yehudai; O Shamir", "journal": "", "ref_id": "b71", "title": "On the optimal memorization power of relu neural networks", "year": "2021" }, { "authors": "C Yun; Y.-W Chang; S Bhojanapalli; A S Rawat; S Reddi; S Kumar", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b72", "title": "O (n) connections are expressive enough: Universal approximability of sparse transformers", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 405.33, 160.15, 158.37, 23.2 ], "formula_id": "formula_0", "formula_text": "X = X -Exp √ V ar + ϵ .(1)" }, { "formula_coordinates": [ 4, 362.56, 545.66, 201.14, 12.84 ], "formula_id": "formula_1", "formula_text": "Xfreq l = iFFT(W l • FFT(X patched ))(2)" }, { "formula_coordinates": [ 4, 382.29, 565.48, 181.42, 12.84 ], "formula_id": "formula_2", "formula_text": "P l = Embedding( Xfreq l ).(3)" }, { "formula_coordinates": [ 4, 375.03, 698.67, 188.67, 22.31 ], "formula_id": "formula_3", "formula_text": "gate(g) = σ(λg) = 1 1 + e -λg ,(4)" }, { "formula_coordinates": [ 5, 98.75, 736.33, 201.94, 13.91 ], "formula_id": "formula_4", "formula_text": "loss discrepancy l = KL( Âl ||A anomaly l )(5)" }, { "formula_coordinates": [ 5, 382.18, 301.81, 181.53, 22.31 ], "formula_id": "formula_5", "formula_text": "Âl = 1 2 [A + A T -diag(A)](6)" }, { "formula_coordinates": [ 5, 401.01, 329.28, 162.7, 24.51 ], "formula_id": "formula_6", "formula_text": "= [ 1 √ 2πσ i exp(- dis(i, j) 2σ 2 i )],(7)" }, { "formula_coordinates": [ 14, 326.8, 284.11, 221.42, 24.84 ], "formula_id": "formula_7", "formula_text": "[1,0,0,1,0,0] [1,1,1,0,1,1] [1,1,1,1,1,1] 0.432 ETTh2 96 [1,0,1,1,0,0] [1,1,0,0,1,1] [1,1,1,1,0,1] 0.269 ETTh2 720 [1,0,1,1,1,1] [1,1,1,1,1,1] [1,1,1,1,1,1] 0.392" }, { "formula_coordinates": [ 15, 48.61, 560.38, 242.17, 50.15 ], "formula_id": "formula_8", "formula_text": "(X) = softmax(XAX ⊤ )X where A = W Q W ⊤ K ∈ R D×D . Lemma IX.1. Let the Jacobian J = ∂fi(X) ∂xj N i,j=1" }, { "formula_coordinates": [ 15, 56.59, 620.25, 201.26, 16.69 ], "formula_id": "formula_9", "formula_text": "|J| 2 ≤ |A| 2 N i=1 P i,i + 1 2 x i - N j=1 P i,j x j 2 + ∆" }, { "formula_coordinates": [ 15, 48.96, 638.22, 229.66, 35.37 ], "formula_id": "formula_10", "formula_text": "∆ = |A| 2 N i̸ =j P i,j x j - N k=1 P i,k x k 2 + |A|2 2 N j=1 |x i | 2 and P i,j = exp(x ⊤ i Axj ) N k=1 exp(x ⊤ i Axk) ." }, { "formula_coordinates": [ 15, 322.49, 464.32, 128.72, 14.11 ], "formula_id": "formula_11", "formula_text": "N i=1 |x i - N j=1 P i,j x j | 2 small." }, { "formula_coordinates": [ 15, 311.98, 478.44, 251.06, 26.88 ], "formula_id": "formula_12", "formula_text": "N i=1 x i = 0), we have N i=1 |x i -X ⊤ P i,: | 2 ≈ N i=1 |x i -X ⊤ XAx i | 2 ." }, { "formula_coordinates": [ 15, 311.42, 515.07, 251.62, 51.43 ], "formula_id": "formula_13", "formula_text": "N i=1 |x i -X ⊤ XAx i | 2 contains the largest m eigenvectors of X ⊤ X where m is the rank of A. Theorem 1. Let W Q and W K be matrices of size D × m. Let λ 1 ≥ λ 2 ≥ ... ≥ λ D be" }, { "formula_coordinates": [ 15, 322.49, 589.74, 216, 14.56 ], "formula_id": "formula_14", "formula_text": "N i=1 |x i -X ⊤ XAx i | 2 is given by A = m i=1 1 λi v i v ⊤" }, { "formula_coordinates": [ 21, 55.61, 243.35, 426.44, 25.58 ], "formula_id": "formula_15", "formula_text": "∥µ i ∥ = ν 1 , and b) µ i W Q W T K µ i ∈ [ν 2 , ν 4 ] for all i and |µ i W Q W ⊤ K µ ⊤ j | ≤ ν 2 for all µ i ̸ = µ j ∈ V. 
3) W V and W Q W ⊤" }, { "formula_coordinates": [ 21, 414.21, 256.15, 147.13, 14.22 ], "formula_id": "formula_16", "formula_text": "(ij) V | ≤ ν 5 and |(W Q W ⊤ K ) (ij) | ≤ ν 6" }, { "formula_coordinates": [ 21, 48.96, 316.7, 514.07, 110.27 ], "formula_id": "formula_17", "formula_text": "√ d ≥ 3(ψ(δ, d) + ν 2 + ν 4 ), then with probability 1 -5δ, we have n i=1 exp 1 √ d x l W Q W ⊤ k x i x i W V n j=1 exp 1 √ d x l W Q W ⊤ K x j -µ l W V ∞ ≤ 4 exp ψ(δ, d) √ d σν 5 2 dn log 2d δ + 7 exp ν 2 -ν 4 + ψ(δ, d) √ d -1 ∥µ l W V ∥ ∞ ," }, { "formula_coordinates": [ 21, 48.96, 466.06, 514.07, 29.42 ], "formula_id": "formula_18", "formula_text": "k 1 = k = n. ■ H. Theorem A.4" }, { "formula_coordinates": [ 21, 229.17, 557.45, 334.54, 30.32 ], "formula_id": "formula_19", "formula_text": "W = arg min 1 2N N i=1 ∥W g i -y i ∥ 2 2 .(8)" }, { "formula_coordinates": [ 21, 175.83, 619.25, 387.87, 30.32 ], "formula_id": "formula_20", "formula_text": "1 t t j=1 1 2N N i=1 ∥W j g i -y i ∥ 2 2 - 1 2N N i=1 ∥W * g i -y i ∥ 2 2 ≤ ϵ,(9)" }, { "formula_coordinates": [ 21, 125.35, 714.56, 434.2, 30.32 ], "formula_id": "formula_21", "formula_text": "1 t t j=1 1 2N N i=1 ∥W j g i -y i ∥ 2 2 - 1 2N N i=1 ∥W * g i -y i ∥ 2 2 ≤ O log t σt = Õ(σ -1 t -1 ). (10" }, { "formula_coordinates": [ 21, 559.55, 725.29, 4.15, 8.64 ], "formula_id": "formula_22", "formula_text": ")" }, { "formula_coordinates": [ 24, 223.46, 151.07, 137.96, 14.37 ], "formula_id": "formula_23", "formula_text": "λ n ′ j - n-1 n ′ =1 m n ′ j=1 λ n ′ j p(s n ′ j )," }, { "formula_coordinates": [ 24, 95.36, 275.58, 207.3, 32.27 ], "formula_id": "formula_24", "formula_text": "n-1 n ′ =1 j:s n ′ j ∈s n i λ n ′ a I(s n ′ j ∈ V n ′ ) +λ n ′ b I(s n ′ j ∈ U n ′ ))) - n-1 n ′ =1 λ n ′ a g n ′ + λ n ′ b h n ′" }, { "formula_coordinates": [ 24, 78.59, 353.47, 266.05, 30.31 ], "formula_id": "formula_25", "formula_text": "n-1 n ′ =1 j:s n ′ j ∈s n i λ n ′ a I(s n ′ j ∈ V n ′ ) + λ n ′ b I(s n ′ j ∈ U n ′ )) = m n n-1 n ′ =1 (1 + q n ′ a exp(λ n ′ a ))(1 + q n ′ b exp(λ n ′ b )) + O √ m n ." }, { "formula_coordinates": [ 25, 64.85, 52.48, 195.54, 19.76 ], "formula_id": "formula_26", "formula_text": "|J| 2 ≤ N i,j=1 |J i,j | 2 ≤ N i,j=1 P i,j + N i=1 |X ⊤ Q i X| 2 |A| 2 + N i,j=1 |X ⊤ Q i E j," }, { "formula_coordinates": [ 25, 168.39, 237.49, 198.37, 14.11 ], "formula_id": "formula_27", "formula_text": "N i=1 |x i -X ⊤ P i,: | 2 = N i=1 x i -x -X ⊤ XAx i" }, { "formula_coordinates": [ 25, 48.96, 302.59, 234.98, 31.41 ], "formula_id": "formula_28", "formula_text": "N i=1 |x i -X ⊤ XAx i | 2 is given by A = m i=1 1 λi v i v ⊤ i Proof. Since W Q , W K ∈ R D×m" }, { "formula_coordinates": [ 25, 62.74, 334, 174.69, 17.55 ], "formula_id": "formula_29", "formula_text": "A N i=1 |x i -X ⊤ XAx i | 2 ≥ N k=m+1 λ k" }, { "formula_coordinates": [ 25, 72.93, 335.92, 456.32, 31.07 ], "formula_id": "formula_30", "formula_text": "1 λi v i v ⊤ i we have N i=1 |x i -X ⊤ XAx i | 2 = tr I - m i=1 v i v ⊤ i 2 X ⊤ X = D k=m+1" } ]
10.1016/j.csl.2019.06.009
2023-11-24
[ { "figure_ref": [], "heading": "Contexts of application", "publication_ref": [], "table_ref": [], "text": "Multilingual generation is important for reaching a wider audience, as shown by the ever-increasing need for translation. In recent years, automatic translation has become an everyday tool for most people, especially non-English speakers, but its output must always be taken with care, especially when the target of the translation is not the mother tongue of the writer. While humans are very good at filling in missing information or correcting details in their own language, automatic translations should always be revised by professionals for publication or official texts.
One difficult challenge in translation, both human and automatic, is ensuring that the information in the source and target texts is strictly equivalent, especially in the case of statistical data. Although human translators work with great care, they do not always reproduce the numbers exactly in their translation, which can be embarrassing and can have legal consequences. Rule-based and statistical automatic translators are less prone to these types of errors as they most often copy the values from the original to their translations. Although neural automatic translators produce very fluent texts, they are prone to hallucinations because they start from an abstraction of the original information, so their output must be checked carefully to ensure that the same information is conveyed in both languages.
While automatic translation can be appropriate for texts written by humans, it is unnecessary when the text can be generated automatically in both languages. For example, in Canada, thousands of weather reports are generated daily from the output of numerical models. Meteorologists use graphical tools to fine-tune the numerical outputs, but the English and French versions are generated automatically, thus removing the translation delay and guaranteeing that the same information is conveyed in both languages, as required by Canadian law. Similar arguments can be made for generating business reports for multinational corporations or sports narratives directly from data. Although, in this report, we focus on data-to-text applications, we will also show how bilingual generation can be used for creating translation drill exercises for students." }, { "figure_ref": [], "heading": "What is data-to-text ?", "publication_ref": [ "b9", "b7" ], "table_ref": [], "text": "Before tackling the text generation process, we lay out the types of applications for which bilingual generation seems more appropriate. To simplify, we only consider the generation of a single sentence, but the process can be applied to all sentences of a text.
Adapting the notation introduced by Upadhyay and Massie (2022), we define a dataset as a group of data instances, called events, each described by a data structure from which a sentence must be generated to convey the insights and the information about the event. The data structure is a set of objects described by features for which values are recorded.
A text generator in a data-to-text context is thus a function from subsets of data structures to a sentence. Sometimes the events are independent, for example, when describing a restaurant with a list of features such as the food, location, the prices, etc.
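To make this abstraction concrete, here is a minimal sketch in plain Python; the field names are illustrative, loosely inspired by the restaurant example used later for the E2E challenge, and are not taken from an actual dataset.

# One event: an object (a restaurant) described by features and their recorded values.
restaurant_event = {
    "name": "The Mill",
    "eatType": "pub",
    "food": "English",
    "area": "riverside",
    "priceRange": "high",
}

def generate(events, lang):
    """Map a subset of such data structures to a sentence in the given language."""
    ...  # content selection, then language-dependent realization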
But more realistically, time dependencies occur between events; for example, when describing sports matches, it is important to convey relations between events to show the progress of a player or a team within a season. An important feature of such a dataset is thus some time stamp, such as a date or a link between games.
In this context, the generation process is classically divided in two subtasks (Reiter and Dale, 2000):
Content selection (What to say?) determines the content of the sentence by selecting the set of data structures to convey. This step being language independent, it is performed once for both languages.
Text realization (How to say?) chooses the phrase structure and the words to use in the sentence and performs the linguistic realization. This step is language dependent. If the target languages have some commonalities (such as between English and French), it is possible to share parts of the language dependent processing.
The generation of bilingual sentences in English and French can thus be framed as the composition of these functions, where the selected events are identical for both languages.
In end-to-end generation using neural methods that go directly from a data set to a text, each event is represented as a flattened tuple of feature-value pairs from which the text is directly generated. It thus remains a challenge to ensure a consistent event selection for each language; moreover, there is always the risk of hallucinations (reporting events that do not appear in the original data) because of the generalization/abstraction process inherent in neural approaches.
We present how we use pyrealb to implement both content selection and realization with a rule-based approach in different settings.
Given that pyrealb is implemented in Python, it can be conveniently combined with data-processing steps implemented using one of the many Python data analysis tools.
In our examples, we use simple Python functions for implementing the data selection and performing common linguistic choices." }, { "figure_ref": [], "heading": "Multilingual realizers", "publication_ref": [ "b6", "b5", "b1", "b4" ], "table_ref": [], "text": "A number of multilingual text generators (i.e. dealing with at least one language other than English) have been developed. For example, KPML can handle Spanish, Dutch, Chinese, German and Czech; Surgeon-2 can generate German; Grammatical Framework (Ranta 2011) is a programming language designed for writing grammars in several languages in parallel; GenDR (Lareau et al. 2018) can generate sentences in Catalan, French, Polish, Portuguese and Spanish. These generators are based on linguistic theories and consider many details in the construction of sentences, which allows powerful realizations. However, that complexity somewhat hinders their ease of use: writing specifications for them requires an intimate knowledge of the underlying theory.
SimpleNLG (Gatt and Reiter 2009), as its name implies, defines itself by its ease of learning and of use. Words, phrases and other structures are Java objects created and manipulated by a programmer and integrated into a Java project. SimpleNLG can also be called from other programming languages through a web server with an XML interface. While its principles somewhat limit the power of its realizations compared to other systems, these realizations are adequate for many uses.
It has been ported to some languages, namely Galician, Spanish, German, Dutch, Italian and Mandarin but a single language at a time, one exception being SimpleNLG-EnFr that Vaudry and Lapalme developed to work in both English and French at the same time.\nBuilding on this experience, we developed jsRealB (Lapalme, 2022), written in JavaScript, to ease its integration in a web environment. RosaeNLG is a Natural Language Generation library for node.js or browser execution, based on the Pug template engine dealing with English, French, German, Italian and Spanish. RosaeNLG was developed for realizing some simple data to text applications and is especially tuned for outputting lists of objects and properties using appropriate commas and a conjunction at the end of the list. Its linguistic coverage, at least for French and English, is limited compared to jsRealB.\nWe later ported jsRealB to python to create pyrealb, described in the next section, with the same goal of realizing sentences in both English and French, even within the same sentence. The further sections will show how pyrealb can integrate all steps ( , and ) of the data-to-text pipeline in a single and convenient python formalism." }, { "figure_ref": [], "heading": "pyrealb", "publication_ref": [], "table_ref": [], "text": "pyrealb is a Python package which allows English and French sentence realization by programming language instructions that create internal data structures corresponding to the elements of the sentence. The data structure can be built incrementally and, when needed, the realization process traverses it to produce a string in the appropriate language.\npyrealb has the following components for both English and French: To produce the text string corresponding to the structure of a Terminal or a Phrase, the realize() method of Terminal, but more often of a Phrase, must be called. As most often generation occurs in only one language at a time, pyrealb tracks the current language set with either loadEn() or loadFr() after which terminals and phrases created are associated with this language. Each Terminal being associated with a language when it is created, the appropriate morphological rules can be applied when it is realized.\nFeatures are added to these structures using the dot notation to modify their properties. For terminals, their person, number, gender can be specified. For phrases, the sentence may be negated or set to a passive mode;\na noun phrase can be pronominalized. Punctuation signs and HTML tags can also be added.\npyrealb deals with the final realization, which is an often neglected part in NLG systems because it is dubbed to be pedestrian, often associated with glorified format statements, although its output is the only thing that the end user sees. How acceptable is an output if word agreements or elision are not properly done or if it consists of a mere list of tokens? This might be sufficient for automatic evaluation, but it cannot be used in a production setting. A well formatted and grammatically correct output is important for the social acceptability of a system.\nThe fact that neural systems often produce flabbergastingly fluent text explains in part their popularity.\nBecause English and French share most of their grammatical features, options can be specified for both languages, except for a few cases; e.g. 
English perfect aspect is ignored in French and the French tenses imparfait and temps composés are not used in English.
The following shows how an English and a French sentence can be built and printed. Note that adjectives are placed according to the rules of each language.
It is also possible to mix languages within a single sentence, such as in the following French sentence with an English subject. Note that the plural of the English subject is propagated to the French portion of the sentence.
In practice, this type of bilingual sentence is seldom used, but it was thought important to cater also for these cases.
pyrealb «walks the talk» by calling itself for realizing its error messages in the current language, such as missing words from the lexicon or bad values for options. English is used for errors detected in English sentences and similarly for French.

loadEn()                                          # set the language to English
print(S(NP(D("the"),N("cat"),A("small")),         # create a subject NP
        VP(V("jump").t("ps"),                     # create VP, setting past for the verb tense
           PP(P("on"),                            # create a PP with
              NP(D("the"),N("mat"),A("green"))))  # an object NP
     ).realize())
# output: The small cat jumped on the green mat.

loadFr()                                          # set the language to French
print(S(NP(D("le"),N("chat"),A("petit")),         # create a subject NP
        VP(V("sauter").t("ps"),                   # create VP, setting past for the verb tense
           PP(P("sur"),                           # create a PP with
              NP(D("le"),N("tapis"),A("vert"))))  # an object NP
     ).realize())
# output: Le petit chat sauta sur le tapis vert.

Data being unpredictable, it is often hard to create a complete pyrealb expression with all its components in a single call. So pyrealb allows an incremental way of building the structure using the add(elem,pos) method to modify an existing Phrase by adding a new parameter at a position (last by default). The following example adds a complement to the verb phrase of the previous example.
Realizing a variable number of data is also critical in a data-to-text context and, within a sentence, this implies building the coordination of elements. The following shows how coordination adapts its realization to the number of arguments. Note also that Phrase constructors accept lists of parameters that are flattened before the construction of the data structure.
We see that the coordination is ignored when there is only one element and that a comma is introduced when there are more than two. The number of the verb depends on how many elements are coordinated. The following example shows a similar case in French, in which the gender and number of both the verb and the adjective depend on the number of coordinated subjects according to the grammatical rules of French.

loadEn()
verb.add(PP(P("over"),NP(D("a"),N("fence")).n("p")))
print(S(subj.n("p"),      # set the English subject to plural
        verb).realize())
# output: The small cats sautent sur le tapis vert over fences.

In any Python program, functions can and should be defined for creating recurrent patterns, such as the report function shown in the next section, which creates a sentence structure for reporting an event involving some persons at a date. The tense of the verb can also be specified." }, { "figure_ref": [], "heading": "Challenges for bilingual generation", "publication_ref": [], "table_ref": [], "text": "Because, in a bilingual setting, there are two language contexts with their own rules, care must be given at the evaluation time of the expression.
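To make the pitfall concrete, here is a small sketch (not taken from the demos; the variable names are ours): a structure built while English is loaded keeps its English terminals, whatever language is loaded when it is eventually realized.

from pyrealb import *

loadEn()
cats = NP(D("the"), N("cat").n("p"))   # terminals created in an English context

loadFr()
# French is now the current language, but "cats" was built from English terminals,
# so it is still realized with English words and morphology.
print(cats.realize())                  # expected: the cats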
In Python, all top-level expressions in a script, such as the ones shown above, are evaluated when the script is loaded, so it is important to set the appropriate language environment (using loadEn() or loadFr()) before they are encountered and evaluated.
To defer the evaluation, it is possible to use a function (def in Python) whose body will be evaluated when it is called. As Python uses applicative order evaluation, the parameters of a function are evaluated before its call, so the appropriate language context must also be set when the function is called. To delay the evaluation of a Python expression exp until it is needed, lambda: can be added before exp, which creates a function whose body can later be evaluated by calling exp(). As we saw earlier, this brings the advantage that this function creates new copies of the original structure. Parameters can also be added to the lambda for more flexibility.
Although not specific to bilingual generation, delaying expression evaluation is also useful in the context of the oneOf(...) function, which selects randomly one of its arguments. oneOf(...) is particularly useful for varying between synonyms or equivalent phrasings to make the text less repetitive. oneOf(...) checks if the selected element is callable, and if so it calls it and returns the result of this evaluation. So oneOf() is often called with functional parameters of the form oneOf(lambda: expr_1, lambda: expr_2, ...); without the lambda, given the applicative order of evaluation of Python, all expr_i would be evaluated even though only one of them is returned.
The report function announced at the end of the previous section illustrates this style of packaging recurrent patterns:

from datetime import datetime

loadEn()

def report(event, persons, date, tense="p"):
    meeting = PP(P("at"), NP(D("a"), N(event)))
    return S(CP(C("and"), [NP(D("a"), N(person)) for person in persons]),
             NP(NO(len(persons)), N("person")).ba("("),   # show the number of persons
             VP(V("be").t(tense),
                A("present"),
                meeting,
                DT(date).dOpt({"hour": False, "minute": False, "second": False})))

print(report("birthday", ["mother", "girl"],
             datetime(2023, 5, 30), "ps").realize())
print(report("assembly", ["grandfather", "father", "boy"],
             datetime(2023, 12, 30), "f").realize())
# output:
# A mother and a girl (2 persons) were present at a birthday on Tuesday, May 30, 2023.
# A grandfather, a father and a boy (3 persons) will be present at an assembly on Saturday, December 30, 2023.

We have illustrated some features of the pyrealb realizer. For more details, see the online documentation or experiment with a Jupyter Notebook.
The next sections give examples of bilingual text generation in data-to-text contexts. Most of the data processing and text organization is common to both languages, the only language-specific part being the final realization. This setup thus greatly simplifies ensuring that the same information is conveyed in both languages." }, { "figure_ref": [], "heading": "Organizing the realization process with pyrealb", "publication_ref": [], "table_ref": [], "text": "As a first example of a bilingual report, we consider the case where there is a strict parallelism between English and French: only words differ, the phrase structure is identical for both languages. This simplification will be removed later, but it allows focusing on some aspects." }, { "figure_ref": [], "heading": "Common phrase structure", "publication_ref": [], "table_ref": [], "text": "Names of persons are added as nouns to each lexicon and a series of equivalent words in English and French is given. A function is defined for determining the appropriate tense to use.
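Before the lexicon additions and the word tables shown next, here is a rough sketch of what such a tense-selection function could look like; comparing the day of the event with the day of the report follows the comment in the demo code, but the function name and its details here are ours.

from datetime import date

def verb_tense(event_date: date, reference: date) -> str:
    # return a pyrealb tense code by comparing the day of the event
    # with the day of the reference (the day the report is produced)
    if event_date < reference:
        return "ps"    # past
    if event_date == reference:
        return "p"     # present
    return "f"         # future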
loadF() addToLexicon({\"Alice\":{ \"N\": {\"g\": \"f\", \"tab\": \"nI\" } }}) addToLexicon({\"Bob\":{ \"N\": {\"g\": \"m\", \"tab\": \"nI\" } }}) addToLexicon({\"Eve\":{ \"N\": {\"g\": \"f\", \"tab\": \"nI\" } }})\n# text parameterization with words dictionaries indexed by language participants = [\"Alice\", \"Eve\", \"Bob\"] conj = {\"en\":\"and\", \"fr\":\"et\"} prep = {\"en\":\"at\", \"fr\":\"à\"} det = {\"en\":\"a\", \"fr\":\"un\"} copula = {\"en\":\"be\", \"fr\":\"être\"} attribute = {\"en\":\"present\", \"fr\":\"présent\"} individual = {\"en\":\"person\", \"fr\":\"personne\"} dateOptions = {\"minute\":False,\"second\":False} # compare day of date with the day of the reference The realization function is like the report function in the previous section, the main difference being that words are indexed by the lang parameter.\nThis function can be called to create sentences in both French and English, varying the number of participants and the date.\nand produces the following bilingual output, in which dates and numbers are properly written with the correct agreements between components although the user did not specify them explicitly. (today).dOpt(dateOptions).realize(),end=\"\") loadFr();print(\"-\",DT(today,\"fr\").dOpt(dateOptions).realize(),\"\\n\") for (i,day) in zip(range(1,len(participants)+1),\n[today-timedelta(days=1),today,today+timedelta(days=1)]):\nprint(report(\"assembly\",participants[:i], day,\"en\").realize())\nprint(report(\"réunion\",participants[:i], day,\"fr\").realize())\nprint(\"--\") " }, { "figure_ref": [], "heading": "Different phrase structures", "publication_ref": [], "table_ref": [], "text": "The example in the previous section is admittedly restrictive, because it takes for granted that the phrase structure in both languages is identical. This is like localization tools used to adapt computer applications to different languages by adapting menu items and user messages. But this approach cannot always be used in more realistic text generation contexts.\nWe will now show a way to generate similar sentences in both languages while keeping some flexibility in the formulations using an object-oriented organization. The language independent algorithms and phrase choices are performed in a class and the language dependent parts are done in subclasses. Usually the subclasses have a similar organization but they allow different sentence structures.\nHere is the language independent main class equivalent to our previous example. Word choices will be performed in the subclasses attribute and methods such as self.and_conj , self.attend() or self.meeting() . def __init__(self): # called by __init__() in subclasses after setting the language addToLexicon({\"Alice\":{ \"N\": {\"g\": \"f\", \"tab\": \"nI\" } }}) addToLexicon({\"Bob\":{ \"N\": {\"g\": \"m\", \"tab\": \"nI\" } }}) addToLexicon({\"Eve\":{ \"N\": {\"g\": \"f\", \"tab\": \"nI\" } }}) The language-specific parts are the following English and Francais classes. The report() method is also defined in each subclass to that the appropriate language is loaded before calling the language independent part is called via super() . The terminals are specified directly in each language. 
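As a rough sketch, the two language-specific subclasses could look like the following; the method names follow the text (attend(), meeting(), and_conj and report()), but the name Realizer used here for the main class and the method bodies are ours and differ from the actual GitHub demo.

from pyrealb import *

class English(Realizer):               # Realizer stands for the main class above
    def __init__(self):
        loadEn()
        super().__init__()
        self.and_conj = C("and")

    def report(self, *args):
        loadEn()                       # ensure English is loaded before realization
        return super().report(*args)

    def meeting(self):
        return NP(D("a"), N("assembly"))

    def attend(self, meeting):         # "... attended an assembly"
        return VP(V("attend").t("ps"), meeting)


class Francais(Realizer):
    def __init__(self):
        loadFr()
        super().__init__()
        self.and_conj = C("et")

    def report(self, *args):
        loadFr()                       # ensure French is loaded before realization
        return super().report(*args)

    def meeting(self):
        return NP(D("un"), N("réunion"))   # the determiner agrees in gender at realization

    def attend(self, meeting):         # "... assista à une réunion": a different structure
        return VP(V("assister").t("ps"), PP(P("à"), meeting))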
Note that the sentence structure for attend(meeting) is different in subclasses.\nThese language dependent classes are first instantiated, then called as follows To get a similar output as the previous example, except for the date and the way of indicating attendance.\nThis setup for a single sentence shows how the object-oriented features of Python can be used to organize the parallel sentence realization in two languages. The full code is available on GitHub as demo for pyrealb. In such a simple case, the class organization might seem an overkill, but this organization is very convenient in more complex cases as it will be shown later. When the realization process makes use of Abstract Base Classes, the python interpreter can check that the language dependent realizer methods are similar in all subclasses and help guarantee that equivalent information is conveyed in both languages, provided each method with the same name provide equivalent phrasings.\nThe next section describes use cases for bilingual data-to-text realization in more complex settings, but the fundamental idea is the same: parallel syntactic abstractions in a convenient notation that can be parameterized with values. This ensures that the input data is correctly conveyed in the output thus removing the need for double-checking or having to install guardrails to avoid hallucinations (reporting facts that are not present in the data) or risking the output of inappropriate language.\nWhen the situation is appropriate, namely, when we deal with numerical data, we must be certain that computer systems always produce the right answer, not just usually. Remember the Pentium bug that affected 1 in 9 billion floating point divides but that cost Intel 475 million in 1994.\nAs Martin Kay (1980) put it An algorithm that works most of the time is, in fact, of very little use unless there is some automatic way of deciding when it is and when it is not working." }, { "figure_ref": [], "heading": "Use cases", "publication_ref": [], "table_ref": [], "text": "This section gives data-to-text demonstration programs implementing our methodology. As the full code and some algorithmic details are available on the pyrealb GitHub, we focus on the specificities of the data of each application and display typical outputs. " }, { "figure_ref": [], "heading": "Realization of all the data", "publication_ref": [], "table_ref": [], "text": "We now present two use cases in which the input data has already been selected and the generation process is limited to the presentation of all the data. The generation is thus limited to How to say ? ( and ) which can imply sorting and organizing the input data, though." }, { "figure_ref": [], "heading": "E2E challenge [code]", "publication_ref": [ "b0" ], "table_ref": [], "text": "The task here is to realize descriptions of restaurants based on a meaning-representation given by a list of keyvalue pairs such as the following;\nAbout 50K pairs of meaning-representation with the corresponding expected text were crowdsourced for a shared task, held at the 2017 SIGdial meeting (Dušek et al., 2020). We ported to python our previous jsRealB version, described in this page. This system produces the following two sentences from the meaningrepresentation given above." }, { "figure_ref": [], "heading": "WebNLG Challenge 2020 [code]", "publication_ref": [], "table_ref": [], "text": "This task is to realize information given as simplified RDF triples. 
An RDF triple is composed of three URIs corresponding to the subject, the predicate and the object that can also be a constant string, a date or a number. The predicate of a triple declares a relation between the subject and the object, such as Alan_Bean | birthPlace | Wheeler,_Texas , in which Alan_Bean is the subject, birthPlace the predicate indicating that the subject was born at the place given by the object and Wheeler,_Texas is the object. This could be verbalized as Alan Bean is born in Wheeler, Texas. The English RDF verbalizer is based on a symbolic approach: each RDF triple corresponds to a sentence in which the subject and the object of a triple are mapped almost verbatim as subject and object of the sentence. This is possible in this case because the subject and object have already been nominalized, but that would not be the case if real URIs had been inputted.\nThe predicate of the triple corresponds to a verb phrase which determines the structure of the sentence. The predicates are ordered to create a meaningful story and parts of sentences are merged when they share subjects or predicates. Our participation at the WebNLG Challenge used jsRealB, through a web server, for the English realization. This system obtained good evaluation results (being in the middle of the pack) for automatic evaluation. For the human evaluation, it was judged excellent (always in the first group of participants) for coverage, relevance and correctness. The text structure and fluency were judged less well (in the second and third group).\nThe current version uses pyrealb and realizes both English and French sentences. The following sentences were realized from the input given above." }, { "figure_ref": [], "heading": "More details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Realization of a subset of the data", "publication_ref": [], "table_ref": [], "text": "We now show examples of cases with plenty of data, for which to generate bilingual texts that focus on some important aspects." }, { "figure_ref": [], "heading": "Weather reports [code]", "publication_ref": [ "b0" ], "table_ref": [], "text": "The input of the application is a set of meteorological information (e.g., precipitations, temperature, wind, UV index, ...) provided by Environment and Climate Change Canada (ECCC). Unlike many data-to-text applications, this information is machine generated: it is created by a numerical weather model which outputs data for ranges of hours after the time the bulletin is expected to be issued.\nFor this demo, we extracted a subset of the global information for regions of Ontario and Québec for 2018 and 2019 which is nevertheless illustrative of the natural language generation problems encountered in this context. We converted the Meteocode, an internal data format of ECCC, to JSON in which time indications are shifted, so that they appear in local time while, in the original, they were in UTC.\nWe now outline the JSON data organization for a weather bulletin used as input for our demonstration program in terms of Python data structures: For a given period, these JSON terms can be visualized as follows:\nAlthough in principle, weather data is strongly time-dependent, the upstream process ensures that the necessary historical information is included in the current record. 
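To give an idea of the shape of this input, here is an illustrative sketch of the information available for one period; the field names are ours, not the actual Meteocode-derived keys, and the values echo the example reused later in the GPT experiment.

# Illustrative only: one period of a bulletin, with values over ranges of hours (local time)
bulletin_period = {
    "precipitation": [{"from": 5,  "to": 15, "probability": 10},
                      {"from": 15, "to": 18, "probability": 30, "type": "showers"}],
    "temperature":   [{"from": 5,  "to": 14, "trend": "rising", "value": 28}],
    "wind":          [{"from": 5,  "to": 15, "direction": "sw", "speed": 10}],
    "uv_index":      [{"from": 12, "to": 14, "value": 7.7}],
}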
The content selection step thus limits itself to the selection of the most important values within the current dataset according to standard values defined by the writing rules of Environment Canada.
Here is an example of an evening bulletin realized by pyrealb in English and French. As the outputs for French and English are strictly parallel and a complete bulletin is generated in a single language at a time, the realizer uses parallel bilingual structures within the code, such as the following, to determine the phrase structure for the period of the day depending on the hour (e.g., morning or matin when the hour is between 9 and 12).
Most generation functions are parameterized by the language to generate (more details).

Minimum 14, températures à la hausse pour atteindre 23 en matinée.
FIN

dayPeriods=[(0,5,  {"en":lambda:NP(N("night")),
                    "fr":lambda:NP(N("nuit"))}),
            (5,9,  {"en":lambda:NP(Adv("early"),N("morning")),
                    "fr":lambda:NP(N("début"),PP(P("de"),N("matinée")))}),
            (9,12, {"en":lambda:NP(N("morning")),
                    "fr":lambda:NP(N("matin"))}),
            (12,18,{"en":lambda:NP(N("afternoon")),
                    "fr":lambda:NP(N("après-midi"))}),
            (18,24,{"en":lambda:NP(N("tonight")),
                    "fr":lambda:NP(N("soir"))})] " }, { "figure_ref": [], "heading": "Basketball summaries [code]", "publication_ref": [ "b0" ], "table_ref": [], "text": "We now present a case of time-dependent data used for generating English and French statistic-focused summaries of basketball games using information found in the SportSett:Basketball dataset (Thomson et al., 2020). This dataset combines scores and performance measures about the teams and the players of thousands of NBA games with human-authored summaries about these games.
The detailed statistics give information about the number of points, of attempted and made field goals, of blocks, of assists, etc. The following table gives the box scores for the Philadelphia 76ers in their game against the Miami Heat on November 1, 2014, the first game of the Train dataset, which we use as a running example in this paper. The heads of the tables follow the same conventions as the ones used for the official scores.
The following shows the data for the 4 (out of the 12) Philadelphia players scoring the most points in this game.
In this case, the summary reports information about the team and the players in the current game. This implies statistical procedures for determining the winners, the best players and showing turning points and important differences about some aspects of the game (e.g. field goals, three-pointers, etc.) between teams in each quarter.
Basketball game summaries also consider information from previous games to identify winning or losing streaks or to mention that a performance is above average in the season. Season statistics are also used to identify the outstanding players (about three or four out of more than twenty).
This is an example where data selection must consider not only the current game but also others, either the past games of the season or even information gathered about the past seasons. The result of this selection process is computed once and is used for the realization in both languages.
Here is the English summary produced by the system from the above data:
The Heat (2-0), leader in their conference, defeated the 76ers (0-3) 114-96 at the Wells Fargo Center in Philadelphia on Saturday.
The Heat led in all four quarters. Over the first quarter, the 76ers obtained better goals percentage, a difference of 14%. Over the third quarter, the 76ers got better free throws percentage, an advantage of 21%.
The Heat dominated the 76ers for points by 14 over the fourth quarter. In the game, the Heat obtained better three-pointers percentage, 50% to 30%.\nChris Bosh who was a starter led the way, posting 30 points (9)(10)(11)(12)(13)(14)(15)(16)(17)(2)(3)(4)(5)(10)(11) while adding four assists. Tony Wroten who started this game scored a game high with 21 points (6-11 FG, 1-4 3Pt, 8-11 FT) and ten assists and performed a double-double.\nThe Heat showed 49 percent from the field and 20-29 free throws. Mario Chalmers who was a starter added 20 points. Luol Deng contributed 15 points with 7-for-11 FG. Shawne\nWilliams added 15 points (5-9 FG, 3-5 3Pt, 2-2 FT) while adding four assists. Dwyane Wade had nine points (4-18 FG, 0-1 3Pt, 1-3 FT) while adding ten assists.\nThe 76ers showed 52 percent from the field and 19-26 attempts at the charity stripe and committed 24 turnovers. Brandon Davies who was a starter ended up with 18 points with 7-for-9 FG. Luc Mbah a Moute recorded nine points with seven rebounds grabbed and three assists. Malcolm Thomas had eight points in 19 minutes. Alexey Shved posted six points\n(1-4 FG, 1-4 3Pt, 3-3 FT) while adding six assists.\nThe Heat' next game will be at home against the Toronto Raptors on Sunday. The 76ers' next game will be at home against the Houston Rockets on Monday\nThe summary in French is the following Le Heat (2-0) , meneurs dans leur conférence, a dominé les 76ers (0-3) 114-96 au stade Wells Fargo Center samedi à Philadelphia.\nLe Heat a mené pendant les quatre quarts. Durant le premier quart, les 76ers ont réussi les meilleurs lancers en pourcentage, une différence de 14%. Pendant le troisième quart, les 76ers ont réussi les meilleurs lancers francs en pourcentage, un avantage de 21%. Le Heat a dominé les 76ers pour les points par 14 pendant le quatrième quart. Durant la partie, le Heat a obtenu les meilleurs tirs à 3 points en pourcentage, 50% en comparaison avec 30%.\nChris Bosh qui débutait la partie a réalisé une performance excellente comptant 30 points " }, { "figure_ref": [], "heading": "Parallel generation of random data", "publication_ref": [], "table_ref": [], "text": "This nodata-to-text example is nevertheless interesting because it illustrates the sentence modifications of pyrealb applied similarly to both languages. Random variations of sentence patterns are generated to create translation drill exercises. This is a command-line version of a jsRealB web application which is more userfriendly to use, but the text generation algorithm is the same in both versions.\nWith pyrealb a sentence pattern can be parameterized using a lambda to change some of its words, inflections (number and tense) and even its structure by negating it, making it passive or interrogative. For example, the following definition in which the formal parameters are given arbitrary names but easier to remember in the context of the syntactical structure.\nThis definition can be used to produce different sentences, as shown in the following calls with the corresponding realizations. Lines 1-2 is a simple call changing terminals, lines 3-4 shows the negative future form of the sentence and lines 5-6 shows a negative and tag-interrogative form of the sentence in the past tense. f(\"p\",\"child\",\"love\",\"a\",N(\"avocado\")).realize() => 'The children love avocados. ' f(\"s\",\"mother\",\"cook\",\"the\",N(\"apple\")).t(\"f\").typ({\"neg\":True}).realize() => 'The mother will not cook the apples. 
' f(\"s\",\"uncle\",\"eat\",\"the\",N(\"apple\")).t(\"ps\").typ({\"neg\":True,\"int\":\"tag\"}).realize() => 'The uncle did not eat the apples, did he? ' 1 2 3 4 5" }, { "figure_ref": [], "heading": "Translation drill exercises [code]", "publication_ref": [ "b3" ], "table_ref": [], "text": "To create translation drill exercises, parallel sentence patterns are called with equivalent parameters and modified in the same way in the source and target languages. Thus two sentences structures can be created, one corresponding to the translation of the other. The realization of the source structure is shown to the user, while the tokens of the realization of the target structure are shuffled with some distractor words. The user types some tokens to create a translation that is compared with the expected realization of the target structure. Translation drills can be created in both translation directions by selecting which language is the source.\nThe following shows two interactions with the system.\nThe data for this demo is a list of python dict s that define the parameterized sentence structure for both languages with lists of alternatives for the parameters. The dict used in the previous example is shown below Translate in English the sentences in French using some of the suggested words.\nType \"end\" to exit.\nThe child can love the watermelons. ', adorer, ., manger, un, melons, eau, ', les, frère, peut, d, enfant These parallel sentence patterns were created manually or adapted from an automatic transformation of the output of the Stanza parser described in (Lapalme 2021). Although for the moment there are only 8 sentence patterns, they can be realized in a variety of sentences when parameter, tense and sentence type variations are taken into account. As shown in the web demo, a teacher can also determine the level of difficulty of sentences going from simple affirmative sentences to complex passive interrogative forms." }, { "figure_ref": [], "heading": "What about GPT ?", "publication_ref": [], "table_ref": [], "text": "GPT (Generative Pretrained Transformer) models have been used to generate many types of text, images and music. These billion-parameter systems, trained on terabytes of internet data, can be used to answer questions in a conversational manner.\nWe submitted the data for the examples of section 5 to the GPT instance of Microsoft Copilot (in November 2023) with appropriate prompts for text generation and translation (the appendix shows the complete transcript) which produced the following outputs." }, { "figure_ref": [], "heading": "E2E challenge", "publication_ref": [], "table_ref": [], "text": "\"The Mill\" is a pub situated in the riverside area, in proximity to \"Raja Indian Cuisine\".\nIt serves English food and is known for its high price range. It is also family-friendly.\n\"Le Mill\" est un pub situé dans la zone riverside, à proximité de \"Raja Indian Cuisine\".\nIl sert de la nourriture anglaise et est connu pour sa gamme de prix élevée.\nIl est également adapté aux familles." }, { "figure_ref": [], "heading": "WebNLG Challenge 2020", "publication_ref": [], "table_ref": [], "text": "Alan Bean, born on March 15, 1932, in Wheeler, Texas, was selected by NASA in 1963. He was part of the Apollo 12 mission, which was operated by NASA. 
Alfred Worden served as the backup pilot for Apollo 12, and David Scott was the commander of the mission.\nFor these two examples, the results are excellent and comparable to the output of best systems that participated in these challenges. The output is more colloquial than the one produced by pyrealb and is well formatted. These types of input-output pairs, which appeared in the ACL Anthology, were probably included in the training set of the Transformer.\n\"en\": lambda n, child, eat, a, potato:\nS(NP(D(\"the\"), N(child).n(n)), VP(V(eat), NP(D(a), potato.n(\"p\")))), \"params\": [numbers,\n[[\"enfant\", \"child\"], *relatives],\n[[\"manger\", \"eat\"], [\"adorer\", \"love\"], [\"détester\", \"hate\"]],\ndets,\n[[lambda: NP(N(\"pomme\"), PP(P(\"de\"), N(\"terre\"))), lambda: N(\"potato\")],\n[lambda: NP(N(\"melon\"), PP(P(\"de\"), N(\"eau\"))), lambda:\nN(\"watermelon\")]]] We also submitted the data shown above for the weather reports and the basketball summaries generation for which the results are impressive given the fact that the input was limited to a subset of the full data without any context." }, { "figure_ref": [], "heading": "Weather reports", "publication_ref": [], "table_ref": [], "text": "Here is the weather report based on the provided table : \n\nFrom 5:00 to 15:00, there is a 10% chance of precipitation. The sky will be partially covered, with a sky cover value ranging from 2 to 8. Temperatures will rise from 15°C at 5:00 to a high of 28°C at 14:00. The wind will be coming from the southwest with a speed of 10 km/h.\nFrom 15:00 to 18:00, the chance of precipitation increases to 30%, with possible showers or even a thunderstorm. The sky will be mostly covered, with a sky cover value of 8. Temperatures will decrease to 25°C. The wind will continue from the southwest but will increase to a speed of 20 km/h.\nFrom 18:00 to 20:00, temperatures will further decrease to 23°C. The wind will continue from the southwest at a speed of 20 km/h.\nPlease note that the UV index will be 7.7 between 12:00 and 14:00, which is considered high. " }, { "figure_ref": [], "heading": "Basketball summaries", "publication_ref": [ "b8" ], "table_ref": [], "text": "During the game, a total of 96 points were scored. The team made 35 out of 67 field goal attempts, with 7 out of 23 successful three-point attempts. They also made 19 out of 26 free throw attempts.\nIn the first quarter (Q1), the team scored 30 points, making 13 out of 21 field goal attempts and 3 out of 8 three-point attempts. They made 1 out of 2 free throw attempts.\nIn the second quarter (Q2), the team scored 24 points, making 8 out of 15 field goal attempts. However, they didn't make any three-point attempts in this quarter. They made 8 out of 12 free throw attempts.\nIn the third quarter (Q3), the team scored 29 points, making 10 out of 16 field goal attempts and 4 out of 8 three-point attempts. They made 5 out of 6 free throw attempts.\nIn the fourth quarter (Q4), the team scored 13 points, making 4 out of 15 field goal attempts. They didn't make any three-point attempts in this quarter. They made 5 out of 6 free throw attempts.\nThe team collected a total of 37 rebounds, with 4 of them being offensive rebounds. They also had 28 assists, 9 steals, and 10 blocks. However, they turned the ball over 24 times and committed 21 personal fouls.\nIn this case, given a box-score table with coded titles for columns and lines, Copilot managed to make sense of abbreviations such as FG3M and FG3A. 
This is better than us, because not being a basketball fan, we had no clue of what these codes meant. The text organisation is also good, because the results for the whole game are shown before the ones for each quarter.\nGiven the information for the four players scoring the most points, the result was this text which is factually correct, although a bit repetitive.\nHere is a summary of each player's performance:\nTony Wroten: He started the game and played for 33 minutes. He made 6 out of 11 field goal attempts, including 1 out of 4 three-point attempts. He also made 8 out of 11 free throw attempts.\nHe collected 3 total rebounds and provided 10 assists. He had 1 steal, 1 block, 4 turnovers, and 1 personal foul. He scored a total of 21 points but had a negative impact on the team's score with a -11 plus/minus.\nBrandon Davies: He came off the bench and played for 23 minutes. He made 7 out of 9 field goal attempts, including 1 out of 2 three-point attempts. He also made 3 out of 4 free throw attempts. He collected 3 total rebounds and had 3 steals. He had 3 turnovers and 3 personal fouls. He scored a total of 18 points and had a -1 plus/minus, indicating a nearly neutral impact on the team's score.\nHollis Thompson: He started the game and played for 32 minutes. He made 4 out of 8 field goal attempts, including 2 out of 5 three-point attempts. He did not attempt any free throws. He collected 1 total rebound and provided 2 assists. He had 3 blocks, 2 turnovers, and 2 personal fouls. He scored a total of 10 points but had a negative impact on the team's score with a -17 plus/minus.\nHenry Sims: He started the game and played for 27 minutes. He made 4 out of 9 field goal attempts and made 1 out of 2 free throw attempts. He collected 4 total rebounds and provided 2 assists. He had 1 block, no turnovers, and 1 personal foul. He scored a total of 9 points and had a -10 plus/minus, indicating a negative impact on the team's score.\nOf course, the results for the last two examples are difficult to compare with the pyrealb outputs because\nCopilot was given only a small subset of the data without any context. But even there it managed to realize excellent literal readings of the data.\nIn our experiments, each conversation turn took between 4 or 5 seconds of real time and probably costed a few dozens of cents. This is good enough for demoing, but this would probably be prohibitive in a production setting. This type of technology is just beginning and will surely be improved in terms in cost and execution time over the years, perhaps even months. pyrealb is greener as it runs very fast (one or two milliseconds per text) with limited memory requirements on a stock laptop without the need for a GPU. To be fair, we should take into account the development time as well. An NLG system like GPT has a development time that is amortized over all its possible applications, but it has a high cost of inference. For pyrealb, we have a long development time and almost none for inference.\nEven though these black box systems show impressive results, they can be unpredictable (like humans!) During our short experiments we noticed that, when given the same prompt, Copilot did not always return exactly the same results, this can be problematic in some cases. The main advantage of a symbolic system is the control on the generated output for either the formulation or the phrasing. It also lend itself to interpretations, debugging and hardcoding of business rules. 
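As a small illustration of this control, here is a minimal sketch (it reuses only constructors already shown above; the rule itself is invented for the example) of how an explicit rule can drive both the inflection and the sentence type, with the realization being identical on every run:

from pyrealb import *   # S, NP, VP, D, N, V, loadEn, ... as in the earlier examples
loadEn()

def fact(number, negated):
    # the number of the subject and the (possibly negated) sentence type
    # are decided by explicit rules applied to the data, not by sampling
    s = S(NP(D("the"), N("child").n(number)),
          VP(V("eat"), NP(D("a"), N("apple").n("p"))))
    return (s.typ({"neg": True}) if negated else s).realize()

print(fact("p", False))   # e.g. 'The children eat apples.'
print(fact("s", True))    # e.g. 'The child does not eat apples.'

Because the realization is deterministic, the same call always produces the same wording, which makes such rules easy to test and debug.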
Reiter (2023) discusses some pros and cons of using large language models in a data-to-text context." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This document has shown pyrealb brought into play in different data-to-text contexts to convey information in French and English reliably through a convenient formalism that is familiar to linguists. The pyrealb demo directory shows other examples of features in a unilingual context, many of them can be run in either French or English but not in strictly bilingual mode. This approach could be extended to other languages provided that extensive lexicons and programs for implementing grammar rules are developed for them.\nIt is often argued that the drawback of a symbolic approach to generation is that sentence patterns must be developed manually by studying the corpus of reference texts. But we found this approach easier, faster and more fun than fine-tuning the parameters of a learning algorithm or tweaking prompts for an LLM. The fact the pyrealb caters automatically to conjugation, declension, agreements, elision, punctuation and formatting (HTML or not) greatly simplifies the building of sentence patterns to realize production quality texts. Patterns are defined at a relatively abstract level and can be realized in a variety of ways. It would be interesting and challenging to explore the possibility of learning sentence patterns from corpora.\nThe objective of this document was to demonstrate the organization of the generation process with pyrealb.\nOutside of a brief comparison with an instance of a GPT, It did not discuss any industrial exploitation or evaluation, which is an independent but important endeavor." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Fabrizio Gotti and Ehud Reiter for interesting suggestions about a previous version of this paper." }, { "figure_ref": [], "heading": "Bing:", "publication_ref": [], "table_ref": [], "text": "The Acharya Institute of Technology is an educational institution located in India, specifically in the state of Karnataka. It was established in the year 2000 and is situated in Soldevanahalli, on Acharya Dr. Sarvapalli Radhakrishnan Road, Hessarghatta Main Road, Bangalore -560090. The institute's motto is \"Nurturing Excellence\". It is affiliated with the Visvesvaraya Technological University, which is based in the city of Belgaum." }, { "figure_ref": [], "heading": "Question:", "publication_ref": [], "table_ref": [], "text": "Given this weather information can you produce a weather report.\nBing:Here is the weather report based on the provided table:\nFrom 5:00 to 15:00, there is a 10% chance of precipitation. The sky will be partially covered, with a sky cover value ranging from 2 to 8. Temperatures will rise from 15°C at 5:00 to a high of 28°C at 14:00. The wind will be coming from the southwest with a speed of 10 km/h.\nFrom 15:00 to 18:00, the chance of precipitation increases to 30%, with possible showers or even a thunderstorm. The sky will be mostly covered, with a sky cover value of 8. Temperatures will decrease to 25°C. The wind will continue from the southwest but will increase to a speed of 20 km/h.\nFrom 18:00 to 20:00, temperatures will further decrease to 23°C. The wind will continue from the southwest at a speed of 20 km/h.\nPlease note that the UV index will be 7.7 between 12:00 and 14:00, which is considered high. 
It's recommended to wear protective clothing, a wide-brimmed hat, and UV-blocking sunglasses during this time.\nPlease stay tuned for updates and stay safe!" }, { "figure_ref": [], "heading": "Question:", "publication_ref": [], "table_ref": [], "text": "Given the following basketball box scores:\nDescribe what happenned during this game." }, { "figure_ref": [], "heading": "Bing:", "publication_ref": [], "table_ref": [], "text": "During the game, a total of 96 points were scored. The team made 35 out of 67 field goal attempts, with 7 out of 23 successful three-point attempts. They also made 19 out of 26 free throw attempts." }, { "figure_ref": [], "heading": "Appendix: finding and exploring data", "publication_ref": [], "table_ref": [], "text": "In a real application, the data is usually available although not always easy to collect, filter and curate. One should never underestimate the difficulty of getting high (or even moderate) quality data that often appears in various formats on different devices.\nTo develop new systems and compare their results with the state of the art, it can be useful to use existing data sets. For NLG, many of these data sets are associated with reference texts that can be exploited as targets for generation, corpus analysis or training machine learning systems. Most of these texts were authored by humans directly or by crowdsourcing, but care must be taken because some of them were machine generated.\nHere are some useful sources of NLG data: Data sets for NLG from the Natural Language Generation Portal Data cards from the GEM benchmark environment focusing on NLG evaluation Hugging Face Datasets is a source of well-organized datasets with an efficient python API. At the time of writing (October 2023), there were more than 70,000 datasets (more than 1,500 being tagged Text Generation!), so it is best to search a data set by name.\nAlways take some time to look at the data and check the types and range of values. We have encountered JSON files in which all numeric values had been stored as strings; given that \"10\" < \"2\" is True and \"10\"+\"2\" equals \"102\" , this can give rise to unexpected results or sorting. I want to write a completely factual restaurant description for a guide. It is important that the description be entirely factual, and be one or two-sentence long. This is what I know about the restaurant: name=\"The Mill\" near=\"Raja Indian Cuisine\" customer rating=\"unspecified\" priceRange=\"high\" area=\"riverside\" familyFriendly=\"yes\" food=\"English\" eatType=\"pub\" Could you generate this restaurant description in English?" }, { "figure_ref": [], "heading": "Bing:", "publication_ref": [], "table_ref": [], "text": "\"The Mill\" is a family-friendly pub located in the riverside area, near \"Raja Indian Cuisine\". Known for its highend English cuisine, it provides a warm and welcoming atmosphere for all patrons." }, { "figure_ref": [], "heading": "Question:", "publication_ref": [], "table_ref": [], "text": "Could you rewrite this without any promotional tone, and be entirely factual?" }, { "figure_ref": [], "heading": "Bing:", "publication_ref": [], "table_ref": [], "text": "\"The Mill\" is a pub situated in the riverside area, in proximity to \"Raja Indian Cuisine\". It serves English food and is known for its high price range. It is also family-friendly." }, { "figure_ref": [], "heading": "Question:", "publication_ref": [], "table_ref": [], "text": "Very good. Could you generate a French translation of your last answer?" 
}, { "figure_ref": [], "heading": "Bing:", "publication_ref": [], "table_ref": [], "text": "\"Le Mill\" est un pub situé dans la zone riverside, à proximité de \"Raja Indian Cuisine\". Il sert de la nourriture anglaise et est connu pour sa gamme de prix élevée. Il est également adapté aux familles." }, { "figure_ref": [], "heading": "Question:", "publication_ref": [], "table_ref": [], "text": "Could you write a very short factual paragraph from this list of facts: In the first quarter (Q1), the team scored 30 points, making 13 out of 21 field goal attempts and 3 out of 8 three-point attempts. They made 1 out of 2 free throw attempts.\nIn the second quarter (Q2), the team scored 24 points, making 8 out of 15 field goal attempts. However, they didn't make any three-point attempts in this quarter. They made 8 out of 12 free throw attempts.\nIn the third quarter (Q3), the team scored 29 points, making 10 out of 16 field goal attempts and 4 out of 8 three-point attempts. They made 5 out of 6 free throw attempts.\nIn the fourth quarter (Q4), the team scored 13 points, making 4 out of 15 field goal attempts. They didn't make any three-point attempts in this quarter. They made 5 out of 6 free throw attempts.\nThe team collected a total of 37 rebounds, with 4 of them being offensive rebounds. They also had 28 assists, 9 steals, and 10 blocks. However, they turned the ball over 24 times and committed 21 personal fouls." }, { "figure_ref": [], "heading": "Question:", "publication_ref": [], "table_ref": [], "text": "Given the following basketball box scores for players:\nDescribe the performance of each player." }, { "figure_ref": [], "heading": "Bing:", "publication_ref": [], "table_ref": [], "text": "Here is a summary of each player's performance:\nTony Wroten: He started the game and played for 33 minutes. He made 6 out of 11 field goal attempts, including 1 out of 4 three-point attempts. He also made 8 out of 11 free throw attempts. He collected 3 total rebounds and provided 10 assists. He had 1 steal, 1 block, 4 turnovers, and 1 personal foul. He scored a total of 21 points but had a negative impact on the team's score with a -11 plus/minus.\nBrandon Davies: He came off the bench and played for 23 minutes. He made 7 out of 9 field goal attempts, including 1 out of 2 three-point attempts. He also made 3 out of 4 free throw attempts. He collected 3 total rebounds and had 3 steals. He had 3 turnovers and 3 personal fouls. He scored a total of 18 points and had a -1 plus/minus, indicating a nearly neutral impact on the team's score.\nHollis Thompson: He started the game and played for 32 minutes. He made 4 out of 8 field goal attempts, including 2 out of 5 three-point attempts. He did not attempt any free throws. He collected 1 total rebound and provided 2 assists. He had 3 blocks, 2 turnovers, and 2 personal fouls. He scored a total of 10 points but had a negative impact on the team's score with a -17 plus/minus.\nHenry Sims: He started the game and played for 27 minutes. He made 4 out of 9 field goal attempts and made 1 out of 2 free throw attempts. He collected 4 total rebounds and provided 2 assists. He had 1 block, no turnovers, and 1 personal foul. He scored a total of 9 points and had a -10 plus/minus, indicating a negative impact on the team's score." }, { "figure_ref": [], "heading": "Question:", "publication_ref": [], "table_ref": [], "text": "Can you report only the outstanding performances of each player from your previous response." 
}, { "figure_ref": [], "heading": "Bing:", "publication_ref": [], "table_ref": [], "text": "Sure, here are the outstanding performances of each player:\nTony Wroten: He scored 21 points, made 6 out of 11 field goal attempts, and had 10 assists.\nBrandon Davies: He scored 18 points and made 7 out of 9 field goal attempts.\nHollis Thompson: He made 3 blocks.\nHenry Sims: He had no turnovers.\nThese are the highlights based on the box scores provided. Please note that \"outstanding\" can be subjective and may vary depending on individual interpretation." } ]
This document illustrates the use of pyrealb for generating two parallel texts (English and French) from a single source of data. The data selection and text organisation processes are shared between the two languages; only the language-dependent word and phrasing choices differ. The realized texts thus convey identical information in both languages without the risk of anything being lost in translation, which is especially important in cases where strict and simultaneous bilingualism is required. We first present the types of applications targeted by this approach and show how the pyrealb English and French realizer can achieve this goal in a natural way. We then describe an object-oriented organization that ensures a convenient realization in both languages. To illustrate the process, different types of applications are briefly sketched, with links to the source code. Finally, the generated texts are briefly compared with the output of an instance of a GPT.
[ { "figure_caption": "(D(\"the\"),N(\"cat\"),A(\"small\")).n(\"p\") # create an English plural NP loadFr() # set the language to French verb = VP(V(\"sauter\"), # create a French VP, present by default PP(P(\"sur\"), # create a PP with NP(D(\"le\"), N(\"tapis\"), A(\"vert\")))) # an object NP print", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "#The mother and the daughter are happy. # The mother, the daughter and the father are happy. heureux\"))).realize()) # output: # La mère est heureuse.# La mère et la fille sont heureuses.# La mère, la fille et le père sont heureux.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "to the English and French lexica for loadF in [loadEn,loadFr]:", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "on Tuesday, September 26, 2023 at 5 p.m.-le mardi 26 septembre 2023 à 17 h Alice (one person) was present at an assembly on Monday, September 25, 2023 at 5 p.m. Alice (une personne) fut présente à une réunion le lundi 25 septembre 2023 à 17 h. --Alice and Eve (two persons) are present at an assembly on Tuesday, September 26, 2023 at 5 p.m. Alice et Eve (deux personnes) sont présentes à une réunion le mardi 26 septembre 2023 à 17 h. --Alice, Eve and Bob (three persons) will be present at an assembly on Wednesday, September 27, 2023 at 5 p.m. Alice, Eve et Bob (trois personnes) seront présents à une réunion le mercredi", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "today = datetime.today() dateOptions = {\"minute\": False, \"second\": False} # compare day of date with the day of the reference def tense(self,date, reference): o = date.toordinal(", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "return VP(V(\"attend\"),meeting) | return VP(V(\"être\"),A(\"présent\"), | PP(P(\"à\"),meeting)) | def individual(self): | def individual(self): return N(\"person\") | return N(\"individu\") | def meeting(self,noun): | def meeting(self,noun): return NP(D(\"the\"), N(noun)) | return NP(D(\"le\"), N(noun)) english = English() francais = Francais() for (i,day) in zip(range(1,len(participants)+1), [today-timedelta(days=1),today,today+timedelta(days=1)]): english.report(\"assembly\",participants[:i],day) francais.report(\"réunion\",participants[:i],day)", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "onFriday, September 29, 2023 at 2 p.m.-le vendredi 29 septembre 2023 à 14 h Alice (one person) attended the assembly on Thursday, September 28, 2023 at 2 p.m. Alice (un individu) fut présente à la réunion le jeudi 28 septembre 2023 à 14 h. --Alice and Eve (two persons) attend the assembly on Friday, September 29, 2023 at 2 p.m. Alice et Eve (deux individus) sont présentes à la réunion le vendredi 29 and Bob (three persons) will attend the assembly on Saturday, September 30, 2023 at 2 p.m. Alice, Eve et Bob (trois individus) seront présents à la réunion le samedi", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "2food[English], priceRange[high], near[Raja Indian Cuisine], name[The Mill], area[riverside], familyFriendly[yes], eatType[pub] 1The Mill is a pub near Raja Indian Cuisine in the riverside area that serves English food with high prices. 
It is kid friendly.The Mill est un pub près de Raja Indian Cuisine au bord de la rivière qui offre une cuisine anglaise à prix élevés. Il est approprié pour les enfants.", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "WEATHER BULLETIN: regular Forecasts issued by pyrealb on Wednesday, July 18, 2018 at 4:00 p.m. for today and tomorrow at 4:00:00 p.m. The next scheduled forecasts will be issued on Thursday, July 19, 2018 at 5:30 a.m. Armstrong -Auden -Wabakimi Park Nakina -Aroland -Pagwa Tonight : Clear. A few clouds. Partly cloudy. 30 percent chance of showers. Wind west 20 km/h around noon. Becoming southwest in the evening. Low 14, with temperature rising to 28 by morning. Thursday : Mainly sunny. Increasing cloudiness tomorrow morning. Mainly cloudy. 30 percent chance of showers. Wind southwest 20 km/h around noon. High 28. Low 15. UV index 8 or very high. Thursday night : Mainly cloudy. 30 percent chance of showers. Wind southwest 20 km/h around noon. Low 14, with temperature rising to 23", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "( 9 -917 L, 2-5 L3, 10-11 LF) et quatre passes décisives. Tony Wroten qui débutait la partie a obtenu le meilleur pointage du match avec 21 points (6-11 L, 1-4 L3, 8-11 LF) et dix passes décisives et a terminé avec un double-double. Le Heat a compté 49 pour cent de tirs réussis et 20 lancers francs sur 29. Mario Chalmers qui figurait dans l'alignement de départ a contribué un efficace 20 points avec six tirs réussis. Luol Deng a fini avec 15 points. Shawne Williams a enregistré 15 points (5-9 L, 3-5 L3, 2-2 LF) tout en ajoutant quatre passes décisives. Dwyane Wade a marqué neuf points avec dix passes décisives. Les 76ers ont compté 52 pour cent de tirs réussis et 19 lancers francs sur 26 et ont subi 24 pertes de ballon. Brandon Davies qui figurait dans l'alignement de départ a enregistré 18 points avec sept tirs réussis. Luc Mbah a Moute a fini avec neuf points avec sept rebonds récupérés et trois passes décisives. Malcolm Thomas a ajouté huit points en 19 minutes. Alexey Shved a marqué six points avec six passes décisives. À venir pour le Heat, un match à domicile contre Toronto. Pour leur prochain match, les 76ers joueront à la maison contre les Rockets de Houston lundi. more details", "figure_data": "", "figure_id": "fig_12", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "(lines 6-23). The parallel pyreab structures are created by two lambda s (lines 8-11 and 12-15). These functions are called by picking randomly in the lists of pairs of synonyms with their translation (lines 16-21) to assemble the actual parameters for the functions. Distractors are picked from the unchosen target values. Syntactical structures and the parameter values can differ in the source and target languages (see lines 20-21) but the lambdas must have the same number of corresponding parameters.", "figure_data": "", "figure_id": "fig_14", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "L'enfant adorera-t-il les melons d'eau?:OK The father will eat watermelons. 
détester, père, enfant, eau, ', des, Le, d, ., mangera, le, : lambda n, enfant, manger, un, pommeDT: S(NP(D(\"le\"), N(enfant).n(n)), VP(V(manger), NP(D(un), pommeDT.n(\"p\"))", "figure_data": "", "figure_id": "fig_15", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "It's recommended to wear protective clothing, a wide-brimmed hat, and UV-blocking sunglasses during this time. Please stay tuned for updates and stay safe! Copilot generated a Markdown formatted rendering grouping information by time interval ending with a smiley! In some cases, it combined time intervals with the corresponding temperatures. It also hallucinated a recommandation for the high UV index.", "figure_data": "", "figure_id": "fig_17", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Constituent or a Dependent notation for building sentences. In this document, we give examples using the Constituent notation, but the same methodology applies to the Dependent notation. The", "figure_data": "conjugations;syntactic rules (python classes) to build sentence structures from terminals and properly order wordswithin a sentence, performing the most common agreements between constituents and carrying outother useful sentence organization tasks such as managing coordination or applying sentencetransformations.pyrealb also performs the spelling out of numbers and the wording of temporal expressions that are especiallyuseful in data to text applications.pyrealb accepts either a", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "issue and next-issue times, list of region names to which the forecast applies in both English and French weather information : list of values of which the first two are the starting hour and ending hour relative to 0h of the issue datetime, when they are negative, they refer to historical data; the other values (described below depending on the type of information). For precipitation-type and wind , a value can be a list of", "figure_data": "administrative information: values which describes an exceptional phenomenon (e.g., gust within a wind period) that occurs duringthis period.tomorrow( 6h,18h) fpto12-2018-07-18-2000-r1209c :: 2018-07-18 16:00:00precipitation-type[15h,0h):[showers, [15h,0h):[thunderstorm]]precipitation-probability[5h,15h):[10], [15h,18h):[30]1 sky-cover[5h,11h):[2, 2], [11h,15h):[2, 8], [15h,18h):[8, 8]2 3 temperatures :[5h,8h):[15], [8h,11h):[23], [11h,14h):[28], [14h,17h):[25], [17h,20h):[23]4 uv-index[12h,14h):[7.7]56 wind[0h,12h):[sw, speed, 10], [12h,20h):[sw, speed, 20]7", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "precipitation-type[15h,0h):[showers, [15h,0h):[thunderstorm]]precipitation-probability[5h,15h):[10], [15h,18h):[30]sky-cover[5h,11h):[2, 2], [11h,15h):[2, 8], [15h,18h):[8, 8]temperatures :[5h,8h):[15], [8h,11h):[23], [11h,14h):[28], [14h,17h):[25], [17h,20h):[23]uv-index[12h,14h):[7.7]wind[0h,12h):[sw, speed, 10], [12h,20h):[sw, speed, 20]GameFGMFGAFG3MFG3AFTMFTAOREBTREBASTSTLBLKTOVPFPTSQ1132138122101025630Q281502812111711424Q310164856110922629Q4415055606242813game3567723192643728910242196", "figure_id": "tab_7", "figure_label": ":", "figure_type": "table" } ]
Guy Lapalme
[ { "authors": "References Ondřej Dušek; Jekaterina Novikova; Verena Rieser", "journal": "Computer Speech & Language", "ref_id": "b0", "title": "Evaluating the state-of-the-art of End-to-End Natural Language Generation: The E2E NLG challenge", "year": "2020" }, { "authors": "Albert Gatt; Ehud Reiter", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "SimpleNLG: A realisation engine for practical applications", "year": "2009-03" }, { "authors": "Martin Kay", "journal": "Machine Translation", "ref_id": "b2", "title": "The Proper Place of Men and Machines in Language Translation", "year": "1997" }, { "authors": "G Lapalme", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Validation of Universal Dependencies by regeneration", "year": "2021-12" }, { "authors": "G Lapalme", "journal": "", "ref_id": "b4", "title": "The jsRealB Text Realizer: Organization and Use Cases Revised version", "year": "2022-05" }, { "authors": "François Lareau; Florie Lambrey; Ieva Dubinskaite; Daniel Galarreta-Piquette; Maryam Nejat", "journal": "European Language Resources Association (ELRA", "ref_id": "b5", "title": "GenDR: A Generic Deep Realizer with Complex Lexicalization", "year": "2018" }, { "authors": "Aarne Ranta", "journal": "CSLI Publications", "ref_id": "b6", "title": "Grammatical Framework: Programming with Multilingual Grammars", "year": "2011" }, { "authors": "E Reiter; R Dale", "journal": "Cambridge University Press", "ref_id": "b7", "title": "Building natural language generation systems", "year": "2000" }, { "authors": "E Reiter", "journal": "", "ref_id": "b8", "title": "LLMs and Data-to-text", "year": "2023-06-29" }, { "authors": "Ashish Upadhyay; Stewart Massie", "journal": "International Committee on Computational Linguistics", "ref_id": "b9", "title": "Content Type Profiling of Data-to-Text Generation Datasets", "year": "2022" }, { "authors": "P.-L Vaudry; G Lapalme", "journal": "", "ref_id": "b10", "title": "Adapting SimpleNLG for bilingual English -French realisation", "year": "2013" } ]
[]
10.1109/tpami.2022.3217852
2023-11-24
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_2" ], "heading": "Introduction", "publication_ref": [ "b17", "b1", "b32", "b8", "b6", "b18", "b24", "b38", "b11", "b27", "b40", "b2", "b27", "b40", "b11", "b19" ], "table_ref": [], "text": "Instance segmentation is the problem of labelling every single pixel that belongs to a known set of categories. Deep-learning based methods have shown tremendous progress in recent years with early works such as Mask R-CNN [18] and more recently with Cascade R-CNN [2], SOLOv2 [33] and MaskFormer [9]. Although broadly applicable when we have a lot of labeled data, fully supervised instance segmentation methods are limited to the set of categories they are trained on. In this paper, we explore a model that can be more useful by taking inputs from the user about what objects to segment. We ask for 2 inputs: (i) a single click on the object to be segmented and (ii) a text description of the same object.\nIn isolation, each of these modalities is insufficient to unambiguously designate a single instance to be segmented. For example, consider the click in Figure 1a. It is unclear what the user wants to segment based on this one input. The user could mean they want to select the whole person or just the tie or shirt. This lack of specificity is also reflected in a model trained on single-click data, as seen in Figure 1b. Similarly, text input alone can also be ambiguous -for example, using \"car\" as text input would be insufficient to describe a single instance if there are multiple cars in an image. Though there are ways to address this ambiguity through the use of referring expressions [7,19,25,38], these approaches place a heavy burden on the user to carefully construct perfectly unambiguous text phrases. Together however, a Click + Text input mechanism is a simple low-effort way to unambiguously designate an instance in an image to be segmented.\nA similar framework was first delineated by the PhraseClick [12] paper, which proposed an architecture that takes text as input using a bi-directional LSTM. Although PhraseClick addresses the ambiguity problem, it does so in a class specific manner. Their approach only learns to model the classes in their training dataset, and has no way to generalize beyond the set of words that it sees during training.\nOur model uses the same set of inputs as PhraseClick (Click + Text), but goes beyond the fixed set of words it observes during training. To do so, we leverage the generalization abilities of image-text models such as CLIP [28], which have demonstrated zero-shot generalization abilities by learning from web-scale image/text pairs. Specifically, our method relies on saliency maps extracted from CLIP style models (e.g. using recent approaches such as MaskCLIP [40] or Transformer Explainability by Chefer et al [3]). These \"text saliency\" methods allow us to gauge the relevance of each pixel in an image to a given text-query. Because models like CLIP [28] are trained on large, open-vocabulary datasets, approaches like MaskCLIP [40] gives us a coarse, semantic-level understanding of a wide variety of concepts (see heatmap examples in Appendix). And combined with a click from the user, this gives us precise information about which instance they want to segment.\nThe benefit of using text and click can be seen in Figure 1c and 1d. 
Our model can successfully use the input text to predict 2 different objects given the same input point, and it is able to do so for text inputs beyond the categories it has seen during training time.\nOur main contributions are as follows:\n1. We propose to condition segmentation models on text by leveraging pre-trained CLIP models using MaskCLIP to generate a per-pixel saliency that is used as input to our model and show our approach to be effective for novel category generalization.\n2. We show that our approach matches or exceeds the performance of the PhraseClick method [12] while generalizing to many more categories. 3. We compare with the recent Segment Anything (SAM) [20] model and show that we outperform it on the task of segmenting instances based on single click and text as input, while training on a much smaller dataset.\nWe also experiment with truly open-vocabulary setting on queries far out of distribution from academic datasets. As evident in Figure 2, our model performs well on classes that were outside of seen or unseen sets within the training data, on images completely distinct from our training or validation data. " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b26", "b16", "b3", "b4", "b13", "b31", "b39", "b7", "b36", "b11", "b29", "b5", "b23", "b0", "b25", "b15", "b35", "b21", "b34", "b12", "b40", "b27" ], "table_ref": [], "text": "Semantic / Instance Segmentation. Semantic segmentation is the problem of assigning a semantic label to each pixel in an image [27]. Because it requires a large dataset of dense annotations however, it can be time-consuming and expensive to crowd-source. Training segmentation models in new or niche domains therefore is constrained by data annotation availability and cost. State of the art semantic segmentation techniques employ a fully convolutional architecture that combine low level and high level feature maps for accurate seg-mentation masks [17]. Deeplab V3 [4] uses atrous convolutions to capture objects and features at multiple scales spanning large and small and its successor DeeplabV3+ [5] remains a strong SOTA segmentation architecture, adding a decoder module to Deeplab V3 to improve segmentation quality along object boundaries. Another class of state of the art segmentation models are based on Vision Transformers (or ViT) [14], and extend it to segmentation by decoding image patch embeddings from ViT to obtain class labels (e.g., [32]) This family includes SegViT [39] that proposes to better use the attention mechanisms of ViT to generate mask proposals, as well as ViT-Adapter-L [8] that attempts to correct weak priors in ViT using a pre-training-free adapter.\nInteractive Object Segmentation. Interactive object segmentation seeks to utilize additionalhuman inputs such as clicks or bounding boxes at inference time to guide/refine a segmentation. Deep interactive object detection [36] use a novel strategy to select foreground and background points from an image, which are transformed via Euclidean distance maps in to channels that can be used as inputs into a convolutional network. PhraseClick [12] explores how to produce interactive segmentation masks using text phrases in a fully supervised manner as an additional modality of input. They demonstrate that adding phrase information reduces the number of interactions required to achieve a given segmentation performance, as measured by mIoU. Sofiiuk et al. 
[30] highlights the issue with other inference-time optimization procedures in related works and proposes an iterative training procedure with a simple feedforward model. Focal click [6] highlights how existing interactive segmentation models can perform poorly on mask refinement when they destroy the correct parts; and proposes a new method that refines masks in localized areas. SimpleClick [24] explores ViT in the context of interactive segmentation, adding only a patch embedding layer to encode user clicks without extensively modifying the ViT backbone.\nZero Shot Segmentation. ZS3Net [1] performs zero shot semantic segmentation by correlating visual and text features using word2vec [26]. They also introduce a self-training procedure using pseudo-labels for pixels of unseen classes. CAGNet [16] adds a contextual module that takes as input the segmentation backbone output and predicts a pixel-wise feature and contextual latent code per pixel. Their aim is to use more pixel-level information with their feature generator whereas ZS3Net contains a feature generator that uses only semantic word embeddings.\nWhile traditional end-to-end segmentation features are grouped implicitly in convolutional networks, GroupVIT [35] seeks to explicitly semantically group similar image regions into larger segments to perform zero-shot segmentation. It achieves 52.3% mIoU for zero shot accuracy on PASCAL VOC 2012. LSeg [22] trains an image encoder to maximize similarity between the text embedding for a given query and the image embedding of the ground truth pixel classes. SPNet [34] performs inference on unseen classes by utilizing semantic word embeddings trained on a free text corpus such as word2vec or fast-text.\nZegformer [13] achieves impressive results on zero-shot segmentation by \"decoupling\" the segmentation task into two stages: grouping pixels into likely segments in a classagnostic manner, and assigning classes to grouped pixels. MaskCLIP [40] achieved SOTA transductive zero-shot semantic segmentation by utilizing a pre-trained CLIP [28] model. They also showed that they can generate psuedo-labels of unseen categories and use it to train a semantic segmentation model. Although this approach can generalize to many classes, it necessitates training a new model for each set of new classes which is costly. " }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "Method", "publication_ref": [ "b36", "b40", "b28", "b2", "b40", "b11" ], "table_ref": [], "text": "Our main objective is to create a model capable of open vocabulary segmentation on novel classes. Figure 3 summarizes our approach to this problem. We take as input to our segmentation model an RGB image, a single foreground click, and a text prompt and produces a class agnostic segmentation mask as output. While there are many possible ways to incorporate click and text cues into such a model, we take a simple but effective approach of encoding both side inputs as additional channels to be concatenated with the original input image, then fed to a standard segmentation network (e.g., DeepLabV3+, which we use in our experiments). Specifically, our foreground click is passed through a Euclidean distance transform to create a map with a continuous range of values normalized to [0, 1]. 
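As a concrete illustration, here is a minimal sketch of this click encoding (the helper name and the direction of the normalization are our own choices; the paper only specifies a Euclidean distance transform scaled to a fixed range):

import numpy as np
from scipy.ndimage import distance_transform_edt

def click_channel(height, width, click_yx):
    # distance_transform_edt gives, for every pixel, the distance to the nearest zero entry,
    # so placing a single zero at the clicked pixel yields a distance-from-click map
    seed = np.ones((height, width), dtype=np.float32)
    seed[click_yx] = 0.0
    dist = distance_transform_edt(seed)
    # one possible normalization: 1 at the click, decaying to 0 at the farthest pixel
    return 1.0 - dist / dist.max()

The resulting (H, W) map is then stacked with the RGB image as one of the extra input channels.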
This is a standard technique in the interactive segmentation literature [36].\nIn order to convert a text prompt to a single channel image, we passed the text prompt through a text-saliency model to produce a spatially sensitive guess (i.e., a saliency heatmap) of what pixels are similar to a given text query. In our experiments, we use the MaskCLIP text-saliency model model [40] which allows us to effectively incorporate a textually-sensitive, spatial saliency map that takes as input any open vocabulary text prompt. We note that MaskCLIP builds on the CLIP vision-language model that learns to align similar images and text queries via its massive web-scale dataset of image-caption pairs and contrastive learning scheme. In Figure 4 we visualize the output of this method.\nIn our experiments, we have informally tried several saliency methods such as GradCAM [29], Generic Transformer Interpretability [3], and MaskCLIP [40]. From qualitative experiments, we observed the best results from MaskCLIP, and it also represents a strong baseline that is easy to implement with a few changes to the encoder layer of CLIP.\nOur choice of converting a text prompt to a single channel image is nonstandard; how-ever, we argue that it has a number of benefits. In using a powerful text-saliency model, we significantly lessen the burden on our own segmentation network since its task can now be viewed as that of refining a (admittedly) rough initial segmentation into a clean segmentation given the image and click. Moreover, since this saliency heatmap representation is itself class agnostic, our network should conceptually generalize well to classes that it did not get to see at training time (and we show that this is indeed the case in our experiments). As a contrast, the PhraseClick paper [12] embeds text inputs with Word2Vec and uses a bidirectional LSTM to model contextual relations between words in a phrase. However their image and text vector representations are not explicitly aligned; the image embedding vector is simply produced from a global pooling operation. Moreover, their model is not open-vocabulary, it is limited to a fixed set of prompts introduced during training." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b9", "b36", "b14", "b22", "b38", "b20", "b0" ], "table_ref": [], "text": "To measure our model's ability to generalize to novel classes, we train our model on a subset of all classes in the dataset (which we call \"seen classes\"), but at test time evaluate the trained model on the remaining classes (called \"unseen classes\") as well as all classes present in the dataset. Where available (with the exception of OpenImages) we follow the standard zeroshot segmentation literature splits of \"seen\" and \"unseen\" classes in our experiments.\nIn our experiments, we modify the first layer of a DeepLabv3+ model (with ResNet backbone) to accept a 5 channel image as input, and train all layers from scratch. We modify the number of output classes in the mask prediction module to 2 (to delineate foreground/ background) as we perform inference on each individual instance, and not all instances in a given image. We use standard hyperparameters (based on the MMSeg implementation [10]) for DeeplabV3+ and train on 2 Nvidia A40 GPUs with a batch size of 32. Our heatmaps and clickmaps are normalized per instance, to scale values between [-1, 1].\nTo generate clicks for training, we sample a random point within the ground-truth segmentation mask boundary. 
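A minimal sketch of this sampling step (the function name is ours, and the border- and spacing-constraints described next are omitted for brevity):

import numpy as np

def sample_foreground_click(gt_mask, rng=None):
    # gt_mask: (H, W) boolean mask of a single ground-truth instance
    rng = rng or np.random.default_rng()
    ys, xs = np.nonzero(gt_mask)
    i = rng.integers(len(ys))
    return int(ys[i]), int(xs[i])   # (y, x) coordinates of a random pixel inside the instance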
Building off of standard interactive segmentation literature [36], positive points are selected to be at least some minimum distance from the object border, and a minimum distance from other positive points. Negative points are sampled using a variety of strategies: first, from points near the border of the object mask boundary; second, from points in other object instances in the same image that we are not trying to segment.\nWe train separate models for the Pascal VOC [15], COCO [23], refCOCO [38], and OpenImages datasets [21]. We train models in two configurations: zero-shot segmentation, and fully-supervised segmentation. In the former, the model has access only to instances in the limited set of seen-classes and RGB images that contain instances of those seen-class sets. For VOC, we use the 5 seen-class set defined in the ZS3 [1] out of 20 total classes. For refCOCO and COCO, we use the standard 20/60 split of segmentation classes proposed in prior zero-shot segmentation literature. For our OpenImages experiments, we found no prior standard split for zero-shot segmentation, and there are 350 total segmentation classes. Thus, we use the intersection of the COCO classes and OpenImages segmentation classes as our seen set, resulting in 64 seen classes for training (∼ 20% of total classes). All results are reported at 90K iterations unless otherwise stated." }, { "figure_ref": [], "heading": "Novel class generalization", "publication_ref": [], "table_ref": [ "tab_0", "tab_1" ], "text": "In Table 1 we show that across all 4 datasets studied, conditioning on text-saliency improves overall mIoU across the board; and that this improvement mostly comes from larger im- provements on the set of unseen classes. For example, on COCO, our heatmap-based model achieves 1.72 mIoU greater than baseline on seen classes, but 6.98 mIoU greater than baseline on unseen classes. In other words, the model is able to use the heatmaps to noticeably improve the quality of unseen class segmentations. Moreover, the smaller the seen class set, the greater the benefit of conditioning the segmentation network on text saliency. We study this effect in Table 2, where we vary the fraction of classes designated as \"seen\" in the OpenImages dataset. Here we see that the improvement increases as number of seen classes decreases; this intuitively makes sense as our technique of converting to a saliency map places the main burden of novel class generalization on the pretrained CLIP model rather than the segmentation network itself." }, { "figure_ref": [ "fig_7" ], "heading": "Qualitative examples", "publication_ref": [ "b19" ], "table_ref": [ "tab_2" ], "text": "In Figure 5 we provide several qualitative examples of our inference results. In all of the examples, we click on unseen classes (e.g., \"cheese\", \"knife\", \"roller skates\", etc). Here we use a model trained on OpenImages with 64 classes set as seen, and compare to a simplified click-only baseline (same architecture) but without text saliency heatmaps as input. In the cheese and knife image for example, the baseline aims to separate object instances by features, but is confused by the overlapping textures from the cheese and knife instances. However, our model conditioned on text is able to clearly distinguish the separate cheese instances and separate them from the knife. The Segment Anything Model (SAM) [20] is a model that was trained with 1.1 billion masks from the SA-1B dataset. 
SAM can work with a combination of positive/negative clicks and text prompts and showed impressive segmentation results with user-input. In Table 3 we compare with SAM while taking as input a single class name and a click. In spite of our smaller capacity and limited data, we out-perform SAM when training on all examples from COCO, refCOCO and OpenImages. Note that a perfect apples-to-apples comparison is difficult here since SA-1B masks are not class-annotated so we are not able to separate seen from unseen masks, given the SAM mode an unfair advantage in some ways. SAM outputs 3 predictions which we rank by CLIP scores or SAM's confidence scores. For CLIP scores we used the ViT-L/14@336px model, which SAM used in open-vocabulary training. 1In Table 4 we compare our approach with SAM while only training on a subset of classes. Note that even when we further limit our training set and evaluate our model on a set of classes that our model is guaranteed to have not seen, we still outperform SAM on refCOCO. It is important to note that because of SAM's compute requirement, we could not re-train SAM and only evaluated the pre-trained model trained on SA-1B." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We set out to explore improved instance segmentation through the use of a single click and a text prompt. A single click is insufficient to specify what part of an instance to segment; a single text prompt can still be ambiguous unless carefully crafted. We have demonstrated that a single click combined with a text prompt outperforms a click-only baseline across a variety of datasets. We also show that a model conditioned on text-saliency can generalize much better to novel categories. We use saliency maps from MaskCLIP to produce rough localizations for any category. A separate segmentation model is trained on the concatenated input, and segments in a class-agnostic manner, while still retaining class-specific information from the MaskCLIP module. The recent SAM model is class-agnostic and struggles to disambiguate user intent on the overall part vs subpart from a single click. Open vocabulary interactive segmentation is a novel task that has numerous applications, from reducing dense image annotation costs to improving background object removal in photo editing. We hope that the new text and click segmentation task will improve the accuracy of segmentations that require user interaction, while constraining the amount of interaction required. Future research directions could include automatically detecting the best category present around a user's foreground click, to remove the necessity of an additional text input. Our work also intersects with research on how to produce refined segmentation masks from a rough or low quality input (bounding box, point, low quality mask)." }, { "figure_ref": [], "heading": "Details on data generation", "publication_ref": [ "b11", "b36", "b36", "b36" ], "table_ref": [ "tab_4" ], "text": "In our work, we compare interactive segmentation with a single click to segmentation conditioned on text saliency. The Phraseclick paper [12] was the first paper to study combining a click and text query for disambiguation. In their experiments on refCOCO, they study different combinations of interactions, and the closest comparison to our work are the experiments they ran with 2 foreground clicks and a single background click. They cite previous work from Xu et al. [36] for their experimental configuration. 
Negative background clicks are sampled either from other object instances present in a scene, from near the ground truth object boundary, or from anywhere that is not the ground truth object. Positive clicks are sampled subsequently with a minimum distance from each other. Finally, for every object instance in the ground truth dataset, random samples are taken to create augmentations in the training data [36]. The reccomended hyperparameters from Xu et al. include a 40 pixel minimum distance between sampled positive points, and 15 samples per each instance.\nTo focus on boundary quality of the generated interactive segmentations, we sampled negative points from two of the three strategies proposed by Xu et al. [36]: from other instances of the same class present in a given scene, and from points along the outside boundary of the ground truth points. Additionally, we sampled positive points with 150 minumum distance from each other. We instead took a single sample per instance. This remains a future inquiry to see how the baseline model and ours conditioned on text saliency perform with additional data augmentation. From some anecdotal studies on refCOCO, the performance increase to both the baseline model and ours conditioned on text saliency is roughly 2-3 mIoU. For the fully supervised experiments in Table 5, we take 5 samples per instance for the much smaller dataset refCOCO, and a single sample per instance for COCO." }, { "figure_ref": [ "fig_11", "fig_11" ], "heading": "Part Disambiguation and Visualizations for COCO", "publication_ref": [ "b20", "b22" ], "table_ref": [], "text": "In our main paper, we demonstrate visual examples for the validation set of OpenImages [21]. We also demonstrate in the Experiments section how text saliency improves novel class generalization. Here, we aim to provide examples of how the model trained on OpenImages performs on example images from COCO [23] in order to show model generalization across datasets. As evident in Figure 7, our model qualitatively outperforms the baseline click only model in these settings as well. We show here that conditioning on text saliency also improves the ability of a model to generalize between a whole object and its sub-parts. This is also illustrated in the results shown in Figure 7." }, { "figure_ref": [ "fig_7", "fig_7" ], "heading": "Experiments in a Fully Supervised Setting", "publication_ref": [], "table_ref": [ "tab_4", "tab_4", "tab_4" ], "text": "We present results of a fully-supervised version of our text-and-click model and these results are shown in Table 5. The purpose of Table 5 is to understand the delta between the zeroshot segmentation model and a model trained in a fully-supervised manner. Whereas our zero shot segmentation model achieved 66.02 mIoU with a single click and text prompt on refCOCO, our fully supervised model in 5 achieves 68.07 mIoU with a single foreground click and text prompt, and 72.89 with two foreground clicks, one background click and a text prompt. This demonstrates that constraining the model to a limited set of classes does not lead to a significant performance drops, we attribute this to our text saliency conditioning. The baseline zero shot segmentation refCOCO model with only a single foreground click achieved 62.99 mIoU, indicating a more notable performance drop versus the model conditioned on text saliency. 
Similar results are seen for COCO in Figure 5 where the model trained in fully supervised mode achieved 47.17 mIoU for a single foreground click and text prompt; our zero shot model trained on only 20 seen classes achieves only 38.42.\nTable 5 additionally contains an ablation experiment which ablates the number of interaction signals received by the model and determines how performance varies. We establish that inputting a text prompt counts as an interaction, and we therefore compare various configurations of text prompt inputs, foreground and background clicks. We hypothesized that additional interactions would decrease the utility of text saliency, because the object or subpart to segment would already be clearly defined. We find this generally confirmed, especially for results on the COCO dataset. We can see this when comparing rows 1-2 and rows 3-4 in Figure 5: under the condition of a single foreground click, the addition of a text prompt boosts mIoU by 10.35 for COCO; whereas under the condition of 2 foreground clicks and a single background click, the addition of a text prompt only boosts mIoU by 2.19." }, { "figure_ref": [], "heading": "Comparison with SAM Model", "publication_ref": [], "table_ref": [], "text": "In our experiments, we compared our model to the Segment Anything Model (SAM) from Meta AI. Performing comparisons on the zero-shot segmentation setting was infeasible due to the extremely large number of GPUs required to retrain. To do so would require retraining the SAM model on a limited number of seen classes in its dataset. To the best of our knowledge, data category labels were not available at training time. In lieu of these experiments, we conduct comparisons to the pre-trained SAM network.\nIn Table 3, we explained that SAM outputs multiple mask proposals. We explore multiple strategies to filter their mask proposals to the best available proposal. We previously discussed using the CLIP similarity of each mask proposal crop to the ground truth text prompt. This was meant to produce an even comparison, since our model has access to the ground truth category label of an instance to generate the text saliency map. We also discussed the SAM confidence score. For sake of thoroughness, we also re-implement the Oracle score described in the SAM paper. They note that their model can be penalized by automated evaluation metrics because it suggests multiple masks; and note that the model produces SOTA results if allowed to compare its mask proposals to the ground truth one. See the Experiment sectionin the main paper for details. This suggests that the SAM model struggles with disambiguating mask proposals, though it can often suggest high quality ones." }, { "figure_ref": [ "fig_12" ], "heading": "Looking at Distractors and Neighboring Object Segmentation", "publication_ref": [], "table_ref": [], "text": "We analyze the role that distracting objects play in generating interactive segmentation. A given image can have multiple classes present, for example a table and a lamp. For a given class, multiple instances can be present, for example, a parking lot with multiple instances of the class 'vehicle'. Since the model learns to segment guided by a click and an openvocabulary salience map (generated from a text category). This becomes a more challenging task the more objects and instances that are present, particularly the closer they are in proximity. 
We achieve the best results on refCOCO, a subset of COCO data re-sampled to make human annotation easier.\nResults are available in Figure 8. In this experiment, we measure the number of instances of the same class present in a given image, and record the mIoU for each instance in the validation set along with the number of distractors present, only for instances of unseen classes. Our model consistently outperforms the baseline model, though the gap is similar across the number of distracting objects present." }, { "figure_ref": [ "fig_14" ], "heading": "Limitations and Future Work", "publication_ref": [ "b28", "b2", "b40" ], "table_ref": [], "text": "We identify two main failure modes of the proposed model. The first is a cascading error that occurs in cases with a low quality heatmap. In our experiments, we tried a few saliency techniques including GradCAM [29], Chefer et al. [3] and MaskCLIP [40]. We found MaskCLIP to qualitatively perform the best, but improving the saliency maps remains an important future line of inquiry. Sometimes, the heatmap helps to localize a given text query, but the segmentation network we train still fails to accurately segment it. We can see this second failure mode illustrated in Figure 9. In the example in the bottom row of the figure, the heatmap has reasonably high probability over the pixels of the car wheel; however, the predicted segmentation contains the pixels of the entire car. Similarly, in the example in the top row, containing an instance segmentation for a hat, the heatmap for the hat is high quality, but the model predicts part of the person's whole body. We suspect that this is due to an imbalance of annotations in the training data; there are plenty of instances of whole objects such as automobiles or entire person silhouettes, but very few of a wheel, license plate, or hat." }, { "figure_ref": [], "heading": "Comparison to Interactive Click Methods", "publication_ref": [ "b19", "b19", "b30" ], "table_ref": [], "text": "In Segment Anything [20] Sec 7.1, Kirillov et al. compare SAM to other interactive segmentation baselines (RITM, SimpleClick, and FocalClick) on single-click segmentation across 23 datasets. In Figure 9c and 9d of [20], SAM significantly outperforms all other methods on a single click, though the gap is much smaller for 2, 3, or 5 clicks. This is because many interactive segmentation models are trained for mask refinement as opposed to generating the optimal proposal from a single click. These papers report the number of clicks needed to achieve a target IoU, but we are interested in minimizing interactions by combining text and clicks. We compare to RITM [31] in Tab. 6 and show better generalization to unseen COCO classes when training on the 20 VOC classes. RITM is stronger on OpenImages, but its gap between seen and unseen classes is larger, suggesting that RITM is a stronger click baseline but that generalization could benefit from text saliency conditioning." }, { "figure_ref": [], "heading": "Boundary IoU Metrics", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "Please see Table 7. Using boundary IoU instead of mIoU, we achieve similar results, with our model slightly beating SAM on COCO validation." }, { "figure_ref": [], "heading": "Comparison to Referring Expression Segmentation Methods", "publication_ref": [ "b32", "b10", "b37" ], "table_ref": [], "text": "Our method is able to generalize to unseen classes with text input by using pre-trained CLIP. We show this in the paper with our \"unseen\" metrics.
PhraseClick, VLT and LAVT have no mechanism to do this and do not evaluate on unseen classes. The largest difference between our model, LAVT and VLT is that we can segment completely unseen classes at test time. We achieve 33.45 mIoU on 60 unseen COCO classes while training on only 20 seen classes; we unlock this capability by training on saliency maps from MaskCLIP, which is able to leverage all of the knowledge of a pre-trained CLIP model. We compare with LAVT and VLT in the fully supervised setting (all classes are seen) in Table 8 and show that we are able to match their performance with 3 clicks. LAVT and VLT have not published numbers for unseen classes. Also, LAVT and VLT [11] require more specific language than our model (\"guy in black sitting to left leaned over\" (Fig 6 [37]) vs. ours - \"person\"). We achieve similar performance with less specific text supervision. This is important since annotating referring expression datasets at scale is expensive. In contrast, our method only requires the much more readily available ground-truth class." }, { "figure_ref": [], "heading": "Comparison to Phraseclick", "publication_ref": [ "b11", "b11" ], "table_ref": [], "text": "PhraseClick [12] was published before Vision-Language joint pretraining became a common method. Therefore, [12] propose an attention attribute mechanism, whereby the visual features are global-average-pooled into the same common dimension as the embedding dimension of the text representation. Text input is processed using word2vec and a trainable bi-LSTM. The text input is not initially aligned with the distribution of visual features. At inference time, if a novel query is presented, the PhraseClick model will be unable to use the text information to make an improved segmentation.\nIn our work, meanwhile, we use the MaskCLIP technique to produce a spatial saliency map for any possible novel text query, which provides a rough guess of the location of that query. MaskCLIP retains explicit spatial information, providing a useful initial guess of the location of an object. Our model is trained in a class-agnostic manner after extracting a heatmap guess, and so learns to segment any given prompt." }, { "figure_ref": [], "heading": "Model", "publication_ref": [ "b11" ], "table_ref": [], "text": "PhraseClick [12] did not release code or model weights, so we cannot provide visual comparisons." } ]
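To illustrate how a text-driven saliency map and a click map can be combined into the 5-channel input described above, a minimal sketch is given below. It is a simplified stand-in: the saliency computation is a generic cosine similarity between dense CLIP-like patch features and a text embedding rather than the exact MaskCLIP procedure, and the tensor shapes are assumptions:

import torch
import torch.nn.functional as F

def build_five_channel_input(rgb, click_map, patch_features, text_embedding):
    """Concatenate RGB, a click map, and a text-saliency map into a 5-channel tensor.

    rgb:            (3, H, W) image tensor
    click_map:      (1, H, W) binary map with 1 at the user click
    patch_features: (C, h, w) dense image features from a CLIP-like backbone (assumed)
    text_embedding: (C,) text feature for the class name (assumed)
    """
    # Cosine similarity between every patch feature and the text embedding.
    feats = F.normalize(patch_features, dim=0)
    text = F.normalize(text_embedding, dim=0)
    saliency = torch.einsum("chw,c->hw", feats, text)          # (h, w)
    saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-6)

    # Upsample the coarse saliency map to image resolution.
    saliency = F.interpolate(saliency[None, None], size=rgb.shape[-2:],
                             mode="bilinear", align_corners=False)[0]

    return torch.cat([rgb, click_map, saliency], dim=0)        # (5, H, W)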
Segmentation localizes objects in an image on a fine-grained, per-pixel scale. Segmentation benefits from humans in the loop who provide additional input on which objects to segment using a combination of foreground or background clicks. Tasks include photo editing or novel dataset annotation, where human annotators leverage an existing segmentation model instead of drawing raw pixel-level annotations. We propose a new segmentation process, Text + Click segmentation, where a model takes as input an image, a text phrase describing a class to segment, and a single foreground click specifying the instance to segment. Compared to previous approaches, we leverage open-vocabulary image-text models to support a wide range of text prompts. Conditioning segmentations on text prompts improves the accuracy of segmentations on novel or unseen classes. We demonstrate that the combination of a single user-specified foreground click and a text prompt allows a model to better disambiguate overlapping or co-occurring semantic categories, such as "tie", "suit", and "person". We study these results across common segmentation datasets such as refCOCO, COCO, VOC, and OpenImages.
Text and Click inputs for unambiguous open vocabulary instance segmentation
[ { "figure_caption": "Figure 1 :1Figure 1: The benefit of text input for instance segmentation. The model in 1b struggles to guess the correct object based on only the point input from 1a. Our approach, which takes both text and click as input is successfully able to segment 1c and 1d. Both models are trained on OpenImages with 64 seen classes.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "(a) Our prediction for 'Model globe' and 'Basket' (b) Our prediction for 'Kayak paddle' and 'Helmet' (c) Our prediction for 'Microscope' and 'Hairnet'", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Open vocabulary queries demonstrated on images from the web. These categories include 'Kayak Paddle', 'Basket', and 'Microscope' which are never seen by the model.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Our model architecture: we take as input a guided foreground click, the RGB image, and a text category. Then, the image-text saliency model (MaskCLIP here) produces a text-weighted feature map helping to localize the instance of interest. Finally, the original RGB image, clickmap, and saliency map are concatenated and fed into a modified fully convectional segmenation model, that accepts as input a 5 channel array.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Example of text saliency heatmaps produced by MaskCLIP[40]. The heatmaps give us a rough estimate of where the input text is localized, while supporting the large vocabulary learnt by CLIP[28].", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "(a) RGB Input + Click (b) Baseline (c) Ours (d) RGB Input + Click (e) Baseline (f) Ours (g) RGB Input + Click (h) Baseline (i) Ours", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Inference examples on unseen classes for baseline versus our model for (a) \"Cheese\" and \"Knife\", (d) \"Roller Skates\" and \"Woman\", and (g) \"Keyboard\" and \"Mouse\". Conditioning on text saliency improves novel class segmentation and removes ambiguity. Model trained on OpenImages with 64 classes set as seen, compared to the click-only baseline without heatmaps.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Example comparisons to SAM for (a) \"Mobile phone\", and (b) \"Chest of drawers\". Model trained on OpenImages with 64 classes set as seen, compared to SAM baseline using highest confidence prediction.", "figure_data": "", "figure_id": "fig_9", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "(a) RGB Input + Click (b) Baseline Prediction (c) Our Prediction for 'Shirt' and 'Hat' (d) RGB Input + Click (e) Baseline Prediction (f) Our Prediction for 'Tie' and 'Person'", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: A comparison of the click-only baseline to text-saliency segmentation on the task of part disambiguation. The model here was trained in a zero-shot manner on OpenImages with 64 seen classes, and evaluated on validation images from COCO. 
The categories chosen are from the unseen class set. Text saliency conditioning helps the segmentation model disambiguate subparts such as the \"tie\" from the overall object of \"person.\" Similarly, the segmentation model conditioned on text saliency is able to differentiate the classes of shirt, hat and person in the top row.", "figure_data": "", "figure_id": "fig_11", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Bar chart displaying the effect of number of objects on segmentation quality. Using the OpenImages dataset we plot the average mIoU for images with N number of objects of the same class present. This analysis is only for instances of classes not seen during training. The model is trained on OpenImages with 64 classes as seen, and evaluated on the OpenImages validation set with all classes available. The baseline here is the click-only model compare to ours conditioned on text saliency. Our model consistently outperforms the baseline model, though the gap is similar across the number of distracting objcets present.", "figure_data": "", "figure_id": "fig_12", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Examples of failure cases of text-click instance segmentation of our model. These examples show instances where despite the text saliency localizing the object of interest, the segmentation mask fails. We observe this largely happens in overlapping objects or when the queried category is a sub-part of a larger object. Categories: (a) \"Cowboy hat\", (b) \"Vehicle registration plate\", (c) \"Wheel\".", "figure_data": "", "figure_id": "fig_14", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Results for Text+Click model on seen and unseen classes. We used one click for all models and trained using only seen classes. For OpenImages we use 64 seen classes. We convert text input to a heatmap using Maskclip.", "figure_data": "DatasetText InputmIoUOverallSeenUnseenrefCOCO✓ 66.02 (+3.03) 70.30 (+1.86)56.35 (+5.68)refCOCO62.9968.4450.67VOC✓ 57.76 (+4.52)59.31 (+3.2) 50.73 (+10.45)VOC53.2456.1140.28COCO✓ 38.42 (+3.89) 42.06 (+1.72)33.45 (+6.98)COCO34.5340.3426.47OpenImages✓ 57.05 (+4.40) 67.03 (+3.35)53.92 (+4.74)OpenImages52.6563.6849.18Seen Classes Text InputmIoUOverallSeenUnseen64✓57.05 (+4.4) 67.03 (+3.35) 53.92 (+4.74)6452.6563.6849.1834✓55.10 (+5.82) 62.03 (+5.19) 52.95 (+6.12)3449.2856.8446.8323✓53.65 (+7.89) 61.64 (+8.38) 51.14 (+7.62)2345.8653.2643.53", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Difference in performance as the number of seen classes in OpenImages changes. Note that gap between our approach (Text+Click) and the click-only baseline increases with a smaller set of seen classes. We convert text input to a heatmap using Maskclip.", "figure_data": "4.3 Comparison with SAMDatasetSAM1-ClickOursDatasetModelmIoUCLIP Conf.OverallSeen UnseenCOCO36.43 39.3136.82 47.17COCOOurs38.42 42.0633.45refCOCO 47.07 52.4866.16 68.07COCOSAM39.3141.7337.59Table 3: SAM[20].Comparing mIOU of our model with SAM outputs 3 predictions and werefCOCO refCOCOOurs SAM66.02 70.30 52.48 61.1856.35 48.64choose one using SAM's confience (Conf.) or CLIPOIOurs57.05 63.6853.92score(CLIP). Our models trained on all classes inOISAM63.8863.6064.47COCO and refCOCO outperform SAM.", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison with SAM while our model only trains on a subset of classes. 
Note that we outperform SAM on refCOCO. We use SAM's confience score to rank proposals in this experiment because it showed better results in Table3. OI=OpenImages.", "figure_data": "", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results of our model in the fully supervised setting over the COCO and refCOCO datasets for the textclick instance segmentation task. We convert text input to a heatmap using Maskclip. The left hand side of the table shows the number of inputs given to the model in terms of text-saliency heatmaps, positive clicks (PClicks) and negative clicks (NClicks). The interaction setting with the highest mIoU is bolded for reference.", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": ".45 mIOU on 60 COCO unseen classes while training on only 20 seen classes. We unlock this capability by training on saliency Comparing generalization of models to unseen classes. Comparison of RITM interactive segmentation model with a single click and our model with a click + text label. Using RITM checkpoint trained on all VOC classes with the SBD data. Our model was trained on VOC classes of COCO. COCO validation has 80 classes, and OpenImages has 300 classes. We will include a comparison while training on SBD in the camera ready.", "figure_data": "ModelTrain DataEval DatamIoUOverallSeen UnseenRITM [31]SBD COCO Validation38.86 45.0030.33OursCOCO[voc classes]COCO Validation38.42 42.0633.45RITM [31]SBDOpenImages49.42 67.3947.17OursCOCO[voc classes]OpenImages44.55 53.1641.87DatasetModelmIoUOverallSeen UnseenCOCOOurs39.6240.5138.47COCOSAM [20]38.93 37.6340.65", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Boundary IoU comparisons on MS COCO.", "figure_data": "", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" } ]
Nikolai Warner; Meera Hahn; Jonathan Huang; Irfan Essa
[ { "authors": "Maxime Bucher; Tuan-Hung Vu; Matthieu Cord; Patrick Pérez", "journal": "", "ref_id": "b0", "title": "Zero-shot semantic segmentation", "year": "2019" }, { "authors": "Zhaowei Cai; Nuno Vasconcelos", "journal": "", "ref_id": "b1", "title": "Cascade r-cnn: Delving into high quality object detection", "year": "2017" }, { "authors": "Hila Chefer; Shir Gur; Lior Wolf", "journal": "", "ref_id": "b2", "title": "Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers", "year": "2021" }, { "authors": "Liang-Chieh Chen; George Papandreou; Florian Schroff; Hartwig Adam", "journal": "", "ref_id": "b3", "title": "Rethinking atrous convolution for semantic image segmentation", "year": "2017" }, { "authors": "Liang-Chieh Chen; Yukun Zhu; George Papandreou; Florian Schroff; Hartwig Adam", "journal": "", "ref_id": "b4", "title": "Encoder-decoder with atrous separable convolution for semantic image segmentation", "year": "2018" }, { "authors": "Xi Chen; Zhiyan Zhao; Yilei Zhang; Manni Duan; Donglian Qi; Hengshuang Zhao", "journal": "", "ref_id": "b5", "title": "Focalclick: towards practical interactive image segmentation", "year": "2022" }, { "authors": "Yi-Wen Chen; Yi-Hsuan Tsai; Tiantian Wang; Yen-Yu Lin; Ming-Hsuan Yang", "journal": "", "ref_id": "b6", "title": "Referring expression object segmentation with caption-aware consistency", "year": "2019" }, { "authors": "Zhe Chen; Yuchen Duan; Wenhai Wang; Junjun He; Tong Lu; Jifeng Dai; Yu Qiao", "journal": "", "ref_id": "b7", "title": "Vision transformer adapter for dense predictions", "year": "2022" }, { "authors": "Bowen Cheng; Alexander G Schwing; Alexander Kirillov", "journal": "", "ref_id": "b8", "title": "Per-pixel classification is not all you need for semantic segmentation", "year": "2021" }, { "authors": "", "journal": "MMSegmentation Contributors", "ref_id": "b9", "title": "MMSegmentation: Openmmlab semantic segmentation toolbox and benchmark", "year": "2020" }, { "authors": "Henghui Ding; Chang Liu; Suchen Wang; Xudong Jiang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b10", "title": "VLT: Vision-language transformer and query generation for referring segmentation", "year": "2023-06" }, { "authors": " Henghui", "journal": "", "ref_id": "b11", "title": "Phraseclick: Toward achieving flexible interactive segmentation by phrase and click", "year": "2020" }, { "authors": "Jian Ding; Nan Xue; Gui-Song Xia; Dengxin Dai", "journal": "", "ref_id": "b12", "title": "Decoupling zero-shot semantic segmentation", "year": "2021" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b13", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Mark Everingham; Luc Van Gool; K I Christopher; John Williams; Andrew Winn; Zisserman", "journal": "International journal of computer vision", "ref_id": "b14", "title": "The pascal visual object classes (voc) challenge", "year": "2010" }, { "authors": "Zhangxuan Gu; Siyuan Zhou; Li Niu; Zihan Zhao; Liqing Zhang", "journal": "", "ref_id": "b15", "title": "Context-aware feature generation for zero-shot semantic segmentation", "year": "2020" }, { "authors": "Shijie Hao; Yuan Zhou; Yanrong Guo", "journal": "Neurocomputing", "ref_id": "b16", "title": "A brief survey on semantic segmentation with deep 
learning", "year": "2020" }, { "authors": "Kaiming He; Georgia Gkioxari; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b17", "title": "Mask r-cnn", "year": "2017" }, { "authors": "Sahar Kazemzadeh; Vicente Ordonez; Mark Matten; Tamara Berg", "journal": "", "ref_id": "b18", "title": "Referitgame: Referring to objects in photographs of natural scenes", "year": "2014" }, { "authors": " Alexander", "journal": "", "ref_id": "b19", "title": "Segment anything", "year": "2023" }, { "authors": "Alina Kuznetsova; Hassan Rom; Neil Alldrin; Jasper Uijlings; Ivan Krasin; Jordi Pont-Tuset; Shahab Kamali; Stefan Popov; Matteo Malloci; Alexander Kolesnikov", "journal": "International Journal of Computer Vision", "ref_id": "b20", "title": "The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale", "year": "2020" }, { "authors": "Boyi Li; Q Kilian; Serge Weinberger; Vladlen Belongie; René Koltun; Ranftl", "journal": "", "ref_id": "b21", "title": "Language-driven semantic segmentation", "year": "2022" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b22", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Qin Liu; Zhenlin Xu; Gedas Bertasius; Marc Niethammer", "journal": "", "ref_id": "b23", "title": "Simpleclick: Interactive image segmentation with simple vision transformers", "year": "2022" }, { "authors": "Junhua Mao; Jonathan Huang; Alexander Toshev; Oana Camburu; Alan L Yuille; Kevin Murphy", "journal": "", "ref_id": "b24", "title": "Generation and comprehension of unambiguous object descriptions", "year": "2016" }, { "authors": "Tomas Mikolov; Kai Chen; Greg Corrado; Jeffrey Dean", "journal": "", "ref_id": "b25", "title": "Efficient estimation of word representations in vector space", "year": "2013" }, { "authors": "Yujian Mo; Yan Wu; Xinneng Yang; Feilin Liu; Yujun Liao", "journal": "Neurocomputing", "ref_id": "b26", "title": "Review the stateof-the-art technologies of semantic segmentation based on deep learning", "year": "2022" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b27", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Abhishek Ramprasaath R Selvaraju; Ramakrishna Das; Michael Vedantam; Devi Cogswell; Dhruv Parikh; Batra", "journal": "", "ref_id": "b28", "title": "Grad-cam: Why did you say that?", "year": "2016" }, { "authors": "Konstantin Sofiiuk; Ilya A Petrov; Anton Konushin", "journal": "IEEE", "ref_id": "b29", "title": "Reviving iterative training with mask guidance for interactive segmentation", "year": "2022" }, { "authors": "Konstantin ", "journal": "", "ref_id": "b30", "title": "Reviving iterative training with mask guidance for interactive segmentation", "year": "2022" }, { "authors": "Robin Strudel; Ricardo Garcia; Ivan Laptev; Cordelia Schmid", "journal": "", "ref_id": "b31", "title": "Segmenter: Transformer for semantic segmentation", "year": "2021" }, { "authors": "Xinlong Wang; Rufeng Zhang; Tao Kong; Lei Li; Chunhua Shen", "journal": "", "ref_id": "b32", "title": "Solov2: Dynamic and fast instance segmentation", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b33", "title": 
"", "year": "2020" }, { "authors": "Yongqin Xian; Subhabrata Choudhury; Yang He; Bernt Schiele; Zeynep Akata", "journal": "", "ref_id": "b34", "title": "Semantic projection network for zero-and few-label semantic segmentation", "year": "2019" }, { "authors": "Jiarui Xu; Shalini De Mello; Sifei Liu; Wonmin Byeon; Thomas Breuel; Jan Kautz; Xiaolong Wang", "journal": "", "ref_id": "b35", "title": "Groupvit: Semantic segmentation emerges from text supervision", "year": "2022" }, { "authors": "Ning Xu; Brian L Price; Scott Cohen; Jimei Yang; Thomas S Huang", "journal": "", "ref_id": "b36", "title": "Deep interactive object selection", "year": "2016" }, { "authors": " Zhao", "journal": "", "ref_id": "b37", "title": "Lavt: Language-aware vision transformer for referring image segmentation", "year": "2022" }, { "authors": "Licheng Yu; Patrick Poirson; Shan Yang; Alexander C Berg; Tamara L Berg", "journal": "Springer", "ref_id": "b38", "title": "Modeling context in referring expressions", "year": "2016" }, { "authors": "Bowen Zhang; Zhi Tian; Quan Tang; Xiangxiang Chu; Xiaolin Wei; Chunhua Shen", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b39", "title": "Segvit: Semantic segmentation with plain vision transformers", "year": "2022" }, { "authors": "Chong Zhou; Chen Change Loy; Bo Dai", "journal": "", "ref_id": "b40", "title": "Denseclip: Extract free dense labels from CLIP", "year": "2021" } ]
[]
10.18653/v1/2021.acl-demo.37
2023-11-24
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Machine translation is ubiquitous in modern society; however, training high quality machine translation systems is not trivial. A lot of the knowledge about how to build high quality systems is not well defined, comes from experience, and at times may seem counterintuitive. With OpusTrainer and OpusCleaner we aim to explicitly address the main challenges in a user friendly manner and simplify the workload for machine translation researchers.\nThere are several challenges when it comes to building high quality MT systems:" }, { "figure_ref": [], "heading": "Data Sources", "publication_ref": [], "table_ref": [], "text": "Parallel data for machine translation systems comes from many different sources that have widely varying quality. As an example, using Opus's website and filtering parallel data sources for Chinese to English, we are presented with a dozen different corpora. Here we find that some are in traditional script, others are in simplified script, and these may or may not have been tokenized. This is before noting any language identification issues. In order to build a high quality translation system, we need to first have quality data, which necessarily means auditing each corpus manually and then deciding how to preprocess it." }, { "figure_ref": [], "heading": "Training schedule", "publication_ref": [ "b7" ], "table_ref": [], "text": "High quality machine translation systems require the use of backtranslation [Sennrich et al., 2016], usually included in the form of pretraining. Often, at the end of training, models are fine-tuned on in-domain data. Without a training scheduler that supports different training stages, a start-and-stop training approach is necessary, which presents a challenge for automation and increases the burden on the researcher." }, { "figure_ref": [], "heading": "Data Mixing", "publication_ref": [], "table_ref": [], "text": "Noisy web-crawled data is useful for translation quality, but including it too early in the training may lead to model divergence. Furthermore, dirty data is orders of magnitude more available than clean, manually curated parallel data. Without any upsampling, clean data might be overshadowed by dirty data, but upsampling is wasteful in terms of disk space. Finally, multilingual models require careful data mixing such that low resource languages are not overwhelmed by high resource ones; without a training scheduler that supports data source mixing, this is achieved by upsampling low resource data and carefully mixing and shuffling it in the training data." }, { "figure_ref": [], "heading": "Data Augmentation", "publication_ref": [], "table_ref": [], "text": "Machine translation models are trained on sanitised parallel data that is usually not representative of noisy user input:\n• Typos are quite rare in clean data, and a spellchecker is often used on web-crawled data.\n• All caps and title-case text are often missing.\n• Emoji are basically non-existent in parallel data.\n• Models are not trained to cope with untranslatable tokens, which should be copied between the source and the target language.\nOpusTrainer and OpusCleaner are designed to resolve the above issues and make it easy for a novice user to build high quality translation systems, by explicitly setting the expectations that training data must be carefully audited and training data must be scheduled."
}, { "figure_ref": [ "fig_0" ], "heading": "OpusCleaner", "publication_ref": [], "table_ref": [], "text": "In order to address the daunting task of data cleaning, we developed OpusCleaner, a single graphical frontend that does data downloading and cleaning, while being modular to allow for custom modifications depending on the language pair in question. We show screenshots of the welcome screen in Figure 1. Additionally, adding one's own custom datasets is possible." }, { "figure_ref": [ "fig_2" ], "heading": "Data Cleaning", "publication_ref": [], "table_ref": [], "text": "Once all datasets are acquired, we can navigate to the Data Tailor screen (Figure 4), where we can label every dataset with an arbitrary label (such as medium or dirty) so that we can keep track of the overall quality of each dataset." }, { "figure_ref": [ "fig_3", "fig_4", "fig_5", "fig_6" ], "heading": "Filter and preview", "publication_ref": [], "table_ref": [], "text": "For each dataset, we visualise a sample of 3000 sentences that includes the first 100, the last 100 and random lines in between. From this window we can identify the idiosyncrasies of that dataset and add the appropriate filters to fix them. For example, if we spot that some lines are in the wrong language (Figure 5), we can add a language identification filter and see its result in the preview window (Figure 6).\nAnother example is finding mismatched punctuation on the source and the target (Figure 7). We can then create a simple filter that fixes the issue, apply it, and see the result (Figure 8)." }, { "figure_ref": [ "fig_7" ], "heading": "Filters and pipelines", "publication_ref": [], "table_ref": [], "text": "OpusCleaner is designed to clean data in a pipelined manner. Multiple filters are chained, where every filter receives data on stdin and outputs it on stdout. OpusCleaner itself takes care of managing the pipeline. A typical pipeline would have a number of filters chained up as shown in Figure 9.\nWe support 28 built-in filters, with custom user filters supported by simply providing a JSON configuration file that specifies the path to the filter executable and, optionally, what arguments it should have." }, { "figure_ref": [], "heading": "Processing all data", "publication_ref": [], "table_ref": [], "text": "Once we have determined filters for every single downloaded dataset, we run a command line utility that does batch processing of all datasets, taking care of cutting up files and parallelising processing. Once all processing is done, we provide a utility to deduplicate the data while preserving the split of datasets, and then the user can proceed with training the machine translation system.\nOpusCleaner is open source, under active development and available for free for anyone to use.\n3 OpusTrainer\nAs discussed in Section 1, training high quality machine translation systems requires carefully combining parallel data from different sources and quality levels, applying on-the-fly modifications to it, and more.\nThis is challenging to achieve with neural network toolkits that make use of static training data, because ideally we want to modify the data mixture and potentially augment it on the fly, without having to prepare the data first and write it to disk, which is wasteful."
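As a rough illustration of the kind of on-the-fly mixing described above, the sketch below streams a weighted mix of several corpora to stdout without materialising an upsampled corpus on disk. It is only an illustration of the idea, not OpusTrainer's actual implementation or configuration format:

import random
import sys

def stream_mix(corpora, weights, num_lines, seed=1):
    """Write a weighted mix of lines from several corpora to stdout.

    corpora:   list of file paths (one sentence pair per line)
    weights:   relative sampling weights, e.g. [0.7, 0.2, 0.1]
    num_lines: how many lines of mixed data to emit
    """
    rng = random.Random(seed)
    handles = [open(path, encoding="utf-8") for path in corpora]
    for _ in range(num_lines):
        i = rng.choices(range(len(handles)), weights=weights)[0]
        line = handles[i].readline()
        if not line:                 # corpus exhausted: rewind, which effectively upsamples it
            handles[i].seek(0)
            line = handles[i].readline()
        sys.stdout.write(line)
    for h in handles:
        h.close()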
}, { "figure_ref": [], "heading": "Multilingual model training", "publication_ref": [ "b1" ], "table_ref": [], "text": "The problem is exacerbated when training many-to-many or English-to-many multilingual models, where high resource languages often have orders of magnitude more data than low resource languages. In order for a multilingual model to train well in this setting, it needs to see balanced data from all languages [Freitag and Firat, 2020]. Doing this by concatenating and upsampling data (in order to get equal amounts of data seen for all languages) would waste multiple terabytes of disk space." }, { "figure_ref": [], "heading": "Data Scheduling", "publication_ref": [], "table_ref": [], "text": "OpusTrainer solves this problem by streaming and mixing data from multiple sources. OpusTrainer uses a simple YAML configuration file where the user can declare all of their data sources and a desired mix of them for different stages of training. OpusTrainer then reads in the data from the different sources and outputs the desired mix to stdout. OpusTrainer is meant to be used with neural network toolkits that read their training input on stdin." }, { "figure_ref": [], "heading": "Data Augmentation", "publication_ref": [], "table_ref": [], "text": "Humans are very robust to decoding noisy texts, but this can pose a major challenge to machine translation systems due to the way we collect our training data:\n• Title Case and Upper Case parallel data is quite rare in training data, and is sometimes regularised during acquisition.\n• Typos are also comparatively rare in training data, because either we use clean sources or we perform spellchecking on web crawled sources.\n• Emojis, which human readers expect to be copied over from the source to the target, are not seen during training, because lines containing emojis are typically removed from the training data at preprocessing steps.\nIn order to alleviate these issues, OpusTrainer provides multiple data modifiers which can be applied on the fly, at random, on the training data:\n• UpperCaser and TitleCaser\n• Typo modifier, which inserts typos in words during training\n• Merge modifier, which randomly merges several input sentences together to help the model be more robust to longer sentences.\n• Noise modifier, which generates random sentences consisting of unicode noise, identical on both the source and the target side. This modifier teaches the model to copy unknown strings to the target side.\n• Inline Noise modifier: a more complicated version of the above that uses word alignments in order to inject noisy unicode characters (including emoji) in approximately the same logical place on both the source and the target side. This modifier teaches the model that sequences of unknown (<unk>) characters should just be copied on the target side." }, { "figure_ref": [ "fig_10", "fig_1" ], "heading": "Terminology", "publication_ref": [ "b0" ], "table_ref": [], "text": "OpusTrainer is able to leverage word alignment information to produce terminology-augmented systems, precisely like the one described in Bogoychev and Chen [2023]. This is achieved by finding bijective word alignment mappings between the source and the target sentences and randomly injecting terminology hints in the source, precisely like the example shown in Figure 12.\nThese terminology hints can then be used at inference time, and the model will know how to incorporate the desired terminology hint on the target side.
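The sketch below illustrates this kind of terminology hint injection using precomputed word alignments; the __target__/__done__ markers follow the example in Figure 12, but the alignment format, probability value and helper shown here are illustrative assumptions rather than OpusTrainer's exact behaviour:

import random

def add_terminology_hints(src_tokens, tgt_tokens, alignments, p=0.05, rng=random):
    """Append a target-side hint after randomly chosen source words.

    alignments: list of (src_index, tgt_index) pairs; only indices that are
    aligned one-to-one in both directions (bijective) are used as hints.
    """
    src_counts, tgt_counts = {}, {}
    for s, t in alignments:
        src_counts[s] = src_counts.get(s, 0) + 1
        tgt_counts[t] = tgt_counts.get(t, 0) + 1
    bijective = {s: t for s, t in alignments if src_counts[s] == 1 and tgt_counts[t] == 1}

    out = []
    for i, token in enumerate(src_tokens):
        out.append(token)
        if i in bijective and rng.random() < p:
            out.extend(["__target__", tgt_tokens[bijective[i]], "__done__"])
    return " ".join(out)

# Hypothetical usage (alignment indices are assumed):
# add_terminology_hints("Where is the airport ?".split(),
#                       "Wo ist der Flughafen ?".split(),
#                       [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)], p=1.0)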
The relevant training options are shown in Figure 13. OpusTrainer is open source and available on GitHub, with ample documentation and examples.\nOpusTrainer is designed to be used mainly with neural network toolkits that read in training input on stdin, as it takes care of shuffling between epochs, resuming training and all other functions normally done by the data module of a neural network toolkit. It can, however, also be used to write a preprocessed training corpus to disk so that toolkits that do not support reading from stdin can also make use of it.\nFigure 12: Terminology augmentation in practice. Where is the airport? ↔ Wo ist der Flughafen? becomes Where is the airport __target__ Flughafen __done__? ↔ Wo ist der Flughafen? During training it is hinted that the target word Flughafen corresponds to Airport, so that at inference, when providing the model with terminology hints, it will know how to incorporate them in the output." }, { "figure_ref": [], "heading": "Case study: A Robust French-English system", "publication_ref": [], "table_ref": [], "text": "We highlight the use cases of data augmentation by using OpusCleaner and OpusTrainer to train a French-English machine translation system. We define robustness according to the following criteria, which are all common concerns for real-world web text.\n• Accurate translation of URLs (URLs need to be copied to the target side without any modification).\n• Accurate copy behaviour on OOV tokens such as emoji or snippets of foreign language texts. The latter often occur in Wikipedia, where foreign language terms such as named entities appear alongside their local language transliteration.\n• No quality loss when translating Upper Case and Title Case texts compared to normally cased text (all caps and title case often appear in titles of newspapers).\n• Robustness to typos (social media users)." }, { "figure_ref": [], "heading": "Test set design", "publication_ref": [ "b5" ], "table_ref": [], "text": "As a baseline test set we use newstest15, and we make several versions of it to more accurately measure robustness.\n• Title Case version of the test set.\n• All caps version of the test set.\n• Typo-ed version of the test set, where we insert 4 typos in each line using Python's typo library.\n• Emoji augmented test set, where we insert random emoji in corresponding places on the source and the target, by using precomputed word alignments in order to place the emoji in both texts in the correct corresponding location. Example in Figure 14.\n• Random unicode sequence augmented test set, where the random unicode sequences are inserted in the same manner as the emoji. Example in Figure 14.\nOn top of that, we prepare a dataset of sentences containing URLs from the ParaCrawl project. We take sentences containing exactly the same URLs on both the source and the target, then we remove the URLs, take the top 1500 sentences according to their bicleaner-ai [Zaragoza-Bernabeu et al., 2022] score, and reinsert the URLs.\nFor quality we report BLEU, but we also use several specific metrics. For the URL test set we measure the percentage of exact matches of URLs. For datasets with title case and all caps we also measure uncased BLEU to see how good translation quality is, regardless of the output case.
Finally, for datasets with emoji and unicode sequences, we extract all of the OOV characters and measure ChrF [Popović, 2015] on them only, so that we can see how effective our system is at copying them to the target side." }, { "figure_ref": [], "heading": "Model", "publication_ref": [ "b2", "b3", "b8", "b4" ], "table_ref": [], "text": "For training data we use all of the available French-English data accessible through MTData [Gowda et al., 2021], and we clean it using OpusCleaner.\nWe split the data into four categories based on its provenance and subjective perceived quality through manual inspection:\n• Canonically clean datasets such as Europarl and UN are designated as clean (22M parallel sentences).\n• Slightly less clean data (9M) is designated as cleanish.\n• Data that is not clean, but not generated from crawled sources (16M), is designated as medium.\n• Web crawled data is designated as dirty (363M).\nWe use Marian [Junczys-Dowmunt et al., 2018] to train transformer-big Vaswani et al. [2017] models on the training data with varying degrees of data augmentation. We train 7 different models with various additions, some related to data augmentation, some not, in order to show how we progressively achieve a more robust model.\n1. Pure model\n2. + Sentencepiece sampling [Kudo and Richardson, 2018]. Sentencepiece sampling makes splits of words non-deterministic, potentially making the handling of unseen words more robust.\n3. + UpperCase and LowerCase\n4. + typos\n5. + Unicode Vocabulary Fallback. Sentencepiece models can't split OOV tokens such as Chinese characters into subwords, but if we consider that every character is represented by unicode bytes, we can split unseen characters such as emoji and hanzi.\n6. + noisy sentences\n7. + inline noise" }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "We present our results in Table 1. We train 7 different systems with different degrees of augmentation. We can see that, progressively, as we add more modifiers to the training setup, the model becomes more robust to various sources of noisy user input. Systems 3 onwards capture TitleCase and UpperCase with relatively small performance loss compared to plain sentences. System 5, which uses UTF fallback for OOV tokens, starts capturing emoji and other OOV tokens. Systems 6 and 7 enhance the training data with lots of noisy examples, and that leads to a very good copy rate of OOV tokens to the target side, as shown in the two ChrF columns." }, { "figure_ref": [], "heading": "Caveats", "publication_ref": [], "table_ref": [], "text": "There are some caveats that come with our test results. The more modifiers are used, the more difficult the training data seems to be to model, and therefore it takes more iterations through the training data to achieve convergence. Therefore, all models presented have seen different amounts of training data. We will control for this setting in future work.\nFurthermore, we see a slight degradation in translation quality on the plain test set when we add modifications to the training data. This suggests that the gains we obtain are not entirely for free. Finally, we observe a slight deterioration on URLs. We measure only exact matches on URLs because an almost correct URL is not useful. This regression warrants further investigation."
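As a concrete reference for the OOV-copy metric described in the test set design, the sketch below keeps only the characters that fall outside a given vocabulary and computes a character n-gram F-score on them. It is a simplified stand-in for the ChrF computation, and the vocabulary set is an assumption:

from collections import Counter

def ngram_counts(chars, n):
    return Counter(tuple(chars[i:i + n]) for i in range(len(chars) - n + 1))

def oov_copy_score(hypothesis, reference, vocab_chars, max_n=6, beta=2.0):
    """Character n-gram F-score computed only on out-of-vocabulary characters.

    vocab_chars is assumed to be the set of characters the model's vocabulary
    covers; everything else (emoji, foreign scripts, ...) is treated as OOV.
    """
    hyp = [c for c in hypothesis if c not in vocab_chars]
    ref = [c for c in reference if c not in vocab_chars]
    if not ref:
        return None  # nothing to copy in this sentence
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        h, r = ngram_counts(hyp, n), ngram_counts(ref, n)
        overlap = sum((h & r).values())
        precisions.append(overlap / max(sum(h.values()), 1))
        recalls.append(overlap / max(sum(r.values()), 1))
    p, r = sum(precisions) / max_n, sum(recalls) / max_n
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)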
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b0", "b6" ], "table_ref": [], "text": "We present a feature-complete data preprocessing and data scheduling toolkit for training machine translation systems (but also just as useful for Large Language Models). Our tools are designed with both novices and experts in mind, so that they lower the entry barrier to the field of machine translation while still allowing for state-of-the-art results. Our data augmentation utilities are crucial for producing robust machine translation systems, as well as terminology-aware systems [Bogoychev and Chen, 2023].\nOur toolkit was developed concurrently with and independently of Sotastream [Post et al., 2023] and provides similar functionality." } ]
Developing high quality machine translation systems is a labour-intensive, challenging and confusing process for newcomers to the field. We present a pair of tools, OpusCleaner and OpusTrainer, that aim to simplify the process, reduce the amount of work and lower the entry barrier for newcomers. OpusCleaner is a data downloading, cleaning, and preprocessing toolkit. It is designed to allow researchers to quickly download, visualise and preprocess bilingual (or monolingual) data that comes from many different sources, each of them with different quality, issues, and unique filtering/preprocessing requirements. OpusTrainer is a data scheduling and data augmenting tool aimed at building large scale, robust machine translation systems and large language models. It features deterministic data mixing from many different sources, on-the-fly data augmentation and more. Using these tools, we showcase how they can be used to create a high quality machine translation model robust to noisy user input, multilingual models, and terminology-aware models.
OpusCleaner and OpusTrainer, open source toolkits for training Machine Translation and Large language models
[ { "figure_caption": "Figure 1 :1Figure 1: Initial screen of OpusCleaner", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Search dataset pane", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Initial screen of data tailoring, as well as dataset labelling.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Initial view of dataset cleaning with some sentences obviously in the wrong language.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Fasttext langid filter removes lines in wrong language.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Mismatched punctuation on the source and the target.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Fixing mismatched punctuation.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Adding multiple filters and visualising the difference.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: OpusTrainer basic configuration defining the data scheduling for training a model.", "figure_data": "", "figure_id": "fig_8", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Different modifiers specified in YAML format to be used during training.", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Tag modifier is used to add terminology hints to the source during training. Values of 3% to 7% seem to work well in practise.", "figure_data": "", "figure_id": "fig_10", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: Example cases of noise/emoji inside the source and the corresponding target translation.We aim for our model to be able to reproduce those at decode time.", "figure_data": "", "figure_id": "fig_11", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Results table1 ChrF score was calculated on the noise/emoji only, meaning we only measure how well our model copies just OOV tokens without considering translation quality.", "figure_data": "", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
Nikolay Bogoychev; Jelmer Van Der Linde; Graeme Nail; Barry Haddow; Jaume Zaragoza-Bernabeu; Gema Ramírez-Sánchez; Lukas Weymann; Tudor Nicolae Mateiu; Jindřich Helcl; Mikko Aulamo
[ { "authors": "Nikolay Bogoychev; Pinzhen Chen", "journal": "", "ref_id": "b0", "title": "Terminology-aware translation with constrained decoding and large language model prompting", "year": "2023" }, { "authors": "Markus Freitag; Orhan Firat", "journal": "", "ref_id": "b1", "title": "Complete multilingual neural machine translation", "year": "2020-11" }, { "authors": "Thamme Gowda; Zhao Zhang; Chris Mattmann; Jonathan May", "journal": "", "ref_id": "b2", "title": "Many-to-English machine translation tools, data, and pretrained models", "year": "2021-08" }, { "authors": "Marcin Junczys-Dowmunt; Roman Grundkiewicz; Tomasz Dwojak; Hieu Hoang; Kenneth Heafield; Tom Neckermann; Frank Seide; Ulrich Germann; Alham Fikri Aji; Nikolay Bogoychev; F T André; Alexandra Martins; Birch", "journal": "", "ref_id": "b3", "title": "Marian: Fast neural machine translation in C++", "year": "2018-07" }, { "authors": "Taku Kudo; John Richardson", "journal": "", "ref_id": "b4", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018-11" }, { "authors": "Maja Popović", "journal": "", "ref_id": "b5", "title": "chrF: character n-gram F-score for automatic MT evaluation", "year": "2015-09" }, { "authors": "Matt Post; Thamme Gowda; Roman Grundkiewicz; Huda Khayrallah; Rohit Jain; Marcin Junczys-Dowmunt", "journal": "", "ref_id": "b6", "title": "Sotastream: A streaming approach to machine translation training", "year": "2023" }, { "authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "", "ref_id": "b7", "title": "Improving neural machine translation models with monolingual data", "year": "2016-08" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b8", "title": "Attention is all you need", "year": "2017" }, { "authors": "", "journal": "Curran Associates Inc", "ref_id": "b9", "title": "", "year": "" }, { "authors": "\" Jaume; Gema Zaragoza-Bernabeu; Marta Ramírez-Sánchez; Sergio\" Ortiz Bañón; Rojas", "journal": "", "ref_id": "b10", "title": "bicleaner AI: Bicleaner goes neural", "year": "2022-06" } ]
[]
10.21227/w3aw-rv39
2023-11-24
[ { "figure_ref": [ "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b12", "b3", "b2" ], "table_ref": [], "text": "Chest angiogram was performed according to the pulmonary thromboembolism protocol and showed normal pulmonary arteries with no filling defects. Dull glass lesion and consolidative area in the right upper lobe suggested pneumonia... Lower lung atelectasis with probable left lower lobe pneumonia. Mild edema difficult to exclude. PA and lateral views of the chest provided. Airspace consolidation in the left lower lung is concerning for pneumonia likely within the left lower lobe… In recent years, the field of medical image analysis has witnessed significant advancements, largely driven by the application of deep learning techniques and the increasing availability of medical imaging data. Notably, Visual-Language Pre-training (VLP) (Huang et al., 2021;Boecking et al., 2022;Bannur et al., 2023) attracts lots of attention, as it reduces the need for costly and time-consuming manual annotations by leveraging the vast amount of information in radiology reports and unlabeled data. Despite these success, further expanding the data scale for medical VLP remains non-trivial, because the availability of single-modality medical images is limited, especially when compared to the general domain. This introduces a strong need to integrate multimodality medical images (e.g., X-rays, Computed Tomography (CT) and Magnetic Resonance Learning a common semantic space for different medical images is non-trivial, even with language guidance, and UniMedI can well handle this integration. We use circles to highlight differences between different images.\nImaging(MRI)) within a unified VL framework. However, fully leveraging the information across multi-modal images within this VL framework is unexplored.\nOn the above aspect, the inherent heterogeneity of medical imaging from different modalities obstructs their effective integration. One obvious and important problem is that medical images have different dimensions. For example, X-rays are 2D images, while CT scans are 3D images. To tackle this challenge, we start from the following key observation: despite big differences, medical images from various modalities share a common semantic latent space, which captures the underlying features of an individual's health status, and such status are reflected in medical reports via language. As shown in Fig. 1, the X-ray and CT scan can contribute to a comprehensive understanding of pneumonia, reflecting the commonality within the latent space, and these abnormalities are listed in reports. This observation motivate us to map data from various medical image modalities into the shared semantic space, which is guided by language in reports. This strategy not only tackles data-related issues but also fosters synergy and collaboration among distinct modalities, ultimately resulting in a more holistic understanding of an individual's health condition.\nHowever, creating a unified model that effectively maps data from different sources into a common space for combined learning is challenging, even with language guidance in reports. Figure 2a demonstrates the representation space of two distinct modalities with different dimensions (i.e., 2D X-rays and 3D CT scans) when trained individually via VLP. They are far apart in the representation space, even with same pathological information in reports. Furthermore, Figure 2b shows simply unifying them in one model does not solve the problem. 
Although the distance between representations of two modalities are shortened to some extent, their representations remain insufficiently compact, since only little space are shared between them.\nTo address the above challenge, we propose UniMedI, a novel Unified VL framework, designed to effectively integrate Medical multi-modal Images into a language-guided common semantic space. First, under the dilemma that paired 2D and 3D medical images are unavailable, and naively integration is not effectively as we shown above, we first design an attentive selection method to accurately identify text-relevant 2D slices without extra annotations. This builds a data bridge between 2D and 3D medical images. Then, we devise a cross-dimensional VLP method to bring both 3D data and selected 2D slices closer to the same report representation space, constructing a unified VL framework. Moreover, we introduce a self-distillation technique using a teacher-student structure and construct a masking and recovery task, further enhancing the associations between 2D and 3D data within the image space. Figure 2c shows UniMedI significantly reduces the distance between 2D and 3D features after undergoing our effective design for cross-dimensional pre-training.\nTo further demonstrate the effectiveness of our approach, we conduct extensive visualizations and experiments to showcase the working mechanisms and superior representational capabilities of our model. We evaluate our UniMedI framework on 10 real-world medical datasets and various downstream tasks (i.e., classification, segmentation and retrieval). The results consistently show superior performance, regardless of whether UniMedI is applied to full-scale data or limited data scenarios. We also provide visualizations on regions and slices selected by UniMedI, verifying our claim that UniMedI can identify key information from both 2D and 3D medical images." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b22", "b27", "b30", "b20", "b34", "b12", "b28", "b17", "b4" ], "table_ref": [], "text": "Medical Self-supervised Learning In the domain of medical image analysis, a number of selfsupervised learning (SSL) techniques have been developed to exploit the unique characteristics of medical data. These methods construct feature embedding spaces by designing pre-text tasks, such as solving jigsaw puzzles Noroozi & Favaro and inpainting tasks Pathak et al. (2016). Recently, researchers have explored the use of 3D convolutional neural network (CNN) architectures while retaining established SSL tasks on 2D CNNs Tang et al. (2022). However, the diversity of medical data poses a significant challenge, as the development of a unified visual representation that adequately captures the intricacies of different data types remains a crucial yet complex task that requires further investigation. To address this challenge, Xie et al. (2022) proposed Unimiss, a universal medical self-supervised representation learning framework that overcomes the dimensionality barrier. Furthermore, Nguyen et al. (2023) introduced Joint, an SSL framework capable of accommodating various data dimensions and generating versatile pre-trained weights for both 2D and 3D downstream applications. These approaches have made notable contributions to handling data from different modalities. 
However, they have given relatively less attention to the relationships and connections between different types of medical data.\nMedical Vision-Language Processing Medical Vision-Language Processing (VLP) has emerged as a promising approach for learning medical visual representations by leveraging naturally occurring paired descriptive text Zhang et al. (2022). Huang et al. (2021) propose Gloria, an attentionbased framework that contrasts image sub-regions and words in the paired report to learn global and local representations. Wang et al. (2022) further optimize the framework from the perspective of disease in their method MGCA. These methods exhibit remarkable performance in various downstream tasks involving medical images. However, the application of medical VLP is primarily limited to 2D images, mainly due to the limited availability of extensive 3D medical image-text datasets. Compared to 2D medical image-text pairs, 3D images and reports contain more abundant information, which offers clear advantages for learning visual representations. While some methods Liu et al. (2023); Chen et al. (2023) attempt to address this limitation by converting 3D data into 2D slices and subsequently employing generative models to generate captions for 3D medical data, this approach results in a loss of the original 3D volume structure information. Therefore, it is imperative to develop strategies that can effectively harness the valuable information present in 3D images and reports while preserving the structural integrity of the data. This will facilitate the enhancement of the learning process for visual representations in medical VLP." }, { "figure_ref": [], "heading": "METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "Figure 3 illustrates UniMedI and its designs to realize integration of 2D and 3D medical images. Generally, to overcome the challenges that no paired 2D and 3D image data exists, UniMedI employes the following pipeline. When the input is a 3D volume, we first extract a portion of 2D slices from it which most relevant to the report, and then regard the selected slices as 2D image. Those selected 2D slices are fed into the network along with the original 3D volume, allowing us to jointly learn the relationships between 2D, 3D, and radiology reports, and ultimately form a unified feature space. When the input is a 2D image, the slice selection process is omitted.\nIn Section 3.1, we demonstrate our designed attentive slice selection method, which can identify more relevant 2D slices in 3D data related to the report text, helping us learn the unified space between 2D and 3D data guided by report. In Section 3.2, we design a method to bring together 3D data and selected 2D slices closer to the same report representation, which serves as the foundation for our language-guided construction of a unified model. In Section 3.3, we design a self-distillation technique to EMA teacher for the visual encoder, constructing image-level and patch-level contrastive learning tasks, further enhancing the connection between 2D and 3D data." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "ATTENTIVE SLICE SELECTION", "publication_ref": [], "table_ref": [], "text": "In order to construct a cross-modal unified representation space, we have chosen language as the bridge. Therefore, we need to extract key information from various image modalities that correspond to the information in medical reports. 
Particularly, the important 2D slices relevant to the report should be selected from the 3D volume. This process is similar to how doctors view CT scans; they also base their report descriptions on some important slices.\nFigure 3: Illustration of the proposed UniMedI framework. To effectively integrate multi-modal medical images in a language-guided common semantic space, UniMedI incorporates the following designs. (1) A strategy of attentive slice selection from the 3D volume to bridge 2D and 3D images even without paired 2D and 3D data (details shown in Fig. 4). The concatenated inputs of 2D and 3D allow us to perform joint modeling across dimensions. (2) A shared backbone E_v for 2D and 3D images and separate tokenizers T_2D and T_3D. (3) Language guidance for the unified image representation, provided by the language encoder E_l and the vision-language loss L_vl. (4) Self-distillation (implemented by the image contrastive learning loss L_icl and the patch contrastive learning loss L_pcl) to enhance interactions between image tokens from different modalities. The distillation target comes from the teacher network E_v, which is updated as an exponential moving average (EMA) of the student network E_v.\nAs shown in Figure 4, in order to better locate the lesion-related 2D slices in the 3D data, we use the attention weights of the [CLS] token in the EMA teacher as the basis for the calculation. The visual encoder's [CLS] token is directly supervised by the radiology report features from the language encoder, reflecting the most likely lesion areas described in the report. The attentive score at token location P is\ns_P = \frac{1}{HL} \sum_{l=1}^{L} \sum_{h=1}^{H} \mathrm{Softmax}\left( \frac{f_{lh}^{q}(\mathrm{CLS}) \cdot f_{lh}^{k}(P)}{\sqrt{C}} \right), \quad (1)\nwhere l denotes the layer index; h denotes the attention head index; f_{lh}^{q}(\mathrm{CLS}) denotes the query embedding of the [CLS] token at Layer l and Head h; f_{lh}^{k}(P) denotes the key embedding at Layer l and Head h for a 3D image token at location P; and C is the number of channels of the query and key embeddings.\nThe important-slice selection strategy is based on these token-level scores. Each token in the original CT volume represents a small voxel. By aggregating the scores along the slice dimension, we can calculate the total score for each group of slices:\ns_i = \frac{1}{N} \sum_{j=1}^{N} s_{P_{ij}}, \quad (2)\nwhere s_i is the attentive score of the i-th slice, s_{P_{ij}} is the token-level attentive score of the j-th voxel in the i-th slice, and N is the total number of voxels included in a slice. After aggregating the attentive scores, we obtain text relevance scores for each 2D slice. We then choose the top k slices to establish a connection with the 3D data and the report, allowing us to learn a shared feature space." }, { "figure_ref": [], "heading": "CROSS-DIMENSIONAL MEDICAL VISUAL-LANGUAGE PRETRAINING.", "publication_ref": [], "table_ref": [], "text": "We use the CLIP Radford et al. (2021) loss for cross-modal pre-training of 2D and 3D medical images and their corresponding reports. CLIP is a powerful tool that enables the alignment of features from two modalities after large-scale contrastive learning. For 2D X-ray training, we directly use T_2D and E_v for feature extraction, obtaining the global image feature [CLS] token, and then align it with the language encoder E_l's [CLS] token. For the training of 3D CT scan data, the 2D slices within a volume also carry the content of the same radiology report, so we select attentive 2D slices according to the method in Section 3.1 as joint input. 
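To make the attentive slice selection of Eqs. (1)-(2) concrete, the following is a minimal PyTorch-style sketch. The tensor layout, the grouping of tokens into slice groups, and the helper names are our own illustrative assumptions, not the authors' released implementation.

import torch

def attentive_slice_scores(cls_attn, num_slices):
    # cls_attn: [L, H, T] attention weights of the [CLS] query over the T 3D image tokens,
    # taken from every layer l and head h of the EMA teacher (the Softmax(q.k/sqrt(C)) term
    # of Eq. (1), already normalised inside the transformer).
    token_scores = cls_attn.mean(dim=(0, 1))          # Eq. (1): average over the L layers and H heads
    token_scores = token_scores.view(num_slices, -1)  # assumes T splits evenly into num_slices groups
    return token_scores.mean(dim=-1)                  # Eq. (2): average the voxel tokens of each slice

def select_top_k_slices(volume, cls_attn, num_slices, k):
    # volume: [D, H, W] CT volume; returns the slice groups most relevant to the report.
    scores = attentive_slice_scores(cls_attn, num_slices)
    top = scores.topk(k).indices.sort().values.tolist()
    group = volume.shape[0] // num_slices
    selected = torch.cat([volume[i * group:(i + 1) * group] for i in top], dim=0)
    return selected, top

The selected slices would then be tokenized by T_2D and concatenated with the 3D tokens to form the joint input described above.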
Through this approach, we bring the 2D slice features and the 3D features closer to the same language encoder's features, using radiology reports as a medium to form cross-dimensional interactions.\nA highlight of our work is the use of attentive slice selection to ensure that the selected 2D slices are sufficiently representative. Only in this way can these 2D slices carry the supervision information from the report and, together with the 3D features, construct a joint feature space. If we were to use random selection, it would be easy to cause mismatches between the visual and textual information, and the resulting noise would make the model's understanding of the 2D data very confusing. Once the common coordinates from the report are no longer accurate, it would not be possible to effectively form a cross-dimensional information bridge." }, { "figure_ref": [], "heading": "ENHANCING DIMENSIONAL INTERACTIONS VIA SELF-DISTILLATION", "publication_ref": [ "b32", "b36" ], "table_ref": [], "text": "In Section 3.1, we introduced the method for selecting 2D slices that can share the same report. Then, in Section 3.2, we aligned them across dimensions using text as shared coordinates for visual-textual training. In fact, apart from using text as a medium, the projected representative 2D slice features and the 3D features with global information also possess strong correlations. We aim to construct an auxiliary task that directly leverages this correlation, further enhancing the cross-dimensional communication.\nWe adopt a simple and straightforward auxiliary task design: mask and recovery. We choose the self-distillation method for its implementation Yang et al. (2023); Zhou et al. (2021), due to its simplicity and effectiveness. During the training process, we mask a certain proportion of both 2D and 3D tokens in the online encoder, while keeping the complete input in the EMA encoder. This non-trivial task therefore requires predicting the EMA encoder's features directly from the online encoder, as a significant amount of information is missing. For both the 2D and 3D recovery tasks, the model has to learn the correlation with the other modality to obtain more reference information, thus directly strengthening the interaction between 2D and 3D features within the network.\nSimilarly, during the token masking phase, we also employ the attentive selection design. While passing through the EMA encoder, we calculate the patch scores as described in Equation 1 and retain the portion with the highest scores. This approach minimizes the disruption of important lesion structures, thereby avoiding ambiguity and making the cross-modal interaction more meaningful.\nDuring the feature distillation process, we utilize the head and loss from BYOL Grill et al. (2020). We apply this loss to both the global [CLS] tokens and all local patch tokens of the output 2D and 3D features, thereby enabling interaction at different granularities to enhance feature robustness.\nWe conduct extensive experiments to demonstrate the effectiveness of the multi-modal vision representations. In the following subsections, we first present the pre-training experiment settings in Section 4.1 and the two main downstream tasks in Section 4.2. In addition, we compare the performance of our proposed approach with state-of-the-art vision-language processing methods in Section 4.3. Finally, we perform extensive ablation experiments on multi-modal downstream tasks and visualizations to show the validity of each module of our framework. 
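Before turning to the experiments, the training objective of Sections 3.2 and 3.3 can be summarised with a minimal sketch. The temperature, EMA momentum, and loss weights below are illustrative placeholders rather than values reported by the paper, and the losses are shown in a simplified form.

import torch
import torch.nn.functional as F

def vl_loss(img_cls, txt_cls, temperature=0.07):
    # CLIP-style symmetric contrastive loss (L_vl) between the visual [CLS] tokens
    # (2D slices / 3D volume) and the report [CLS] tokens from the language encoder.
    img = F.normalize(img_cls, dim=-1)
    txt = F.normalize(txt_cls, dim=-1)
    logits = img @ txt.t() / temperature
    labels = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

def distill_loss(student_feat, teacher_feat):
    # BYOL-style negative cosine similarity used for the self-distillation terms: applied to
    # the global [CLS] tokens (L_icl) and to the patch tokens (L_pcl) of the masked student
    # versus the full-input EMA teacher.
    s = F.normalize(student_feat, dim=-1)
    t = F.normalize(teacher_feat.detach(), dim=-1)
    return (2.0 - 2.0 * (s * t).sum(dim=-1)).mean()

@torch.no_grad()
def ema_update(teacher, student, momentum=0.996):
    # Exponential moving average update of the teacher encoder from the student encoder.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

# Total loss (the weights w_icl and w_pcl are assumptions):
# loss = vl_loss(v_cls, r_cls) + w_icl * distill_loss(s_cls, t_cls) + w_pcl * distill_loss(s_patch, t_patch)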
To our knowledge, UniMiSS Xie et al. (2022) is the state-of-the-art unified method for processing 2D and 3D medical images. We show the performance of both UniMiSS and UniMedI, and our method achieves +22.6%, +2.0%, and +0.8% ACC gains on the CC-CCII dataset compared with UniMiSS under the 1%, 10%, and 100% training ratios, respectively. The significant improvement indicates the data efficiency and effectiveness of our method." }, { "figure_ref": [], "heading": "PRE-TRAINING SETUP", "publication_ref": [], "table_ref": [], "text": "When fine-tuning the whole vision encoder and the linear classification head with full training data, as listed in Table 3, our method achieves the best performance on multiple 3D medical volume classification datasets (CC-CCII and LUNA2016-v2) compared with other methods. Our method achieves 93.8% ACC on the CC-CCII dataset and 95.9% ACC on the LUNA2016-v2 dataset. The remarkable performance shows the generalization of our method to 2D and 3D medical classification tasks and demonstrates that our framework possesses the ability to extract universal features for multi-modal data. " }, { "figure_ref": [], "heading": "RSNA", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "RESULTS ON MEDICAL SEMANTIC SEGMENTATION", "publication_ref": [], "table_ref": [], "text": "Table 6 and Table 7 report the results of semantic segmentation on 2D and 3D medical data. On the 2D semantic segmentation task, our method UniMedI significantly outperforms the current state-of-the-art algorithm, MGCA. When using 1% training data, UniMedI achieves 67.8% Dice, surpassing MGCA by 1.6%. Meanwhile, UniMedI also demonstrates exceptional performance on 3D semantic segmentation tasks. On the BCV dataset, UniMedI achieves 0.6% and 0.4% performance gains under the 20% and 40% label settings compared with UniMiSS. These results underscore the exceptional performance of our method on dense prediction tasks. " }, { "figure_ref": [ "fig_5", "fig_5", "fig_5" ], "heading": "ANALYSIS OF OUR FRAMEWORK", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Visualization To better demonstrate the effectiveness of our language-guided selection process, we visualize the original X-rays, masked X-rays, their corresponding reports, and original CT scans, as well as the selected lesion slices, in Figure 5. On the left side of Figure 5, the first row effectively demonstrates how UniMedI accurately captures the areas referenced in the report, including the \"Normal post-operative alignment of the sternal wires\" and \"Bilateral pleural effusions of mild-to-moderate extent persist\". In addition, the second and third cases adeptly showcase the detection of pleural effusion and scoliosis, further emphasizing the method's precision. The right side of Figure 5 displays the comprehensive slice selection process employed by UniMedI. Amidst the extensive collection of CT scan slices, our method exhibits remarkable accuracy in pinpointing the slices containing lesions. As an example, the presence of pulmonary nodules is clearly noticeable in slices 28-31.\nAblation Study of Component Design We conduct ablation experiments primarily focusing on two aspects: the training mode and the framework modules.\nTraining mode We pre-train our framework separately using only 2D data, only 3D data, and a combination of 2D and 3D data. 
Subsequently, we evaluate the performance on the downstream 2D datasets CheXpert and RSNA and the 3D dataset CC-CCII on the linear classification task, with the results presented in Table 4. It can be observed that the pre-training approach combining 2D and 3D data yields benefits for both single-modal 2D and 3D data classification tasks. In particular, the enhancement achieved with the use of multi-modal data on the 3D dataset is remarkably significant. We obtain improvements of +16.8% ACC, +8.3% ACC, and +9.8% ACC when using 1%, 10%, and 100% of the training data, respectively.\nFramework module In this section, we further analyze the effects of self feature distillation and attentive slice selection on our framework. We conduct a linear classification task on the downstream 2D datasets CheXpert and RSNA, as well as the 3D dataset CC-CCII. The results are summarized in Table 5. The experimental results show that incorporating both self feature distillation and attentive slice selection into our framework significantly improves the performance across all data splits and datasets." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel approach called UniMedI that leverages diagnostic reports as a shared semantic space to create unified representations for diverse modalities of medical images, with a specific emphasis on 2D and 3D images. By using medical diagnostic reports as a bridge, we establish a unified vision-language framework that connects visual medical data across different modalities. Moreover, with the guidance of the text, we effectively extract visual modality information and accurately identify affected areas in 2D images and lesion slices in 3D CT scans, thereby enhancing consistency across various visual data modalities. Extensive experiments demonstrate UniMedI's superior performance on the downstream tasks (classification, segmentation, and retrieval) on various 2D and 3D medical image datasets. We hope our work can promote the exploration of VLP in medical image processing. " }, { "figure_ref": [], "heading": "C.3 DIFFERENT METRICS IN COVIDX", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "We applied two distinct evaluation metrics, namely AUC (Area Under the Curve) and ACC (Accuracy), to assess the performance of our model on the COVIDx dataset. AUC is a widely used metric in machine learning, and it represents the probability that a random positive example will be ranked higher than a random negative example. A higher AUC indicates better model performance. On the other hand, Accuracy (ACC) is a measure of how many predictions a model gets right out of all the predictions it makes. It is calculated as the number of correct predictions divided by the total number of predictions. The results of our evaluation using these metrics on the COVIDx dataset are presented in Table 10. These findings provide insights into the robustness of our model. " }, { "figure_ref": [], "heading": "A MORE IMPLEMENTATION DETAILS OF PRE-TRAINING", "publication_ref": [ "b12", "b7", "b19", "b18" ], "table_ref": [], "text": "A.1 IMPLEMENTATION DETAILS Following GLoRIA Huang et al. (2021), we utilize ViT-B/16 Dosovitskiy et al. (2020) as the vision encoder to extract representations in the common feature space for 2D and 3D visual data. We use BioClinicalBERT Alsentzer et al. (2019) as the text encoder to obtain the report embeddings. The vision encoder and text encoder are shared across 2D X-ray and 3D CT scan data. 
It is worth noting that the patch embedding module of the vision encoder has different operations for 2D X-rays and 3D CT scans. In general, the image size of 2D images is 224 × 224 and the volume size of 3D volumes is 128 × 128 × 32. We pre-train our UniMedI framework for 50 epochs on 8 Tesla V100 GPUs with a batch size of 128. The optimizer is AdamW Loshchilov & Hutter (2017) with a learning rate of 2e-5 and a weight decay of 0.05, where the learning rate follows a linear warmup with a cosine annealing scheduler Loshchilov & Hutter (2016). We initialize the learning rate as 1e-8 and set the warmup epochs to 20." }, { "figure_ref": [], "heading": "B MORE IMPLEMENTATION DETAILS OF DOWNSTREAM TASKS B.1 MEDICAL CLASSIFICATION", "publication_ref": [ "b19", "b33", "b1", "b30" ], "table_ref": [], "text": "2D Medical Image Classification. Except for the fine-tuning of the entire CheXpert dataset, where we use a batch size of 96, we use a batch size of 48 for the rest of the linear classification settings. Similar to the image preprocessing of MIMIC-CXR, we resize the larger dimension to 256 and pad zeros on the smaller side, resulting in an image size of 256 × 256. Then, we randomly crop (for training) or center crop (for validation and testing) an image to 224 × 224 and normalize it into the range [0, 1] as the input for the model. The optimizer used is AdamW Loshchilov & Hutter (2017) with a learning rate of 5e-4 (except for COVIDx, where we use 5e-3) and a weight decay of 1e-6. We fine-tune the image classifier for 50 epochs and implement early stopping when the validation loss does not decrease for 10 consecutive runs. Afterward, we save the checkpoint model with the lowest validation loss for testing.\n3D Medical Image Classification. (1) CC-CCII Zhang et al. (2020) contains a total of 617,775 slices from 6,752 CT scans of 4,154 patients. The task is to classify each volume into three categories: novel coronavirus pneumonia, common pneumonia, and normal. We use a batch size of 8. We resize the 3D volumes to 32 × 128 × 128. We use random flips to augment the training set. The optimizer used is AdamW, and we train the classifier for 50 epochs. (2) LUNA 16 Setio et al. (2017), which is established from LIDC-IDRI Armato III et al. (2011), contains 888 annotated CT scans, after removing the CT scans of the LIDC-IDRI database with a slice thickness greater than 3mm. The task is a binary classification, i.e., classifying each CT volume as containing a pulmonary nodule or normal. The optimizer used is AdamW, and we train the whole network for 100 epochs. Our baseline methods include UniMiSS Xie et al. (2022) and Joint Nguyen et al. (2023), which belong to 2D and 3D co-learning methods. UniMiSS not only learns 2D and 3D representations but also concurrently learns from all 2D sections derived from 3D volumes, along with all 2D X-ray data. Joint directly learns from all 2D sections derived from 3D volumes, along with all 2D X-ray data, through contrastive learning." }, { "figure_ref": [], "heading": "B.2 MEDICAL SEGMENTATION.", "publication_ref": [ "b35", "b12" ], "table_ref": [], "text": "2D Medical Image Segmentation. In the case of the RSNA dataset, we create masks for the pneumonia-affected areas based on the provided bounding boxes. These images and corresponding masks are then resized to dimensions of 224 × 224. To augment the training set, we implement ShiftScaleRotate, encompassing random affine transformations such as translation, scaling, and rotation. 
Following this, the images are normalized to fall within the [0, 1] range before being supplied to the semantic segmentation model. We use the SETR-PUP (progressive upsample) architecture in Zheng et al. (2021), replacing the encoder with UniMedI. We freeze the pre-trained image encoder and only train the decoder portion. The training process involves the use of the AdamW optimizer with a learning rate of 5e-4 and a weight decay of 1e-6. As suggested by Huang et al. (2021), we adopt a combined loss of α × FocalLoss + DiceLoss, with α set to 10. The semantic segmentation model undergoes fine-tuning for 50 epochs, with a batch size of 16 and early stopping implemented if the validation loss ceases to decrease after 10 consecutive runs." } ]
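As a rough illustration of the combined segmentation objective mentioned above (α × FocalLoss + DiceLoss with α = 10), one possible binary-mask formulation is sketched below; the focal parameter γ and the Dice smoothing term are our own assumptions, not details given by the paper.

import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    # Soft Dice loss on sigmoid probabilities for a binary segmentation mask.
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(-2, -1))
    union = prob.sum(dim=(-2, -1)) + target.sum(dim=(-2, -1))
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

def focal_loss(logits, target, gamma=2.0):
    # Focal loss: down-weights easy pixels via the (1 - p_t)^gamma factor.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = torch.exp(-bce)
    return ((1.0 - p_t) ** gamma * bce).mean()

def segmentation_loss(logits, target, alpha=10.0):
    # alpha * FocalLoss + DiceLoss, as used when fine-tuning the decoder on RSNA.
    return alpha * focal_loss(logits, target) + dice_loss(logits, target)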
Vision-Language Pre-training (VLP) has shown its merits for analysing medical images by leveraging the semantic congruence between medical images and their corresponding reports. It efficiently learns visual representations, which in turn facilitates enhanced analysis and interpretation of intricate imaging data. However, such observations have predominantly been made on single-modality data (mostly 2D images like X-rays), and adapting VLP to learn unified representations for medical images in real-world scenarios remains an open challenge. This arises because medical images often encompass a variety of modalities, especially modalities with different numbers of dimensions (e.g., 3D images like Computed Tomography). To overcome the aforementioned challenges, we propose a Unified Medical Image Pre-training framework, namely UniMedI, which utilizes diagnostic reports as a common semantic space to create unified representations for diverse modalities of medical images (especially for 2D and 3D images). Under the text's guidance, we effectively uncover visual modality information, identifying the affected areas in 2D X-rays and the slices containing lesions in sophisticated 3D CT scans, ultimately enhancing the consistency across various medical imaging modalities. To demonstrate the effectiveness and versatility of UniMedI, we evaluate its performance on both 2D and 3D images across 10 different datasets, covering a wide range of medical image tasks such as classification, segmentation, and retrieval. UniMedI demonstrates superior performance on these downstream tasks, showcasing its effectiveness in establishing a universal medical visual representation.
UNIFIED MEDICAL IMAGE PRE-TRAINING IN LANGUAGE-GUIDED COMMON SEMANTIC SPACE
[ { "figure_caption": "Figure 1 :1Figure 1: An example showing X-ray (up) and CT scan (down) both demonstrate similar abnormality, recording in the report.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2: t-SNE visualizations of image representations by models trained with different methods (2D: X-rays, 3D: CT, both modalities denote the same disease, pneumonia.). (a) Two models for different image modalities are trained individually in separate VLP process. (b) One models for different image modalities are trained in one VLP processes, but without designes in UniMedI. (c) UniMedI.Learning a common semantic space for different medical images is non-trivial, even with language guidance, and UniMedI can well handle this integration. We use circles to highlight differences between different images.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Attentive slice selection from 3D volume. Generally, the slice is selected according to the attention weights of the [CLS] token attending to other tokens, and the [CLS] token is also guided by language in the report. We compute the average attention weights within each sliced area, and then select the top K slices with the highest scores.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Visualization of mask and slices selection result under the guidance of language.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 66Figure 6: t-SNE visualizations of image representations by models trained with different methods. (a) Two models for different image modalities are trained individually in separate VLP process. (b) One models for different image modalities are trained in one VLP processes, but without designes in UniMedI. (c) UniMedI.We use circles to highlight differences between different images.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "C. 44VISUALIZATION OF FEATURE QUALITY We add three t-SNE visualization in Figure. 6. Compared to Figure. 2, Figure. 6 add more class (cardiomegaly) to demonstrate the ability to unify different modal representations. We have marked the changes in distances between different modalities in the figure. As shown in Figure. 6, UniMedI effectively reduces the distance between different modalities and stablishes a unified representation space.", "figure_data": "", "figure_id": "fig_7", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Linear classification results on CheXpert, RSNA and COVIDx with 1%, 10%, 100% training data.", "figure_data": "CheXpert(AUC) RSNA(AUC)COVIDx(ACC)Method1% 10% 100% 1% 10% 100% 1% 10% 100%Random Init56.1 62.6 65.7 58.9 69.4 74.1 50.5 60.3 70.0ImageNet Init74.4 79.9 81.4 74.9 74.5 76.3 64.8 78.8 86.3pre-trained on CheXpertDSVE Engilberge et al. (2018)50.1 51.0 51.5 49.7 52.1 57.8---VSE++ Faghri et al. (2017)50.3 51.2 52.4 49.4 57.2 57.9---GLoRIA Huang et al. (2021)86.6 87.8 88.1 86.1 88.0 88.6 67.3 77.8 89.0pre-trained on MIMIC-CXRCaption-Transformer Cornia et al. (2020) 77.2 82.6 83.9------Caption-LSTM Xu et al. (2015)85.2 85.3 86.2------Contrastive-Binary Tan & Bansal (2019) 84.5 85.6 85.8------ConVIRT Zhang et al. (2022)85.9 86.8 87.3 77.4 80.1 81.3 72.5 82.5 92.0GLoRIA-MIMIC Huang et al. 
(2021)87.1 88.7 88.0 87.0 89.4 90.2 66.5 80.5 88.8MGCA (ResNet-50) Wang et al. (2022) 87.6 88.0 88.2 88.6 89.1 89.9 72.0 83.5 90.5MGCA (ViT-B/16) Wang et al. (2022)88.8 89.1 89.7 89.1 89.9 90.8 74.8 84.8 92.3UniMedI (Ours, ViT-B/16)89.4 89.7 90.5 90.0 90.4 91.5 80.3 92.4 94.6", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation study of training mode on linear classification (2D dataset CheXpert, RSNA and 3D dataset CC-CCII) settings. We report Area under ROC curve (AUROC [%]) on CheXpert and RSNA datasets, and (Acc [%]) on CC-CCII dataset. Best results of each setting are in boldface. CheXpert Irvin et al. (2019), which contains 191,229 frontal-view chest radiographs. The task is to classify each image into 5 individual binary labels: atelectasis, cardiomegaly, consolidation, edema, and pleural effusion. Following Zhang et al. (2022); Huang et al. (2021), we hold out the expert-labeled validation set as test data and randomly select 5,000 radiographs from training data for validation. (2) RSNA Pneumonia Shih et al. (2019). We use the stage 2 version, which contains around 29,700 frontal view chest radiographs. The task is a binary classification, i.e., classifying each chest image into normal or pneumothorax positive. Following Huang et al. (2021), we manually split the dataset into training, validation, and test set with 70%/15%/15% ratio. (3)", "figure_data": "Implementation Details Following Gloria Huang et al. (2021), we utilize ViT-B/16 Dosovitskiyet al. (2020) as the vision encoder to extract representations in the common feature space for 2D and3D visual data. We use BioClinicalBERT Alsentzer et al. (2019) as the text encoder to obtain thereport embeddings.CC-CCII 1% 10% 100% 43.4 69.7 74.8 UniMiSS * Xie et al. (2022) 41.6 73.1 84.1 Method Random Init UniMedI * 64.2 75.1 84.9 UniMedI 75.6 84.8 89.4Method supervised ResNet3D101 CovidNet3D-L unsupervised Joint Nguyen et al. (2023) UniMedICC-CCII LUNA 85.5 -88.7 --94.2 93.8 95.9", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "We conduct medical volume classification on two representative datasets: (1) CC-CCIIZhang et al. (2020) and LUNA 16 Setio et al. (2017). More details about the 3D datasets are in Appendix.we use the Linear Classification setting to evaluate the representative ability of our universal visionlanguage pre-training framework. Apart from this, we also apply Classification to evaluate UniMedI for 3D data. Linear Classification freezes the pre-trained ViT vision encoder and only training a randomly initialized linear classification head for the downstream classification task with 1%, 10%, and 100% training data on each classification dataset.We evaluate the segmentation performance with the paradigm that we use the pre-trained vision encoder as a frozen encoder and train a decoder portion using 1%, 10% and 100% training data on RSNA dataset and 20%, 40%, 100% training data on BCV dataset. Dice scores are reported to evaluate the segmentation performance. Table1reports the results of Linear Classification on three 2D medical image classification datasets (CheXpert, RSNA and COVIDx). The results of other methods on CheXpert and RSNA are from original paperWang et al. (2022). The methods including UniMedI shown in the table are pre-trained on MIMIC-CXR dataset, which achieves a fair comparison. 
As for the state-of-the-art method, MGCA, we mainly compare the performance with the MGCA (ViT-B/16) which employs the ViT as the visual encoder. It is obvious that our method shows the best performance in the three 2D medical image classification for the different training data ratio (1%, 10%, 100%), outperforming the state-of-the-art MGCA (ViT-B/16) by a large margin. Specifically, our method outperforms MGCA with ViT-B/16 backbone with +0.6%, +0.6%, +0.8% AUROC on CheXpert dataset, +0.9%, +0.5%, +0.7% AUROC on RSNA dataset and +5.5%, +7.6%, +2.3% ACC on COVIDx dataset under the 1%, 10%, 100% training ratio respectively. The significant improvement indicates the data efficiency and effectiveness of our method.3D Medical Volume ClassificationTable 2 reports the results of Linear Classification on the 2D medical image classification dataset, CC-CCII. We compare UniMedI with UniMiss Xie et al. (2022). To our knowledge, the UniMiSS Xie et al. (", "figure_data": "Training tasksCheXpert (AUC)RSNA (AUC)CC-CCII (Acc)V L F D Attn 1% 10% 100% 1% 10% 100% 1% 10% 100%✓87.4 88.188.5 88.9 89.390.6 72.4 80.086.2✓✓89.0 89.390.1 89.5 90.191.2 74.6 80.986.7✓✓✓89.4 89.790.5 90.0 90.491.5 75.6 84.889.4Medical Semantic Segmentation We conduct experiments to evaluate the performance of ourframework for medical semantic segmentation on RSNA and BCV datasets: (1) RSNA PneumoniaShih et al. (2019), contains 29700 frontal view radiograph. The task is to predict bounding boxes in-", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "3D Semantic Segmentation results on AMOS (Dice [%]). AMOS is fine-tuned with 20%, 40%, 100% training data. Best results are in boldface.plemented if the validation loss ceases to decrease after 10 consecutive runs. The checkpoint model that exhibits the minimum validation loss is then preserved for testing.3D Medical Image Segmentation. In the case of the BCV dataset, the images and correspoinding. The 3D volumes are resized to 48 × 224 × 224. To augment the traning set, we implement random rotation, scaling, flipping, adding white Gaussian noise, Gaussian blurring, adjusting rightness and contrast, simulation of low resolution, and Gamma transformation. We use the UN-ETRHatamizadeh et al. (2022) architecture by replace the encoder with pre-trained UniMedI. We freeze the pre-trained image encoder and only train decoder portion. The training process involves the use of the AdamW optimizer with a learning rate of 1e-4. We adopt a combined loss equation of Dice + CE. The semantic segmentation model fintunes for 25,000 iterations with batch size 2. .1 PNEUMONIA DETECTION IN RSNA We evaluate the localized performance of pre-trained image encoder on RNSA Pneumonia. RSNA contains 29700 frontal view radiograph. The task is to predict bounding boxes indicating evidence of pneumonia. Due to use ViT-B as our bakcbone, it is sufficient to build a simple feature pyramid from a single-scale feature map. Therefore, we evaluate the detection performance byViTDet Li et al. (2022) with using the pre-trained ViT-B as a frozen backbone and only finetuning the nonbackbone layers. Similarly, we finetune the model by 1%, 10% and 100% training data to evaluate the data efficiency.C.2 3D MEDICAL SEGMENTATION IN AMOSAMOS is a large-scale, diverse, clinical dataset for abdominal organ segmentation, which is divided into 200/100 CTs for training/validation. We use the validation set as our test set and the training details is the same as B.2. 
We report the Dice score (%) training with 20%, 40%, and 100% portion.", "figure_data": "RSNAMethod1%10%100%ConVIRT8.25.617.9GLoRIA9.814.818.8GLoRIA-MIMIC11.616.124.8MGCA (ResNet-50)12.916.824.9MGCA (ViT-B)14.718.425.8UniMedI15.519.226.6Table 8: Object detection results (mAP [%]) on RSNA. Each dataset is fine-tuned with 1%, 10%, 100% trainingdata. Best results are in boldface.AMOSMethod20%40%100%UniMiss79.582.385.8UniMedI78.882.986.4", "figure_id": "tab_5", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "On the other hand, Accuracy (ACC) is a measure of how many predictions a model gets right out of all 3D Semantic Segmentation results on AMOS (Dice [%]). AMOS is fine-tuned with 20%, 40%, 100% training data. Best results are in boldface.", "figure_data": "COVIDx (Acc/AUC)Method1%10%100%MGCA74.8/89.084.8/97.092.3/97.9UniMedI80.3/93.592.4/98.194.6/98.1", "figure_id": "tab_6", "figure_label": "10", "figure_type": "table" } ]
Xiaoxuan He; Yifan Yang; Xinyang Jiang; Xufang Luo; Haoji Hu; Siyun Zhao; Dongsheng Li; Yuqing Yang; Lili Qiu
[ { "authors": "Emily Alsentzer; John R Murphy; Willie Boag; Wei-Hung Weng; Di Jin; Tristan Naumann; Matthew Mcdermott", "journal": "", "ref_id": "b0", "title": "Publicly available clinical bert embeddings", "year": "2019" }, { "authors": "Iii Samuel G Armato; Geoffrey Mclennan; Luc Bidaut; F Michael; Charles R Mcnitt-Gray; Anthony P Meyer; Binsheng Reeves; Denise R Zhao; Claudia I Aberle; Eric A Henschke; Hoffman", "journal": "Medical physics", "ref_id": "b1", "title": "The lung image database consortium (lidc) and image database resource initiative (idri): a completed reference database of lung nodules on ct scans", "year": "2011" }, { "authors": "Shruthi Bannur; Stephanie Hyland; Qianchu Liu; Fernando Perez-Garcia; Maximilian Ilse; C Daniel; Benedikt Castro; Harshita Boecking; Kenza Sharma; Anja Bouzid; Thieme", "journal": "", "ref_id": "b2", "title": "Learning to exploit temporal structure for biomedical vision-language processing", "year": "2023" }, { "authors": "Benedikt Boecking; Naoto Usuyama; Shruthi Bannur; C Daniel; Anton Castro; Stephanie Schwaighofer; Maria Hyland; Tristan Wetscherek; Aditya Naumann; Javier Nori; Alvarez-Valle", "journal": "Springer", "ref_id": "b3", "title": "Making the most of text semantics to improve biomedical vision-language processing", "year": "2022" }, { "authors": "Yinda Chen; Che Liu; Wei Huang; Sibo Cheng; Rossella Arcucci; Zhiwei Xiong", "journal": "", "ref_id": "b4", "title": "Generative text-guided 3d vision-language pretraining for unified medical image segmentation", "year": "2023" }, { "authors": "Marcella Cornia; Matteo Stefanini; Lorenzo Baraldi; Rita Cucchiara", "journal": "", "ref_id": "b5", "title": "Meshed-memory transformer for image captioning", "year": "2020" }, { "authors": "Maria De La Iglesia; Jose Vayá; Joaquim Manuel Saborit-Torres; Angel Montell; Elena Serrano; Antonio Oliver-Garcia; Aurelia Pertusa; Miguel Bustos; Joaquin Cazorla; Xavier Galant; Domingo Barber; Francisco Orozco-Beltrán; Marisa García-García; Germán Caparrós; Jose María González; Salinas", "journal": "", "ref_id": "b6", "title": "Bimcv covid-19+: a large annotated dataset of rx and ct images from covid-19 patients", "year": "2021" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b7", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Martin Engilberge; Louis Chevallier; Patrick Pérez; Matthieu Cord", "journal": "", "ref_id": "b8", "title": "Finding beans in burgers: Deep semantic-visual embedding with localization", "year": "2018" }, { "authors": "Fartash Faghri; David J Fleet; Jamie Ryan Kiros; Sanja Fidler", "journal": "", "ref_id": "b9", "title": "Vse++: Improving visualsemantic embeddings with hard negatives", "year": "2017" }, { "authors": "Jean-Bastien Grill; Florian Strub; Florent Altché; Corentin Tallec; Pierre Richemond; Elena Buchatskaya; Carl Doersch; Bernardo Avila Pires; Zhaohan Guo; Mohammad Gheshlaghi Azar", "journal": "Advances in neural information processing systems", "ref_id": "b10", "title": "Bootstrap your own latent-a new approach to self-supervised learning", "year": "2020" }, { "authors": "Ali Hatamizadeh; Yucheng Tang; Vishwesh Nath; Dong Yang; Andriy Myronenko; Bennett Landman; Daguang Holger R Roth; Xu", "journal": "", "ref_id": "b11", "title": "Unetr: Transformers for 3d medical image segmentation", 
"year": "2022" }, { "authors": "Shih-Cheng Huang; Liyue Shen; Serena Matthew P Lungren; Yeung", "journal": "", "ref_id": "b12", "title": "Gloria: A multimodal global-local representation learning framework for label-efficient medical image recognition", "year": "2021" }, { "authors": "Jeremy Irvin; Pranav Rajpurkar; Michael Ko; Yifan Yu; Silviana Ciurea-Ilcus; Chris Chute; Henrik Marklund; Behzad Haghgoo; Robyn Ball; Katie Shpanskaya", "journal": "", "ref_id": "b13", "title": "Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison", "year": "2019" }, { "authors": "E W Alistair; Tom J Johnson; Seth J Pollard; Nathaniel R Berkowitz; Greenbaum; Chih-Ying Matthew P Lungren; Roger G Deng; Steven Mark; Horng", "journal": "Scientific data", "ref_id": "b14", "title": "Mimic-cxr, a de-identified publicly available database of chest radiographs with free-text reports", "year": "2019" }, { "authors": "Zhoubing Bennett Landman; J Xu; Martin Igelsias; T Styner; Arno Langerak; Klein", "journal": "", "ref_id": "b15", "title": "Miccai multi-atlas labeling beyond the cranial vault-workshop and challenge", "year": "2015" }, { "authors": "Yanghao Li; Hanzi Mao; Ross Girshick; Kaiming He", "journal": "Springer", "ref_id": "b16", "title": "Exploring plain vision transformer backbones for object detection", "year": "2022" }, { "authors": "Jie Liu; Yixiao Zhang; Jie-Neng Chen; Junfei Xiao; Yongyi Lu; Yixuan Bennett A Landman; Alan Yuan; Yucheng Yuille; Zongwei Tang; Zhou", "journal": "", "ref_id": "b17", "title": "Clip-driven universal model for organ segmentation and tumor detection", "year": "2023" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b18", "title": "Sgdr: Stochastic gradient descent with warm restarts", "year": "2016" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b19", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Hoang Duy Mh Nguyen; Nguyen; T N Truong; Tri Mai; Cao; T Binh; Nhat Nguyen; Paul Ho; Shadi Swoboda; Pengtao Albarqouni; Daniel Xie; Sonntag", "journal": "", "ref_id": "b20", "title": "Joint self-supervised image-volume representation learning with intra-inter contrastive clustering", "year": "2023" }, { "authors": "Mehdi Noroozi; Paolo Favaro", "journal": "", "ref_id": "b21", "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "year": "2016" }, { "authors": "Deepak Pathak; Philipp Krahenbuhl; Jeff Donahue; Trevor Darrell; Alexei A Efros", "journal": "", "ref_id": "b22", "title": "Context encoders: Feature learning by inpainting", "year": "2016" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b23", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Arnaud Arindra; Adiyoso Setio; Alberto Traverso; Thomas De Bel; Moira Sn Berens; Cas Van Den; Piergiorgio Bogaard; Hao Cerello; Qi Chen; Maria Evelina Dou; Bram Fantacci; Geurts", "journal": "Medical image analysis", "ref_id": "b24", "title": "Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the luna16 challenge", "year": "2017" }, { "authors": "George Shih; Carol C Wu; Safwan S Halabi; Marc D Kohli; Luciano M Prevedello; Tessa S Cook; Arjun Sharma; Judith K Amorosa; Veronica Arteaga; Maya 
Galperin-Aizenberg", "journal": "Radiology: Artificial Intelligence", "ref_id": "b25", "title": "Augmenting the national institutes of health chest radiograph dataset with expert annotations of possible pneumonia", "year": "2019" }, { "authors": "Hao Tan; Mohit Bansal", "journal": "", "ref_id": "b26", "title": "Lxmert: Learning cross-modality encoder representations from transformers", "year": "2019" }, { "authors": "Yucheng Tang; Dong Yang; Wenqi Li; Bennett Holger R Roth; Daguang Landman; Vishwesh Xu; Ali Nath; Hatamizadeh", "journal": "", "ref_id": "b27", "title": "Self-supervised pre-training of swin transformers for 3d medical image analysis", "year": "2022" }, { "authors": "Fuying Wang; Yuyin Zhou; Shujun Wang; Varut Vardhanabhuti; Lequan Yu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b28", "title": "Multi-granularity cross-modal alignment for generalized medical visual representation learning", "year": "2022" }, { "authors": "Linda Wang; Zhong Qiu Lin; Alexander Wong", "journal": "Scientific reports", "ref_id": "b29", "title": "Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images", "year": "2020" }, { "authors": "Yutong Xie; Jianpeng Zhang; Yong Xia; Qi Wu", "journal": "Springer", "ref_id": "b30", "title": "Unimiss: Universal medical self-supervised learning via breaking dimensionality barrier", "year": "2022" }, { "authors": "Kelvin Xu; Jimmy Ba; Ryan Kiros; Kyunghyun Cho; Aaron Courville; Ruslan Salakhudinov; Rich Zemel; Yoshua Bengio", "journal": "PMLR", "ref_id": "b31", "title": "Show, attend and tell: Neural image caption generation with visual attention", "year": "2015" }, { "authors": "Yifan Yang; Weiquan Huang; Yixuan Wei; Houwen Peng; Xinyang Jiang; Huiqiang Jiang; Fangyun Wei; Yin Wang; Han Hu; Lili Qiu; Yuqing Yang", "journal": "", "ref_id": "b32", "title": "Attentive mask clip", "year": "2023-10" }, { "authors": "Kang Zhang; Xiaohong Liu; Jun Shen; Zhihuan Li; Ye Sang; Xingwang Wu; Yunfei Zha; Wenhua Liang; Chengdi Wang; Ke Wang", "journal": "Cell", "ref_id": "b33", "title": "Clinically applicable ai system for accurate diagnosis, quantitative measurements, and prognosis of covid-19 pneumonia using computed tomography", "year": "2020" }, { "authors": "Yuhao Zhang; Hang Jiang; Yasuhide Miura; Christopher D Manning; Curtis P Langlotz", "journal": "PMLR", "ref_id": "b34", "title": "Contrastive learning of medical visual representations from paired images and text", "year": "2022" }, { "authors": "Sixiao Zheng; Jiachen Lu; Hengshuang Zhao; Xiatian Zhu; Zekun Luo; Yabiao Wang; Yanwei Fu; Jianfeng Feng; Tao Xiang; Philip Hs Torr", "journal": "", "ref_id": "b35", "title": "Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers", "year": "2021" }, { "authors": "Jinghao Zhou; Chen Wei; Huiyu Wang; Wei Shen; Cihang Xie; Alan Yuille; Tao Kong", "journal": "", "ref_id": "b36", "title": "ibot: Image bert pre-training with online tokenizer", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 282.24, 105.81, 218.64, 82.52 ], "formula_id": "formula_0", "formula_text": "•• ••• ••• L icl (2) 3D images" }, { "formula_coordinates": [ 4, 200.84, 412.39, 303.16, 30.55 ], "formula_id": "formula_1", "formula_text": "s P = 1 HL L l=1 H h=1 Softmax f q lh (CLS) • f k lh (P ) √ C ,(1)" }, { "formula_coordinates": [ 4, 273.54, 540, 230.46, 30.32 ], "formula_id": "formula_2", "formula_text": "s i = 1 N N j=1 s P ij ,(2)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b28", "b10", "b30", "b20", "b3", "b1", "b24", "b35", "b34", "b34" ], "table_ref": [], "text": "Deep learning has experienced a remarkable rise in recent years [29,30,32], with highly sophisticated and overparameterized models leading the way [11,31]. Consequently, these cutting-edge models find application across a diverse set of domains, including image processing [39], natural language [21], healthcare [14], finance [2], judiciary systems [42], and more, showcasing their versatility and potential impact. However, the increasing deployment of these models has sparked concerns about their trustworthiness. To confront these issues head-on, the global community has embraced a range of trustworthy machine learning practices and metrics [19,38]. These efforts are geared towards ensuring that these models are not only accurate in their predictions, but are also fair to various groups in the dataset [3,25], robust to distribution shifts [36], maintain the privacy of the individuals whose data was collected [13], and secure against adversarial attacks [7]. These metrics collectively strive to make deep learning deployments more reliable, fostering trust and acceptance in its widespread applications.\nAlongside the discussion of trustworthy ML, the presence of multiplicity in deep learning has emerged as a significant concern yet a welcome opportunity [5,10]. Model multiplicity is the existence of multiple high-performing models that achieve similar accuracy on a given task but can display diverse predictive behaviours due to varying decision boundaries and underlying learned functions. Model multiplicity is the result of an under-specified and over-parameterized training regime, and can be affected by design choices like model architectures, hyperparameters, training configurations, or even arbitrary choices like the randomness in training.\nModel multiplicity in deep learning has significant implications. For instance, it has been shown that changes in the training configuration can lead to considerable variations in the biases present in a model [15,27]. Deploying such models without considering the impact of multiplicity can result in the unintentional deployment of unfair models in real-world applications. Conversely, if we manage multiplicity with appropriate constraints, it presents an opportunity to deploy fairer models without compromising its utility. Thus, addressing the challenges of model multiplicity is a crucial step towards creating trustworthy systems.\nExisting literature on investigating model multiplicity is limited to specialized settings that do not generalize. For instance, Somepalli et al. [35] provides an empirical quantification of similarity in the decision boundary of two models. However, the similarity of decision boundaries may not nec-essarily provide any information about its trustworthiness. Models with significantly different decision boundaries can still provide similar accuracy, fairness, robustness, security, and privacy. Similarly, Ganesh et al. [15] investigates the impact of random seeds on fairness, but their discussion focuses on model predictions, and thus may not extend to other trustworthy metrics like robustness, security, or privacy. Furthermore, these investigations are not directly comparable to each other. For example, a 70% agreement between the decision boundaries of two models as defined by Somepalli et al. 
[35] has no comparative value to a 10% gap in equalized odds (a fairness metric) between the same set of models [15]. Thus, while these works provide a deeper investigation into a single metric in isolation, they fail to provide a comprehensive view of the overall trends of multiplicity.\nIn this paper, we address this gap by proposing a framework to measure multiplicity that can not only dive deeper into multiplicity trends for a single metric but also provide comparisons across different metrics. We start by converting various trustworthy metrics to a common scale, which we refer to as accuracy under intervention, facilitating the comparison of multiplicity across different metrics. We then create multiplicity sheets that capture the multiplicity of accuracy under intervention for each metric separately. To illustrate our framework, we present an image classification case study that compares multiplicity across model hyperparameters, random seeds, and architecture choices, repeating the setup for various trustworthy ML metrics, namely fairness, robustness, privacy, and security.\nWe end our discussion by presenting the results of combining various metrics together to improve the model specification and reduce multiplicity. However, despite following the recommendations of recent literature and providing additional specifications using trustworthy metrics, our study reveals that model multiplicity can still create unforeseen failure cases. This highlights the need for future research to gain a more holistic understanding of model multiplicity.\nSetting Expectations and Our Contributions: Before delving into our contributions, it is essential to first clarify the scope of our work. Our goal is not to present novel findings on the multiplicity of any specific metric. In fact, we will revisit many existing results in the literature during our case study. Rather, we seek to establish a normative language to record model multiplicity that can be used to highlight multiplicity trends across different metrics, thus providing an overall picture of multiplicity in deep learning.\nMore specifically, our contributions are:\n• We propose a standardized framework to measure and study model multiplicity in deep learning.\n-We introduce a new class of metrics called accuracy under intervention. We showcase techniques to convert any metric into accuracy using appropriate interventions, thus providing a common scale of comparison. -We suggest using multiplicity sheets, a comprehensive yet compact method to record and study model multiplicity for any target metric.\n• We present a case study of model multiplicity in image classification, by providing an empirical benchmark to highlight the advantages of our framework.\n-We take an all-encompassing view of model multiplicity and its impact on trustworthy ML by comparing multiplicity across fairness, robustness, privacy, and security. -We study the influence of various axes of model variations on multiplicity, including model architecture, training randomness, and hyperparameter choices.\n• We combine several trustworthy metric specifications to challenge over-parameterization and assess its impact on multiplicity. Despite this, we see persistent multiplicity on trustworthy issues not seen during model selection, underscoring the need for better safeguards against multiplicity when deploying models in the real world." 
}, { "figure_ref": [], "heading": "Measuring Multiplicity", "publication_ref": [], "table_ref": [], "text": "We will start by discussing our framework to study multiplicity. We introduce the concept of accuracy under intervention to translate any metric into an accuracy metric, followed by our proposal to use multiplicity sheets to record and compare the said accuracy under intervention." }, { "figure_ref": [], "heading": "Accuracy Under Intervention", "publication_ref": [], "table_ref": [], "text": "We want to establish a standardized way to measure model multiplicity that would allow easy comparison across different scenarios. However, measuring multiplicity for various trustworthy objectives relies on vastly different metrics, making it a complex task. For instance, comparing accuracy multiplicity (difference in accuracy) with security multiplicity (difference in minimum adversarial distance to flip the label), is not straightforward. These two metrics are not comparable since they are based on different factors, one on performance and the other on distance.\nWe need a method that can translate these metrics to a common scale for a fair comparison. To achieve this, we propose converting each metric into accuracy through appropriate interventions. Simply put, we want to measure model accuracy under a well-designed intervention that represents a proxy for our original trustworthy metric. For instance, when testing the security of a model against adversarial attacks, instead of measuring the minimum adversarial distance to flip the label, we can measure the model accuracy under a fixed adversarial distance budget. Once a metric is translated to accuracy under such an intervention, we can now compare " }, { "figure_ref": [], "heading": "Multiplicity Sheets", "publication_ref": [], "table_ref": [], "text": "After converting every metric to accuracy under intervention, we propose a method for recording these values to facilitate easy comparison and visualization. We want a method that can provide both summaries for a quicker scan and detailed results for a more in-depth analysis. To achieve this, we create multiplicity sheets, a straightforward and highly intuitive approach to documenting multiplicity.\nA multiplicity sheet is a collection of tables, where each table compares two axes of multiplicity. The information in our multiplicity sheets has three levels of readability. The first level shows the raw metric scores, in this case, accuracy under intervention, ensuring no loss of information. The second level aggregates multiplicity across each axis in every table by taking the maximum difference in scores denoted by ∆ max . This allows easy visualization of various trends and the influence of different hyperparameters on model multiplicity. Finally, the third level further aggregates the complete multiplicity sheet by taking the maximum difference across all raw metric scores to get a single value representing the overall multiplicity of the given metric, denoted as ∆ all max . Given that we use accuracy under intervention for all metrics, ∆ all max serves as a useful measure to compare multiplicity across different metrics, i.e., different multiplicity sheets.\nAn example of a multiplicity sheet can be seen in Fig. 1, where we record the accuracy multiplicity on the UTKFace dataset under various training configurations (more details in Section 3). Throughout our paper, we will designate one axis of multiplicity in each table to be the random seeds. 
This is to filter chance trends when comparing different hyperparameters by balancing them against multiple runs with changing random seeds. It should be noted that multiplicity sheets can be created for any metric, not just accuracy under intervention. However, using accuracy under intervention allows us to compare the multiplicity trends across different sheets, which wouldn't be possible with just any metric. We will now move to our case study to highlight the benefits of our framework, while also providing an empirical benchmark of multiplicity in image classification that can be directly useful for researchers and practitioners." }, { "figure_ref": [], "heading": "Image Classification on UTKFace", "publication_ref": [], "table_ref": [], "text": "To demonstrate the utility of our framework, we will perform a case study of model multiplicity in image classification on the UTKFace dataset. We first outline our experimental setup, followed by a comprehensive discussion of multiplicity in fairness, robustness, privacy, and security. To provide diversity in experiments, we also perform a separate case study on the CIFAR10 dataset in Appendix ??." }, { "figure_ref": [], "heading": "Experiment Setup", "publication_ref": [ "b15" ], "table_ref": [], "text": "Dataset We will be studying the UTKFace dataset, containing facial images that have been labelled according to their perceived gender, race, and age. We will focus on the binary classification task of perceived gender. We split the dataset into 80% training and 20% testing, and we maintain the same split throughout our paper, i.e., we do not consider potential multiplicity introduced by the train-test splits.\nFigure 3. Distribution of fairness multiplicity (i.e., group accuracy) across different intersections of racial and age groups in the UTKFace dataset. Each distribution is a condensed representation of a multiplicity sheet, containing the group accuracy of 65 independently trained models across all axes of multiplicity described in Section 3.1. Different groups have varying ranges of multiplicity, with especially amplified variance for intersectional groups, highlighting the concerns and opportunities of multiplicity in fairness. Note: The minor perturbations along the x-axis for any group are only present for enhanced visualization and do not convey any additional signal.\nTraining Details By default, we train our models using a learning rate of 0.1, a batch size of 128, the data augmentation RandAugment [9], the SGD optimizer, and the ResNet-18 architecture [16]. For a simpler analysis, all models are trained from scratch, i.e., without the use of pretrained weights. We use a single random seed to control all forms of randomness in model training. We leave the decoupled analysis of various sources of randomness for future work. Finally, all models are trained with cross-entropy (CE) loss for 50 epochs, without any early stopping." }, { "figure_ref": [], "heading": "Axes of Multiplicity", "publication_ref": [ "b25", "b15", "b15" ], "table_ref": [], "text": "We will investigate the following axes of multiplicity: (i) Learning Rate: {0.1, 0.05, 0.01}, (ii) Batch Size: {128, 256, 640}, (iii) Data Augmentation: RandAugment [9] and TrivialAugment [26], (iv) Optimizer: SGD and Adam, (v) Model Architecture: ResNet-18 [16], ResNet-50 [16], and WideResNet-50x2 [43]. We also compare multiplicity across changing randomness in model training.
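Given these axes, a multiplicity sheet (Section 2.2) can be assembled programmatically. The sketch below shows one way to tabulate accuracy under intervention over a grid of configuration values and random seeds and to aggregate the ∆max and ∆ all max summaries; the function names, the callable intervention interface, and the dictionary-based layout are our own illustrative choices, and the model/loader objects are assumed to be standard PyTorch ones.

import numpy as np

def accuracy_under_intervention(model, loader, intervention):
    # `intervention` maps (model, inputs, labels) to intervened inputs, e.g. an adversarial
    # perturbation with a fixed budget; the identity function recovers plain test accuracy.
    correct, total = 0, 0
    for x, y in loader:
        preds = model(intervention(model, x, y)).argmax(dim=-1)
        correct += int((preds == y).sum())
        total += int(y.numel())
    return 100.0 * correct / total

def multiplicity_sheet(scores):
    # scores: {(config_value, seed): accuracy under intervention} for one axis of multiplicity.
    configs = sorted({c for c, _ in scores}, key=str)
    seeds = sorted({s for _, s in scores})
    table = np.array([[scores[(c, s)] for s in seeds] for c in configs])
    delta_max_rows = table.max(axis=1) - table.min(axis=1)  # spread across seeds for each config
    delta_max_cols = table.max(axis=0) - table.min(axis=0)  # spread across configs for each seed
    delta_all_max = table.max() - table.min()               # overall multiplicity of the sheet
    return table, delta_max_rows, delta_max_cols, delta_all_max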
}, { "figure_ref": [], "heading": "Group Fairness", "publication_ref": [ "b0", "b43", "b33" ], "table_ref": [], "text": "Group fairness is a measure of performance disparity between different protected groups, rooted in concerns of algorithmic bias propagated from the dataset to the model [1,4,8,44]. Traditionally, group fairness is measured as the difference in performance between different groups in the dataset. For accuracy under intervention in our setup, we calculate the accuracy on the minority group (which can also be extended to other groups). More specifically, we consider racial labels for fairness and measure the accuracy of the racial minority in the UTKFace dataset, i.e., Asians.\nIn Fig. 2, we present the multiplicity sheet for Group Accuracy (Asian). We use the sheet to highlight the importance of random seeds in fairness and also contrast it to the multiplicity sheet for Accuracy in Fig. 1. As can be seen clearly from the ∆ max values of changing random seeds compared to different hyperparameter choices, random seeds have the most significant impact on fairness multiplicity. Moreover, we can also observe that the overall fairness multiplicity (∆ all max = 3.24) is three times higher than the accuracy multiplicity (∆ all max = 1.12). These trends of fairness variance and the impact of random seed have been previously noted in literature [15,34], however here we show the ease with which they can be spotted in our multiplicity sheets.\nWe repeat the experiment for various groups and plot the distribution of fairness multiplicity across all axes of multiplicity, in Fig. 3, with groups formed at the intersection of two different protected attributes, i.e., race and age. Our findings reveal that the severity of fairness multiplicity is even higher for intersectional groups. To put this into perspective, consider selecting a model from the distribution of models in Fig. 3. While the choice may only affect the overall accuracy in the range of 92.05% to 93.17%, it can significantly alter the accuracy for older Asian individuals, ranging from 88.46% to 98.08%. To sum up, our analysis clearly shows the alarmingly high fairness variance, and the need to address this multiplicity such that deep learning models treat diverse groups fairly during deployment." }, { "figure_ref": [], "heading": "Out-of-Distribution Robustness", "publication_ref": [ "b16", "b23" ], "table_ref": [], "text": "Out-of-distribution (OOD) robustness refers to the ability of a machine learning model to perform well on data points that are different from those it was trained on. Models that lack OOD robustness might make unreliable or incorrect predictions when faced with new, unfamiliar data, potentially leading to undesirable outcomes after deployment. Traditionally, OOD robustness is measured as the model's performance on an OOD dataset. Since it is already an accuracy metric, we do not perform any additional intervention for robustness. More specifically, we simply use the model's accuracy on the FairFace dataset [18], a facial image dataset with a different distribution than UTKFace, as the measure of OOD robustness multiplicity.\nIn Fig. 4, we present the multiplicity sheet for Accuracy on the FairFace dataset. We see the impact of learning rate, batch size, and architecture on robustness emerge from the multiplicity sheets, an unsurprising result based on existing work on the benefits of smaller batch size, larger learning rate, and higher risks of overfitting in bigger models [17,24]. 
The range of overall robustness multiplicity (∆ all max = 3.51) is comparable to the fairness multiplicity, i.e., three times higher than the accuracy multiplicity. Thus, addressing multiplicity in OOD robustness is essential to making sure the model does not fail even under minor distribution shifts." }, { "figure_ref": [ "fig_0" ], "heading": "Differential Privacy", "publication_ref": [ "b5", "b11" ], "table_ref": [], "text": "Deep learning models tend to memorize data points from their training dataset, compromising the privacy of the individuals in the dataset. For instance, an adversary with access to only the outputs of the model is capable of extracting sensitive information from the model [6,33]. To address this issue, researchers often study differential privacy [12], which aims to make models trained on datasets differing at exactly one data point indistinguishable. One way to achieve this is by adding noise to the model's outputs. However, adding noise can also hurt the model's performance, thus creating a trade-off between privacy and accuracy. It is this very trade-off that we will exploit to define our accuracy under intervention, i.e., we will measure the accuracy of the model under output perturbations from an exponential distribution with a fixed rate parameter λ, for privacy multiplicity.
Figure 5. The random seed has very little influence on the perturbation accuracy, while the default hyperparameter choices are noticeably dominant over the alternative choices, with the biggest drop caused by using a large batch size. The overall multiplicity (∆ all max) is also quite high, five times larger than the accuracy multiplicity, but clearly dependent on the choice of the rate parameter λ.
We present the multiplicity sheet for privacy by recording the Perturbation Accuracy with λ = 5 in Fig. 5. Interestingly, unlike other trustworthy metrics, the random seed has minimal impact on the privacy multiplicity. Instead, one hyperparameter choice along each axis is clearly the best when it comes to the privacy-accuracy trade-off, in line with the existing literature on practical tips for privacy [28]. Additionally, the overall privacy multiplicity range (∆ all max = 5.30) is almost five times larger than the accuracy multiplicity. These results are clearly dependent on the rate parameter λ used to calculate accuracy under intervention, and to emphasize this, we plot the distribution of privacy multiplicity for different rate parameter values in Fig. 6. It is evident that choosing the right model by accounting for privacy multiplicity is crucial to achieving better privacy-utility trade-offs during inference, and this choice becomes even more critical with a decrease in privacy budget (i.e., higher values of the rate parameter λ)." }, { "figure_ref": [ "fig_1" ], "heading": "Security against Adversarial Attacks", "publication_ref": [ "b36", "b22", "b40" ], "table_ref": [], "text": "Machine learning models are vulnerable to various attacks that can manipulate the model's behaviour to suit the attacker's desires. One of the most common adversarial attacks studied in the literature is the perturbation-based attack [37], which takes advantage of the brittle decision boundaries of deep learning models. In this attack, the objective of the adversary is to perturb the input image by a minimum amount that can incite adversarial outputs, while keeping the perturbation imperceptible to the human eye.
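Before turning to the attack itself, here is a minimal sketch of the output-perturbation intervention used for privacy above, assuming logits holds the model's raw outputs on the test set; whether λ enters as a rate (scale 1/λ) or directly as a scale is an assumption of the sketch, since only "a fixed rate parameter λ" is specified.

```python
import numpy as np

def perturbation_accuracy(logits, labels, lam, seed=0):
    """Privacy intervention: accuracy after adding exponential noise to the
    model's outputs. Treating lam as the rate (scale = 1/lam) is an
    assumption of this sketch."""
    rng = np.random.default_rng(seed)
    noise = rng.exponential(scale=1.0 / lam, size=logits.shape)
    noisy_preds = (logits + noise).argmax(axis=1)
    return 100.0 * float((noisy_preds == labels).mean())
```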
Instead of measuring the minimum distance of the perturbed image to the original image (measured as L ∞ ), we will measure the accuracy under the intervention of a fixed distance budget represented by δ. Specifically, we use projected gradient descent (PGD) [23] to progressively move out of the local minima until we reach the given distance budget, and then measure the accuracy of the model under these perturbations. We present the multiplicity sheet for PGD Accuracy with δ = 0.005 in Fig. 7. The trends for security multiplicity are similar to the accuracy multiplicity trends we observed previously, i.e., no single factor dominates the multiplicity, except for architecture choice. Surprisingly, the larger model ResNet-50 had a negative impact on security multiplicity, while the even larger but wider model WideResNet-50x2 improved it, which contradicts previous findings in literature [41] and raises interesting questions for future research. Similar to privacy multiplicity, the overall multiplicity range (∆ all max = 5.53) for security is almost five times larger than the accuracy multiplicity and depends on the adversarial distance budget δ. We plot the distribution of security multiplicity for different adversarial distance budget values in Fig. 8. Our results have shown that a model's robustness to adversarial attacks suffers from severe multiplicity, and needs to be addressed to provide robust models. " }, { "figure_ref": [ "fig_2" ], "heading": "Model Selection", "publication_ref": [ "b21" ], "table_ref": [], "text": "In our case study, we found significant multiplicity in various trustworthy metrics that can hurt model deployment, if left unchecked. To address this multiplicity, the literature suggests providing appropriate specifications during model selection [5]. This involves imposing additional constraints based on some chosen metrics, in our case the trustworthy metrics. For example, one can measure the fairness scores of the model under different hyperparameters and only choose the configurations with bias scores less than some threshold. This ensures that unfair models are not selected.\nThese recommendations stem from the belief that implementing extra measures during the selection of a model will decrease its variability, ensuring predictable behaviour upon deployment. However, as we will demonstrate in this section, over-parameterized models can still encounter unforeseen failure cases during deployment, which are not simply solved with appropriate specifications during model selection.\nModel Specifications: We first define the following criteria to simulate model selection. We choose models that rank in the top k% of every metric under varying training configurations. We assess fairness by measuring accuracy for the Asian racial group, robustness by evaluating test performance on FairFace, privacy by measuring accuracy under output perturbations with a rate parameter λ = 5, and security by measuring accuracy under PGD attacks with an adversarial distance budget of δ = 0.005. Our approach ensures that we only select models that meet the high standards for all four metrics mentioned above. Unforeseen Circumstances: We now introduce a new set of metrics to account for situations that were not previously considered in our specifications. To simplify the discussion, we will just make minor adjustments to the model specifications and create these 'unforeseen circumstances'. 
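Before evaluating these unforeseen scenarios, the PGD-based security intervention described in the previous subsection can also be made concrete with a minimal hand-rolled L-infinity attack; the step size and number of steps below are assumptions rather than the exact attack configuration, and inputs are assumed to lie in [0, 1].

```python
import torch
import torch.nn.functional as F

def pgd_accuracy(model, loader, eps, alpha=None, steps=10, device="cuda"):
    """Security intervention: accuracy under an L-infinity PGD attack with
    distance budget eps (delta in the text). Step size and step count are
    assumptions of this sketch."""
    alpha = alpha if alpha is not None else eps / 4
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            # Project back onto the eps-ball and the valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
            total += y.numel()
    return 100.0 * correct / total
```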
To test fairness, we will measure group accuracy for the age group 59 -116 instead of the Asian racial group. To test robustness, we will evaluate the performance on the CelebA dataset [22] instead of FairFace. To test privacy, we will measure accuracy under input perturbations (with a rate parameter of λ = 1) instead of output perturbations. Finally, to test security, we will increase the distance budget from δ = 0.005 to δ = 0.01, thus creating a stronger adversary.\nIn Fig. 9, we plot the distribution of multiplicity for all four unforeseen metrics before any model selection, and then after model selection for k% = 75% and k% = 50% respectively. We see a noticeable drop in unforeseen fairness and security multiplicities while maintaining decent fairness and security accuracy scores under intervention. However, we do not see this improvement in unforeseen robustness or privacy multiplicity. That is, despite the highly rigorous model selection on four different trustworthy ML metrics, the overall range of multiplicity in these two unforeseen metrics remains the same, and thus they will face the same issues during deployment. Clearly, incorporating additional specifications while selecting models can only provide limited assistance, leaving a substantial level of multiplicity that cannot be managed in the same way. Thus, addressing multiplicity with a checklist of trustworthy requirements is still likely to create models that face the same risks of failure in unforeseen circumstances, emphasizing the need for a more fundamental investigation into model multiplicity." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b16", "b23", "b33" ], "table_ref": [], "text": "Model multiplicity has been an active subject of research in the deep learning literature, despite not being in the spotlight. Much of the related work in multiplicity is indirect, often disguised as research on the impact of hyperparameter choices or randomness on trustworthy ML [15, 17,24,28,34].\nVery few works in literature have focused solely on mul-tiplicity. Black et al.\n[5] provides a discussion on the opportunities and concerns of multiplicity within the context of machine learning. However, their work is highly qualitative and does not provide any framework to quantify and measure multiplicity. On the other hand, D'Amour et al.\n[10] offer a more quantitative perspective to underspecification in machine learning. Nevertheless, their analysis is fragmented across different case studies and does not provide a common language on multiplicity measurement that can be adapted for future works on model multiplicity." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "In this paper, we contribute to the discussion of model multiplicity, specifically in the context of image classification. By establishing a consistent and comprehensive language for multiplicity, we have created a foundation for more effective communication in the field. Our multiplicity sheets offer an intuitive and structured approach to capturing the various facets of multiplicity. Furthermore, through a detailed case study, we demonstrated the practical implementation of our framework, shedding light on the complexities that arise when dealing with model multiplicity. The insights derived from the case study not only showcased the utility of our approach but also unveiled intriguing trends within the multiplicity scores. 
Finally, we show empirically that the challenge of model multiplicity cannot be simply resolved by providing additional specifications or constraints.\nWhile we emphasize a specific structure for multiplicity sheets in this paper, it is important to acknowledge that further research is required to develop more effective methods for recording multiplicity. Moreover, our recommendation to use accuracy under intervention is primarily applicable to classification tasks. Nevertheless, the challenge of model multiplicity is a major issue in deep learning that goes beyond classification alone. Consequently, it is imperative that the community engages in further discussion on the topic of model multiplicity. We must shift away from treating multiplicity as an auxiliary discussion and bring it to the forefront to address potential unforeseen failures in real-world deployment scenarios and create truly trustworthy systems." }, { "figure_ref": [], "heading": "A. Image Classification on CIFAR10-Skewed", "publication_ref": [], "table_ref": [], "text": "The UTKFace dataset is an excellent choice for our main paper as it contains valuable metadata and has been extensively studied in trustworthy ML literature. However, it does come with certain limitations, such as its focus on binary classification tasks, and being confined to a highly specialized domain, i.e., facial images. To address these limitations, we will conduct a second case study using a skewed version of the CIFAR-10 dataset (more details below)." }, { "figure_ref": [], "heading": "A.1. Experiment Setup", "publication_ref": [ "b39" ], "table_ref": [], "text": "Dataset We will adopt the CIFAR10-Skewed setup from Wang et al. [40] for our case study. In this setup, the 10 object classes of CIFAR10 [20] are divided into two groups, i.e., colour majority and grayscale majority. The first 5 classes (airplane, automobile, bird, cat, deer) are marked as the colour majority, i.e., 95% of images from these classes are left as is, while the other 5% are converted into grayscale images. Conversely, the last 5 classes (dog, frog, horse, ship, truck). are marked as the grayscale majority, i.e., 95% of images from these classes are converted into grayscale images, while the other 5% are left as is." }, { "figure_ref": [], "heading": "Training Details", "publication_ref": [], "table_ref": [], "text": "The training details are the same as the UTKFace Setup, except that we train the models for only 20 epochs on CIFAR10-Skewed." }, { "figure_ref": [], "heading": "Axes of Multiplicity", "publication_ref": [], "table_ref": [], "text": "The axes of multiplicity are the same as the UTKFace Setup, except for the model architecture. We will only use a modified ResNet18 model (adapted for CIFAR10 images of size 32x32) and we will not study multiplicity across changing architecture in this case study." }, { "figure_ref": [], "heading": "A.2. Accuracy", "publication_ref": [], "table_ref": [], "text": "The multiplicity sheet for accuracy on the CIFAR10-Skewed dataset is created in Fig. 10. Note that the test dataset for CIFAR10-Skewed, i.e., the dataset on which we measure this accuracy, is also skewed and has the same formulation as the training dataset defined above. The trends of accuracy multiplicity are quite similar to that of UTKFace, i.e., no significant accuracy variance is present across any hyperparameter choice or random seeds." }, { "figure_ref": [], "heading": "A.3. 
Group Fairness", "publication_ref": [], "table_ref": [], "text": "For the CIFAR10-Skewed dataset, grayscale images in the 'colour majority' object classes and colour images in the 'grayscale majority' object classes are both minority groups. For this particular case study, we will measure the group accuracy of the GS Minority, i.e., the grayscale minority in the colour majority object classes, as the fairness score under intervention. The results are collected in Fig. 11.\nAs previously noted, the multiplicity sheet in Fig. 11 highlights the importance of random seeds in fairness. While other hyperparameter choices do have a noticeable impact (in order -training data augmentation, batch size, learning rate, and the optimization algorithm), clearly the most consistently dominant source of fairness multiplicity is the randomness in model training. Moreover, the overall fairness multiplicity (∆ all max = 11.16) is almost seven times higher than the accuracy multiplicity (∆ all max = 1.63), further highlighting the severe impact of multiplicity on trustworthy metrics." }, { "figure_ref": [], "heading": "A.4. Out-of-Distribution Robustness", "publication_ref": [], "table_ref": [], "text": "We will use accuracy on a grayscale version of the CI-FAR10 dataset [20] as the measure of our OOD robustness multiplicity. In Fig. 12, we present the multiplicity sheet for Accuracy on the CIFAR10-GS dataset. The results show similar trends to robustness multiplicity for UTKFace, with both hyperparameter choices and the random seed being equally important in affecting the model's robustness. The range of overall robustness multiplicity (∆ all max = 4.01) is a little more than two times higher than accuracy multiplicity, which is unsurprising since despite being grayscale, the test images still belong to CIFAR10. A more severe robustness check on a dataset that is quite different from CIFAR10 might introduce and even higher OOD robustness multiplicity." }, { "figure_ref": [], "heading": "A.5. Differential Privacy", "publication_ref": [], "table_ref": [], "text": "We use the same perturbation and trade-off setup for privacy multiplicity as done for UTKFace, i.e., we will measure the accuracy of the model under output perturbations from an exponential distribution with a fixed rate parameter λ. We present the multiplicity sheet for privacy by recording the Perturbation Accuracy with λ = 5 in Fig. 13. The same trends as Fig. 5 are noticed, i.e., the random seed has minimal impact and it's the hyperparameters that dramatically influence the privacy multiplicity, in line with the existing literature on practical tips for privacy [28]. The overall privacy multiplicity range (∆ all max = 10.28) is also almost six times larger than the accuracy multiplicity but would depend on the rate parameter λ." }, { "figure_ref": [], "heading": "A.6. Security against Adversarial Attacks", "publication_ref": [ "b22" ], "table_ref": [], "text": "We will use the same setup for security multiplicity as UTKFace, i.e., we use projected gradient descent (PGD) [23] to progressively move out of the local minima until we reach the given distance budget, and then measure the accuracy of the model under these perturbations. We present the multiplicity sheet for PGD Accuracy with δ = 0.005 in Fig. 14. The trends are again similar to the ones seen in the main paper, i.e., no single factor dominates the multiplicity. 
The overall multiplicity range (∆ all max = 4.51) for security is almost five times larger than the accuracy multiplicity and clearly depends on the adversarial distance budget δ. " }, { "figure_ref": [], "heading": "A.7. Discussion", "publication_ref": [], "table_ref": [], "text": "We have provided an additional case study on the CIFAR10-Skewed dataset as a companion to our main case study on the UTKFace dataset. These results help us cement certain trends, for example, the impact of random seeds on fairness, the impact of hyperparameter choices on privacyutility trade-off, etc., all of which are unsurprising as these trends have been noted previously in the literature (albeit in isolated settings). We believe these experiments will serve as a useful companion to our main paper, and help establish the importance of multiplicity sheets in image classification. " } ]
Deep learning models have proven to be highly successful. Yet, their over-parameterization gives rise to model multiplicity, a phenomenon in which multiple models achieve similar performance but exhibit distinct underlying behaviours. This multiplicity presents a significant challenge and necessitates additional specifications in model selection to prevent unexpected failures during deployment. While prior studies have examined these concerns, they focus on individual metrics in isolation, making it difficult to obtain a comprehensive view of multiplicity in trustworthy machine learning. Our work stands out by offering a one-stop empirical benchmark of multiplicity across various dimensions of model design and its impact on a diverse set of trustworthy metrics. In this work, we establish a consistent language for studying model multiplicity by translating several trustworthy metrics into accuracy under appropriate interventions. We also develop a framework, which we call multiplicity sheets, to benchmark multiplicity in various scenarios. We demonstrate the advantages of our setup through a case study in image classification and provide actionable insights into the impact and trends of different hyperparameters on model multiplicity. Finally, we show that multiplicity persists in deep learning models even after enforcing additional specifications during model selection, highlighting the severity of over-parameterization. The concerns of under-specification thus remain, and we seek to promote a more comprehensive discussion of multiplicity in trustworthy machine learning.
An Empirical Investigation into Benchmarking Model Multiplicity for Trustworthy Machine Learning: A Case Study on Image Classification
[ { "figure_caption": "Figure 6 .6Figure 6. Distribution of privacy multiplicity (i.e., perturbation accuracy) across different values of the rate parameter λ. The higher the rate parameter, the larger the output perturbations, which in turn creates larger drops in accuracy and a larger range of multiplicity. Refer to Fig.3for further details on distribution visualization.", "figure_data": "", "figure_id": "fig_0", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure8. Distribution of security multiplicity (i.e., accuracy after PGD) across different adversarial distance budget values δ. A higher budget corresponds to a more powerful adversary, which in turn results in lower accuracy and higher security multiplicity. Refer to Fig.3for further details on distribution visualization.", "figure_data": "", "figure_id": "fig_1", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure9. Distribution of multiplicity across unforeseen metrics under various degrees of model selection. The range of multiplicity across unforeseen metrics might remain unchanged even after we provide additional specifications for known trustworthy metrics, highlighting the severity of over-parameterization and the need to address multiplicity beyond just a checklist of metrics.", "figure_data": "", "figure_id": "fig_2", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Multiplicity sheet for Accuracy on UTKFace dataset. R18/50: ResNet-18/50; WR50: WideResNet-50x2. This multiplicity sheet records accuracy scores across different hyperparameter choices and random seeds, representing the first level of readability without any loss of information. It then aggregates multiplicity by measuring ∆max across random seeds for each hyperparameter and across hyperparameter choices for each random seed. This is the second level of readability, vital for extracting multiplicity trends. For instance, by studying the ∆max values, we see the equal importance of both random seeds and hyperparameter choices on accuracy multiplicity. Finally, we aggregate the overall multiplicity ∆ all max , i.e., the third level of readability, condensing accuracy multiplicity for UTKFace into a single value.", "figure_data": "Learning RateBatch SizeAugmentationOptimizerArchitecture0.10.050.01∆max128256640∆maxRand Trivial∆maxSGD Adam∆maxR18R50 WR50∆maxChanging Random Seeds92.85 92.85 92.37 92.89 92.64 92.60 92.47 92.51 92.43 93.17 92.91 92.640.49 0.30 0.08 0.5392.85 92.81 92.18 92.89 92.98 92.62 92.47 92.66 92.60 93.17 93.08 92.680.68 0.36 0.19 0.4992.85 92.51 92.89 92.89 92.47 92.87 93.17 92.790.34 0.00 0.40 0.3892.85 92.60 92.89 92.55 92.47 92.68 93.17 92.870.25 0.34 0.21 0.3092.85 92.22 92.13 92.89 92.49 92.05 92.47 92.24 92.45 93.17 92.05 92.200.72 0.84 0.23 1.1292.60 92.98 92.660.3892.60 92.32 92.070.5392.60 92.870.2792.60 92.450.1592.60 92.30 92.180.42∆max0.700.460.300.700.760.610.700.380.700.420.700.440.40Default Config: Learning Rate 0.1; Batch Size 128;Metric: Accuracy9294 02Augmentation Rand; Optimizer SGD; Architecture R18Dataset: UTKFace ∆ all max : 1.12Figure 1. them directly to each other and get a comprehensive under-standing of the multiplicity. More details on the specificinterventions for each metric are present in Section 3.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Multiplicity sheet for PGD Accuracy (δ = 0.005) on UTKFace dataset. R18/50: ResNet-18/50; WR50: WideResNet-50x2. 
The architecture choice stands out as the most influential factor in security multiplicity. The overall multiplicity (∆ all max ) is five times larger than the accuracy multiplicity, dependent on the choice of the adversarial distance budget δ.", "figure_data": "Learning RateBatch SizeAugmentationOptimizerArchitecture0.10.050.01∆max128256640∆maxRand Trivial∆maxSGD Adam∆maxR18R50 WR50∆maxChanging Random Seeds84.79 85.00 83.27 84.69 84.48 82.68 85.13 84.75 82.85 84.18 84.81 82.321.73 2.00 2.28 2.4984.79 83.72 83.72 84.69 85.66 83.95 85.13 83.36 83.51 84.18 82.98 83.401.08 1.71 1.77 1.2084.79 83.95 84.67 83.91 85.13 84.75 84.18 82.940.84 0.78 0.38 1.2584.79 83.63 84.69 83.49 85.13 83.70 84.18 83.511.16 1.20 1.43 0.6884.79 84.77 86.94 84.69 83.34 84.71 85.13 83.38 87.58 84.18 82.05 84.482.17 1.37 4.20 2.4384.14 84.24 82.371.8884.14 83.42 83.720.7284.14 83.590.5584.14 83.890.2584.14 84.05 85.401.35∆max0.990.760.950.992.680.550.991.810.990.400.992.723.10Default Config: Learning Rate 0.1; Batch Size 128;Metric: PGD Accuracy (δ = 0.005)8288 05Augmentation Rand; Optimizer SGD; Architecture R18Dataset: UTKFace ∆ all max : 5.53Figure 7.", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Multiplicity sheet for Accuracy on CIFAR10-Skewed dataset.", "figure_data": "Learning RateBatch SizeAugmentationOptimizer0.10.050.01∆max128256640∆maxRand Trivial∆maxSGD Adam∆maxChanging Random Seeds92.04 91.87 91.65 91.93 91.87 91.43 91.90 92.21 91.78 91.65 91.89 91.380.39 0.50 0.43 0.5192.04 91.78 91.29 91.93 91.94 91.26 91.90 91.68 91.00 91.65 91.53 91.300.75 0.68 0.90 0.3592.04 90.80 91.93 91.08 91.90 90.58 91.65 90.801.24 0.85 1.32 0.8592.04 91.94 91.93 91.49 91.90 91.63 91.65 91.850.10 0.44 0.27 0.2091.98 91.81 91.680.3091.98 92.11 91.130.9891.98 90.961.0291.98 92.010.03∆max0.390.400.400.390.580.300.390.500.390.52Default Config: Learning Rate 0.1; Batch Size 128;Metric: Accuracy90.592.5 02Augmentation Rand; Optimizer SGDDataset: CIFAR10-Skewed ∆ all max : 1.63Changing Random Seeds54.96 54.13 52.89 58.27 58.68 54.96 59.09 57.85 55.37 Learning Rate Figure 10. 62.81 59.50 57.03 5.79 2.07 3.72 3.72 ∆max 0.1 0.05 0.0154.96 55.79 54.96 58.27 61.98 54.55 59.09 55.37 57.44 62.81 58.68 61.57 Batch Size 128 256 6400.83 7.44 3.72 4.13 ∆max54.96 57.85 58.27 51.65 59.09 54.96 62.81 56.20 Augmentation 2.89 6.61 4.13 6.61 ∆max Rand Trivial54.96 53.72 58.27 60.74 59.09 59.50 62.81 57.44 Optimizer 1.24 2.48 0.41 5.37 ∆max SGD Adam60.33 57.03 57.853.3160.33 59.09 54.136.2060.33 58.272.0760.33 58.681.65∆max7.855.374.967.856.617.447.856.617.857.035163 08", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Default Config: Learning Rate 0.1; Batch Size 128; Augmentation Rand; Optimizer SGD Metric: OOD Accuracy (CIFAR10-GS) Dataset: CIFAR10-Skewed ∆ all max : 4.01 Figure 12. Multiplicity sheet for OOD Accuracy (CIFAR10-GS) on CIFAR10-Skewed dataset.Figure 13. 
Multiplicity sheet for Perturbation Accuracy (λ = 5) on CIFAR10-Skewed dataset.", "figure_data": "Learning RateBatch SizeAugmentationOptimizer0.10.050.01∆max128256640∆maxRand Trivial∆maxSGD Adam∆maxChanging Random Seeds78.79 78.32 78.00 77.63 76.53 75.76 78.98 78.01 77.53 78.30 77.37 77.260.79 1.87 1.45 1.0478.79 78.39 76.96 77.63 76.48 75.50 78.98 78.00 76.46 78.30 77.56 76.361.83 2.13 2.52 1.9478.79 75.78 77.63 76.71 78.98 74.97 78.30 76.063.01 0.92 4.01 2.2478.79 77.20 77.63 75.54 78.98 77.43 78.30 78.701.59 2.09 1.55 0.4078.17 77.28 76.911.2678.17 77.41 75.552.6278.17 76.971.2078.17 77.830.34∆max1.351.79 Learning Rate 2.241.351.91 Batch Size 1.461.35 Augmentation 2.001.35 Optimizer 3.160.10.050.01∆max128256640∆maxRand Trivial∆maxSGD Adam∆max7579 04Changing Random Seeds84.10 84.32 84.62 83.67 84.06 84.47 84.11 84.02 83.87 83.08 83.78 84.020.52 0.80 0.24 0.9484.10 84.34 84.51 83.67 84.40 84.65 84.11 84.44 84.59 83.08 84.57 84.390.41 0.98 0.48 1.4984.10 82.87 83.67 81.10 84.11 82.27 83.08 80.851.23 2.57 1.84 2.2384.10 83.06 83.67 81.87 84.11 84.44 83.08 82.971.04 1.80 0.33 0.1183.98 84.18 85.361.3883.98 84.56 85.331.3583.98 81.192.7983.98 83.460.52∆max1.03Learning Rate 0.54 1.491.03Batch Size 0.23 0.94Augmentation 1.03 2.02Optimizer 1.03 2.570.10.050.01∆max128256640∆maxRand Trivial∆maxSGD Adam∆max8086 04Changing Random Seeds76.57 73.13 68.47 76.47 74.36 68.83 76.06 72.01 68.20 76.29 73.70 68.288.10 7.64 7.86 8.0176.57 73.66 67.71 76.47 73.86 68.48 76.06 72.68 66.60 76.29 72.63 67.428.86 7.99 9.46 8.8776.57 68.55 76.47 68.74 76.06 68.06 76.29 67.898.02 7.73 8.00 8.4076.57 69.50 76.47 68.22 76.06 67.56 76.29 69.337.07 8.25 8.50 6.9676.88 72.94 68.418.4776.88 73.47 67.079.8176.88 68.328.5676.88 69.237.65∆max0.822.350.630.821.231.880.820.850.821.94Default Config: Learning Rate 0.1; Batch Size 128;Metric: Pert. Accuracy (λ = 5)6777 010Augmentation Rand; Optimizer SGDDataset: CIFAR10-Skewed ∆ all max : 10.28", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" } ]
Prakhar Ganesh; Mila - Quebec AI Institute
[ { "authors": "Mohsen Abbasi; A Sorelle; Carlos Friedler; Suresh Scheidegger; Venkatasubramanian", "journal": "SIAM", "ref_id": "b0", "title": "Fairness in representation: quantifying stereotyping as a representational harm", "year": "2019" }, { "authors": "Shamima Ahmed; M Muneer; Anis El Alshater; Helmi Ammari; Hammami", "journal": "Research in International Business and Finance", "ref_id": "b1", "title": "Artificial intelligence and machine learning in finance: A bibliometric review", "year": "2022" }, { "authors": "Solon Barocas; Moritz Hardt; Arvind Narayanan", "journal": "Nips tutorial", "ref_id": "b2", "title": "Fairness in machine learning", "year": "2017" }, { "authors": "Solon Barocas; Andrew D Selbst", "journal": "Calif. L. Rev", "ref_id": "b3", "title": "Big data's disparate impact", "year": "2016" }, { "authors": "Emily Black; Manish Raghavan; Solon Barocas", "journal": "", "ref_id": "b4", "title": "Model multiplicity: Opportunities, concerns, and solutions", "year": "2022" }, { "authors": "Nicholas Carlini; Florian Tramer; Eric Wallace; Matthew Jagielski; Ariel Herbert-Voss; Katherine Lee; Adam Roberts; Tom Brown; Dawn Song; Ulfar Erlingsson", "journal": "", "ref_id": "b5", "title": "Extracting training data from large language models", "year": "2021" }, { "authors": "Anirban Chakraborty; Manaar Alam; Vishal Dey; Anupam Chattopadhyay; Debdeep Mukhopadhyay", "journal": "CAAI Transactions on Intelligence Technology", "ref_id": "b6", "title": "A survey on adversarial attacks and defences", "year": "2021" }, { "authors": "Kate Crawford", "journal": "Harvard business review", "ref_id": "b7", "title": "The hidden biases in big data", "year": "2013" }, { "authors": "Barret Ekin D Cubuk; Jonathon Zoph; Quoc V Shlens; Le", "journal": "", "ref_id": "b8", "title": "Randaugment: Practical automated data augmentation with a reduced search space", "year": "2020" }, { "authors": "Katherine Alexander D'amour; Dan Heller; Ben Moldovan; Babak Adlam; Alex Alipanahi; Christina Beutel; Jonathan Chen; Jacob Deaton; Eisenstein; Matthew D Hoffman", "journal": "The Journal of Machine Learning Research", "ref_id": "b9", "title": "Underspecification presents challenges for credibility in modern machine learning", "year": "2022" }, { "authors": "Mostafa Dehghani; Josip Djolonga; Basil Mustafa; Piotr Padlewski; Jonathan Heek; Justin Gilmer; Andreas Peter Steiner; Mathilde Caron; Robert Geirhos; Ibrahim Alabdulmohsin", "journal": "PMLR", "ref_id": "b10", "title": "Scaling vision transformers to 22 billion parameters", "year": "2023" }, { "authors": "Cynthia Dwork", "journal": "Springer", "ref_id": "b11", "title": "Differential privacy", "year": "2006" }, { "authors": "Cynthia Dwork; Aaron Roth", "journal": "Foundations and Trends® in Theoretical Computer Science", "ref_id": "b12", "title": "The algorithmic foundations of differential privacy", "year": "2014" }, { "authors": "Andre Esteva; Alexandre Robicquet; Bharath Ramsundar; Volodymyr Kuleshov; Mark Depristo; Katherine Chou; Claire Cui; Greg Corrado; Sebastian Thrun; Jeff Dean", "journal": "Nature medicine", "ref_id": "b13", "title": "A guide to deep learning in healthcare", "year": "2019" }, { "authors": "Prakhar Ganesh; Hongyan Chang; Martin Strobel; Reza Shokri", "journal": "", "ref_id": "b14", "title": "On the impact of machine learning randomness on group fairness", "year": "2023" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b15", "title": "Deep residual learning for image recognition", "year": 
"2016" }, { "authors": "Stanislaw Jastrzebski; Zachary Kenton; Devansh Arpit; Nicolas Ballas; Asja Fischer; Yoshua Bengio; Amos Storkey", "journal": "", "ref_id": "b16", "title": "Three factors influencing minima in sgd", "year": "2017" }, { "authors": "Kimmo Karkkainen; Jungseock Joo", "journal": "", "ref_id": "b17", "title": "Fairface: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation", "year": "2021" }, { "authors": "Michael Kearns; Aaron Roth", "journal": "Oxford University Press", "ref_id": "b18", "title": "The ethical algorithm: The science of socially aware algorithm design", "year": "2019" }, { "authors": "Alex Krizhevsky", "journal": "", "ref_id": "b19", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Tianyang Lin; Yuxin Wang; Xiangyang Liu; Xipeng Qiu", "journal": "AI Open", "ref_id": "b20", "title": "A survey of transformers", "year": "2022" }, { "authors": "Ziwei Liu; Ping Luo; Xiaogang Wang; Xiaoou Tang", "journal": "", "ref_id": "b21", "title": "Deep learning face attributes in the wild", "year": "2015" }, { "authors": "Aleksander Madry; Aleksandar Makelov; Ludwig Schmidt; Dimitris Tsipras; Adrian Vladu", "journal": "", "ref_id": "b22", "title": "Towards deep learning models resistant to adversarial attacks", "year": "2017" }, { "authors": "Dominic Masters; Carlo Luschi", "journal": "", "ref_id": "b23", "title": "Revisiting small batch training for deep neural networks", "year": "2018" }, { "authors": "Ninareh Mehrabi; Fred Morstatter; Nripsuta Saxena; Kristina Lerman; Aram Galstyan", "journal": "ACM computing surveys (CSUR)", "ref_id": "b24", "title": "A survey on bias and fairness in machine learning", "year": "2021" }, { "authors": "G Samuel; Frank Müller; Hutter", "journal": "", "ref_id": "b25", "title": "Trivialaugment: Tuningfree yet state-of-the-art data augmentation", "year": "2021" }, { "authors": "Michele Valerio Perrone; Muhammad Donini; Robin Bilal Zafar; Krishnaram Schmucker; Cédric Kenthapadi; Archambeau", "journal": "", "ref_id": "b26", "title": "Fair bayesian optimization", "year": "2021" }, { "authors": "Natalia Ponomareva; Hussein Hazimeh; Alex Kurakin; Zheng Xu; Carson Denison; Brendan Mcmahan; Sergei Vassilvitskii; Steve Chien; Abhradeep Guha; Thakurta ", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b27", "title": "How to dp-fy ml: A practical guide to machine learning with differential privacy", "year": "2023" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Goh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "PMLR", "ref_id": "b28", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b29", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Le Teven; Angela Scao; Christopher Fan; Ellie Akiki; Suzana Pavlick; Daniel Ilić; Roman Hesslow; Alexandra Castagné; Sasha Luccioni; Matthias Franc ¸ois Yvon; Gallé", "journal": "", "ref_id": "b30", "title": "Bloom: A 176b-parameter open-access multilingual language model", "year": "2022" }, { "authors": "Koosha Sharifani; Mahyar Amini", "journal": "World Information Technology and Engineering Journal", "ref_id": "b31", "title": "Machine learning and deep learning: A review of methods and applications", "year": "2023" }, { "authors": "Reza Shokri; Marco Stronati; Congzheng Song; 
Vitaly Shmatikov", "journal": "IEEE", "ref_id": "b32", "title": "Membership inference attacks against machine learning models", "year": "2017" }, { "authors": "Ioana Baldini Soares; Dennis Wei; Karthikeyan Natesan Ramamurthy; Moninder Singh; Mikhail Yurochkin", "journal": "", "ref_id": "b33", "title": "Your fairness may vary: pretrained language model fairness in toxic text classification", "year": "2022" }, { "authors": "Gowthami Somepalli; Liam Fowl; Arpit Bansal; Ping Yeh-Chiang; Yehuda Dar; Richard Baraniuk; Micah Goldblum; Tom Goldstein", "journal": "", "ref_id": "b34", "title": "Can neural nets learn the same model twice? investigating reproducibility and double descent from the decision boundary perspective", "year": "2022" }, { "authors": "Adarsh Subbaswamy; Roy Adams; Suchi Saria", "journal": "PMLR", "ref_id": "b35", "title": "Evaluating model robustness and stability to dataset shift", "year": "2021" }, { "authors": "Christian Szegedy; Wojciech Zaremba; Ilya Sutskever; Joan Bruna; Dumitru Erhan; Ian Goodfellow; Rob Fergus", "journal": "", "ref_id": "b36", "title": "Intriguing properties of neural networks", "year": "2014" }, { "authors": "R Kush; Varshney", "journal": "XRDS: Crossroads, The ACM Magazine for Students", "ref_id": "b37", "title": "Trustworthy machine learning and artificial intelligence", "year": "2019" }, { "authors": "Athanasios Voulodimos; Nikolaos Doulamis; Anastasios Doulamis; Eftychios Protopapadakis", "journal": "Computational intelligence and neuroscience", "ref_id": "b38", "title": "Deep learning for computer vision: A brief review", "year": "2018" }, { "authors": "Zeyu Wang; Klint Qinami; Christos Ioannis; Kyle Karakozis; Prem Genova; Kenji Nair; Olga Hata; Russakovsky", "journal": "", "ref_id": "b39", "title": "Towards fairness in visual recognition: Effective strategies for bias mitigation", "year": "2020" }, { "authors": "Boxi Wu; Jinghui Chen; Deng Cai; Xiaofei He; Quanquan Gu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b40", "title": "Do wider neural networks really help adversarial robustness?", "year": "2021" }, { "authors": "M Yassine; Esghir; Ibrihich", "journal": "Procedia Computer Science", "ref_id": "b41", "title": "Using artificial intelligence tools in the judicial domain and the evaluation of their impact on the prediction of judgments", "year": "2023" }, { "authors": "Sergey Zagoruyko; Nikos Komodakis", "journal": "British Machine Vision Association", "ref_id": "b42", "title": "Wide residual networks", "year": "2016" }, { "authors": "Jieyu Zhao; Tianlu Wang; Mark Yatskar; Vicente Ordonez; Kai-Wei Chang", "journal": "", "ref_id": "b43", "title": "Men also like shopping: Reducing gender bias amplification using corpus-level constraints", "year": "2017" } ]
[]
10.1093/qje/qjab023
2023-11-24
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b4", "b1", "b9" ], "table_ref": [], "text": "Policymakers rely on information provided by external stakeholders to help design new regulations. For U.S. federal regulators, this process is formalized by the Administrative Procedure Act, which requires that whenever an agency is going to make a policy change (known as a \"rule\"), they must first publish a proposed rule and accept public comment. Then, in the final rule, the agency must respond to the comments they received. The number of comments received by regulators has been growing over time, and the federal government now regularly receives more than a million comments per year.
Existing research suggests that public comments can have substantial impacts on public policy (Yackee, 2019). However, measuring the influence of individual organizations or tracking patterns of influence over time has been limited by the challenging nature of the data. Both comments and regulator responses are produced at a gigantic scale and take the form of complex natural language text. Prior attempts at large-scale analysis have borrowed insights from the research field of NLP by measuring the lexical overlap between comments and rule text, with researchers assuming that a high degree of overlap is suggestive of influence (Bertrand et al., 2021; Dwidar, 2022; Carpenter et al., 2022). However, this approach provides at best a noisy measure of influence, which is difficult to verify. Therefore, we pursue a more precise and efficient measure based on analyzing the regulator's responses to comments and then matching comments to specific responses. Given that some responses are positive, with the agency accepting the commenter's suggestions, while others are negative, with the agency rejecting the comment, it is very important to link the right comments to the right responses.
In this paper, we propose a simple yet effective iterative contrastive learning paradigm to train a neural comment-response matcher in an unsupervised manner. Specifically, we first construct a pseudo training dataset comprising hard positive and negative samples generated by the initial setup of our proposed comment-response matcher (SBERT (Reimers and Gurevych, 2019) as the backbone). This matcher is then optimized on the obtained pseudo training data and subsequently utilized to generate the hard positive and negative examples for the next iteration. Through empirical evaluation on a human-annotated test set, our proposed comment-response matcher not only surpasses selected unsupervised text-matching benchmarks utilized in previous literature but also achieves comparable performance with the state-of-the-art gigantic language model, GPT-4 (OpenAI, 2023), while remaining more cost-effective to deploy for full-scale comment-response matching. " }, { "figure_ref": [ "fig_0" ], "heading": "Comment-Response Matcher", "publication_ref": [], "table_ref": [], "text": "In this paper, we aim to design a text matching model (Section 2.1) that can effectively and efficiently assess the semantic relevance between public comment text and the responses produced by regulators.
In essence, given a comment chunk from the public $c = \{c_1, \cdots, c_m\}$ and a regulator's response $r = \{r_1, \cdots, r_n\}$, where each $c_k$ is a token in the comment and each $r_k$ is a token in the response, our goal is to learn a function $f : (c, r) \rightarrow s$ that predicts the score $s$ indicating the likelihood that comment $c$ and the regulator's response $r$ pertain to the same topic, and that the concern in $c$ is addressed in $r$.
As illustrated in Figure 1, we employ an iterative contrastive learning paradigm with the training procedure (Section 2.2) consisting of two steps performed alternately, namely hard pos./neg. mining and model updating." }, { "figure_ref": [], "heading": "Model Architecture", "publication_ref": [ "b9" ], "table_ref": [], "text": "Our proposed comment-response matcher functions as a binary classifier essentially comprising two components: a text encoder with SBERT (Reimers and Gurevych, 2019) as its underlying structure, followed by a scoring layer yielding the likelihood of a pair of comment and response being a match. More formally, given a randomly sampled pair of comment chunk $c_i$ and response $r_j$, we first separately acquire the embeddings for these two textual units:
$v_{c_i} = \text{S-BERT}(c_i), \quad (1)$
$v_{r_j} = \text{S-BERT}(r_j) \quad (2)$
Then the probability that $r_j$ responds to $c_i$ (a match) is computed with the negative exponential of the cosine distance between $v_{c_i}$ and $v_{r_j}$:
$p(\text{match} \mid v_{c_i}, v_{r_j}) = \exp\left(-\alpha \cdot (1 - v_{c_i} \cdot v_{r_j})\right) \quad (3)$
where α serves as a hyper-parameter that controls the decay rate of the matching probability. A greater value of α results in a more pronounced decrease in matching probability as the cosine distance increases. Throughout the training process, we optimize the model with cross-entropy loss." }, { "figure_ref": [ "fig_0" ], "heading": "Training Scheme", "publication_ref": [ "b5", "b12", "b16" ], "table_ref": [], "text": "Generally, we use a contrastive learning paradigm (Hadsell et al., 2006) to train our proposed comment-response matcher. More concretely, we optimize the text encoder in the matcher on selected hard positive and negative samples to effectively capture signals indicating the semantic relevance between public comments and responses from the regulator. This process is therefore conducive to accurately predicting whether a comment is discussed in a given response. The training scheme for the matching model spans several iterations, with each iteration consisting of two steps:
-1: Hard Pos./Neg. Mining. As illustrated in Figure 1, our preliminary regulatory data for rulemaking is structured in the form of rule observations (§3.1). Each of these rule observations consists of a set of comment chunks and a set of responses associated with a particular rule. As this raw dataset does not have any explicit ground-truth labels about matching between responses and comments within the rule, we make model training rely entirely on the labels of constructed pseudo positive and negative comment-response sample pairs. To do so, we first identify a set of \"positive pairs\" from the raw data. More specifically, for each response, we find its most similar comment chunk within the same rule observation. This similarity is calculated based on the embeddings of the model's text encoder optimized from the prior iteration. In this way we obtain 11,828 positive comment-response pairs.
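A minimal sketch of the scoring layer in Section 2.1 using the sentence-transformers library is shown below; the model name and the α value mirror the setup described in this paper, but the snippet is an illustration rather than the released implementation.

```python
import torch
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("multi-qa-mpnet-base-dot-v1")
ALPHA = 50.0  # decay rate of the matching probability (Eq. 3)

def match_probability(comment_chunks, responses):
    """Return a [num_comments, num_responses] matrix of match probabilities
    p = exp(-alpha * (1 - cos(v_c, v_r)))."""
    v_c = encoder.encode(comment_chunks, convert_to_tensor=True)
    v_r = encoder.encode(responses, convert_to_tensor=True)
    cos = util.cos_sim(v_c, v_r)  # pairwise cosine similarity
    return torch.exp(-ALPHA * (1.0 - cos))
```

Within a rule observation, taking the argmax over comment chunks for each response with these scores then yields the pseudo positive pairs used in the next training iteration.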
In order to improve the robustness and efficiency of model training, within one training step, we first draw a batch of M comment/response strings and then extract hard positive and negative samples associated with the strings in the batch. Subsequently, we update the encoder-based matching model on these hard positive/negative samples utilizing in-batch contrastive learning (Wu et al., 2020; Zhou et al., 2022). In practice, we initially apply the matching model, derived from the last training iteration, to all comment/response strings, yielding a total of 11,828 × 2 = 23,656 embeddings. We then pair each of the M strings in the sampled batch with all embeddings, compute the loss, and generate a loss matrix $l \in \mathbb{R}^{M \times 23656}$. Subsequently, we perform argmax on each row of $l$ to identify the response-comment pair corresponding to the maximum loss, ultimately producing M hard positive/negative samples. Each hard positive sample refers to a possibly matched pair to which the model struggles to allocate a high matching probability, whereas each hard negative sample refers to a possibly mismatched pair to which the model tends to assign a high matching probability.
-2: Model Updating. Once the hard positives/negatives for a training step are obtained, in this phase, we update the weights of the comment-response matching model by minimizing the cross-entropy loss as described in Section 2.1. This allows us to pull the matched comments and responses closer and push the unmatched ones far apart. The model updated in the current iteration will be fixed and serve as the text encoder to mine hard positive and negative samples again for the next training iteration." }, { "figure_ref": [], "heading": "Experiments and Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Experimental Setup", "publication_ref": [ "b10", "b6", "b9", "b11", "b9", "b6", "b7" ], "table_ref": [], "text": "Datasets. As mentioned in §2.2, our preliminary regulatory data for rule-making is structured in the form of rule observations, where each rule observation has the hierarchy depicted as follows:
• A rule observation (about one rule document)
 - A set of comment documents associated with the rule (Comment A, B, . . .)
  * A set of comment chunks in Comment A (Comment A-1, A-2, . . .)
  * A set of comment chunks in Comment B (Comment B-1, B-2, . . .)
  * . . .
 - A set of responses associated with the rule
For test data construction, we uniformly (see Appendix B) sample 160 pairs of comment chunk and response from all possible combinations in the dataset, and recruit seven students from the law program of our institution to annotate this test set. Annotators were asked to score the relevance of each comment chunk to the accompanying response using a 5-point Likert scale (see Appendix A for detailed annotation instructions). Each sample was assigned to multiple annotators, thus we received 3-5 independent evaluations for each testing pair.
We include more details about our dataset construction pipeline in Appendix B.
Baselines. We compare our proposal with five baselines for text matching (see Figure 2). They are: (1) Normalized BM25 (Robertson and Zaragoza, 2009), a widely used term-weighting-based ranking model usually applied for information retrieval. We calculate BM25 scores for the corresponding responses and comments tied to the same rule.
These scores are then normalized on a per-rule basis; (2) RoBERTa Score (Liu et al., 2019), which employs the vanilla RoBERTa base as the text encoder, transforming both comment chunks and responses into embeddings, which are then used to compute the matching score. As we employ the same scoring layer (in Section 2.1), this baseline is essentially equivalent to our proposed matching model at iteration 0; (3) SBERT Score (Reimers and Gurevych, 2019), which employs SBERT (multi-qa-mpnet-base-dot-v1) as the text encoder. The score computation for this baseline follows the same manner as the RoBERTa Score introduced above; (4) Llama-2-Chat (70B) (Touvron et al., 2023), currently the top-performing gigantic language model within the open-sourced Llama family. We essentially treat it as a human evaluator by providing it with the same guidelines given to human annotators and then tasking it to assign a score on the 5-point Likert scale for each pair of comment and response; (5) GPT-4 (OpenAI, 2023), currently the state-of-the-art gigantic language model, leading in both open-sourced and closed-sourced domains. We prompt it to assign scores for comment-response pairs in the same manner as Llama-2-Chat (70B) introduced above.
Implementation Details. As in Section 2.1, we use SBERT (multi-qa-mpnet-base) (Reimers and Gurevych, 2019) as the backbone text encoder to demonstrate our proposed comment-response matcher, given its superior performance. However, to validate the model-agnostic nature of our proposed iterative contrastive learning framework, we also test with the vanilla RoBERTa base (Liu et al., 2019) as an alternate backbone text encoder, aiming to discern if the improvements brought by the iterative contrastive learning framework extend beyond just one particular text encoder. For both, we take the mean of the contextualized representations of the last hidden layer as text embeddings. For the scoring layer, we set the hyper-parameter α = 50. For training, we use AdamW (Loshchilov and Hutter, 2017) with lr = 1e-5 and batch size = 8. We conduct 5 iterations of model training, with each iteration detailed in §2.2." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "To investigate how well the baselines and our proposed comment-response matching model align with human judgments, in Figure 2 we use scatter plots to visualize their correlations with human scores, and report the Pearson's r correlation score. We can observe that even though GPT-4's predictions show the highest correlation with the 5-point Likert human annotations, our proposed matching model also demonstrates strong performance, ranking second and outperforming all remaining baselines by a considerable margin.
More concretely, BM25 tends to underestimate the relevance between comments and responses, assigning low scores to many pairs that humans consider topically highly relevant. Since it shares the same rating scale with humans, the predictions of GPT-4 align closely with human judgements, whereas the correlation of Llama-2-Chat (70B) with humans is far less desirable. Interestingly, GPT-4 demonstrates a strong tendency to consistently assign score '2' to samples that humans rated within the range of [1,3], which may indicate that GPT-4 is cautious about declaring a comment-response pair entirely irrelevant.
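The correlations reported in Figure 2 can be computed directly once per-pair scores are collected; the snippet below is only a sketch, assuming model_scores and human_scores are aligned arrays of matcher outputs and mean human Likert ratings for the 160 test pairs.

```python
import numpy as np
from scipy.stats import pearsonr

def correlation_with_humans(model_scores, human_scores):
    """Pearson's r between a matcher's scores and (mean) human ratings."""
    model_scores = np.asarray(model_scores, dtype=float)
    human_scores = np.asarray(human_scores, dtype=float)
    r, p_value = pearsonr(model_scores, human_scores)
    return r, p_value
```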
The vanilla RoBERTa without any tuning on our dataset extremely overestimates the relevance between comments and responses by assigning high similarity scores indiscriminately to both matched and unmatched sample pairs. On the other hand, SBERT, being a superior text matching model pre-trained on semantic search as a close analogue to our task, aligns more closely with human judgment, yet the similarity scores it produces for both matched and unmatched samples still fall within a relatively narrow range. When our proposed contrastive learning framework is applied to RoBERTa and SBERT, the correlation of these two base text encoders with human judgments increases from 0.22 and 0.70 to 0.71 and 0.79 respectively, bringing the improved SBERT's performance remarkably close to that of GPT-4 (0.82). It demonstrates the model-agnostic behavior of our iterative contrastive learning framework when effectively interacting with different base encoders. Hence, we believe that with a more advanced base encoder, we could potentially match or even surpass the performance of GPT-4.\nTo assess the effectiveness of the iterative contrastive learning scheme, Figure 3 showcases the performance of the RoBERTa-and SBERT-based comment-response matchers on the test set across different training iterations applying iterative contrastive learning. We can see the model's performance is improved iteratively across iterations, with the most notable enhancement occurring after the first iteration.\nEven though GPT-4 achieves slightly superior correlation with humans in our experiments, from the perspective of real-world application, the cost of deploying the model is also critical. Compared with our SBERT-based matcher, prompting GPT-4 using our designed instruction template incurs an additional cost of $4.63 on the test set based on its current pricing rate. Given the context that every year U.S. Federal Regulators receive an overwhelming volume of comment letters (usually over one million) from businesses, interest groups, and members of the public, our proposed SBERT-based matcher would be a more feasible option for such practical scenario due to its efficiency and costeffectiveness." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b8" ], "table_ref": [], "text": "In this paper, we propose a simple yet effective contrastive learning approach following the iterative data construction -model updating training scheme, aiming for automatically matching the responses in policy regulations and relevant comments they respond to. Our empirical study on a real-world test set demonstrates that our proposal outperforms a set of selected benchmarks for text matching in terms of correlation with human annotations, achieves comparable performance but is more costeffective than the most advanced gigantic large language model (i.e., GPT-4) for comments and regulator responses in larger scale. Our proposed approach can be easily adapted to other text matching applications dealing with text in rather different complexity, such as name matching (Peng et al., 2015), or extended to other more-resourced scenarios like semi-supervised settings, which we will leave as our future work." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The main limitation of our method is that, while it provides a substantial improvement over BM25 on our task, it is not as accurate as current large language models. 
It seems reasonable to guess that the cost of employing GPT4 and its successors will decline over time, and at some point, the computational efficiency of our approach may not be so important. Another limitation is that our approach depends on particular aspects of our task that may not be applicable in other domains. Specifically, our unsupervised training method relies on the existence of many groups of responses and comments in the data with the property that positive pairs are only possible within a group. This lets us make good guesses about a subset of the true positive pairs with only a weak model, and generate a large number of true negative pairs by matching strings across groups. However, it is interesting to consider what other tasks and data might have a similar structure." }, { "figure_ref": [], "heading": "A Prompt Templates for GPT-4", "publication_ref": [], "table_ref": [], "text": "See next page." }, { "figure_ref": [], "heading": "B Details for Regulatory Data Construction", "publication_ref": [ "b14", "b13" ], "table_ref": [], "text": "Our data comes from two main sources: Rules published in the Federal Register from 2000-2022, downloaded in bulk XML format from govinfo.gov, and all comments submitted to regulations.gov from 2000-2022, downloaded via the API. We extracted regulator responses to comments from the rules using a supervised classifier under development for a parallel research project. We extract comment text with the tika parser 5 , employing OCR when necessary to extract text from imageonly PDFs. The comment text is split into paragraphs, and body paragraphs are identified using a simple rule-based classifier. Finally, we group very short paragraphs (often improperly split by page breaks or other formatting issues) with adjacent paragraphs to form larger comment \"chunks\" 500-100 characters long. Paragraphs longer than 1000 characters are included as single chunks. Besides this rule-based chunk generation strategy, we believe topic segmentation techniques (Xing et al., 2020;Xing and Carenini, 2021) can potentially lead to comment chunks in better quality if the training data for segmentation in reasonable size is available.\nLinking comments to the appropriate rules requires additional data. We collect rule metadata from federalregister.gov and reginfo.gov and link regulations.gov documents to Proposed Rules, and Proposed Rules to Rules using Federal Register document numbers, agency docket identifiers, and Regulation Identification Numbers (RIN). This gives us a database of rules where, for each rule, we can identify the set of comments that the agency would likely be responding to.\nThe structure of the data is important for our training strategy. Each rule may contain multiple responses, and be linked to multiple comments with several paragraphs each. We can be reasonably confident that each response in a rule is responding to a small number of paragraphs from the linked comments. It is also unlikely that that a given response is related to comment paragraphs from other rules.\nWhen selecting the training data in our iterative 5 https://tika.apache.org/0.7/parser.html algorithm, we restrict our sample to rules with 1-10 comments, and fewer than 1000 unique linked comment paragraphs. We also select at most 10 responses from each rule. 
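The paragraph-grouping step described above can be sketched as a simple greedy merge; this is an illustration of the rule-based strategy rather than the exact production code, with the 500-1000 character window inferred from the description above.

```python
def make_chunks(paragraphs, min_len=500, max_len=1000):
    """Greedily merge consecutive short paragraphs into chunks of roughly
    min_len-max_len characters; paragraphs longer than max_len are kept as
    single chunks (mirroring the rule-based strategy described above)."""
    chunks, buf = [], ""
    for para in paragraphs:
        para = para.strip()
        if not para:
            continue
        if len(para) > max_len:
            if buf:
                chunks.append(buf)
                buf = ""
            chunks.append(para)
            continue
        buf = f"{buf}\n{para}".strip() if buf else para
        if len(buf) >= min_len:
            chunks.append(buf)
            buf = ""
    if buf:
        chunks.append(buf)
    return chunks
```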
This gives us a base sample of 6,727 rules, 17,452 responses, 10,456 linked comments, and 193,143 comment chunks.\nTo evaluate the quality of the similarity scores learned on the full training set, we used an early iteration of the model to retrieve all pairs with a score greater than 0.1 on a subset of the data. Then we grouped the pairs into bins of width 0.1 by score and kept 10 observations per bin per response. This sampling approach gives us a relatively uniform distribution of match qualities for our test sample. Finally, we sampled 4 random batches of 40 pairs from this binned sample and distributed them to human annotators. The annotators were not shown the scores used to construct the sample.\nOur annotators consisted of seven students from the law program of our institution. All of the students had been working with us for several months and were familiar with our data. The annotators were asked to score the relevance of each comment chunk to the accompanying response using a 5-point Likert scale (see Appendix A for the annotation instructions). Each sample was assigned to multiple annotators, and we received 3-5 independent evaluations for each pair.
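A minimal pandas sketch of this binned sampling procedure, with hypothetical column names, might look as follows:

```python
import pandas as pd

def build_annotation_sample(pairs: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """pairs has columns 'response_id', 'comment_chunk_id', 'score' (score in [0, 1]).

    Keep pairs with score > 0.1, bin them by score in bins of width 0.1,
    and keep at most 10 pairs per (bin, response), as described above.
    """
    df = pairs[pairs["score"] > 0.1].copy()
    df["bin"] = (df["score"] * 10).astype(int).clip(upper=9)  # 0.1-wide score bins
    binned = (
        df.groupby(["bin", "response_id"], group_keys=False)
          .apply(lambda g: g.sample(min(len(g), 10), random_state=seed))
    )
    # Draw 4 random batches of 40 pairs each for the annotators.
    batches = [binned.sample(min(40, len(binned)), random_state=seed + i) for i in range(4)]
    return pd.concat(batches, ignore_index=True)
```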
Table 1: The prompt templates we applied for the GPT-4 comment-response matching prediction. Text in blue is the content of the annotation scheme that we also showed to the annotators when labeling our test data.
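As an illustration of how the template above can be used programmatically, the sketch below formats a comment-response pair into the prompt and parses the requested "number - explanation" reply; the abbreviated instruction string and helper names are ours, and the actual model call is left out.

```python
import re

INSTRUCTION = (
    "I will give you a pair of comment-response texts in each turn, "
    "you should give a number between 1 and 5. ..."  # full scheme as in Table 1
)

def format_turn(comment_text: str, response_text: str) -> str:
    """Build one user turn following the template above."""
    return (
        f"{INSTRUCTION}\n### Comment Text: {comment_text}\n"
        f"Response Text: {response_text}"
    )

def parse_reply(reply: str) -> int:
    """Parse a reply of the form '4 - the agency clearly addresses ...'."""
    match = re.match(r"\s*([1-5])\s*-", reply)
    if match is None:
        raise ValueError(f"Unexpected reply format: {reply!r}")
    return int(match.group(1))
```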
U.S. Federal Regulators receive over one million comment letters each year from businesses, interest groups, and members of the public, all advocating for changes to proposed regulations. These comments are believed to have wide-ranging impacts on public policy. However, measuring the impact of specific comments is challenging because regulators are required to respond to comments but they do not have to specify which comments they are addressing. In this paper, we propose a simple yet effective solution 1 to this problem by using an iterative contrastive method to train a neural model that matches text from public comments to the responses written by regulators. We demonstrate that our proposal substantially outperforms a set of selected text-matching baselines on a human-annotated test set. Furthermore, it delivers performance comparable to that of the most advanced gigantic language model (i.e., GPT-4), and is more cost-effective when matching comments and regulator responses at a larger scale.
Tracing Influence at Scale: A Contrastive Learning Approach to Linking Public Comments and Regulator Responses
[ { "figure_caption": "Figure 1 :1Figure 1: An overview of the iterative training scheme for our proposed comment-response matcher.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Scatter plots illustrating the correlation between human judgement and seven comment-response matching methods (including Ours (RoBERTa) and Ours (SBERT), which are RoBERTa and SBERT applied our iterative contrastive learning framework) on the 160 test samples. The Pearson's correlations are shown at bottom-right. The best performance achieved by our proposal is highlighted in the bolded box.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The performance (Pearson's correlation) of the RoBERTa-and SBERT-based comment-response matcher on our test set after each training iteration. \"Iteration 0\" represents the matcher initialized with RoBERTa base and SBERT(multi-qa-mpnet-base), thus with the correlation score equals to RoBERTa and SBERT score in Figure 2.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" } ]
Linzi Xing; Brad Hackinen; Giuseppe Carenini
[ { "authors": "Marianne Bertrand; Matilde Bombardini; Raymond Fisman; Brad Hackinen; Francesco Trebbi", "journal": "The Quarterly Journal of Economics", "ref_id": "b0", "title": "Hall of Mirrors: Corporate Philanthropy and Strategic Advocacy*", "year": "2021" }, { "authors": "Angelo Daniel P Carpenter; Devin Dagonel; Christopher T Judge-Lord; Brian Kenny; Steven Libgober; Jacob Rashin; Susan Webb Waggoner; Yackee", "journal": "", "ref_id": "b1", "title": "Inequality in administrative democracy: Methods and evidence from financial rulemaking", "year": "2022" }, { "authors": "Ilias Chalkidis; Manos Fergadiotis; Prodromos Malakasiotis; Nikolaos Aletras; Ion Androutsopoulos", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "LEGAL-BERT: The muppets straight out of law school", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "A Maraam; Dwidar", "journal": "American Political Science Review", "ref_id": "b4", "title": "Coalitional lobbying and intersectional representation in american rulemaking", "year": "2022" }, { "authors": "R Hadsell; S Chopra; Y Lecun", "journal": "", "ref_id": "b5", "title": "Dimensionality reduction by learning an invariant mapping", "year": "2006" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b6", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b7", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Nanyun Peng; Mo Yu; Mark Dredze", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "An empirical study of Chinese name matching and applications", "year": "2015" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", "year": "2019" }, { "authors": "Stephen Robertson; Hugo Zaragoza", "journal": "Found. Trends Inf. 
Retr", "ref_id": "b10", "title": "The probabilistic relevance framework: Bm25 and beyond", "year": "2009" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale; Dan Bikel; Lukas Blecher; Cristian Canton Ferrer; Moya Chen; Guillem Cucurull; David Esiobu; Jude Fernandes; Jeremy Fu; Wenyin Fu; Brian Fuller; Cynthia Gao; Vedanuj Goswami; Naman Goyal; Anthony Hartshorn; Saghar Hosseini; Rui Hou; Hakan Inan; Marcin Kardas; Viktor Kerkez; Madian Khabsa; Isabel Kloumann; Artem Korenev; Punit Singh Koura; Marie-Anne Lachaux; Thibaut Lavril; Jenya Lee; Diana Liskovich; Yinghai Lu; Yuning Mao; Xavier Martinet; Todor Mihaylov; Pushkar Mishra; Igor Molybog; Yixin Nie; Andrew Poulton; Jeremy Reizenstein; Rashi Rungta; Kalyan Saladi; Alan Schelten; Ruan Silva; Eric Michael Smith; Ranjan Subramanian; Ellen Xiaoqing; Binh Tan; Ross Tang; Adina Taylor; Jian Williams; Puxin Xiang Kuan; Zheng Xu; Iliyan Yan; Yuchen Zarov; Angela Zhang; Melanie Fan; Sharan Kambadur; Aurelien Narang; Robert Rodriguez; Sergey Stojnic; Thomas Edunov; Scialom", "journal": "", "ref_id": "b11", "title": "Llama 2: Open foundation and finetuned chat models", "year": "2023" }, { "authors": "Chien-Sheng Wu; C H Steven; Richard Hoi; Caiming Socher; Xiong", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "TOD-BERT: Pre-trained natural language understanding for task-oriented dialogue", "year": "2020" }, { "authors": "Linzi Xing; Giuseppe Carenini", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Improving unsupervised dialogue topic segmentation with utterance-pair coherence scoring", "year": "2021" }, { "authors": "Linzi Xing; Brad Hackinen; Giuseppe Carenini; Francesco Trebbi", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Improving context modeling in neural topic segmentation", "year": "2020" }, { "authors": "Susan Webb; Yackee ", "journal": "Annual Review of Political Science", "ref_id": "b15", "title": "The politics of rulemaking in the united states", "year": "2019" }, { "authors": "Zhihan Zhou; Dejiao Zhang; Wei Xiao; Nicholas Dingwall; Xiaofei Ma; Andrew Arnold; Bing Xiang", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Learning dialogue representations from consecutive utterances", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 134.26, 653.76, 155.61, 11.46 ], "formula_id": "formula_0", "formula_text": "v c i = S-BERT (c i ),(1)" }, { "formula_coordinates": [ 2, 134.72, 671.51, 155.14, 11.46 ], "formula_id": "formula_1", "formula_text": "v r j = S-BERT (r j )(2)" }, { "formula_coordinates": [ 2, 311.6, 98.94, 213.54, 11.46 ], "formula_id": "formula_2", "formula_text": "p(match|v c i , v r j ) = exp(-α * (1-v c i •v r j )) (3)" }, { "formula_coordinates": [ 3, 79.67, 595.81, 209.86, 85.3 ], "formula_id": "formula_3", "formula_text": "* A set of comment chunks in Comment A (Comment A-1, A-2, . . .) * A set of comment chunks in Comment B (Comment B-1, B-2, . . .) . . . -A" } ]
2023-11-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b13", "b8" ], "table_ref": [], "text": "Temporal difference (TD) learning is one of the most common techniques for reinforcement learning (RL). Compared to policy gradient methods, TD methods tend to be significantly more data-efficient. One of the primary reasons for this data-efficiency is the ability to perform updates off-policy, where policy updates are performed using a dataset that was collected using a different policy. Unfortunately, due to the differences in distribution between the off-policy and on-policy datasets, TD methods that employ function approximation may diverge or result in arbitrarily poor value approximation when applied in the off-policy setting (Sutton & Barto, 2020, p. 260). This issue is exacerbated in the fully offline RL setting, where the training dataset is fixed and the agent must act off-policy in order to achieve high performance. Figure 1 illustrates this distribution shift issue on the Frozen Lake toy problem. In this example, we perform Q-Learning with a linear function approximate on a fixed dataset collected with one policy, the \"data-collection\" policy, to evaluate another policy, the \"evaluation\" policy. We can see that vanilla Q-learning diverges as the dataset shifts further off-policy.\nNearly all methods for addressing off-policy distribution shift fall into two categories: importance sampling methods that reweight samples from the dataset to approximate the on-policy distribution, or regularization methods that minimize the distribution shift directly or through penalizing the value function in low-support areas. The former may lead to extremely high variance gradient updates, and the latter only works well when the data-collection policy is close to optimal, which is not the case in most real-world datasets.\nAn alternative approach has been suggested by Kolter (2011), who provide a contraction mapping condition that, when satisfied, guarantees convergence of TD-learning to a unique fixed point. They propose to project the sampling distribution onto this convex condition and thereby significantly reduce approximation error. Figure 1 shows that when this contraction mapping condition is (approx- Off-policy evaluation on a simple grid environment, \"Frozen Lake\". The goal of this task is to evaluate a policy ( ) from a suboptimal data policy ( ) that is ϵ-dithered for sufficient coverage (ϵ = 0.2). The right plot shows approximation error from a linear Q-function trained using Vanilla Q-Learning and POP-QL (our method) with a dataset interpolated between off-policy and on-policy. Unlike Vanilla Q-Learning, POP-QL avoids divergence by projecting the sampling distribution µ onto the convex constraint E (s,a)∼µ [F π (s, a)] ⪰ 0, which enforced that the contraction mapping condition holds. The shaded region ( ), indicates when the dataset approximately satisfies this condition (without reweighting); specifically, where λ min (E (s,a)∼µ [F π (s, a)]) > -0.005.\nimately) satisfied, Q-learning converges with low approximation error. However, this approach does not scale to modern RL tasks, because it invokes a batch semi-definite programming (SDP) solver to solve the projection for each batch update.\nContribution In this paper, we build on Kolter (2011)'s theoretical contribution and propose Projected Off-Policy Q-Learning (POP-QL), which makes two significant improvements over previous work. 
First, we consider a new sampling projection that allows for a more computationally efficient algorithm, since a closed-form solution exists for the inner optimization of the dual problem. This computational improvement allows us to extend the technique to high-dimensional deep RL problems. Secondly, we extend this projection to the MDP setting and propose a new Q-Learning algorithm that jointly projects the policy and sampling distribution. Our proposed algorithm can plug into any Q-learning algorithm (such as Soft Actor Critic (Haarnoja et al., 2018)), significantly reducing the approximation error of the Q-function and improving policy performance. We evaluate our method on a variety of offline RL tasks and compare its performance to other offline RL approaches. Although, in its current iteration, our method struggles to reach state-of-the-art performance on tasks with near-expert data-collection policies, POP-QL is able to outperform many other methods when the data-collection policies are far from optimal (such as the \"random\" D4RL tasks). Most importantly, our results illustrate the power of the contraction mapping condition and the potential of this new class of offline RL techniques." }, { "figure_ref": [], "heading": "Background and Related Work", "publication_ref": [ "b29", "b24", "b9", "b5", "b29", "b18", "b17", "b28", "b32", "b11", "b27", "b4", "b30", "b19", "b20", "b25", "b16", "b14", "b31", "b12", "b13" ], "table_ref": [], "text": "Off-policy TD learning introduces two distinct challenges that can result in divergence, which we refer to as support mismatch and projected-TD instability. Support mismatch, where the support of the data distribution differs from the on-policy distribution, can result in large Q-function errors in low-support regions of the state-action space. These Q-function errors can lead to policies that seek out regions of the state-action space for which the Q-function is overly-optimistic, thus leading to an increased support mismatch. This positive feedback loop can quickly diverge. In the online setting, this problem can be solved by frequently resampling data. However, in the fully offline setting, specific techniques are required to avoid such divergences, which we describe below.\nThe other challenge is instability inherent to off-policy TD learning. First described by Tsitsiklis & Van Roy (1996), the use of TD learning, function approximation, and off-policy sampling together, known as the deadly triad (Sutton & Barto, 2020, p. 264), can cause severe instability or divergence. This instability is caused by projecting TD-updates onto our linear basis1 , which can result in TDupdates that increase value error and, in some cases, diverge. See Appendix A for a three-state example of this divergence.\nThere are many methods that address the challenges of off-policy RL, most of which fall into two main categories. The first is importance sampling (IS). First proposed by Precup (2000), IS methods for RL approximate the on-policy distribution by reweighting samples with the ratio of the on-policy and data distribution densities. The challenge with this approach is the high variance in the updates of the re-weighting terms, which grows exponentially with the trajectory length. Many approaches have looked at methods for reducing this variance (Hallak & Mannor, 2017;Gelada & Bellemare, 2019;Nachum et al., 2019a;b;Liu et al., 2018;Lee et al., 2021). 
Emphatic-TD (Sutton et al., 2016;Zhang et al., 2020;Jiang et al., 2021) and Gradient-TD (Sutton et al., 2008) are other importance sampling approaches that are provably stable in the full-support linear case. One critical challenge with IS methods is that they do not address support mismatch and, thus, tend to perform poorly on larger scale problems.\nThe second category of methods involves regularizing the policy towards the data-policy. This policy regularization can be done explicitly by ensuring the learned policy remains \"close\" to the data collection policy or implicitly through conservative methods. Explicit policy regularization can be achieved by penalizing KL-divergence between the learned policy and data policy (Fujimoto et al., 2019;Wu et al., 2019) or by regularizing the policy parameters (Mahadevan et al., 2014). However, even in small-scale settings, policy-regularization methods can be shown to diverge (Manek & Kolter, 2022). Conservative methods, on the other hand, involve making a conservative estimate of the value function, thus creating policies that avoid low-support regions of the state-action space. In the online learning setting, one of the most common conservative methods is Trust Region Policy Optimization (TRPO) (Schulman et al., 2015). However, TRPO does not extend well to the fully offline setting since the value estimates tend to be overly conservative as the learned policy diverges from the data policy. One of the most successful algorithms for offline RL is Conservative Q-Learning (CQL, Kumar et al. (2020b)), which adds a cost to out-of-distribution actions. Other methods use conservative value estimates with ensembles (Kumar et al., 2019) or model-based approaches (Yu et al., 2020;Kidambi et al., 2020). One downside to policy regularized methods (explicit or implicit) is that regularization can reduce policy performance when the data policy is sub-optimal, which we demonstrate in our experiments.\nThere are a few notable approaches that do not fit into either of these categories. Kumar et al. (2020a) present DisCor, which reweights the sampling distribution such that the TD fixed point solution minimizes Q-approximation error. However, approximations are needed to make this approach tractable. Additionally, this algorithm does address the support mismatch problem.\nAnother approach is TD Distribution Optimization (TD-DO) (Kolter, 2011), which seeks to reweight the sampling distribution such that the TD updates satisfy the contraction mapping condition, thereby ensuring that TD updates converge. Unfortunately, the use of this approach has been limited because it does not scale to modern RL tasks. This is due to the need to run a batch SDP solver to solve the associated distributional optimization task for each batch update. Our method, Projected Off-Policy Q-Learning (POP-QL), builds off this approach. In this work, we propose a new projection that easily scales to larger domains and extend our method to policy optimization, creating a novel algorithm that addresses both the support mismatch and projected-TD instability of off-policy RL." 
}, { "figure_ref": [], "heading": "Preliminaries and Problem Setting", "publication_ref": [], "table_ref": [], "text": "In this work, we consider learning a policy that maximizes the cumulative discounted reward on a Markov Decision Process (MDP) defined as the tuple (S, A, p, r, γ), where S and A are the state and action spaces, p(•|s, a) and r(s, a) represent the transition dynamics and reward functions, and γ ∈ [0, 1) is the discount factor. We approximate the action-value function as Q(s, a) ≈ w ⊤ ϕ(s, a), where ϕ : S × A → {x ∈ R k : ∥x∥ 2 = 1} is a normalized basis function and w ∈ R k are the parameters of the final, linear layer.\nIn the off-policy setting, we assume the agent cannot directly interact with the environment and instead only has access to samples of the form (s, a, r(s, a), s ′ ), where s ′ ∼ p(•|s, a) and (s, a) ∼ µ for some arbitrary sampling distribution µ. Because we can assume a fixed policy for much of our derivations and theory, we can simplify the math significantly by focusing on the finite Markov Reward Process setting instead." }, { "figure_ref": [], "heading": "Simplified Setting -Finite Markov Reward Process (MRP)", "publication_ref": [], "table_ref": [], "text": "Consider the finite n-state Markov Reward Process (MRP) (S, p, r, γ), where S is the state space, p : S ×S → R + and r : S → R are the transition and reward functions, and γ ∈ (0, 1) is the discount factor.2 Because the state-space is finite, it can be indexed as S = {1, . . . , n}, which allows us to use matrix rather than operator notation. In matrix notation, we use matrices P and R, to represent the functions p and r, where each row corresponds to a state. The value function associated with the MRP is the expected γ-discounted future reward of being in each state\nV (s) := E ∞ t=0 γ ⊤ r(s t ) s 0 = s . The value function is consistent with Bellman's equation in matrix form, V = R + γP V.(1)\nWe approximate the value function as V (s) ≈ w ⊤ ϕ(s), where ϕ : S → {x ∈ R k : ∥x∥ 2 = 1} is a fixed normalized basis function and we estimate parameters w ∈ R k . In matrix notation, we write this as V ≈ Φw. In the off-policy setting, the sampling distribution µ differs from the stationary distribution ν. In this setting, the temporal difference (TD) solution is the fixed point of the projected Bellman equation:\nΦw ⋆ = Π µ (R + γP Φw ⋆ ),(2)\nwhere\nΠ µ = Φ(Φ ⊤ D µ Φ) -1 Φ ⊤ D µ\nis the projection onto the column space of Φ weighted by the data distribution µ through the matrix D µ = diag(µ). This projection may be arbitrarily far from the true solution so that the error may be correspondingly large. In practice, w ⋆ is often computed using TD-learning, a process that starts from some point w 0 ∈ R k and iteratively applies Bellman updates,\nw t+1 = w t -λE µ ϕ(s) ⊤ w t -r -ϕ (s ′ ) ⊤ w t ϕ(s) .(3)\nUnfortunately, in the off-policy setting, TD-learning is not guaranteed to converge." }, { "figure_ref": [], "heading": "Contraction Mapping Condition", "publication_ref": [ "b13" ], "table_ref": [], "text": "A γ-contraction mapping3 is any function, f : R n → R n , such that for some distribution µ and any\nx 1 , x 2 ∈ R n : ∥f (x 1 ) -f (x 2 )∥ µ ≤ γ∥x 1 -x 2 ∥ µ ,(4\n) where γ ∈ [0, 1) and ∥ • ∥ µ is the weighted 2-norm. A key property of contraction mappings is that iteratively applying this function to any starting point x 0 ∈ R n converges to a unique fixed point x * = f (x * ). 
This principle is used to prove convergence of on-policy TD-learning.\nUnder on-policy sampling, µ = ν, the projected Bellman operator,\nΠ µ B(x) = Π µ (R + γP x), is a contraction mapping. ∥Π µ B(Φw 1 ) -Π µ B(Φw 2 )∥ µ ≤ γ∥Φw 1 -Φw 2 ∥ µ ∀ w 1 , w 2 ∈ R k .\n(5)\nTsitsiklis & Van Roy (1996) use this property to both prove that on-policy TD Q-learning learning converges to a unique point and bound the approximation error of the resulting fixed point (Tsitsiklis & Van Roy, 1996, Lemma 6). However, in the off-policy setting with µ ̸ = ν this property does not always hold. In fact, this condition can be violated even in MRPs with very small state spaces, see Appendix A for an example. Thus, the TD updates are not guaranteed to converge and can diverge under some off-policy sampling distributions.\nTo get around this challenge, Kolter (2011) proposed a new approach. First, they transformed the contraction mapping condition into a linear matrix inequality (LMI) through algebraic manipulation:\nE s∼µ [F (s)] ⪰ 0, where F (s) = E s ′ ∼p(•|s) ϕ(s)ϕ(s) ⊤ ϕ(s)ϕ(s ′ ) ⊤ ϕ(s ′ )ϕ(s) ⊤ ϕ(s)ϕ(s) ⊤ . (6)\nWe provide a derivation of this LMI in Appendix B.1. Using this formulation, they present an algorithm to find a new sampling distribution that satisfies this contraction mapping condition and proves a bound on the approximation error of their approach. Unfortunately, this method scales poorly because it requires solving an SDP problem alongside each batch update. Thus, the method remains impractical for the deep RL tasks, and has seen virtually no practical usage in the years since.\n4 Projected Off-Policy Q-Learning (POP-QL)\nOur method, Projected Off-Policy Q-Learning (POP-QL), is also centered on the contraction mapping condition (Eq. ( 6)). However, unlike previous work, we propose a new method that significantly improves the computational cost of the projection, allowing POP-QL to scale to large-scale domains. Additionally, we introduce a new policy optimization algorithm that simultaneously projects the policy and sampling distribution in order to satisfy the contraction mapping condition. This policy optimization algorithm allows POP-QL to address both the support mismatch and projected-TD instability challenges of off-policy Q-learning.\nWe start by deriving the POP-QL reweighting procedure in the finite MRP setting under a fixed policy and later extend our method to the MDP setting together with policy regularization." }, { "figure_ref": [], "heading": "POP-QL on Markov Reward Processes", "publication_ref": [ "b13" ], "table_ref": [], "text": "In the MRP setting (or the fixed-policy setting), the goal of POP-QL is to compute a new sampling distribution that satisfies the contraction mapping condition in Eq. ( 6) and thus stabilizes off-policy training. However, if the target distribution differs significantly from the source distribution µ, this can result in large reweighting factors, which can decrease the stability of the training process. Thus, we are looking for the \"closest\" distribution that satisfies Eq. ( 6). Unlike Kolter (2011), we propose to use the I-projection instead of the M-projection, which allows us to find an analytical solution to the inner part of the Lagrangian dual and thereby significantly simplifies the problem. Additionally, we argue in Appendix C.1 that this choice is more suitable for the RL setting. 
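As an aside before setting up the projection, the condition in Eq. (6) is easy to check empirically from off-policy samples. The following numpy sketch (array names are ours, not part of the paper's implementation) estimates E µ [F (s)] from sampled transitions and returns its smallest eigenvalue, the quantity tracked in the shaded region of Figure 1.

```python
import numpy as np

def min_eig_of_F(phi, phi_next):
    """phi, phi_next: (N, k) arrays of normalized features at s and s' for
    N transitions drawn from the sampling distribution mu.

    Returns the smallest eigenvalue of the empirical estimate of E_mu[F(s)]
    from Eq. (6); a (near-)nonnegative value indicates the contraction
    mapping condition approximately holds for this dataset.
    """
    n = phi.shape[0]
    top_left = phi.T @ phi / n          # E[phi(s) phi(s)^T]
    top_right = phi.T @ phi_next / n    # E[phi(s) phi(s')^T]
    F = np.block([[top_left, top_right],
                  [top_right.T, top_left]])
    return float(np.linalg.eigvalsh(F).min())
```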
We first formulate the problem as minimizing the KL divergence between the data distribution µ and a reweighted distribution q such that TD update is stable under q,\nminimize q D KL (q ∥ µ) s.t. E s∼q [F (s)] ⪰ 0.(7)\nThe corresponding unconstrained dual problem based on a Lagrange variable Z ∈ R 2k is given by\nmaximize Z⪰0 minimize q D KL (q∥µ) -tr Z ⊤ E q [F (s)].(8)\nThe solution to this dual problem is equal to the primal problem under strong duality, which holds in practice due to the fact that this corresponds to a convex optimization problem. Now, consider the inner optimization problem over q in Eq. ( 8). This optimization problem can be rewritten as minimize q -H(q) -E q log µ(s) + tr Z ⊤ F (s) , which has a simple analytical solution:\nq ⋆ (s) ∝ exp log µ(s) + tr Z ⊤ F (s) = µ(s) exp tr Z ⊤ F (s) .(9)\nNotice that our target distribution q ⋆ is simply a reweighting of the source distribution µ with weights exp tr Z ⊤ F (s) . To compute the weights, we need to solve for the Lagrange variable Z. Plugging the analytical solution for q ⋆ back into Eq. ( 8) yields\nminimize Z⪰0 E µ exp tr Z ⊤ F (s) .(10)\nIn practice, we minimize over the set Z ⪰ 0 by re-parametrizing Z as\nZ = A B A B ⊤ (11)\nwhere A, B ∈ R k×k . This formulation ensures that Z is positive semi-definite, Z ⪰ 0, for any A and B. Thus, we can directly optimize over A and B and ignore the positive semi-definite condition. With this formulation for Z and plugging the definition of F (Eq. ( 6)) we can rewrite the dual optimization problem (Eq. ( 10)) as:\nminimize A,B E µ exp ∥A ⊤ ϕ(s)∥ 2 2 + ∥B ⊤ ϕ(s)∥ 2 2 + 2E s ′ ∼p(•|s) ⟨B ⊤ ϕ(s), A ⊤ ϕ(s ′ )⟩(12)\nSolving for matrices A, B yields the I-projected sampling distribution q * according to Eq. ( 9).\nAlgorithm 1 Projected Off-Policy Q-Learning (POP-QL)\nInitialize: feature function ϕ θ ϕ , Q-function parameters w, policy π θ π , g-function g θ g , and Lagrange matrices A and B.\nfor step t in 1, . . . , N do (s, a, r, s ′ ) 1,...,m ∼ µ ▷ Sample minibatch from dataset ã ∼ π θ π (s) ã′ ∼ π θ π (s ′ ) ▷ Sample new actions from policy q target := r + γw ⊤ ϕ θ ϕ (s ′ , ã′ ) ▷ Compute Q-function target value y A , y B , y ′ A := A ⊤ ϕ θ ϕ (s, a), B ⊤ ϕ θ ϕ (s, a), A ⊤ ϕ θ ϕ (s ′ , a ′ ) ▷ Compute dual values u := exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2g θ g (s, a) /ū ▷ Compute minibatch-normalized weight A, B ← [A, B] -λ A,B u∇ A,B ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2⟨y B , y ′ A ⟩ ▷ Update Lagrange matrices θ g ← θ g -λ g ∇ θ g (g θ g (s, a) -⟨y B , y ′ A ⟩) 2 ▷ Update g-function parameters [θ Q , w] ← θ Q -λ Q u∇ θ Q ,w w ⊤ ϕ θ ϕ (s, a) -q target 2 ▷ Update Q-function parameters θ π ← θ π -λ π ∇ θ π (L Q + αL entropy -βu⟨y B , y ′ A ⟩) ▷ Augment SAC policy loss" }, { "figure_ref": [], "heading": "Extension to Markov Decision Processes", "publication_ref": [ "b13" ], "table_ref": [], "text": "The theory presented in the previous section can be extended to the MDP setting through a simple reduction to an MRP. In a MDP, we also have to consider the action space, A, and the policy, π. In this setting, our contraction mapping LMI becomes\nE (s,a)∼q [F π (s, a)] ⪰ 0,(13)\nwhere\nF π (s, a) = E s ′ ∼p(•|s,a),a ′ ∼π(s ′ ) ϕ(s, a)ϕ(s, a) ⊤ ϕ(s, a)ϕ(s ′ , a ′ ) ⊤ ϕ(s ′ , a ′ )ϕ(s, a) ⊤ ϕ(s, a)ϕ(s, a) ⊤ . (14\n)\nUsing the idea that, given a fixed policy, any MDP reduces to an MRP, we extend Theorem 2 from Kolter (2011) to show that the TD-updates converge to a unique fixed point with bounded approximation error for any finite MDP where π and µ satisfy this condition.\nLemma 1. 
Let w ⋆ be the least-squares solution to the Bellman equation for a fixed policy π:\nw * = arg min w E (s,a)∼µ (ϕ(s, a) ⊤ w -r(s, a) -γE s ′ ∼p(•|s,a),a ′ ∼π(s ′ ) ϕ(s ′ , a ′ ) ⊤ w) 2 (15)\nand let µ be some distribution satisfying the MDP contraction mapping condition (Eq. ( 13)). Then\nE µ (ϕ(s, a) ⊤ w ⋆ -V (s, a)) 2 ≤ 1 + γ δ(ν, µ) 1 -γ min w E µ (ϕ(s, a) ⊤ w -V (s, a)) 2 , (16)\nwhere ν is the stationary distribution and δ(ν, µ) = max s,a,s,ã ν(s,a)/µ(s,a) • µ(s,ã)/ν(s,ã).\nSee Appendix B.2 for proof. As before, we are looking to project our sampling distribution to satisfy this condition. With this reduction, we rewrite the analytical solution for the projected sampling distribution from Eq. ( 9) as\nq ⋆ (s, a) ∝ µ(s, a) exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2E s ′ ∼p(•|s,a),a ′ ∼π(•|s ′ ) [⟨y B , y ′ A ⟩](17)\nwhere y A = A ⊤ ϕ(s, a), y B = B ⊤ ϕ(s, a), and y ′ A = A ⊤ ϕ(s ′ , a ′ ), and A and B are the solutions to the following optimization problem,\nminimize A,B E µ exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2E s ′ ∼p(•|s,a),a ′ ∼π(•|s ′ ) [⟨y B , y ′ A ⟩](18)\nNow, by Lemma 1 and assuming strong duality holds, the fixed point of the projected Bellman equation under the sampling q ⋆ has bounded approximation error." }, { "figure_ref": [], "heading": "Practical Implementation", "publication_ref": [ "b1" ], "table_ref": [], "text": "Two-Time-Scale Optimization Because there is an expectation inside an exponential, we must perform a two-time-scale optimization to be able to use sample-based gradient descent. We introduce a new function approximator g θ to approximate the inner expectation:\ng θ (s, a) ≈ E s ′ ∼p(s ′ |s,a),a ′ ∼π(•|s ′ ) [⟨y B , y ′ A ⟩] ,(19)\nwhich can be optimized using gradient descent.\nFigure 2: Heat-maps of three state distributions for the \"Frozen Lake\" environment. On the left is the off-policy sampling distribution, on the right is the on-policy sampling distribution, and, in the middle, is the projection of the off-policy sampling distribution onto the contraction mapping set (Eq. ( 6)) computed by POP-QL. Note that only a minor change to the off-policy sampling distribution is needed to satisfy the contraction mapping condition and, thus, guarantee convergence of TD-learning.\nIf we assume g θ (s, a) has sufficient expressive power and has converged, we can estimate the gradient of our objective with respect to A and B, ∇ A,B E µ exp tr Z ⊤ F (s, a) , using samples from our sampling distribution, µ:\nE s,a∼µ,s ′ ∼p(s ′ |s,a),a ′ ∼π(•|s ′ ) u(s, a) • ∇ A,B ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2⟨y B , y ′ A ⟩(20)\nwhere u(s, a) are the sample reweighting terms defined as:\nu(s, a) = exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2g θ (s, a) , q * (s, a) = u(s, a)µ(s, a).(21)\nWith this approximation, we can perform two-time-scale gradient descent. The gradient updates for g θ and the Lagrange matrices A and B become\nθ ← θ -λ θ ∇ θ g θ (s, a) -E s ′ ∼p(s ′ |s,a),a ′ ∼π(s ′ ) [⟨y B , y ′ A ⟩] 2 , A, B ← [A, B] -λ A,B E µ,p u(s, a) • ∇ A,B ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2⟨y B , y ′ A ⟩ ,(22)
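To make these updates concrete, the following numpy sketch performs one sample-based step of Eq. (22) for the Lagrange matrices. The g-values are passed in as a plain array standing in for the learned g θ network, and the batch-normalization of the weights mirrors the ū term in Algorithm 1. This is only an illustrative sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def popql_dual_step(A, B, g, phi, phi_next, lr=1e-3):
    """One sample-based update of the Lagrange matrices (Eq. (22)).

    A, B     : (k, r) low-rank Lagrange factors.
    g        : (N,) current estimates of E_{s',a'}[<y_B, y'_A>] for the batch
               (in the full method this is the learned g_theta network).
    phi      : (N, k) features at (s, a); phi_next : (N, k) features at (s', a').
    """
    yA = phi @ A            # (N, r)
    yB = phi @ B            # (N, r)
    yA_next = phi_next @ A  # (N, r)

    # Sample reweighting terms of Eq. (21), normalized over the minibatch.
    u = np.exp((yA ** 2).sum(1) + (yB ** 2).sum(1) + 2.0 * g)
    u = u / u.mean()

    # Gradients of ||yA||^2 + ||yB||^2 + 2 <yB, yA'> w.r.t. A and B, weighted by u.
    grad_A = 2.0 * (phi.T @ (u[:, None] * yA) + phi_next.T @ (u[:, None] * yB)) / len(u)
    grad_B = 2.0 * (phi.T @ (u[:, None] * yB) + phi.T @ (u[:, None] * yA_next)) / len(u)

    # Regression target for the g-function (its parameters are updated toward this).
    g_target = (yB * yA_next).sum(1)

    return A - lr * grad_A, B - lr * grad_B, u, g_target
```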
We also found it helpful to normalize the g-function by the spectral norm of A and B. See Appendix C.2 for details.\nLow-Rank Approximation Empirically, we found the solution to our Lagrange dual optimization problem is typically a low-rank matrix with rank r ≤ 4; this is not surprising in hindsight: under the on-policy distribution the matrix E (s,a)∼q [F π (s, a)] is already positive definite (a consequence of the fact that TD will converge on-policy), and so it is intuitive that this bound would only need to be enforced on a low-dimensional subspace, corresponding to a low-rank dual solution. Thus, we can substantially reduce the computational cost of the method by using Lagrange matrices A, B ∈ R k×r , where r = 4. This is essentially akin to low-rank semi-definite programming (Burer & Monteiro, 2003), which has proven to be an extremely competitive and scalable approach for certain forms of semi-definite programs. Furthermore, while not a primary motivation for the method, we found this low-rank optimization improves convergence of the dual matrices. In total, this leads to a set of updates that are fully linear in the dimension of the final-layer features, and which ultimately presents a relatively modest increase in computational cost over standard Q-Learning." }, { "figure_ref": [], "heading": "Policy Optimization", "publication_ref": [], "table_ref": [], "text": "So far, we have assumed a fixed policy in order to compute a sampling distribution and use that sampling distribution to compute a Q-function with low approximation error. Next, we need to find a policy that maximizes this Q-function. However, we want to avoid policies that result in very large reweighting terms for a couple reasons: 1) state action pairs with large reweighting terms correspond to low-support regions of the state-space, and 2) large reweighting terms increase the variance of the gradient updates of our Lagrange matrices.\nFigure 3: Offline policy optimization performance on the Frozen Lake domain (Fig. 1) averaged over 5 random seeds (shaded area is standard error). As before, we varied the datasets by interpolating between the dataset collected by the \"data-collection policy\" ( ) and the dataset collected by the \"evaluation policy\" ( ), both with ϵ-dithering for sufficient coverage (ϵ = 0.2). Both POP-QL (our method) and CQL are able to find a policy that outperforms behavior cloning without diverging.\nTo keep these reweighting terms small, we jointly project π and the sampling distribution µ using a balancing term β ∈ R + :\nmaximize π,q E µ [Q π (s, a)] -βD KL (q ∥ µ) s.t. E (s,a)∼q [F π (s, a)] ⪰ 0(23)\nWe can solve this optimization using the technique from the previous section. The gradient updates for A, B, and the g-function stay the same and the gradient updates for the policy become\n∇ π E µ [Q π (s, a)] + 2βE q π [∇ π E π [⟨y B , y ′ A ⟩]] .(24)\nSee Appendix B.3 for a detailed derivation and Algorithm 1 for the pseudocode of our algorithm." }, { "figure_ref": [], "heading": "Experiments and Discussion", "publication_ref": [ "b2", "b2", "b2", "b2", "b2", "b8", "b16", "b6", "b16", "b14", "b4", "b2", "b0", "b7" ], "table_ref": [], "text": "Small Scale -Frozen Lake Frozen Lake is a small grid navigation task where the objective is to reach the goal state while avoiding the holes in the ice, which are terminal states. Figure 1 shows a visualization of the Frozen Lake environment. Note, unlike the standard Frozen Lake environment, our Frozen Lake has no terminal states. Instead, falling in the \"holes\" moves the agent back to the starting location. 
For tabular Q-learning, this is a very simple task. However, using function approximation with a linear function approximator can cause offline Q-Learning to quickly diverge.\nWe illustrate this divergence first with a policy evaluation task. In this task, the goal is to approximate the Q-function for an \"evaluation\" policy with data collected from a separate \"data-collection\" policy. We use a random featurization of the state-action space with dimension k = 63 (the true state-action space has a cardinality of 64). We train a linear function approximator with both vanilla Q-learning and POP-QL as we linearly interpolate the dataset between 100% offline (meaning collected entirely from the \"data collection\" policy) and 100% on-policy (meaning collected entirely from the \"evaluation\" policy).\nFigure 1 shows a graph of the results. When trained offline, vanilla Q-learning quickly diverges, whereas POP-QL remains stable for all the datasets. We also note that the datasets for which vanilla Q-learning converges with low approximation error correspond to those that roughly satisfy our contraction mapping condition, exactly as our theory would predict. Figure 2 illustrates how the projection made by POP-QL changes the sampling distribution only slightly compared to exactly projecting onto the on-policy distribution (importance sampling).\nWe also perform a policy optimization task with the same datasets. In this task, the goal is to compute the policy with the highest return using offline data. We compare our method against online SAC, offline SAC, and CQL. Appendix D discusses the hyper-parameter tuning procedures for these baseline methods. Figure 3 shows the expected normalized returns of the policies computed using various methods.\nD4RL Tasks D4RL (Fu et al., 2020) is a standardized collection of offline RL tasks. Each task consists of an environment and dataset. The datasets for each task are collected by using rollouts of a single policy or a mixture of policies. The goal of each task is to learn a policy exclusively from these offline datasets that maximizes reward on each environment.\nTable 1: Results on the D4RL MuJoCo offline RL tasks (Fu et al., 2020). We ran offline SAC (SAC-off), CQL, and POP-QL for 2.5M gradient steps. † Results reported from Fu et al. (2020). We can see our method, POP-QL, outperforms other methods on the very suboptimal datasets (\"random\"), but falls behind on the others.\nTable 2: Results on the D4RL kitchen offline RL tasks (Fu et al., 2020). We ran offline SAC (SAC-off), CQL, and POP-QL for 2.5M gradient steps. † Results reported by Fu et al. (2020). Our method, POP-QL, out-performs offline SAC and CQL, but falls behind BEAR and BCQ.\nSAC-off BEAR † BCQ † aDICE † CQL POP-QL\nkitchen-complete 0.12 0.00 8.10 0.00 0.00 0.00\nkitchen-partial 0.00 13.10 18.90 0.00 6.12 6.38\nkitchen-mixed 0.00 47.20 8.10 0.00 0.62 1.56
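For context, the sketch below shows a typical way to obtain these datasets and the normalized scores reported in the tables, assuming the standard d4rl package interface; the environment name and version are illustrative only.

```python
import gym
import d4rl  # registers the offline D4RL environments on import

env = gym.make("halfcheetah-random-v2")
dataset = d4rl.qlearning_dataset(env)   # dict of observations, actions, rewards, next_observations, terminals
print({k: v.shape for k, v in dataset.items()})

# D4RL reports returns normalized so that 0 corresponds to a random policy and 100 to an expert.
raw_return = 1200.0                      # e.g., the return of one evaluation rollout
print(env.get_normalized_score(raw_return) * 100.0)
```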
For further comparison, we also include results for bootstrapping error reduction (BEAR) (Kumar et al., 2019), batch-constrained Q-learning (BCQ) (Fujimoto et al., 2019), and AlgaeDICE (Nachum et al., 2019b) reported in Fu et al. (2020). We looked at 2 categories of environments: 1) The OpenAI Gym (Brockman et al., 2016) environments Hopper, Half Cheetah, and Walker2D, and 3) the Franka Kitchen environments (Gupta et al., 2019). For each method, we used fixed hyper-parameters for environment category.\nTables 1 and2 show the results on these tasks. We can see that our method is competitive or outperforms all other methods on the \"random\" and \"medium\" datasets, but falls behind on the \"medium-expert\" environments. This is because our method, unlike most other offline methods, does not perform any regularization towards the data collection policy4 ." }, { "figure_ref": [], "heading": "Conclusion and Future Directions", "publication_ref": [ "b21", "b10", "b3" ], "table_ref": [], "text": "In this paper, we present Projected Off-Policy Q-Learning (POP-QL), a new method for reducing approximation errors in off-policy and offline Q-learning. POP-QL performs an approximate projection of both the policy and sampling distribution onto a convex set, which guarantees convergence of TD updates and bounds the approximation error.\nUnlike most other offline RL methods, POP-QL does not rely on pushing the learned policy towards the data-collection policy. Instead POP-QL finds the smallest adjustment to the policy and sampling distribution that ensures convergence. This property is exemplified in our experiments, especially when the data-collection policies are significantly sub-optimal. In small-scale experiments, we show that our method significantly reduces approximation error of the Q-function. We also evaluate our method on standardized Deep RL benchmarks. POP-QL outperforms other methods when the datasets are far from the optimal policy distribution, specifically the \"random\" datasets, and is competitive but falls behind the other methods when the dataset distribution gets closer to the on-policy distribution.\nThis paper illustrates the power of the contraction mapping condition first introduced by Kolter (2011) for offline RL and introduces a new class of offline RL techniques. While, in its current iteration, this method does not outperform the state-of-the-art methods on every domain, our results suggest the exciting potential of this new technique. We think this reduced performance on some the D4RL tasks is primarily due to training instabilities introduced by the min-max optimization of the policy and Lagrange matrices. As with many other RL algorithms, finding implementation tricks, such as target Q-networks (Mnih et al., 2015) and double Q-networks (Hasselt et al., 2016;Fujimoto et al., 2018), is critical to stabilizing learning. 
In future work, we hope to address the instability of the Lagrange matrix optimization, thus providing a method that consistently out-performs competing methods.\nFigure 4: The three-state Markov process by Manek & Kolter (2022) (top-left), a plot of Q-function approximation error over different sampling distributions using Iterative TD and POP-QL (top-right), and TD traces at three different evaluation sampling distributions (µ = [p/2, p/2, 1 -p] for p = 0.2, 0.5, 0.8). We can see that when the contraction mapping condition is satisfied (p ≲ 0.55), the Iterative TD and POP-QL solutions are identical. However, when this condition is violated (p > 0.55), Iterative TD diverges, whereas POP-QL converges and retains a low approximation error." }, { "figure_ref": [], "heading": "A Off-Policy Contraction Mapping Example", "publication_ref": [], "table_ref": [], "text": "To illustrate how the contraction mapping condition impacts TD updates, we use the simple three-state MRP introduced in Manek & Kolter (2022) (Fig. 4). In this example, the value function is given by V = [1, 1, 1.05] ⊤ , with discount factor γ = 0.99, reward function R = (I -γP )V , and basis Φ where\nΦ is the 3 × 2 matrix with rows (1, 0), (0, -1), and (1/2 (1.05 + ϵ), -1/2 (1.05 + ϵ)). (25)\nThe basis includes the representation error term ϵ = 10 -4 .\nFor illustration purposes, we select the family of distributions µ = (p/2, p/2, 1 -p) parameterized by p ∈ [0, 1]. This characterizes the possible distributions of data that we will present to POP-QL and naive TD in this experiment. The on-policy distribution corresponds to p = 0.5. The contraction mapping condition is satisfied for the left subset of sampling distributions (p ≲ 0.55) and not satisfied for the right subset (p ≳ 0.55). This is immediately apparent in Fig. 4, where we plot the error at convergence from running naive TD and POP-QL above, and the effective distribution of TD updates after reweighting. In the left subset, where the contraction mapping condition holds, POP-QL does not reweight TD updates at all. Therefore, the error of POP-QL tracks that of naive TD, and the effective distributions of TD updates in POP-QL and naive TD are the same as the data distribution. Fig. 4 also plots TD-learning traces for three different sampling distributions using both Iterative TD and POP-QL. We can see that when p = 0.8 (right), the contraction mapping condition is violated and Iterative TD diverges. However, POP-QL reweights the sampling distribution, yielding a new sampling distribution that satisfies the contraction mapping condition. Thus, POP-QL still converges in this example.
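This behavior is easy to reproduce with expected (distribution-weighted) TD iterations. The sketch below uses the V, Φ, γ, and µ defined above; the transition matrix P is left as an explicit placeholder to be filled in from the diagram in Fig. 4, so the specific numbers are not asserted here.

```python
import numpy as np

def expected_td(P, Phi, V, mu, gamma=0.99, lr=0.1, iters=2000):
    """Run expected TD updates  w <- w - lr * Phi^T D_mu (Phi w - R - gamma P Phi w)
    on the three-state MRP, where R = (I - gamma P) V by construction."""
    R = (np.eye(3) - gamma * P) @ V
    D = np.diag(mu)
    w = np.zeros(Phi.shape[1])
    for _ in range(iters):
        td_error = Phi @ w - R - gamma * P @ (Phi @ w)   # per-state expected TD error
        w = w - lr * Phi.T @ D @ td_error
        if not np.isfinite(w).all() or np.abs(w).max() > 1e6:
            return w, np.inf                              # diverged
    err = np.sqrt(mu @ (Phi @ w - V) ** 2)                # mu-weighted value error
    return w, err

eps = 1e-4
V = np.array([1.0, 1.0, 1.05])
Phi = np.array([[1.0, 0.0],
                [0.0, -1.0],
                [0.5 * (1.05 + eps), -0.5 * (1.05 + eps)]])
P = np.full((3, 3), 1.0 / 3.0)  # placeholder: replace with the transition probabilities from Fig. 4
p = 0.8                          # with the true P, naive TD diverges for p in the right subset
mu = np.array([p / 2, p / 2, 1 - p])
print(expected_td(P, Phi, V, mu))
```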
}, { "figure_ref": [], "heading": "B Proofs and Derivations", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B.1 Derivation of Contraction Mapping LMI", "publication_ref": [], "table_ref": [], "text": "For a finite MRP, the projected Bellman equation can be written in matrix notation as:\nB(Φw) = Π µ (R + γP Φw),(26)\nBy the definition of a contraction mapping, the projected Bellman equation is a γ-contraction mapping if and only if for all w 1 , w 2 ∈ R k :\n∥B(Φw 1 ) -B(Φw 2 )∥ µ ≤ γ∥Φw 1 -Φw 2 ∥ µ .(27)\nNow, consider the left side of this equation. We can rewrite this as follows:\n∥B(Φw 1 ) -B(Φw 2 )∥ µ = ∥Π µ B(Φw 1 ) -Π µ B(Φw 2 )∥ µ = ∥Π µ (R + γP Φw 1 ) -Π µ (R + γP Φw 2 )∥ µ = γ∥Π µ P Φw 1 -Π µ P Φw 2 ∥ µ = γ∥Π µ P Φ w∥ µ\nwhere w = w 1w 2 . Thus, the projected Bellman equation is a contraction mapping if and only if for all w ∈ R k , ∥Π µ P Φw∥ µ ≤ ∥Φw∥ µ .\nPlugging in the closed form solution to the projection, we get,\nw T Φ T P T D µ Φ Φ T D µ Φ -1 Φ T D µ Φ Φ T D µ Φ -1 ΦD µ P Φ T w ≤ w T Φ T D µ Φw ⇔ w T Φ T P T D µ Φ Φ T D µ Φ -1 ΦD µ P Φ T -Φ T D µ Φ w ≤ 0 ⇔ Φ T P T D µ Φ Φ T D µ Φ -1 ΦD µ P Φ T -Φ T D µ Φ ⪯ 0\nFinally, using Schur Complements, we can convert this to an LMI,\nF µ ≡ Φ T D µ Φ Φ T D µ P Φ Φ T P T D µ Φ Φ T D µ Φ ⪰ 0.(29)\nThus, as long as F µ ⪰ 0, the projected Bellman equation is a contraction mapping." }, { "figure_ref": [], "heading": "B.2 Proof of Lemma 1", "publication_ref": [ "b13" ], "table_ref": [], "text": "Proof. To prove this, we will use a simple reduction to a Markov chain.\nFor a fixed policy, π, our MDP, M = (S, A, P, R, γ) reduces to a MRP. We can write this new MRP as M π = (X , P π , R, γ) where X = S × A is the Cartesian product of the state and action spaces of the MDP and P π : X × X → R + is defined as follows:\nP π ((s, a), (s ′ , a ′ )) := p(s ′ |s, a)π(a ′ |s ′ ) ∀ (s, a), (s ′ , a ′ ) ∈ X\nWe can see clearly that for all (s, a) ∈ X , (s ′ ,a ′ )∈X P π ((s, a), (s ′ , a ′ )) = 1.\nSince our MDP, M, is finite, we can define the feature matrix, Φ ∈ R n,k , using the MDP feature function, Φ i = ϕ(s i , π(s i ) for each s i ∈ S.\nNow, applying Theorem 2 from Kolter (2011) to our MRP, M π , we have that:\n∥Φw ⋆ -V π ∥ µ ≤ 1 + γ δ(ν, µ) 1 -γ ∥Π µ V π -V π ∥ µ (30)\nwhere w ⋆ is the unique fixed point of Eq. ( 2), V π is the unique fixed point of\nV π = R + γP π V π , ν is the stationary distribution, and δ(ν, µ) = max x,x ν(x) µ(x) • µ(x) ν(x)\n. Mapping this bound back onto the MDP, M, yields the stated bound." }, { "figure_ref": [], "heading": "B.3 Derivation of POP-QL Updates:", "publication_ref": [], "table_ref": [], "text": "Here we will detail the derivation of the POP-QL gradient updates. The methods here follow those of the MRP derivation. To start, we can rewrite Eq. ( 23) as:\nmaximize π,q 1 β E µ [Q π (s, a)] -D KL (q ∥ µ) s.t. E (s,a)∼q [F π (s, a)] ⪰ 0(31)\nNext, we introduce Lagrange variables to convert this into an unconstrained optimization problem:\nmaximize q,π minimize Z⪰0 1 β E µ [Q π (s, a)] -D KL (q∥µ) + tr Z ⊤ E q [F π (s, a)] = maximize π 1 β E µ [Q π (s, a)] + maximize q minimize Z⪰0 -D KL (q∥µ) + tr Z ⊤ E q [F π (s, a)]\nNow, we focus on the inner optimization problem over q and Z. As in the MRP version, we assume that strong duality holds in this inner optimization problem. Under this assumption, the inner optimization problem can be equivalently written as:\nminimize Z⪰0 maximize q -D KL (q∥µ) + tr Z ⊤ E q [F π (s, a)](32)\nThis problem can be solved as before. 
First, we solve for the analytical solution of q, q ⋆ (s, a) ∝ µ(s, a) exp tr Z π⊤ F (s, a) .\nNext, we plug this solution back into our inner optimization problem:\nminimize Z⪰0 log E µ exp tr Z ⊤ F π (s, a)(34)\nNow, using the reparameterization of Z in Eq. ( 11), the full optimization problem (Eq. ( 31) becomes: Again, we need to perform a 2-timescale gradient descent for A, B since we are approximating an expectation inside of a exponential. Thus, we also learn a parameterized function g θ to approximate the following: \nmaximize π 1 β E µ [Q π (s, a)] + minimize A,B log E µ exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2E s ′ ∼p(•\ng θ (\nC POP-QL Details\nHere we provide additional details for the POP-QL algorithm." }, { "figure_ref": [], "heading": "C.1 I-and M-Projections", "publication_ref": [], "table_ref": [], "text": "The information (I-) projection and moment (M-) projection are defined as follows:\nI-Projection: min q D KL (q∥µ) M-Projection: min q D KL (µ∥q) (39)\nSince the KL-divergence is an asymmetric measure, these projections are usually not equivalent.\nA key difference between the I-and M-projections is that the I-projection tends to under-estimate the support of the fixed distribution µ, resulting in more density around the modes of µ, while the M-projection tends to over-estimate the support of µ, resulting in a higher variance solution.\nIn the context of off-policy Q-learning, sampling states and actions with very low or zero support under the sampling distribution, µ, can result in over-estimating the Q-function, which in-turn results in a poor performing policy. For this reason, we argue the more conservative I-projection is a better fit for off-policy Q-learning." }, { "figure_ref": [], "heading": "C.2 g-function Normalization", "publication_ref": [], "table_ref": [], "text": "In practice, we found normalizing the g function by the spectral norm of the A and B matrices improved learning stability. Specifically, we train the g network to approximate the following quantity: We looked at using λ Q , λ π =, 3e-4, 1e-4, 3e-5, and 1e-5. The lower learning rates for the policy seemed to significantly improve asymptotic performance of our method. However, when setting\ng θ (s, a) ≈ 1 ∥A∥ 2 ∥B∥ 2 E s ′ ∼p(s ′" }, { "figure_ref": [], "heading": "D.2 D4RL D.2.1 Hyper Parameter Tuning", "publication_ref": [ "b16" ], "table_ref": [], "text": "As mentioned in the main paper, using the hyper-parameters presented in Kumar et al. (2020b), we were not able to reproduce the results of CQL presented in the paper. Specifically, setting Lagrange parameter τ to the value suggested in the paper resulted in very poor performance. Instead we did a cursory hyper-parameter sweep using τ = 3, 1, 0.3, and 0.1. For MuJoCo, we found τ = 0.3 peformed best. For kitchen, we found τ = 3 peformed best." }, { "figure_ref": [], "heading": "D.2.2 Network Structure", "publication_ref": [], "table_ref": [], "text": "For each baseline method, we use a 2 hidden fully connected neural network with a width of 256 and ReLU activations for both the policy network and Q-network. For the g-network for POP-QL, we use a 4 hidden layer network with a width of 1024 and ReLU activations." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research was sponsored by Robert Bosch GMBH under award number 0087016732PCRPO0087023984. 
The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of any sponsoring institution, the U.S. government or any other entity." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "λ π = 1e-5, we found it took too long to learn a decent policy. Thus, for our experiments, we chose the middle-ground of λ π = 3e-5. We found λ Q = 1e-4 worked the best for POP-QL." }, { "figure_ref": [], "heading": "Lagrange Matrices Learning Rate", "publication_ref": [], "table_ref": [], "text": "Since we want to learn the Lagrange matrices assuming a fixed policy, we used an increased learning rate for the Lagrange matrices compared to the policy. We tried g-Function Learning Rate Once we normalized the g-function (as described in Appendix C.2), we found the performance of the algorithm seemed to be quite robust to the choice of g-function learning rate. We tried 1, 10, and 20 times the learning rate of the Lagrange Matrices and all performed roughly equally. We used λ g = 10λ [A,B] for our experiments." }, { "figure_ref": [], "heading": "KL-Q-value weighting parameter β", "publication_ref": [], "table_ref": [], "text": "This β term weights how much POP-QL's weights policy performance versus the KL-divergence between the new sampling distribution and the reweighted distribution. The larger β is, the more the policy is projected and the less significant the reweighting terms become.\nIn the D4RL problems, we need a large β term to make sure the policy does not drive the agent too far outside the data distribution. We tested β = 100, 30, 10, 3, 1, and 0.3. Since our choice of β depends on the estimated Q-values, we chose a different β beta for each class of domains. For both the Mujoco and Franka Kitchen domains, we chose β = 100." }, { "figure_ref": [], "heading": "C.4 Target Networks", "publication_ref": [], "table_ref": [], "text": "Just as in other actor-critic methods, we use a target network for the features and Q-function weight w to stabilize the Q-learning updates.\nWe also tested using these target features for training Lagrange matrices A and B, but found this reduced performance. So instead, we do not use any target networks in training the Lagrange matrices." }, { "figure_ref": [], "heading": "D Additional Experiment Details", "publication_ref": [], "table_ref": [], "text": "D.1 Frozen Lake" }, { "figure_ref": [], "heading": "D.1.1 Features", "publication_ref": [], "table_ref": [], "text": "To construct the features for this domain, we constructed a random vector of dimension k = 60 for each state-action pair. This vector was sampled from a uniform distribution U([0, 1] k ), then normalized to have a unit norm." }, { "figure_ref": [], "heading": "D.1.2 Hyper Parameter Tuning", "publication_ref": [], "table_ref": [], "text": "We performed a cursory hyper-parameter search for this problem. Online SAC has little problem converging and, thus, is not sensitive to hyper-parameters. In the offline case, when the dimension of the features, k, is greater than or equal to the size of the state-action space (64), offline SAC is also very stable, since this reduces to tabular RL. As soon as k < |S||A|, however, offline SAC becomes very unstable and sensitive to hyper-parameters.\nFor all methods, we used a Q-function learning rate of 1 × 10 -3 and a policy learning rate of 1 × 10 -4. 
We found that this led to convergent behavior for both CQL and POP-QL.

All methods also used SAC with automatic entropy tuning. Through a coarse search, we found a target entropy of 0.5 worked well across CQL and POP-QL.

Both CQL and POP-QL have regularization parameters that keep the policy close to the data-collection policy. These parameters improve convergence at the expense of performance. We tuned these parameters by picking the lowest values that still led to convergent behavior. For CQL, this was α = 0.5. For POP-QL, we used β = 0.5." } ]
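To make the reweighting machinery referenced throughout these appendices concrete, below is a minimal NumPy sketch of how the per-sample weight u(s, a) = exp(∥y_A∥² + ∥y_B∥² + 2g) and the spectral-norm-normalized g target of Appendix C.2 could be computed on a minibatch. The array names, shapes, and the single-sample stand-in for the learned g_θ network are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pop_ql_weights(phi_sa, phi_spap, A, B):
    """Minibatch POP-QL reweighting sketch (illustrative, not the paper's code).

    phi_sa   : (m, d) features phi(s, a) for sampled transitions
    phi_spap : (m, d) features phi(s', a') with a' drawn from the current policy
    A, B     : (d, r) low-rank Lagrange factors from the Z = [A B][A B]^T reparameterization
    Returns per-sample weights u normalized to mean 1 over the minibatch,
    plus the spectral-norm-normalized g estimate of Appendix C.2.
    """
    y_A = phi_sa @ A            # (m, r)
    y_B = phi_sa @ B            # (m, r)
    y_Ap = phi_spap @ A         # (m, r)

    # One-sample estimate of g(s, a) = E[<y_B, y'_A>]; the paper approximates
    # this expectation with a learned network g_theta instead.
    g_hat = np.sum(y_B * y_Ap, axis=1)

    # Appendix C.2: normalize the g target by the spectral norms of A and B.
    spec = np.linalg.norm(A, 2) * np.linalg.norm(B, 2)
    g_norm = g_hat / max(spec, 1e-8)

    # u(s, a) = exp(||y_A||^2 + ||y_B||^2 + 2 g), normalized by its minibatch mean.
    log_u = np.sum(y_A**2, axis=1) + np.sum(y_B**2, axis=1) + 2.0 * g_hat
    log_u -= log_u.max()        # constant shift for numerical stability; cancels below
    u = np.exp(log_u)
    u /= u.mean()               # minibatch normalization (u / u-bar)
    return u, g_norm
```

In the full algorithm these weights multiply the Q-function and Lagrange-matrix gradient terms, and the inner product ⟨y_B, y′_A⟩ would be supplied by the trained g_θ network rather than a one-sample estimate.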
A key problem in off-policy Reinforcement Learning (RL) is the mismatch, or distribution shift, between the dataset and the distribution over states and actions visited by the learned policy. This problem is exacerbated in the fully offline setting. The main approach to correct this shift has been through importance sampling, which leads to high-variance gradients. Other approaches, such as conservatism or behavior-regularization, regularize the policy at the cost of performance. In this paper, we propose a new approach for stable off-policy Q-Learning. Our method, Projected Off-Policy Q-Learning (POP-QL), is a novel actor-critic algorithm that simultaneously reweights off-policy samples and constrains the policy to prevent divergence and reduce value-approximation error. In our experiments, POP-QL not only shows competitive performance on standard benchmarks, but also out-performs competing methods in tasks where the data-collection policy is significantly sub-optimal.
PROJECTED OFF-POLICY Q-LEARNING (POP-QL) FOR STABILIZING OFFLINE REINFORCEMENT LEARNING
[ { "figure_caption": "Figure1: Off-policy evaluation on a simple grid environment, \"Frozen Lake\". The goal of this task is to evaluate a policy ( ) from a suboptimal data policy ( ) that is ϵ-dithered for sufficient coverage (ϵ = 0.2). The right plot shows approximation error from a linear Q-function trained using Vanilla Q-Learning and POP-QL (our method) with a dataset interpolated between off-policy and on-policy. Unlike Vanilla Q-Learning, POP-QL avoids divergence by projecting the sampling distribution µ onto the convex constraint E (s,a)∼µ [F π (s, a)] ⪰ 0, which enforced that the contraction mapping condition holds. The shaded region ( ), indicates when the dataset approximately satisfies this condition (without reweighting); specifically, where λ min (E (s,a)∼µ [F π (s, a)]) > -0.005.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "|s,a),a ′ ∼π(•|s ′ ) ⟨y B , y ′ A ⟩ (35) where y A = A ⊤ ϕ θ ϕ (s, a), y B = B ⊤ ϕ θ ϕ (s, a), and y ′ A = A ⊤ ϕ θ ϕ (s ′ , a ′ ).", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "|s,a),a ′ ∼π(s ′ ) ⟨B ⊤ ϕ(s, a), A ⊤ ϕ(s ′ , a ′ )⟩ (40) where ∥A∥ 2 represents the spectral norm of A. Note that, by definition of the spectral norm and since ∥ϕ(s, a)∥ 2 = 1 for all s, this bounds the range of the g function, g θ (s, a) ∈ [-1, 1]. C.3 Hyper-Parameter Search Q-Function and Policy Learning Rates", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "s, a) ≈ E s ′ ∼p(s ′ |s,a),a ′ ∼π(s ′ ) [⟨y B , y ′ A ⟩](36)Using this approximation, the gradient of A and B can be expressed as:E s,a∼µ,s ′ ∼p(s ′ |s,a),a ′ ∼π(•|s ′ ) exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2g θ (s, a) •∇ A,B (∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2⟨y B , y ′ A ⟩)Next, we derive the policy updates using both Lagrange variables.∇ π E µ [Q π (s, a)] + β log E µ,π exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2⟨y B , y ′ A ⟩ ≈ ∇ π E µ [Q π (s, a)] + β exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2g θ (s, a) ∇ π E π ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2⟨y B , y ′ A ⟩ = ∇ π E µ [Q π (s, a)] + 2β exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2g θ (s, a) ∇ π E π [⟨y B , y ′ A ⟩]Finally, the Q-learning reweighting terms are simply:u(s, a) ≈ exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2g θ (s, a)", "figure_data": "(37)", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" } ]
Melrose Roderick; Gaurav Manek; Felix Berkenkamp; J Zico Kolter
[ { "authors": "Greg Brockman; Vicki Cheung; Ludwig Pettersson; Jonas Schneider; John Schulman; Jie Tang; Wojciech Zaremba", "journal": "", "ref_id": "b0", "title": "Openai gym", "year": "2016" }, { "authors": "Samuel Burer; Renato Dc Monteiro", "journal": "Mathematical Programming", "ref_id": "b1", "title": "A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization", "year": "2003" }, { "authors": "Justin Fu; Aviral Kumar; Ofir Nachum; George Tucker; Sergey Levine", "journal": "", "ref_id": "b2", "title": "D4rl: Datasets for deep data-driven reinforcement learning", "year": "2020" }, { "authors": "Scott Fujimoto; Herke Hoof; David Meger", "journal": "PMLR", "ref_id": "b3", "title": "Addressing function approximation error in actorcritic methods", "year": "2018" }, { "authors": "Scott Fujimoto; David Meger; Doina Precup", "journal": "PMLR", "ref_id": "b4", "title": "Off-policy deep reinforcement learning without exploration", "year": "2019" }, { "authors": "Carles Gelada; Marc G Bellemare", "journal": "", "ref_id": "b5", "title": "Off-policy deep reinforcement learning by bootstrapping the covariate shift", "year": "2019" }, { "authors": "Xinyang Geng", "journal": "", "ref_id": "b6", "title": "Jaxcql: a simple implementation of sac and cql in jax", "year": "2022" }, { "authors": "Abhishek Gupta; Vikash Kumar; Corey Lynch; Sergey Levine; Karol Hausman", "journal": "", "ref_id": "b7", "title": "Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning", "year": "2019" }, { "authors": "Tuomas Haarnoja; Aurick Zhou; Pieter Abbeel; Sergey Levine", "journal": "PMLR", "ref_id": "b8", "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "year": "2018" }, { "authors": "Assaf Hallak; Shie Mannor", "journal": "PMLR", "ref_id": "b9", "title": "Consistent on-line off-policy evaluation", "year": "2017" }, { "authors": "Arthur Hado Van Hasselt; David Guez; Silver", "journal": "AAAI Press", "ref_id": "b10", "title": "Deep reinforcement learning with double qlearning", "year": "2016" }, { "authors": "Ray Jiang; Tom Zahavy; Zhongwen Xu; Adam White; Matteo Hessel; Charles Blundell; Hado Van Hasselt", "journal": "PMLR", "ref_id": "b11", "title": "Emphatic algorithms for deep reinforcement learning", "year": "2021" }, { "authors": "Rahul Kidambi; Aravind Rajeswaran; Praneeth Netrapalli; Thorsten Joachims", "journal": "Advances in neural information processing systems", "ref_id": "b12", "title": "Morel: Modelbased offline reinforcement learning", "year": "2020" }, { "authors": " Kolter", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b13", "title": "The fixed points of off-policy td", "year": "2011" }, { "authors": "Aviral Kumar; Justin Fu; Matthew Soh; George Tucker; Sergey Levine", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b14", "title": "Stabilizing off-policy q-learning via bootstrapping error reduction", "year": "2019" }, { "authors": "Aviral Kumar; Abhishek Gupta; Sergey Levine", "journal": "", "ref_id": "b15", "title": "Discor: Corrective feedback in reinforcement learning via distribution correction", "year": "2020" }, { "authors": "Aviral Kumar; Aurick Zhou; George Tucker; Sergey Levine", "journal": "", "ref_id": "b16", "title": "Conservative q-learning for offline reinforcement learning", "year": "2020-12-06" }, { "authors": "Jongmin Lee; Wonseok Jeon; Byungjun Lee; Joelle Pineau; Kee-Eung Kim", "journal": "PMLR", 
"ref_id": "b17", "title": "Optidice: Offline policy optimization via stationary distribution correction estimation", "year": "2021" }, { "authors": "Qiang Liu; Lihong Li; Ziyang Tang; Dengyong Zhou", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b18", "title": "Breaking the curse of horizon: Infinitehorizon off-policy estimation", "year": "2018" }, { "authors": "Bo Sridhar Mahadevan; Philip Liu; Will Thomas; Steve Dabney; Nicholas Giguere; Ian Jacek; Ji Gemp; Liu", "journal": "", "ref_id": "b19", "title": "Proximal reinforcement learning: A new theory of sequential decision making in primal-dual spaces", "year": "2014" }, { "authors": "Gaurav Manek; J Zico; Kolter ", "journal": "", "ref_id": "b20", "title": "The pitfalls of regularization in off-policy TD learning", "year": "2022" }, { "authors": "Volodymyr Mnih; Koray Kavukcuoglu; David Silver; Andrei A Rusu; Joel Veness; Marc G Bellemare; Alex Graves; Martin Riedmiller; Andreas K Fidjeland; Georg Ostrovski", "journal": "nature", "ref_id": "b21", "title": "Human-level control through deep reinforcement learning", "year": "2015" }, { "authors": "Ofir Nachum; Yinlam Chow; Bo Dai; Lihong Li", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Dualdice: Behavior-agnostic estimation of discounted stationary distribution corrections", "year": "2019" }, { "authors": "Ofir Nachum; Bo Dai; Ilya Kostrikov; Yinlam Chow; Lihong Li; Dale Schuurmans", "journal": "", "ref_id": "b23", "title": "Algaedice: Policy gradient from arbitrary experience", "year": "2019" }, { "authors": "Doina Precup", "journal": "Computer Science Department Faculty Publication Series", "ref_id": "b24", "title": "Eligibility traces for off-policy policy evaluation", "year": "2000" }, { "authors": "John Schulman; Sergey Levine; Pieter Abbeel; Michael Jordan; Philipp Moritz", "journal": "PMLR", "ref_id": "b25", "title": "Trust region policy optimization", "year": "2015" }, { "authors": "S Richard; Andrew G Sutton; Barto", "journal": "MIT press", "ref_id": "b26", "title": "Reinforcement learning: An introduction", "year": "2020" }, { "authors": "Csaba Richard S Sutton; Hamid Reza Szepesvári; Maei", "journal": "Advances in neural information processing systems", "ref_id": "b27", "title": "A convergent o (n) algorithm for off-policy temporal-difference learning with linear function approximation", "year": "2008" }, { "authors": "A Richard S Sutton; Martha Rupam Mahmood; White", "journal": "The Journal of Machine Learning Research", "ref_id": "b28", "title": "An emphatic approach to the problem of off-policy temporal-difference learning", "year": "2016" }, { "authors": "B Jn Tsitsiklis; Van Roy", "journal": "Lab. Inf. Decis. Syst. Massachusetts Inst. Technol. Tech. 
Rep", "ref_id": "b29", "title": "An analysis of temporal-difference learning with function approximation", "year": "1996" }, { "authors": "Yifan Wu; George Tucker; Ofir Nachum", "journal": "", "ref_id": "b30", "title": "Behavior regularized offline reinforcement learning", "year": "2019" }, { "authors": "Tianhe Yu; Garrett Thomas; Lantao Yu; Stefano Ermon; James Y Zou; Sergey Levine; Chelsea Finn; Tengyu Ma", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "Mopo: Model-based offline policy optimization", "year": "2020" }, { "authors": "Shangtong Zhang; Bo Liu; Hengshuai Yao; Shimon Whiteson", "journal": "PMLR", "ref_id": "b32", "title": "Provably convergent twotimescale off-policy actor-critic with function approximation", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 107.69, 154.53, 398.05, 38.89 ], "formula_id": "formula_0", "formula_text": "V (s) := E ∞ t=0 γ ⊤ r(s t ) s 0 = s . The value function is consistent with Bellman's equation in matrix form, V = R + γP V.(1)" }, { "formula_coordinates": [ 4, 251.86, 259.01, 252.8, 11.72 ], "formula_id": "formula_1", "formula_text": "Φw ⋆ = Π µ (R + γP Φw ⋆ ),(2)" }, { "formula_coordinates": [ 4, 135.3, 276.96, 114.73, 11.23 ], "formula_id": "formula_2", "formula_text": "Π µ = Φ(Φ ⊤ D µ Φ) -1 Φ ⊤ D µ" }, { "formula_coordinates": [ 4, 191.64, 327.25, 313.03, 19.85 ], "formula_id": "formula_3", "formula_text": "w t+1 = w t -λE µ ϕ(s) ⊤ w t -r -ϕ (s ′ ) ⊤ w t ϕ(s) .(3)" }, { "formula_coordinates": [ 4, 108, 404.11, 392.8, 29.69 ], "formula_id": "formula_4", "formula_text": "x 1 , x 2 ∈ R n : ∥f (x 1 ) -f (x 2 )∥ µ ≤ γ∥x 1 -x 2 ∥ µ ,(4" }, { "formula_coordinates": [ 4, 108, 468.95, 396, 45.07 ], "formula_id": "formula_5", "formula_text": "Π µ B(x) = Π µ (R + γP x), is a contraction mapping. ∥Π µ B(Φw 1 ) -Π µ B(Φw 2 )∥ µ ≤ γ∥Φw 1 -Φw 2 ∥ µ ∀ w 1 , w 2 ∈ R k ." }, { "formula_coordinates": [ 4, 156.26, 610.96, 348.4, 23.72 ], "formula_id": "formula_6", "formula_text": "E s∼µ [F (s)] ⪰ 0, where F (s) = E s ′ ∼p(•|s) ϕ(s)ϕ(s) ⊤ ϕ(s)ϕ(s ′ ) ⊤ ϕ(s ′ )ϕ(s) ⊤ ϕ(s)ϕ(s) ⊤ . (6)" }, { "formula_coordinates": [ 5, 212.94, 369.51, 291.72, 18.73 ], "formula_id": "formula_7", "formula_text": "minimize q D KL (q ∥ µ) s.t. E s∼q [F (s)] ⪰ 0.(7)" }, { "formula_coordinates": [ 5, 205.13, 417.03, 299.53, 18.73 ], "formula_id": "formula_8", "formula_text": "maximize Z⪰0 minimize q D KL (q∥µ) -tr Z ⊤ E q [F (s)].(8)" }, { "formula_coordinates": [ 5, 177.1, 497.66, 327.57, 18.44 ], "formula_id": "formula_9", "formula_text": "q ⋆ (s) ∝ exp log µ(s) + tr Z ⊤ F (s) = µ(s) exp tr Z ⊤ F (s) .(9)" }, { "formula_coordinates": [ 5, 236.97, 559.87, 267.7, 16.96 ], "formula_id": "formula_10", "formula_text": "minimize Z⪰0 E µ exp tr Z ⊤ F (s) .(10)" }, { "formula_coordinates": [ 5, 262.87, 606.31, 241.79, 24.42 ], "formula_id": "formula_11", "formula_text": "Z = A B A B ⊤ (11)" }, { "formula_coordinates": [ 5, 126.75, 695.31, 377.92, 18.73 ], "formula_id": "formula_12", "formula_text": "minimize A,B E µ exp ∥A ⊤ ϕ(s)∥ 2 2 + ∥B ⊤ ϕ(s)∥ 2 2 + 2E s ′ ∼p(•|s) ⟨B ⊤ ϕ(s), A ⊤ ϕ(s ′ )⟩(12)" }, { "formula_coordinates": [ 6, 117.96, 121.41, 386.39, 147.78 ], "formula_id": "formula_13", "formula_text": "for step t in 1, . . . 
, N do (s, a, r, s ′ ) 1,...,m ∼ µ ▷ Sample minibatch from dataset ã ∼ π θ π (s) ã′ ∼ π θ π (s ′ ) ▷ Sample new actions from policy q target := r + γw ⊤ ϕ θ ϕ (s ′ , ã′ ) ▷ Compute Q-function target value y A , y B , y ′ A := A ⊤ ϕ θ ϕ (s, a), B ⊤ ϕ θ ϕ (s, a), A ⊤ ϕ θ ϕ (s ′ , a ′ ) ▷ Compute dual values u := exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2g θ g (s, a) /ū ▷ Compute minibatch-normalized weight A, B ← [A, B] -λ A,B u∇ A,B ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2⟨y B , y ′ A ⟩ ▷ Update Lagrange matrices θ g ← θ g -λ g ∇ θ g (g θ g (s, a) -⟨y B , y ′ A ⟩) 2 ▷ Update g-function parameters [θ Q , w] ← θ Q -λ Q u∇ θ Q ,w w ⊤ ϕ θ ϕ (s, a) -q target 2 ▷ Update Q-function parameters θ π ← θ π -λ π ∇ θ π (L Q + αL entropy -βu⟨y B , y ′ A ⟩) ▷ Augment SAC policy loss" }, { "formula_coordinates": [ 6, 258.09, 339.82, 246.58, 18.73 ], "formula_id": "formula_14", "formula_text": "E (s,a)∼q [F π (s, a)] ⪰ 0,(13)" }, { "formula_coordinates": [ 6, 170.14, 357.75, 330.38, 22.62 ], "formula_id": "formula_15", "formula_text": "F π (s, a) = E s ′ ∼p(•|s,a),a ′ ∼π(s ′ ) ϕ(s, a)ϕ(s, a) ⊤ ϕ(s, a)ϕ(s ′ , a ′ ) ⊤ ϕ(s ′ , a ′ )ϕ(s, a) ⊤ ϕ(s, a)ϕ(s, a) ⊤ . (14" }, { "formula_coordinates": [ 6, 500.52, 365.42, 4.15, 8.64 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 6, 127.36, 436.97, 377.31, 18.73 ], "formula_id": "formula_17", "formula_text": "w * = arg min w E (s,a)∼µ (ϕ(s, a) ⊤ w -r(s, a) -γE s ′ ∼p(•|s,a),a ′ ∼π(s ′ ) ϕ(s ′ , a ′ ) ⊤ w) 2 (15)" }, { "formula_coordinates": [ 6, 126.83, 475.64, 373.69, 30.79 ], "formula_id": "formula_18", "formula_text": "E µ (ϕ(s, a) ⊤ w ⋆ -V (s, a)) 2 ≤ 1 + γ δ(ν, µ) 1 -γ min w E µ (ϕ(s, a) ⊤ w -V (s, a)) 2 , (16" }, { "formula_coordinates": [ 6, 500.52, 483.55, 4.15, 8.64 ], "formula_id": "formula_19", "formula_text": ")" }, { "formula_coordinates": [ 6, 349.42, 504.21, 52.62, 19.16 ], "formula_id": "formula_20", "formula_text": "µ(s.a) • µ(s,ã) ν(s,ã)" }, { "formula_coordinates": [ 6, 157.08, 560.7, 347.59, 18.44 ], "formula_id": "formula_21", "formula_text": "q ⋆ (s, a) ∝ µ(s, a) exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2E s ′ ∼p(•|s),a ′ ∼π(•|s ′ [⟨y B , y ′ A ⟩](17)" }, { "formula_coordinates": [ 6, 192.19, 603.21, 312.48, 18.73 ], "formula_id": "formula_22", "formula_text": "A,B E µ exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2E s ′ ∼p(•|s) [⟨y B , y ′ A ⟩](18)" }, { "formula_coordinates": [ 6, 219.88, 717.15, 284.79, 18.73 ], "formula_id": "formula_23", "formula_text": "g θ (s, a) ≈ E s ′ ∼p(s ′ |s),a ′ ∼π(•|s ′ ) [⟨y B , y ′ A ⟩] ,(19)" }, { "formula_coordinates": [ 7, 297.8, 222.79, 3.47, 5.46 ], "formula_id": "formula_24", "formula_text": "1" }, { "formula_coordinates": [ 7, 155.78, 339.2, 348.89, 18.73 ], "formula_id": "formula_25", "formula_text": "E s,a∼µ,s ′ ∼p(s ′ |s),a ′ ∼π(•|s ′ ) u(s, a) • ∇ A,B ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2⟨y B , y A ⟩(20)" }, { "formula_coordinates": [ 7, 161.18, 379.24, 343.49, 18.44 ], "formula_id": "formula_26", "formula_text": "u(s, a) = exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2g θ (s) , q * (s, a) = u(s, a)µ(s, a).(21)" }, { "formula_coordinates": [ 7, 149.54, 435.39, 355.13, 36.47 ], "formula_id": "formula_27", "formula_text": "θ ← θ -λ θ ∇ θ g θ (s, a) -E s ′ ∼p(s ′ |s,a),a ′ ∼π(s ′ ) [⟨y B , y ′ A ⟩] 2 , A, B ← [A, B] -λ A,B E µ,p u(s, a) • ∇ A,B ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2⟨y B , y ′ A ⟩ ,(22)" }, { "formula_coordinates": [ 8, 163.87, 296.25, 340.8, 18.73 ], "formula_id": "formula_28", "formula_text": "maximize π,q E µ [Q π (s, a)] -βD KL (q ∥ µ) s.t. 
E (s,a)∼q [F π (s, a)] ⪰ 0(23)" }, { "formula_coordinates": [ 8, 213.17, 347.5, 291.5, 18.73 ], "formula_id": "formula_29", "formula_text": "∇ π E µ [Q π (s, a)] + 2βE q π [∇ π E π [⟨y B , y ′ A ⟩]] .(24)" }, { "formula_coordinates": [ 13, 132.3, 124.51, 120.15, 76.4 ], "formula_id": "formula_30", "formula_text": "s 3 s 1 s 2 1 ⁄4 1 ⁄2 1 ⁄4 1 ⁄4 1 ⁄2 1 ⁄4 1 ⁄4 1 ⁄4 1 ⁄2" }, { "formula_coordinates": [ 13, 108, 132.53, 392.31, 444.51 ], "formula_id": "formula_31", "formula_text": "E s∼µ [F (s)] ⪰ 0 Evaluation Points 1 -2 -1 0 1 2 p = 0.2, µ = [0.1, 0.1, 0.8] -2 -1 0 1 2 w * µ w * µ w * µ -2 -1 0 1 2 p = 0.5, µ = [0.2, 0.2, 0.5] w * µ w * µ w * µ -2 -1 0 1 2 p = 0.8, µ = [0.4, 0.4, 0.2] w * µ w * µ w * µ Iterative TD Traces 1 -2 -1 0 1 2 p = 0.2, µ = [0.1, 0.1, 0.8] -2 -1 0 1 2 w * q w * q w * q -2 -1 0 1 2 p = 0.5, µ = [0.2, 0.2, 0.5] w * q w * q w * q -2 -1 0 1 2 p = 0.8, µ = [0.4, 0.4, 0.2] w * q w * q w * q POP Q-Learning Traces 1 Figure 4:" }, { "formula_coordinates": [ 14, 228.15, 100.27, 276.52, 38.95 ], "formula_id": "formula_32", "formula_text": "Φ = 1 0 0 -1. 1 /2(1.05 + ϵ) -1 /2(1.05 + ϵ) (25)" }, { "formula_coordinates": [ 14, 249.15, 399.16, 255.52, 17.29 ], "formula_id": "formula_33", "formula_text": "B(Φw) = Π µ (R + γP Φw),(26)" }, { "formula_coordinates": [ 14, 215.45, 446.25, 289.22, 17.29 ], "formula_id": "formula_34", "formula_text": "∥B(Φw 1 ) -B(Φw 2 )∥ µ ≤ γ∥Φw 1 -Φw 2 ∥ µ .(27)" }, { "formula_coordinates": [ 14, 165.82, 482.39, 279.87, 52.41 ], "formula_id": "formula_35", "formula_text": "∥B(Φw 1 ) -B(Φw 2 )∥ µ = ∥Π µ B(Φw 1 ) -Π µ B(Φw 2 )∥ µ = ∥Π µ (R + γP Φw 1 ) -Π µ (R + γP Φw 2 )∥ µ = γ∥Π µ P Φw 1 -Π µ P Φw 2 ∥ µ = γ∥Π µ P Φ w∥ µ" }, { "formula_coordinates": [ 14, 141.44, 611.23, 328.85, 61.04 ], "formula_id": "formula_37", "formula_text": "w T Φ T P T D µ Φ Φ T D µ Φ -1 Φ T D µ Φ Φ T D µ Φ -1 ΦD µ P Φ T w ≤ w T Φ T D µ Φw ⇔ w T Φ T P T D µ Φ Φ T D µ Φ -1 ΦD µ P Φ T -Φ T D µ Φ w ≤ 0 ⇔ Φ T P T D µ Φ Φ T D µ Φ -1 ΦD µ P Φ T -Φ T D µ Φ ⪯ 0" }, { "formula_coordinates": [ 14, 222.57, 691.33, 282.1, 23.72 ], "formula_id": "formula_38", "formula_text": "F µ ≡ Φ T D µ Φ Φ T D µ P Φ Φ T P T D µ Φ Φ T D µ Φ ⪰ 0.(29)" }, { "formula_coordinates": [ 15, 174.53, 159.43, 261.48, 18.44 ], "formula_id": "formula_39", "formula_text": "P π ((s, a), (s ′ , a ′ )) := p(s ′ |s, a)π(a ′ |s ′ ) ∀ (s, a), (s ′ , a ′ ) ∈ X" }, { "formula_coordinates": [ 15, 204.13, 249.56, 300.54, 30.79 ], "formula_id": "formula_40", "formula_text": "∥Φw ⋆ -V π ∥ µ ≤ 1 + γ δ(ν, µ) 1 -γ ∥Π µ V π -V π ∥ µ (30)" }, { "formula_coordinates": [ 15, 108, 280, 395.37, 31.16 ], "formula_id": "formula_41", "formula_text": "V π = R + γP π V π , ν is the stationary distribution, and δ(ν, µ) = max x,x ν(x) µ(x) • µ(x) ν(x)" }, { "formula_coordinates": [ 15, 162.67, 390.66, 342, 23.78 ], "formula_id": "formula_42", "formula_text": "maximize π,q 1 β E µ [Q π (s, a)] -D KL (q ∥ µ) s.t. 
E (s,a)∼q [F π (s, a)] ⪰ 0(31)" }, { "formula_coordinates": [ 15, 119.39, 439.13, 365.89, 50.98 ], "formula_id": "formula_43", "formula_text": "maximize q,π minimize Z⪰0 1 β E µ [Q π (s, a)] -D KL (q∥µ) + tr Z ⊤ E q [F π (s, a)] = maximize π 1 β E µ [Q π (s, a)] + maximize q minimize Z⪰0 -D KL (q∥µ) + tr Z ⊤ E q [F π (s, a)]" }, { "formula_coordinates": [ 15, 191.63, 559.87, 313.03, 18.73 ], "formula_id": "formula_44", "formula_text": "minimize Z⪰0 maximize q -D KL (q∥µ) + tr Z ⊤ E q [F π (s, a)](32)" }, { "formula_coordinates": [ 15, 218.97, 642.76, 285.7, 16.96 ], "formula_id": "formula_46", "formula_text": "minimize Z⪰0 log E µ exp tr Z ⊤ F π (s, a)(34)" }, { "formula_coordinates": [ 15, 108, 687.44, 334.04, 23.78 ], "formula_id": "formula_47", "formula_text": "maximize π 1 β E µ [Q π (s, a)] + minimize A,B log E µ exp ∥y A ∥ 2 2 + ∥y B ∥ 2 2 + 2E s ′ ∼p(•" }, { "formula_coordinates": [ 16, 221.12, 124.27, 13.03, 10.32 ], "formula_id": "formula_48", "formula_text": "g θ (" }, { "formula_coordinates": [ 16, 162.57, 605.51, 123.94, 30.61 ], "formula_id": "formula_50", "formula_text": "g θ (s, a) ≈ 1 ∥A∥ 2 ∥B∥ 2 E s ′ ∼p(s ′" } ]
2023-11-30
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b13", "b14", "b15", "b25", "b31", "b32", "b33", "b34", "b34", "b41", "b44", "b1", "b1" ], "table_ref": [], "text": "The 3D anomaly detection task is crucial in objectcentered industrial quality inspection, where high accuracy and detection rates are often required. The goal is to identify anomaly regions and locate abnormal point clouds within the 3D context. Image-based anomaly detection algorithms under fixed perspectives [1,7,[14][15][16]18,24,26,[32][33][34][35]35,42,45] have limitations due to blind spots and do not perform as desired in object-centered scenarios. Consequently, researchers have increasingly focused on 3D information for anomaly detection [2].\nWith advancements in 3D sensor technology, several datasets such as MvtecAD-3D [2], Real3D-AD [25], and MAD [44] have been created to meet the increasing demand for 3D anomaly detection. MvtecAD-3D is designed for anomaly detection in scenarios with a single-angle camera, while the MAD dataset focuses on multi-pose anomaly detection. Only Real3D-AD specifically addresses anomaly detection on complete point clouds. Since 3D point clouds obtained from a 3D scanner generally contain more morphological information compared to data from multiple cameras, our primary objective is to advance the field of point cloud-based anomaly detection tasks.\nIn the field of point cloud anomaly detection, there are two main issues that need to be addressed: the lack of diverse distribution datasets and the need for more effective deep learning-based approaches. Firstly, the current high-quality real-world 3D point anomaly detection\n• We propose the IMRNet, a novel 3D point-cloud anomaly detection method, with an Geometry-aware Point-cloud Sample module, an iterative Point-patch Mask Reconstruction network and a dense feature comparision module, which outperforms the SoTA methods on Anomaly-ShapeNet and Real3D-AD.\n• We demonstrate that the proposed Geometry-aware Point-cloud Sample can help extract important anomaly points in 3D point clouds more effectively and the proposed PMR network can learn better representation of the 3D anomaly datasets." }, { "figure_ref": [], "heading": "Related works", "publication_ref": [ "b16", "b39", "b2", "b18", "b28", "b36", "b0", "b34", "b30", "b8", "b21", "b7", "b11", "b1" ], "table_ref": [], "text": "2D Anomaly Detection. Recent advances in unsupervised anomaly detection for two-dimensional data, such as RGB or grayscale images, have shown significant progress [17,27,40]. Traditional approaches often rely on autoencoders [3,19,38] and generative adversarial networks (GANs) [10, 29,37], typically employing random weight initialization. Alternatively, methods leveraging pretrained network descriptors for anomaly detection often surpass those trained from scratch [1, 6, 7, 14-16, 18, 24, 26, 32-35, 35, 36, 41, 42, 45]. Noteworthy examples include Self-Taught anomaly detection (ST) [1], which aligns features from a pre-trained teacher network with a randomly initialized student network to identify anomalies, Patch-Core [35] that utilizes a memory-bank approach for modeling normal data distributions, and Cutpaste [24], introducing a novel data augmentation strategy for self-supervised anomaly detection. The emergence of large-scale foundation models [23,31] has spurred new methodologies that capitalize on the robust zero-shot generalization capabilities of these models for anomaly detection tasks [9,13,22]. 
In our work, we extend the principles of reconstruction-based self-supervised learning from 2D to 3D contexts. We introduce a novel self-supervised network employing a masked reconstruction mechanism to advance scalable 3D anomaly detection. 3D Anomaly Detection. The domain of 3D anomaly detection remains less advanced compared to its 2D counterpart, hindered by intrinsic challenges such as data sparsity, increased dimensionality, and prevalent noise. classical 3D descriptors coupled with K-Nearest Neighbors (KNN) for anomaly detection. Although AST [36] demonstrated efficacy in certain scenarios, its primary focus on background suppression via depth information led to the omission of finer detail anomalies. The M3DM framework [39] innovatively combines 3D point data with conventional imaging for enhanced decision-making. CPMF [8] introduced a novel methodology that integrates a memory bank approach with KNN and enriches the detection process by rendering 3D data into multi-view 2D images. Conversely, EasyNet [12] presents a straightforward mechanism for 3D anomaly detection, circumventing the need for pre-training. Nonetheless, the scarcity of robust 3D anomaly detection datasets [2,25,44] constrains the scalability and generalizability of these models across varied 3D anomaly detection contexts. In light of this, our work aims to devise a 3D anomaly synthesis pipeline, enhancing the volume and diversity of data necessary for the development of more generalized 3D anomaly detection models." }, { "figure_ref": [ "fig_1" ], "heading": "Anomaly-ShapeNet 3D Dataset Synthesis", "publication_ref": [ "b10" ], "table_ref": [], "text": "Pipeline Overview. As is shown in Figure 2, the overall pipeline for constructing our Anomaly-ShapeNet dataset consists of three components: mesh subdivision, defects carving, and ground truth generation. Normal Data Sampling. To generate our dataset's original normal samples, we utilized the ShapeNet dataset [11], which is renowned for its diversity and high quality and is commonly employed in tasks such as point cloud segmentation, completion, and classification. For our data source, we specifically selected normal samples from the sub-dataset ShapeNetcoreV2 of the ShapeNet. Anomaly Data Synthesis. We developed a point cloud refinement module to enhance the limited number of points and faces found in certain point clouds from the ShapeNet dataset. To introduce more realistic defects, we utilized Blender, a widely-used software in the industrial design domain to sculpt various defects. Blender is an open-source industrial design software offering extensive features such as sculpting, refining, cropping, and various editing modes, which contributes to its popularity in this field. After acquiring the abnormal samples, we employed CloudCompare, point editing software, to obtain the ground truths. " }, { "figure_ref": [ "fig_2" ], "heading": "Self-Supervised Representation Learning for 3D Anomaly Detection: IMRNet", "publication_ref": [], "table_ref": [], "text": "Figure 3 shows the overall architecture of our IMRNet. Our IMRNet, which consists of three modules: GPS, PMR, and DFC, successfully detect and locate the anomaly in the abnormal samples. The details of each module will be illustrated in the following sections." }, { "figure_ref": [], "heading": "Geometry-aware Point Cloud Sampling", "publication_ref": [], "table_ref": [], "text": "In point cloud processing, uniform sampling or farthest point sampling is commonly used. 
However, when it comes to point cloud anomaly detection, arbitrary sampling methods can lead to ambiguous representations of the anomaly structures. In our geometry aware sampling module, we address this issue by first calculating the geometry features of the points to adaptively sample the point cloud.\nGeometry feature extraction. Given a point cloud (P ) as input, we define the set of neighboring points of a certain point P i within a radius r as N i (where |N i | = N ). To compute the normal vector N i for each point (P i ), we employ a local surface fitting method. The normal vector N i can be obtained by solving the following equation:\nmin Ni Pj ∈Ni |N i • v j -d| 2(1)\nwhere (v j ) is the vector from P i to P j , N i • v j denotes the dot product between the normal vector and the vector v j , and d represents the signed distance from the plane defined by the normal vector N i to the point P j .\nThe curvature of a point can be calculated using the normal vector and the eigenvalues of the covariance matrix C i of its neighboring points N i ). The curvature value (K i ) is given by:\nK i = min(λ 1 , λ 2 ) λ 1 + λ 2 + ϵ(2)\nwhere (λ 1 ) and (λ 2 ) are the eigenvalues of C i and ϵ is a small positive constant to avoid division by zero.\nTo quantify the rate of change in normal vectors and curvature between two points (P i and P j ), we can define the following formulas:\nR norm (P i , P j ) = |N i -Nj| |v ij | (3) R curv (P i , P j ) = |K i -K j |(4)\nwhere v ij is the vector from P i to P j , N i and N j are the normal vectors at each point, and K i and K j are the curvature values at each point. Geometry-aware sampling. During the sampling process of the input point cloud, we employ a geometry-aware point cloud sampling approach. To achieve this, we introduce a rate of change memory bank M that captures the local and global rates of change in the point cloud. For each point P i , the memory bank stores the rate of change value R i computed based on the differences in normal vectors and curvature values between neighboring points. Given the rate of change in normal vectors and curvature (R norm and R curv ), the overall rate of change value R is computed as the average of the rate of change values with its neighboring points:\nR i = 1 |N i | Pj ∈Ni (R norm (P i , P j ) + R curv (P i , P j )) (5)\nTo prioritize high-rate-of-change points, we sort them based on values in the memory bank M. Lower ranks indicate higher rates of change, capturing significant anomalous structures. We select points with greater rates of change using a threshold τ . By sampling points with ranks from 1 to ⌊τ • N ⌋, where N is the total point count, we capture more points with significant rates of change. The sampling process to derive the final sampled point set (S) can be represented by the following equation:\nS = {P k |Rank(P k ) ≤ ⌊τ • N ⌋}(6)\nBy incorporating this geometry-aware point cloud sampling strategy, we enhance the accuracy and effectiveness of point cloud anomaly detection by ensuring that the sampled points represent the underlying anomaly structures with their distinctive geometric characteristics." }, { "figure_ref": [], "heading": "Iterative Mask Reconstruction", "publication_ref": [ "b27", "b34", "b29", "b42" ], "table_ref": [], "text": "In 2D anomaly detection field, splitting normal images into patches of the same size and randomly masking them before reconstructing the masked patches back to normal is common. 
The underlying idea is that the reconstruction network only receives normal patches and therefore learns only the normal feature distribution during training. During testing, anomalous input images are masked and can be restored to normal images. Unlike images, a point cloud cannot be divided into regular patches. Furthermore, due to the unordered nature of point clouds, the reconstruction error cannot be directly computed using mean squared error (MSE) or structural similarity (SSIM). Based on these properties, we propose the PMR module, which consists of three components: point-patch generation, random masking and embedding, and the reconstruction target. Point-patch generation. Following Point-MAE [28], the input point cloud is divided into irregular, overlapping patches via our Geometry-aware Point Sampling (GPS) and K-Nearest Neighborhood (KNN). Formally, given a point cloud P i with N points, P i ∈ R N ×3 , GPS is applied to sample the patch centers C. Based on C, KNN selects the k nearest points around each center and forms the point patches P . C and P are formulated as follows:
C = GPS(P i ), C ∈ R n×3 (7)
P = KNN(P i , C), P ∈ R n×k×3 (8)
It should be noted that, in our point patches, each center point represents its neighborhood, as in 2D PatchCore [35]. This not only leads to better convergence but also facilitates the detection and localization of anomalies. Random masking and embedding. With a masking ratio m of 40%, we randomly select a set of masked patches M ∈ R mn×k×3 , which is also used as the ground truth for reconstruction. After random masking, the visible points can be written as:
P vis = P ⊙ (1 -M ) (9)
where ⊙ denotes the spatial element-wise product.
To transform the visible and masked patches into tokens, we use PointNet [30], which mainly consists of MLPs and max-pooling layers. Since our point patches are represented by their coordinates, a simple Position Embedding (PE) maps the centers C to the embedding P c . Setting the embedding dimension to d, the visible tokens T vis are defined as:
T vis = PointNet(P vis ), T vis ∈ R (1-m)n×d (10)
Reconstruction target. Our reconstruction backbone is entirely based on the Point Transformer (P T ) [43], with an asymmetric encoder-decoder architecture. To predict the masked points, we add a prediction head, a simple fully connected layer, after the last layer of the decoder. During the training phase, both the visible tokens T vis and the mask tokens T m are fed to the Transformer P T together with the global position embedding P c . At the last layer of the decoder, the prediction head F c outputs the reconstructed points P pre . The process can be expressed as follows:
P pre = F c {P T (T vis , T m , P c )}, P pre ∈ R mn×k×3 (11)
The target of our reconstruction network is to restore the masked point patches M , also denoted P gt . After obtaining the predicted point patches P pre and the ground truth P gt , we use the l 2 Chamfer Distance as our reconstruction loss. During testing, each restored point patch is concatenated with the visible point patches and sent back to the Transformer for a few iterations, until the anomalous parts have been masked and restored to a normal surface. Given an anomalous sample P * , the iterative reconstruction pseudocode is shown in Algorithm 1."
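As a companion to Algorithm 1, the following is a minimal Python sketch of the test-time iterative mask-reconstruction loop. The sampling, KNN, transformer, and prediction-head components are passed in as callables; their names and signatures are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def iterative_mask_reconstruction(points, gps, knn, transformer, pred_head,
                                  mask_ratio=0.4, num_iters=3, rng=None):
    """Sketch of the test-time loop: repeatedly mask patches and restore them.

    points      : (N, 3) input point cloud (possibly anomalous)
    gps         : callable, points -> (n, 3) geometry-aware patch centers
    knn         : callable, (points, centers) -> (n, k, 3) point patches
    transformer : callable, (visible_patches, visible_centers) -> latent tokens
    pred_head   : callable, (tokens, masked_centers) -> (n_masked, k, 3) patches
    """
    if rng is None:
        rng = np.random.default_rng(0)

    centers = gps(points)              # (n, 3) patch centers
    patches = knn(points, centers)     # (n, k, 3) point patches
    n = patches.shape[0]

    for _ in range(num_iters):
        # Randomly mask a fraction of the patches; the rest stay visible.
        mask = rng.random(n) < mask_ratio
        visible = patches[~mask]

        # Encode visible patches and predict one patch per masked center.
        tokens = transformer(visible, centers[~mask])
        predicted = pred_head(tokens, centers[mask])

        # Re-assemble: predicted patches replace masked ones, visible are kept,
        # and the result is fed back as the next iteration's input.
        patches = patches.copy()
        patches[mask] = predicted

    return patches, centers
```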
}, { "figure_ref": [], "heading": "Dense feature concatenation and comparision", "publication_ref": [ "b34" ], "table_ref": [], "text": "Following the iterative reconstruction module, the anomaly score is determined by comparing the reconstructed point cloud with the original input. To enhance detection of subtle anomalies, we combine features from both the input and reconstructed point clouds. Additionally, to reduce false positives caused by excessive iterations in normal point cloud generation, we employ a single template from the training phase to regularize the output. Point-Patch comparision. Leveraging the point-patch subdivision approach inspired by PatchCore [35], we address the challenges of the unordered nature of point clouds by facilitating patch-wise comparison between the reconstructed and original point clouds. Each point's anomaly score is derived by transforming the Chamfer distance-based comparison scores of its corresponding patches, computed directly from the training phase. The comparison process is formalized as:\nP i = KN N (p i , k), P i ∈ R n×k×3(13)\nP o = KN N (p o , k), P o ∈ R n×k×3(14)\nA p = L c (P i , P o )(15)\nwhere p i and p o denote the input and output point clouds, respectively, with n representing the aggregation count of neighboring points around each point. P i and P o are the corresponding point-patches. The Chamfer loss is given by L c , and A p is the anomaly score in the point domain.\nFeature fusion and comparision. It is commonly recognized that features extracted from different layers of a neural network represent information at varying levels. Consequently, the fusion of information from different layers can effectively enhance the capability of anomaly detection. In our DFC (Dense Feature Concatenation and Comparison) module, we utilize the decoder of a transformer previously employed for reconstruction as a feature extractor. Following the decoding of input and reconstructed point clouds, we extract their 1st, 2nd, and 3rd layer features (f 1 , f 2 , f 3 ) and subsequently fuse and compare them, resulting in featurelevel anomaly scores. Feature level anomaly scores A f are formulated as follows:\nf 1 , f 2 , f 3 = ϕ(p)(16)\nF = f 1 ⊕ f 2 ⊕ f 3(17)\nA f = F i ΘF o(18)\nwhere p are input and reconstructed point clouds, and f are the extracted features. ⊕ represents the fusion operation and F are the fused features. Θ is the comparision operation and A f is the feature anomaly score. Template Regularization. Excessively iterative reconstruction may induce \"normal point drift\", potentially increasing the positive false rate. To mitigate this, we employ a feature template T f saved during the training phase to regularize the features F o of our reconstructed point cloud. For each vector z i within F o , we compute its distance to the template's corresponding vector z i and save the distance to a memory bank M .\n∀z i ∈ F O , M = ||ẑ (l) i -z (l) i ||(19)\nBy Setting a distance threshold τ , we access each distance d i within the set M . If d i exceeds τ , we replace the corresponding vector z i with ẑi . If using template regularization, the regularized F o will be used for calculating A f . After obtaining the anomaly score A p and A f , we interpolate them to a uniform dimension. The final anomaly score is the result of concatenating two scores. 
\nA = A p ⊕ A f(20)" }, { "figure_ref": [], "heading": "Evaluation metrics", "publication_ref": [], "table_ref": [], "text": "Image-level anomaly detection is measured using I-AUROC (Area Under the Receiver Operator Curve), with higher values indicating superior detection capabilities. Pixel-level anomalies are evaluated via the same curve for segmentation accuracy. Besides, the per-region overlap (AUPRO) metric and testing time results are also provided in the Appendix." }, { "figure_ref": [], "heading": "Implementation details", "publication_ref": [ "b27", "b10" ], "table_ref": [], "text": "The backbone architecture used in our experiments is directly adopted from Point-MAE [28]. Instead of training directly on our dataset, we first train the backbone using ShapeNet-55 [11] with a point size of 8192. Regarding the geometry-aware sample module, we set the threshold τ to 0.3, and the number of points sampled in salient re-gions is twice that of non-salient regions. When fine-tuning our model on both the Anomaly-ShapeNet and Real3D-AD datasets, we convert the point clouds into a 256 × 64 pointpatch structure. Here, 256 represents the number of central points sampled, and 64 represents the number of neighborhood points selected using K-nearest neighbors (KNN).\nDuring training and testing, we set the mask rate to 0.4, and the number of iterations for testing is 3. For anomaly scoring, we utilize the 1st, 2nd, and 3rd intermediate layers of the backbone's decoder for comparison." }, { "figure_ref": [ "fig_5" ], "heading": "Anomaly detection on Anomaly-ShapeNet and Real3D-AD Datasets", "publication_ref": [], "table_ref": [], "text": "Anomaly detection results on Anomaly-ShapeNet and Real3D-AD are shown in table 2 and table 3. Compared to other methods, our IMR-Net performs better on the average I-AUROC, which achieves 72.5% on Real3D-AD and 66.1% on Anomaly ShapeNet. In table 3, our IMR-Net achieves the highest score for 19 out of 40 classes. We visualize some representative samples of Anomaly-ShapeNet for anomaly detection and localization in Figure 4 ." }, { "figure_ref": [ "fig_7", "fig_7", "fig_8" ], "heading": "Ablation study", "publication_ref": [ "b20" ], "table_ref": [], "text": "Effectiveness of geometry aware sampling. FPS (farthest point sampling) and RS (random samping) are widely used in 3D point processing. However, when the sampling ratio is too low, FPS (Farthest Point Sampling) and (Random Sampling) may result in the sampled defects too slightly to be detect. Therefore, we employ the geometry aware sampling. To demonstrate the superiority of the GPS algorithm in anomaly detection, we conducted ablation experiments alongside RS , FPS , and Voxel Down-sampling methods. As shown in table 4 , when using our GPS, the higher I-AUC (0.66) and P-AUC (0.65) were achieved. Analysis of masking ratio. Figure 5 shows the influence of masking ratio. The optimal ratio is 0.4, which is good both for reconstruction and feature representation. When the masking rate is reduced, the anomaly regions may not be covered during the iteration process. Conversely, when the masking ratio increases, the limited data may prevent the model from convergence. Figure 5b analyzes the relationship between the size of the anomalous region and the masking rate, where we find that point clouds with larger anomalous areas require higher masking rates. Feature Discrimination ability. 
Previous methods like M3DM [39] and RegAD [21] primarily employed models pretrained on other datasets to extract features, leading to domain bias between the extracted features and the actual anomaly detection datasets. In contrast, our IMR-Net, as a self-supervised network, effectively extracts features from both the abnormal and normal point clouds in the Real3D-AD and Anomaly-ShapeNet datasets. This is directly demonstrated in Figure 6. Our extracted features occupy a more compact feature space, and this property makes them more suitable for memory-bank building and feature-distance calculation." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we propose Anomaly-ShapeNet, a synthetic 3D point dataset for anomaly detection containing realistic and challenging samples. The diverse point clouds in Anomaly-ShapeNet, with high accuracy and a reasonable number of samples, make it well suited to a variety of 3D algorithms. Moreover, we introduce IMRNet, a self-supervised model based on 3D point mask reconstruction, which achieves state-of-the-art performance on both the Anomaly-ShapeNet and Real3D-AD datasets." } ]
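To complement the dense feature comparison of Sec. 4.3 (Eqs. 13-15 and 20), here is a minimal NumPy sketch of the patch-wise Chamfer comparison A_p and a simple fusion with a feature-level score A_f. The fusion and normalization choices shown are illustrative assumptions, not the authors' exact ⊕ and Θ operators.

```python
import numpy as np

def chamfer(p, q):
    """Symmetric l2 Chamfer distance between two point sets p (k, 3) and q (k, 3)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (k, k) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def patch_anomaly_scores(input_patches, recon_patches):
    """Point-domain scores A_p: one Chamfer distance per corresponding patch pair.

    input_patches, recon_patches : (n, k, 3) patches built around the same n centers.
    """
    return np.array([chamfer(pi, po)
                     for pi, po in zip(input_patches, recon_patches)])

def fused_scores(point_scores, feat_in, feat_out):
    """Combine point-domain and feature-level scores (A = A_p ⊕ A_f, sketched).

    feat_in, feat_out : (n, d) multi-layer decoder features of the input and
    reconstructed clouds, already concatenated across layers.
    """
    feat_scores = np.linalg.norm(feat_in - feat_out, axis=1)  # per-center A_f

    def norm01(x):
        # Rescale a score map to [0, 1] before fusing the two modalities.
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    return norm01(point_scores) + norm01(feat_scores)
```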
Recently, 3D anomaly detection, a crucial problem involving fine-grained geometry discrimination, has been attracting increasing attention. However, the lack of abundant real 3D anomaly data limits the scalability of current models. To enable scalable anomaly data collection, we propose a 3D anomaly synthesis pipeline to adapt existing large-scale 3D models for 3D anomaly detection. Specifically, we construct a synthetic dataset, i.e., Anomaly-ShapeNet, based on ShapeNet. Anomaly-ShapeNet consists of 1600 point cloud samples under 40 categories, which provides a rich and varied collection of data, enabling efficient training and enhancing adaptability to industrial scenarios. Meanwhile, to enable scalable representation learning for 3D anomaly localization, we propose a self-supervised method, i.e., Iterative Mask Reconstruction Network (IMRNet). During training, we propose a geometry-aware sampling module to preserve potentially anomalous local regions during point cloud down-sampling. Then, we randomly mask out point patches and send the visible patches to a transformer for reconstruction-based self-supervision. During testing, the point cloud repeatedly goes through the Mask Reconstruction Network, with each iteration's output becoming the next input. By merging and contrasting the final reconstructed point cloud with the initial input, our method successfully locates anomalies. Experiments show that IMR-Net outperforms previous state-of-the-art methods, achieving 66.1% I-AUC on the Anomaly-ShapeNet dataset and 72.5% I-AUC on the Real3D-AD dataset. Our dataset will be released at https://github.com/Chopper-233/Anomaly-ShapeNet.
Towards Scalable 3D Anomaly Detection and Localization: A Benchmark via 3D Anomaly Synthesis and A Self-Supervised Learning Network
[ { "figure_caption": "Figure 1 .1Figure 1. Examples of the proposed Anomaly-ShapeNet. The first and second rows are the original mesh and the subdivided mesh. The third row is synthetic defected point cloud. The fourth row is the Ground-Truth of the anomalous region.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Pipeline for anomaly synthesis built upon the Anomaly-ShapeNet Dataset. Selected normal samples are processed through a mesh subdivision module to attain a more uniform point cloud distribution. We employ Blender to introduce defects into the refined samples, and utilize CloudCompare software to acquire 3D anomaly ground truth.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Overview of the Iterative Mask Reconstruction Network (IMRNet) Pipeline. (a) Training Phase: The standard training point cloud is initially converted to point-patch format using the Geometry-aware Point-cloud Sampling (GPS) module. Following this, random masking is applied to the point-patches, which are then reconstructed by a network comprising an Autoencoder-based Transformer and a lightweight prediction head, operating in a self-supervised paradigm. (b) Testing Phase: The input point cloud is subjected to a reconstruction process mirroring the training procedure. The reconstructed point cloud is cyclically fed back into the reconstruction network as input for several iterations. Ultimately, a comparative analysis is performed between the reconstructed and the original point clouds at both the point cloud and feature levels to derive the final anomaly score map.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 1 : 8 :118Iterative Mask Reconstruction Require: Point Transformer P T , Prediction head F c , Geometry feature extractor G, Feature related sample function S, masking ratio m, iteration number I, data loader D Load model P T 2: Set P T to evaluation mode 3: for each P * in D do 4: (P, C) ← GPS(P * ) 5: for index in 1 to I do 6: P vis ← P (1 -m) 7: T ← P T (P vis , C) P pre ← F c (T, C) 9: P = P pre ⊕ P vis 10: end for 11: end for 12: function GPS(P ) 13: M ← G(P ) 14: C ← S(M, P ) 15: P ← KNN(C, P, k) 16:", "figure_data": "", "figure_id": "fig_3", "figure_label": "118", "figure_type": "figure" }, { "figure_caption": "Table 3 .3I-AUROC score for anomaly detection of 40 categories of our Anomaly-ShapeNet dataset. Our method clearly outperforms other methods. Last line is the average result of 40 classes. The results can be regarded as the baseline of Anomaly-ShapeNet.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Qualitative results visualization of anomaly localization performance on Anomaly-ShapeNet.5. Experiments 5.1. Datasets Anomaly-ShapeNet. The Anomaly-ShapeNet is our newly proposed 3D synthesised point cloud anomaly detection dataset. The Anomaly-ShapeNet dataset offers 40 categories, with more than 1600 positive and negative samples. Each training set for a category contains only four samples, which is similar to the few-shot scenario. Each test set for", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "(a) Overall Performance. 
(b) Categories Performance.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure5. Ablation study of masking ratio. (S,M,L) represent (small,middle,large) size anomaly. At the masking ratio 0.4, the overall performance is best. Objects with larger anomaly correspond to higher optimal masking ratios.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Histogram of standard deviation along each dimension of feature of extracted by pretrained model and our IMRNet model. We show a case of the Duck class in Real3D-AD dataset.", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Comparison between the proposed Anomaly-ShapeNet and existing mainstream 3D anomaly detection datasets.", "figure_data": "DatasetsYearTypeModality#Class#Anomaly TypesNumberPoint RangeMVTecAD-3D [2]2021RealRGB/D103-5360410K-30KEyecandies [5]2022SynRGB/D/N10315500-MAD [44]2023Syn+RealRGB20310133-Real3D-AD [25]2023RealPoint Cloud122120035K-780KAnomaly-ShapeNet (Ours)2023SynPoint Cloud40616008K-30K", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "I-AUROC score for anomaly detection of 12 categories of Real3D-AD. Bold numbers represent the current highest metrics. Our method clearly outperforms the baseline; For pure 3D point setting, we get 0.725 mean I-AUROC score.", "figure_data": "MethodAirplane CarCandyChicken Diamond DuckFishGemstone Seahorse ShellStarfish Toffees MeanBTF(Raw) BTF(FPFH) M3DM PatchCore(FPFH) PatchCore(PointMAE) CPMF RegAD Ours0.730 0.520 0.434 0.882 0.726 0.701 0.716 0.7620.647 0.560 0.541 0.590 0.498 0.551 0.697 0.7110.539 0.630 0.552 0.541 0.663 0.552 0.685 0.7550.789 0.432 0.683 0.837 0.827 0.504 0.852 0.7800.707 0.545 0.602 0.574 0.783 0.523 0.900 0.9050.691 0.784 0.433 0.546 0.489 0.582 0.584 0.5170.602 0.549 0.540 0.675 0.630 0.558 0.915 0.8800.686 0.648 0.644 0.370 0.374 0.589 0.417 0.6740.596 0.779 0.495 0.505 0.539 0.729 0.762 0.6040.396 0.754 0.694 0.589 0.501 0.653 0.583 0.6650.530 0.575 0.551 0.441 0.519 0.700 0.506 0.6740.703 0.462 0.450 0.565 0.585 0.390 0.827 0.7740.635 0.603 0.552 0.593 0.594 0.586 0.704 0.725Methodcap0cap3helmet3 cup0bowl4vase3headset1 eraser0vase8cap4vase2vase4helmet0 bucket1BTF(Raw) BTF(FPFH) M3DM Patchcore(FPFH) Patchcore(PointMAE) CPMF RegAD Ours0.668 0.618 0.557 0.580 0.589 0.601 0.693 0.7370.527 0.522 0.423 0.453 0.476 0.551 0.725 0.7750.526 0.444 0.374 0.404 0.424 0.520 0.367 0.5730.403 0.586 0.539 0.600 0.610 0.497 0.510 0.6430.664 0.609 0.464 0.494 0.501 0.683 0.663 0.6760.717 0.699 0.439 0.449 0.460 0.582 0.650 0.7000.515 0.490 0.617 0.637 0.627 0.458 0.610 0.6760.525 0.719 0.627 0.657 0.677 0.689 0.343 0.5480.424 0.668 0.663 0.662 0.663 0.529 0.620 0.6300.468 0.520 0.777 0.757 0.727 0.553 0.643 0.6520.410 0.546 0.737 0.721 0.741 0.582 0.605 0.6140.425 0.510 0.476 0.506 0.516 0.514 0.500 0.5240.553 0.571 0.526 0.546 0.556 0.555 0.600 0.5970.321 0.633 0.501 0.551 0.561 0.601 0.752 0.771Methodbottle3 vase0bottle0tap1bowl0bucket0 vase5vase1vase9ashtray0 bottle1tap0phonecup1BTF(Raw) BTF(FPFH) M3DM Patchcore(FPFH) Patchcore(PointMAE) CPMF RegAD Ours0.568 0.322 0.541 0.572 0.650 0.405 0.525 0.6400.531 0.342 0.423 0.455 0.447 0.451 0.533 0.5330.597 0.344 0.574 0.604 0.513 0.520 0.486 0.5520.573 0.546 0.739 0.766 0.538 0.697 0.641 0.6960.564 0.509 0.634 0.504 0.523 0.783 0.671 0.6810.617 0.401 0.309 0.469 0.593 
0.482 0.610 0.5800.585 0.409 0.317 0.417 0.579 0.618 0.520 0.6760.549 0.219 0.427 0.423 0.552 0.345 0.702 0.7570.564 0.268 0.663 0.660 0.629 0.609 0.594 0.5940.578 0.420 0.577 0.587 0.591 0.353 0.597 0.6710.510 0.546 0.637 0.667 0.601 0.482 0.695 0.7000.525 0.560 0.754 0.753 0.458 0.359 0.676 0.6760.563 0.671 0.357 0.388 0.488 0.509 0.414 0.7550.521 0.610 0.556 0.586 0.556 0.499 0.538 0.757Methodvase7helmet2 cap5shelf0bowl5bowl3helmet1 bowl1headset0 bag0bowl2jarMeanBTF(Raw) BTF(FPFH) M3DM Patchcore(FPFH) Patchcore(PointMAE) CPMF RegAD Ours0.448 0.518 0.657 0.693 0.650 0.397 0.462 0.6350.602 0.542 0.623 0.425 0.447 0.462 0.614 0.6410.373 0.586 0.639 0.790 0.538 0.697 0.467 0.6520.164 0.609 0.564 0.494 0.523 0.685 0.688 0.6030.417 0.699 0.409 0.558 0.593 0.685 0.593 0.7100.385 0.490 0.617 0.537 0.579 0.658 0.348 0.5990.349 0.719 0.427 0.484 0.552 0.589 0.381 0.6000.264 0.668 0.663 0.639 0.629 0.639 0.525 0.7020.378 0.520 0.577 0.583 0.591 0.643 0.537 0.7200.410 0.546 0.537 0.571 0.601 0.643 0.706 0.6600.525 0.510 0.684 0.615 0.458 0.625 0.490 0.6850.420 0.424 0.441 0.472 0.483 0.610 0.592 0.7800.493 0.528 0.552 0.568 0.562 0.559 0.572 0.661", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study on the sample methods. The bold number represents the sample method corresponding to the highest index. RS represents random sampling and FPS represents farthest random sampling. Voxel denotes voxel down-sample and GPS is our geometry aware sampling.", "figure_data": "# SampleMetrics I-AUC P-AUCRS0.550.61FPS0.640.62Voxel0.620.55GPS0.660.65", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" } ]
Wenqiao Li; Xiaohao Xu; Yao Gu; Bozhong Zheng; Shenghua Gao; Yingna Wu
[ { "authors": "Paul Bergmann; Michael Fauser; David Sattlegger; Carsten Steger", "journal": "", "ref_id": "b0", "title": "Uninformed students: Student-teacher anomaly detection with discriminative latent embeddings", "year": "2020" }, { "authors": "Paul Bergmann; Xin Jin; David Sattlegger; Carsten Steger", "journal": "VIS-APP", "ref_id": "b1", "title": "The MVTec 3D-AD Dataset for Unsupervised 3D Anomaly Detection and Localization", "year": "2022" }, { "authors": "Paul Bergmann; Sindy Löwe; Michael Fauser; David Sattlegger; Carsten Steger", "journal": "VISAPP", "ref_id": "b2", "title": "Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders", "year": "2019" }, { "authors": "Paul Bergmann; David Sattlegger", "journal": "", "ref_id": "b3", "title": "Anomaly detection in 3d point clouds using deep geometric descriptors", "year": "2023" }, { "authors": "Luca Bonfiglioli; Marco Toschi; Davide Silvestri; Nicola Fioraio; Daniele De Gregorio", "journal": "", "ref_id": "b4", "title": "The eyecandies dataset for unsupervised multimodal anomaly detection and localization", "year": "2022" }, { "authors": "Yunkang Cao; Yanan Song; Xiaohao Xu; Shuya Li; Yuhao Yu; Yifeng Zhang; Weiming Shen", "journal": "IEEE", "ref_id": "b5", "title": "Semi-supervised knowledge distillation for tiny defect detection", "year": "2022" }, { "authors": "Yunkang Cao; Xiaohao Xu; Zhaoge Liu; Weiming Shen", "journal": "IEEE Transactions on Industrial Informatics", "ref_id": "b6", "title": "Collaborative discrepancy optimization for reliable image anomaly localization", "year": "2023" }, { "authors": "Yunkang Cao; Xiaohao Xu; Weiming Shen", "journal": "", "ref_id": "b7", "title": "Complementary pseudo multimodal feature for point cloud anomaly detection", "year": "2023" }, { "authors": "Yunkang Cao; Xiaohao Xu; Chen Sun; Yuqi Cheng; Zongwei Du; Liang Gao; Weiming Shen", "journal": "", "ref_id": "b8", "title": "Segment any anomaly without training via hybrid prompt regularization", "year": "2023" }, { "authors": "Fabio Carrara; Giuseppe Amato; Luca Brombin; Fabrizio Falchi; Claudio Gennaro", "journal": "IEEE", "ref_id": "b9", "title": "Combining GANs and Au-toEncoders for efficient anomaly detection", "year": "2021" }, { "authors": "Thomas Angel X Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Su", "journal": "", "ref_id": "b10", "title": "Shapenet: An information-rich 3d model repository", "year": "2015" }, { "authors": "Ruitao Chen; Guoyang Xie; Jiaqi Liu; Jinbao Wang; Ziqi Luo; Jinfan Wang; Feng Zheng", "journal": "", "ref_id": "b11", "title": "Easynet: An easy network for 3d industrial anomaly detection", "year": "2023" }, { "authors": "Xuhai Chen; Yue Han; Jiangning Zhang", "journal": "", "ref_id": "b12", "title": "A Zero-/Few-Shot Anomaly Classification and Segmentation Method for CVPR", "year": "2023" }, { "authors": "Niv Cohen; Yedid Hoshen", "journal": "", "ref_id": "b13", "title": "Sub-image anomaly detection with deep pyramid correspondences", "year": "2020" }, { "authors": "Thomas Defard; Aleksandr Setkov; Angelique Loesch; Romaric Audigier", "journal": "Springer International Publishing", "ref_id": "b14", "title": "Padim: A patch distribution modeling framework for anomaly detection and localization", "year": "2021" }, { "authors": "Thomas Defard; Aleksandr Setkov; Angelique Loesch; Romaric Audigier", "journal": "Springer", "ref_id": "b15", "title": "PaDiM: a patch distribution modeling framework for 
anomaly detection and localization", "year": "2021" }, { "authors": "Thibaud Ehret; Axel Davy; Jean-Michel Morel; Mauricio Delbracio", "journal": "Journal of Mathematical Imaging and Vision", "ref_id": "b16", "title": "Image Anomalies: A Review and Synthesis of Detection Methods", "year": "2019" }, { "authors": "Denis Gudovskiy; Shun Ishizaka; Kazuki Kozuka", "journal": "", "ref_id": "b17", "title": "CFLOW-AD: Real-Time Unsupervised Anomaly Detection With Localization via Conditional Normalizing Flows", "year": "2022" }, { "authors": "Eungi Hong; Yoonsik Choe", "journal": "IEEE Access", "ref_id": "b18", "title": "Latent feature decentralization loss for one-class anomaly detection", "year": "2020" }, { "authors": "Eliahu Horwitz; Yedid Hoshen", "journal": "", "ref_id": "b19", "title": "Back to the feature: classical 3d features are (almost) all you need for 3d anomaly detection", "year": "2023" }, { "authors": "Chaoqin Huang; Haoyan Guan; Aofan Jiang; Ya Zhang; Michael Spratling; Yan-Feng Wang", "journal": "Springer", "ref_id": "b20", "title": "Registration based few-shot anomaly detection", "year": "2022" }, { "authors": "Jongheon Jeong; Yang Zou; Taewan Kim; Dongqing Zhang; Avinash Ravichandran; Onkar Dabeer", "journal": "", "ref_id": "b21", "title": "Winclip: Zero-/few-shot anomaly classification and segmentation", "year": "2023" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo", "journal": "", "ref_id": "b22", "title": "Segment anything", "year": "2023" }, { "authors": "Chun-Liang Li; Kihyuk Sohn; Jinsung Yoon; Tomas Pfister", "journal": "", "ref_id": "b23", "title": "CutPaste: Self-supervised learning for anomaly detection and localization", "year": "2021" }, { "authors": "Jiaqi Liu; Guoyang Xie; Ruitao Chen; Xinpeng Li; Jinbao Wang; Yong Liu; Chengjie Wang; Feng Zheng", "journal": "", "ref_id": "b24", "title": "Real3dad: A dataset of point cloud anomaly detection", "year": "2023" }, { "authors": "Pankaj Mishra; Claudio Piciarelli; Gian Luca Foresti", "journal": "International Journal of Neural Systems", "ref_id": "b25", "title": "A neural network for image anomaly detection with deep pyramidal representations and dynamic routing", "year": "2020" }, { "authors": "Guansong Pang; Chunhua Shen; Longbing Cao; Anton Van Den; Hengel", "journal": "ACM Comput. 
Surv", "ref_id": "b26", "title": "Deep learning for anomaly detection: A review", "year": "2021" }, { "authors": "Yatian Pang; Wenxiao Wang; Francis Eh Tay; Wei Liu; Yonghong Tian; Li Yuan", "journal": "Springer", "ref_id": "b27", "title": "Masked autoencoders for point cloud self-supervised learning", "year": "2022" }, { "authors": "Kevin M Potter; Brendan Donohoe; Benjamin Greene; Abigail Pribisova; Emily Donahue", "journal": "SPIE", "ref_id": "b28", "title": "Automatic detection of defects in high reliability as-built parts using x-ray CT", "year": "2020" }, { "authors": "Hao Charles R Qi; Kaichun Su; Leonidas J Mo; Guibas", "journal": "", "ref_id": "b29", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2017" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b30", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Tal Reiss; Niv Cohen; Liron Bergman; Yedid Hoshen", "journal": "", "ref_id": "b31", "title": "Panda: Adapting pretrained features for anomaly detection and segmentation", "year": "2021" }, { "authors": "Oliver Rippel; Arnav Chavan; Chucai Lei; Dorit Merhof", "journal": "", "ref_id": "b32", "title": "Transfer Learning Gaussian Anomaly Detection by Fine-Tuning Representations", "year": "2021" }, { "authors": "Nicolae-Catalin Ristea; Neelu Madan; Tudor Radu; Kamal Ionescu; Fahad Nasrollahi; Thomas B Shahbaz Khan; Mubarak Moeslund; Shah", "journal": "", "ref_id": "b33", "title": "Self-supervised predictive convolutional attentive block for anomaly detection", "year": "2022" }, { "authors": "Karsten Roth; Latha Pemula; Joaquin Zepeda; Bernhard Schölkopf; Thomas Brox; Peter Gehler", "journal": "", "ref_id": "b34", "title": "Towards total recall in industrial anomaly detection", "year": "2022" }, { "authors": "Marco Rudolph; Tom Wehrbein; Bodo Rosenhahn; Bastian Wandt", "journal": "", "ref_id": "b35", "title": "Asymmetric student-teacher networks for industrial anomaly detection", "year": "2023" }, { "authors": "Thomas Schlegl; Philipp Seeböck; Sebastian M Waldstein; Georg Langs; Ursula Schmidt-Erfurth", "journal": "Medical Image Analysis", "ref_id": "b36", "title": "f-AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks", "year": "2019" }, { "authors": "Lu Wang; Dongkai Zhang; Jiahao Guo; Yuexing Han", "journal": "Applied Sciences", "ref_id": "b37", "title": "Image anomaly detection using normal data only by latent space resampling", "year": "2020" }, { "authors": "Yue Wang; Jinlong Peng; Jiangning Zhang; Ran Yi; Yabiao Wang; Chengjie Wang", "journal": "", "ref_id": "b38", "title": "Multimodal industrial anomaly detection via hybrid fusion", "year": "2023" }, { "authors": "Guoyang Xie; Jinbao Wang; Jiaqi Liu; Jiayi Lyu; Yong Liu; Chengjie Wang; Feng Zheng; Yaochu Jin", "journal": "", "ref_id": "b39", "title": "IM-IAD: Industrial image anomaly detection benchmark in manufacturing", "year": "2023" }, { "authors": "Guoyang Xie; Jingbao Wang; Jiaqi Liu; Feng Zheng; Yaochu Jin", "journal": "", "ref_id": "b40", "title": "Pushing the limits of fewshot anomaly detection in industry vision: Graphcore", "year": "2023" }, { "authors": "Vitjan Zavrtanik; Matej Kristan; Danijel Skočaj", "journal": "", "ref_id": "b41", "title": "DRAEM -A discriminatively trained reconstruction embedding for surface anomaly 
detection", "year": "2021" }, { "authors": "Hengshuang Zhao; Li Jiang; Jiaya Jia; Vladlen Philip Hs Torr; Koltun", "journal": "", "ref_id": "b42", "title": "Point transformer", "year": "2021" }, { "authors": "Qiang Zhou; Weize Li; Lihan Jiang; Guoliang Wang; Guyue Zhou; Shanghang Zhang; Hao Zhao", "journal": "", "ref_id": "b43", "title": "Pad: A dataset and benchmark for pose-agnostic anomaly detection", "year": "2023" }, { "authors": "Yang Zou; Jongheon Jeong; Latha Pemula; Dongqing Zhang; Onkar Dabeer", "journal": "", "ref_id": "b44", "title": "SPot-the-Difference self-supervised pretraining for anomaly detection and segmentation", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 376.56, 443.66, 168.55, 23.03 ], "formula_id": "formula_0", "formula_text": "min Ni Pj ∈Ni |N i • v j -d| 2(1)" }, { "formula_coordinates": [ 4, 388.59, 571.55, 156.52, 23.22 ], "formula_id": "formula_1", "formula_text": "K i = min(λ 1 , λ 2 ) λ 1 + λ 2 + ϵ(2)" }, { "formula_coordinates": [ 4, 369.7, 665.6, 175.41, 48.4 ], "formula_id": "formula_2", "formula_text": "R norm (P i , P j ) = |N i -Nj| |v ij | (3) R curv (P i , P j ) = |K i -K j |(4)" }, { "formula_coordinates": [ 5, 63.1, 257.3, 223.27, 26.8 ], "formula_id": "formula_3", "formula_text": "R i = 1 |N i | Pj ∈Ni (R norm (P i , P j ) + R curv (P i , P j )) (5)" }, { "formula_coordinates": [ 5, 104.2, 408.97, 182.16, 9.65 ], "formula_id": "formula_4", "formula_text": "S = {P k |Rank(P k ) ≤ ⌊τ • N ⌋}(6)" }, { "formula_coordinates": [ 5, 372.92, 152.8, 172.19, 11.72 ], "formula_id": "formula_5", "formula_text": "C = GP S(P i ), C ∈ R n×3(7)" }, { "formula_coordinates": [ 5, 359, 184.69, 186.11, 11.72 ], "formula_id": "formula_6", "formula_text": "P = KN N (P i , C), P ∈ R n×k×3(8)" }, { "formula_coordinates": [ 5, 383.68, 319.08, 161.43, 9.65 ], "formula_id": "formula_7", "formula_text": "P vis = P ⊙ (1 -M )(9)" }, { "formula_coordinates": [ 5, 329.17, 440.57, 215.95, 11.72 ], "formula_id": "formula_8", "formula_text": "T vis = P ointN et(P vis ), T vis ∈ R (1-m)n×d(10)" }, { "formula_coordinates": [ 5, 318.29, 599.99, 226.82, 11.72 ], "formula_id": "formula_9", "formula_text": "P pre = F c {P T (T vis , T m , P c )}, P pre ∈ R mn×k×3 (11)" }, { "formula_coordinates": [ 6, 100.18, 669.54, 186.19, 11.72 ], "formula_id": "formula_10", "formula_text": "P i = KN N (p i , k), P i ∈ R n×k×3(13)" }, { "formula_coordinates": [ 6, 98.51, 702.12, 187.86, 11.72 ], "formula_id": "formula_11", "formula_text": "P o = KN N (p o , k), P o ∈ R n×k×3(14)" }, { "formula_coordinates": [ 6, 392.72, 87.11, 152.4, 9.8 ], "formula_id": "formula_12", "formula_text": "A p = L c (P i , P o )(15)" }, { "formula_coordinates": [ 6, 390.94, 328.09, 154.17, 11.03 ], "formula_id": "formula_13", "formula_text": "f 1 , f 2 , f 3 = ϕ(p)(16)" }, { "formula_coordinates": [ 6, 388.65, 361.93, 156.47, 11.03 ], "formula_id": "formula_14", "formula_text": "F = f 1 ⊕ f 2 ⊕ f 3(17)" }, { "formula_coordinates": [ 6, 399.87, 393.88, 145.24, 9.65 ], "formula_id": "formula_15", "formula_text": "A f = F i ΘF o(18)" }, { "formula_coordinates": [ 6, 366.85, 563.66, 178.26, 14.07 ], "formula_id": "formula_16", "formula_text": "∀z i ∈ F O , M = ||ẑ (l) i -z (l) i ||(19)" }, { "formula_coordinates": [ 6, 398.16, 682.3, 146.95, 9.65 ], "formula_id": "formula_17", "formula_text": "A = A p ⊕ A f(20)" } ]
2023-11-25
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b29", "b7", "b30", "b31", "b32", "b33", "b34", "b35", "b36", "b37", "b38", "b39", "b40", "b41", "b42", "b43", "b44", "b45" ], "table_ref": [], "text": "Hyperspectral images, renowned for their ability to capture rich spectral information across a broad range of bands, offer a nuanced perspective on surface features [1]. They play a crucial role in providing detailed and comprehensive insights into Earth's surface [2], supporting applications such as target detection [3], [4], anomaly detection [5], and land cover classification [6], [7]. In the context of land cover classification, the primary objective is to categorize pixels in hyperspectral images into predefined classes [8], contributing to a detailed understanding of land cover, vegetation, and various surface features [9]. This technology is pivotal in diverse applications, including environmental monitoring [10], agricultural management [11], and urban planning [12]. However, the complexity of hyperspectral data, stemming from the high dimensionality due to numerous spectral bands, presents challenges in feature extraction and data representation. Spectral variability within and between classes further complicates the classification task, as does the presence of mixed pixels, where a single pixel may contain contributions from multiple land cover types. Feature extraction has emerged as an effective strategy to enhance the accuracy of hyperspectral image (HSI) classification [13]. Researchers have diligently developed various feature extraction methods that fully consider the spatial and spectral information inherent in HSIs [14], [15]. Additionally, extensive research has been conducted on spectral unmixing [16] and subpixel mapping methods [17], further contributing to the advancement of land cover classification in hyperspectral imagery.\nHyperspectral intrinsic image decomposition (HIID), as an effective feature representation method, has gained increasing prominence in the realm of hyperspectral image processing [18], [19]. This technique aims to decompose hyperspectral images into intrinsic components, thereby revealing the underlying spectral characteristics of the observed scene. By isolating intrinsic features such as environment-related and category-related information, HIID enhances the interpretability of hyperspectral data, offering a more intricate understanding of spectral characteristics and ultimately leading to improved discrimination of distinct surface materials. HIID proves particularly valuable in addressing inherent complexities associated with mixed pixels and spectral variability in hyperspectral data. It effectively mitigates spectral uncertainty, separates mixed pixel information, and extracts valuable insights for a nuanced understanding of hyperspectral content. Kang et al. [20] were pioneers in applying intrinsic image decomposition for feature extraction in hyperspectral images, showcasing the method's effectiveness through experimental validation. In subsequent research, Jin et al. [21] extended the HIID framework by incorporating digital surface model (DSM) cues to effectively recover hyperspectral reflectance and environmental illumination. Similarly, Gu et al. 
[22] augmented HIID by integrating spatial information from highresolution (HR) panchromatic (PAN) images, enhancing spatial details in the intrinsic component. Despite these advancements utilizing HIID for feature extraction and enhancing classification performance, the overall efficacy is limited by the model's representational capacity. Ongoing efforts focus on overcoming these limitations and further advancing the capabilities of HIID in hyperspectral image analysis.\nIn the realm of hyperspectral image classification, leveraging general machine learning methods is instrumental for the extraction of meaningful features from high-dimensional spectral data. These methods encompass a diverse range, including Support Vector Machines (SVM) [23], Random Forests [24], Decision Trees [25], k-Nearest Neighbors (KNN) [26], and Ensemble Methods [27]. They play a crucial role in categorizing pixels within hyperspectral images, aiming to provide a detailed understanding of land cover and enable various applications. Support Vector Machines (SVM), for instance, employ hyperplanes to effectively separate classes based on spectral features. Meanwhile, Random Forests and Decision Trees utilize decision-making processes informed by the spectral characteristics of the data. k-Nearest Neighbors (KNN) relies on the similarity of spectral signatures for classification, and Ensemble Methods amalgamate multiple models to bolster overall accuracy. To further enhance the efficiency and performance of these machine learning algorithms, techniques such as dimensionality reduction [28], feature selection [29], and normalization [30] are routinely applied, contributing to the optimization of hyperspectral image classification by improving the extraction of pertinent features. However, it's important to note that these methods are generally considered \"shallow\" due to their limited layer depth, which may constrain their ability to capture intricate patterns and spectral information embedded in the data.\nOwing to the deep architecture and the abundant network parameters, deep learning methods have demonstrated significant efficacy in the field of hyperspectral image classification [8]. The increased depth and richness of network parameters empower these models with a potent expressive capability, facilitating the capture of intricate patterns and detailed spectral information present in hyperspectral images [31]. Consequently, this has resulted in substantial enhancements in classification accuracy and the ability to discern complex features within the data. Deep learning methods have thereby emerged as a prominent and impactful approach, driving advancements in the state-of-the-art for hyperspectral image classification [32]. Generally, the backbone networks for hyperspectral image classification can be divided into four categories: recurrent neural networks (RNNs), convolutional neural networks (CNNs), graph neural networks (GNNs), and Transformers. Among these backbones, RNNs allow the network to retain information from previous bands, enabling the capture of nuanced spectral patterns and temporal dynamics [33], [34]. GNNs treat individual pixels or spectral bands as nodes in a graph and leveraging edge connections to represent spatial dependencies, enhancing the model's ability to comprehend the complex spectral variations and spatial patterns inherent [35], [36]. Transformers enable the modeling of long-range dependencies within hyperspectral data [37], [38]. 
While CNNs employ convolutional layers to automatically learn hierarchical representations, allowing for capturing intricate spectral patterns and spatial dependencies present in hyperspectral imagery [39], [40].\nDue to the great potential of deep learning to extract intricate and high-level information from the image, this work attempt to rethink the HIID based on deep learning models to enhance the effectiveness of HIID. Through leveraging the representation power of neural networks for better feature extraction, we can obtain improved separation of intrinsic components in hyperspectral imagery. Considering the datadriven characteristics of deep learning models, the key is how to construct the training mechanism to decompose the environment-related and category-related features. Our prior work [41] utilizes the adversarial learning methods, which can significantly improve the classification performance. However, it is worth noting that the training process encountered challenges in terms of stability. Besides, the performance is sensitive to the chosen of hyperparameters.\nThis work would exploit the advantages of deep feature embedding to enhance hyperspectral image classification by constructing the environmental feature module, categorical feature module, and feature discrimination module. Deep feature embedding, known for enlarging inter-class variance and reducing intra-class variance [42], is a promising approach in improving classification model performance [43], [44]. Leveraging neural networks, deep feature embedding learns meaningful representations by emphasizing similarities and differences in a latent space. In hyperspectral image classification, this facilitates the extraction of discriminative features from complex spectral information, contributing to enhanced classification accuracy. The inherent capability of deep feature embedding to capture intricate patterns within highdimensional data further strengthens its effectiveness [45], [46]. By emphasizing the contrast between positive and negative pairs, it enhances model robustness and generalization, making it well-suited for addressing challenges in hyperspectral data, such as mixed pixels and spectral variability. This work attempt to leverage deep feature embedding methods for the training of deep models to decompose features from hyperspectral images, obtaining discriminative environmental and categorical features.\nBuilding upon the advantages of deep models and hyperspectral intrinsic image decomposition (HIID), this study revisits HIID within the context of deep models for hyperspectral image classification, harnessing the advantages of deep feature embedding. The proposed framework, HyperDID, accomplishes the extraction of environment-related and categoryrelated features through the Environmental Feature Module (EFM) and Categorical Feature Module (CFM). This integration contributes to high-performance hyperspectral image classification. Additionally, the incorporation of the Feature Discrimination Module (FDM) can discern and discriminate between the distinctive characteristics associated with the environment and specific categories and effectively separate the intrinsic features. In summary, the contributions of this work can be outlined as follows.\n• This work rethinks the hyperspectral intrinsic image decomposition within the context of deep models and develops a novel framework, called HyperDID, in order to decompose the environment-related and category-related features from the image. 
Extensive experiments over three real-world and challenging datasets have demonstrated that the proposed HyperDID method can extract spatial-spectral features more effectively from the image, thereby yielding higher classification performance. The rest of this paper is arranged as follows. Section II introduces the proposed HyperDID method to capture discriminative environment-related and category-related features for hyperspectral image classification. Experiments are conducted over three real-world hyperspectral image datasets to validate the effectiveness of the proposed method in Section III. Finally, we conclude this paper with some discussions in Section IV." }, { "figure_ref": [], "heading": "II. PROPOSED METHOD", "publication_ref": [], "table_ref": [], "text": "In this work, our aim is to decompose the environmentrelated and category-related components from a given hyperspectral image under the proposed HyperDID framework. For convenience, here we denote X = {x 1 , x 2 , • • • , x N } as the set of training samples from hyperspectral image, where N is the number of the samples, and y i (i = 1, 2, • • • , N ) as the corresponding label of x i , where y i ∈ Λ = {1, 2, • • • , K}. K is the number of land cover classes in the hyperspectral image." }, { "figure_ref": [], "heading": "A. Hyperspectral Intrinsic Image Decomposition", "publication_ref": [], "table_ref": [], "text": "Constructing an appropriate physical model proves to be instrumental in effectively discriminating and identifying various targets within hyperspectral images. By developing a suitable physical model, one can enhance the capability to differentiate between distinct spectral signatures associated with different objects or materials present in hyperspectral data. This approach involves leveraging domain-specific knowledge and understanding the physical principles governing the interaction of light with surfaces to create a model that accurately represents the spectral characteristics of diverse targets. The utilization of a well-designed physical model contributes to improved target discrimination and recognition, thus enhancing the overall efficacy of hyperspectral image classification.\nThe intrinsic information coupling model is one of such physical model to represent the hyperspectral image which refers to a system or algorithm that effectively couples and integrates intrinsic features from hyperspectral images. This coupling of intrinsic information may involve techniques that distinguish between environment-related and category-related features, contributing to a more nuanced understanding of the spectral characteristics in the observed scene.\nAs for a red-green-blue (RGB) images, the image can be typically characterized as reflectance and shading and the intrinsic images decomposition problem can be described by\nI = R • S(1)\nwhere I denotes the observed image, R represents the reflectance component, and S stands for the shading component. Generally, R represents the inherent color or texture of the scene, while S accounts for variations in illumination or lighting conditions. The goal of this decomposition is to disentangle the effects of illumination and surface properties, providing a more intrinsic and scene-independent representation of the underlying scene content. Unlike RGB images, hyperspectral images are typically acquired using passive imaging sensors designed to capture energy reflected from solar radiation. 
This results in pixel values across different spectral bands undergoing non-proportional changes due to variations in sensitivity to scene radiance. Consequently, the shading component of hyperspectral images has a varying effect at each wavelength. Therefore, the hyperspectral intrinsic image decomposition can be formulated as\nI(λ) = R(λ) · S(λ)   (2)\nwhere λ stands for the wavelength, and R(λ) and S(λ) denote the reflectance and shading components, respectively. Generally, R determines the spectral signatures in hyperspectral images, while S captures the features influenced by environmental factors.\nFollowing the above-mentioned assumptions, we will introduce the detailed framework of the proposed HyperDID to learn a deep model that decomposes the hyperspectral image into the category-related features R(λ) and the environment-related features S(λ)." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "B. Overall Framework of HyperDID", "publication_ref": [], "table_ref": [], "text": "To harness the robust representational ability of deep models for hyperspectral intrinsic image decomposition (HIID), we introduce HyperDID, a novel framework that reimagines HIID through the lens of deep feature embedding. To this end, we develop three key modules, i.e., the Environmental Feature Module (EFM), the Categorical Feature Module (CFM), and the Feature Discrimination Module (FDM), and use these modules to construct the feature extraction network. The overall architecture of HyperDID is illustrated in Fig. 1.\nBroadly, HyperDID employs a Convolutional Neural Network (CNN) as the backbone to extract features from the hyperspectral image. As depicted in Fig. 1, the CNN backbone is employed for extracting discriminative features, followed by two parallel Multi-Layer Perceptrons (MLPs). These MLPs work in tandem to extract environment-related and category-related features, respectively. Denote f_1(·) and f_2(·) as the representation functions of the Environmental Feature Extraction Net and the Categorical Feature Extraction Net; the aim of HyperDID is then to find the optimal f_1(·), f_2(·) by solving the following optimization problem:\nmin_{f_1, f_2} Σ_{i=1}^{N} C_1(g(f_1(x_i) · f_2(x_i)), y_i)   (3)\nwhere g(·): R^d → R^K is the mapping function from the features to classification probabilities and C_1(·) denotes a classification loss function. d stands for the dimension of the feature learned from the image. f_1(x_i) and f_2(x_i) represent the environment-related features S(λ) and the category-related features R(λ), respectively. To solve the optimization problem in Eq. (3), the proposed modules in HyperDID are used to formulate the feature extraction network, where the EFM is designed to learn a subnet specifically for environmental features, the CFM is tailored to learn a subnet dedicated to category-related features, and the FDM discriminates between the categorical and environmental features.\nIn the subsequent sections, we provide a detailed introduction to each of the developed modules based on deep feature embedding, elucidating their roles and functionalities within the HyperDID framework. This comprehensive approach aims to enhance the capabilities of HIID and push the boundaries of hyperspectral image decomposition for improved classification performance." }, { "figure_ref": [], "heading": "C. Environmental Feature Module", "publication_ref": [ "b46" ], "table_ref": [], "text": "The goal of the Environmental Feature Module (EFM) is to effectively capture and represent the environment-related features within hyperspectral images.
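Before describing how the EFM is trained, it helps to make the two-branch design of subsection II-B concrete. The following PyTorch sketch shows a shared CNN backbone followed by the two parallel MLP heads f_1 and f_2, whose outputs are multiplied elementwise and mapped to class scores by g(·), as in Eq. (3). The backbone definition, layer widths, and variable names are illustrative assumptions rather than the exact released architecture.

import torch
import torch.nn as nn

class HyperDIDNet(nn.Module):
    # Sketch of the two-branch decomposition network: a shared backbone,
    # an environmental head f1 and a categorical head f2, fused by an
    # elementwise product and classified by g (a linear layer here).
    def __init__(self, in_bands, num_classes, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(            # placeholder for 3-D CNN / PResNet / HybridSN
            nn.Conv2d(in_bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.env_head = nn.Sequential(            # f1: environment-related features S(λ)
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
        self.cat_head = nn.Sequential(            # f2: category-related features R(λ)
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
        self.classifier = nn.Linear(feat_dim, num_classes)  # g

    def forward(self, x):                         # x: (batch, bands, patch, patch) neighborhoods
        h = self.backbone(x)
        f_env, f_cat = self.env_head(h), self.cat_head(h)
        logits = self.classifier(f_env * f_cat)   # g(f1(x) · f2(x)) as in Eq. (3)
        return f_env, f_cat, logits

For a batch of 5 × 5 neighborhoods of a 103-band image, for instance, x would have shape (batch, 103, 5, 5) and both heads return 128-dimensional features, matching the feature dimension used in the experiments.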
By leveraging deep contrastive learning, EFM aims to discern and extract intrinsic information associated with the environmental components, such as illumination, shading, and other factors that contribute to the overall spectral characteristics of the scene. Through this process, EFM contributes to the disentanglement of environment and category-related features, enhancing the model's ability to discern meaningful information for hyperspectral image classification. The environmental features captured by EFM play a crucial role in improving the overall interpretability and discrimination capabilities of the HyperDID framework.\n1) Construction of Environmental Pseudo Classes: First, specific training samples are utilized to construct environmental pseudo classes in an unsupervised manner. This entails clustering or grouping pixels based on their spectral characteristics, creating distinct classes that represent various environmental components. The unsupervised construction of environmental pseudo classes enables the model to autonomously discern patterns and groupings within the hyperspectral data, capturing the inherent complexity and diversity of environmental features. This preparatory step lays the foundation for subsequent training and learning processes within the HyperDID framework, facilitating the extraction of meaningful environment-related information for enhanced hyperspectral image classification. By employing clustering techniques, we aim to group pixels with similar spectral characteristics into distinct environmental pseudo classes, effectively categorizing them based on shared features. This clustering process enables the identification and differentiation of various environmental components within the hyperspectral data, contributing to the establishment of robust environmental pseudo classes. The utilization of clustering methods ensures an unsupervised determination of environmental categories, providing a foundational step for subsequent stages in the HyperDID framework.\nAssume that there are Λ environmental pseudo classes in the hyperspectral image. Denote P_s (s = 1, 2, ..., Λ) as the environmental centers of the different pseudo classes. Iteratively, we calculate the centers of the Λ classes, P_s (s = 1, 2, ..., Λ), by solving the following optimization problem:\nmin_{P_1, P_2, ..., P_Λ} Σ_{i=1}^{N} Σ_{s=1}^{Λ} I(s = arg min_{s'=1,2,...,Λ} ∥x_i - P_{s'}∥^2) ∥x_i - P_s∥^2   (4)\nwhere I(condition) denotes the indicator function, with I(·) = 1 if the condition is true and I(·) = 0 otherwise.\nWe can construct the environmental pseudo classes following Eq. (4). Based on the cluster centers obtained through clustering, environmental pseudo-labels are assigned to all hyperspectral image samples using the Euclidean distance and the nearest-neighbor principle. This process involves calculating the Euclidean distance between each pixel's spectral signature and the cluster centers. Subsequently, each pixel is assigned an environmental pseudo-label based on the principle of proximity, where the label corresponds to the nearest cluster center.
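The optimization in Eq. (4) is the standard k-means objective, so the clustering and pseudo-labelling steps just described can be realized with an off-the-shelf implementation. The sketch below uses scikit-learn's KMeans on the training spectra; the library choice and the value of Λ are assumptions for illustration rather than the paper's exact setting.

import numpy as np
from sklearn.cluster import KMeans

def build_environmental_pseudo_labels(X, num_env_classes=5, seed=0):
    # X: (N, D) array of training spectra (or flattened patches).
    # Returns the environmental centers P_s and a pseudo-label z_i per sample,
    # i.e. k-means centers (Eq. 4) followed by nearest-center assignment
    # (formalized as Eq. (5) below).
    kmeans = KMeans(n_clusters=num_env_classes, n_init=10, random_state=seed).fit(X)
    centers = kmeans.cluster_centers_                        # P_1, ..., P_Λ
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    pseudo_labels = dists.argmin(axis=1)                     # z_i
    return centers, pseudo_labels

On the training set this assignment coincides with kmeans.labels_; the explicit distance computation is kept because the same nearest-center rule is applied to assign pseudo-labels to all image samples.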
Formally, denoting z_i as the environmental pseudo label of x_i, the assignment can be written as\nz_i = arg min_{s=1,2,...,Λ} ∥x_i - P_s∥^2   (5)\n2) Environmental Feature Embedding: The aim of environmental feature embedding is to enlarge the inter-class variance and decrease the intra-class variance of the obtained environmental features.\nGiven a training batch B = {x_{b_1}, x_{b_2}, ..., x_{b_m}} from the image, where m is the size of the training batch, we can obtain the corresponding environmental pseudo classes z_{b_1}, z_{b_2}, ..., z_{b_m} from Eq. (5).\nA projection p_1(·) maps the environmental representation into a vector representation that is more suitable for contrastive learning. We implement this projection head p_1(·) as a nonlinear multi-layer perceptron (MLP). Such a projection module has been proven important in improving representation quality [47].\nThis work uses the cosine embedding loss for the training of the environmental feature net. Given two samples x_{b_i}, x_{b_j} in the batch, the loss l_{ij} can be calculated as\nl_{ij} = 1 - cos(p_1(f_1(x_{b_i})), p_1(f_1(x_{b_j})))  if z_{b_i} = z_{b_j};  l_{ij} = max(0, cos(p_1(f_1(x_{b_i})), p_1(f_1(x_{b_j}))) - δ)  if z_{b_i} ≠ z_{b_j}   (6)\nwhere δ stands for the margin, which we set to 0 in the experiments.\nTherefore, for the given batch B, the environmental feature embedding loss can be constructed as\nL_e = Σ_{i=1}^{m} Σ_{j=1}^{m} l_{ij}   (7)\nUnder L_e, the obtained environmental features become more discriminative for recognizing different environmental patterns." }, { "figure_ref": [], "heading": "D. Categorical Feature Module", "publication_ref": [], "table_ref": [], "text": "The objective of the Categorical Feature Module (CFM) within the HyperDID framework is to identify and categorize the \"category-related features\" present in the hyperspectral image. In this context, \"category-related features\" refer to the spectral signatures associated with different land cover or material classes. The CFM operates in conjunction with the Environmental Feature Module (EFM) to jointly capture both environment- and category-related information, ensuring a comprehensive understanding of the intrinsic components within the hyperspectral data.\nSimilarly, a projection p_2(·) maps the categorical representation into a vector representation that is more suitable for the learning of categorical features.\nAssuming the same training batch B as in the former subsection, this work also uses the cosine embedding loss for the training of the categorical feature net. Given two samples x_{b_i}, x_{b_j} in the batch with y_{b_i}, y_{b_j} as the corresponding labels, the loss l^c_{ij} can be calculated as\nl^c_{ij} = 1 - cos(p_2(f_2(x_{b_i})), p_2(f_2(x_{b_j})))  if y_{b_i} = y_{b_j};  l^c_{ij} = max(0, cos(p_2(f_2(x_{b_i})), p_2(f_2(x_{b_j}))) - δ)  if y_{b_i} ≠ y_{b_j}   (8)\nFor the given batch B, the categorical feature embedding loss can be constructed as\nL_c = Σ_{i=1}^{m} Σ_{j=1}^{m} l^c_{ij}   (9)\nUnder L_c, the obtained categorical features become more discriminative for recognizing different categories in the hyperspectral image." }, { "figure_ref": [], "heading": "E. Feature Discrimination Module", "publication_ref": [], "table_ref": [], "text": "The Feature Discrimination Module (FDM) in the HyperDID framework serves the purpose of enhancing the discriminative ability between the environment-related and category-related features extracted by the Environmental Feature Module (EFM) and the Categorical Feature Module (CFM), respectively.
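Before turning to the details of the FDM, note that the pairwise losses of Eqs. (6)-(9) match PyTorch's built-in cosine embedding loss applied to every pair of samples in a batch, so the EFM and CFM can share a single routine. The sketch below is an illustrative implementation; the all-pairs construction and the function name are assumptions, not the released code.

import torch
import torch.nn.functional as F

def pairwise_embedding_loss(embeddings, labels, margin=0.0):
    # Pairwise cosine embedding loss over one batch, following Eqs. (6)-(9).
    # embeddings: (m, d) projected features, e.g. p1(f1(x)) for L_e or p2(f2(x)) for L_c.
    # labels:     (m,) environmental pseudo-labels z (for L_e) or class labels y (for L_c).
    m = embeddings.size(0)
    device = embeddings.device
    idx_i = torch.arange(m, device=device).repeat_interleave(m)  # all ordered pairs (i, j)
    idx_j = torch.arange(m, device=device).repeat(m)
    # Target is +1 for same-label pairs and -1 otherwise, which makes
    # F.cosine_embedding_loss reproduce the two cases of Eq. (6).
    same = (labels[idx_i] == labels[idx_j]).float()
    target = same * 2.0 - 1.0
    return F.cosine_embedding_loss(embeddings[idx_i], embeddings[idx_j],
                                   target, margin=margin, reduction="sum")

L_e is then obtained by calling this routine with the projected environmental features p_1(f_1(x)) and the pseudo-labels z, and L_c with the projected categorical features p_2(f_2(x)) and the ground-truth labels y.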
The primary goal is to refine the separation of intrinsic components, ensuring a more accurate decomposition of hyperspectral images.\nThe FDM achieves feature discrimination through mechanisms that emphasize the differences in the learned representations of environment-related and category-related features. This work constructs a feature discrimination classifier to discriminate the categorical and environmental features.\nFirst, a feature discriminator h(•) is designed to map the features, e.g. environment-related representation and categoryrelated representation to classification probabilities concerning the types of the features, e.g. the Environmental feature or categorical feature.\nFollowing the former subsections, we also formulate the classification loss under the training batch B. Specifically,\nf 1 (x bi )(i = 1, 2, • • • , m\n) are the environment-related features, which are labelled as 0 and f 2 (x bi )(i = 1, 2, • • • , m) are the category-related features, which are labelled as 1.\nThen, the feature discrimination loss can be formulated as\nL d = m i=1 (C 2 (h(f 1 (x bi )), 0) + C 2 (h(f 2 (x bi )), 1))(10)\nwhere C 2 denotes a classification loss function. In this work, we use cross entropy loss for C 2 (•), which is given as\nC 2 (h(f 1 (x bi )), 0) = -log(h(f 1 (x bi )) T e 0 ) C 2 (h(f 2 (x bi )), 1) = -log(h(f 2 (x bi )) T e 1 )(11)\nwhere e 0 ∈ R 2 and e 1 ∈ R 2 represent the standard basis vector. Under L d , the categorical and environmental features are subjected to a learning process that encourages distinctiveness between different categories. This distinctiveness is crucial for decomposing the learned features from the hyperspectral image." }, { "figure_ref": [], "heading": "F. HyperDID for Classification", "publication_ref": [], "table_ref": [], "text": "The CFM, EFM, and FDM module are essential to obtain a good hyperspectral intrinsic image decomposition. However, to solve the optimization in Eq. 3, a classification loss is also required. Under the training batch B, the classification loss L 0 can be formulated as\nL 0 = m i=1 C 1 (g(f 1 (x bi ) • f 2 (x bi )), y bi )(12)\nAs subsection II-B shows, C 1 denotes a classification loss function which is also formulated by cross entropy loss,\nC 1 (g(f 1 (x bi )•f 2 (x bi )), y bi ) = - K j=1 δ jyi log(g(f 1 (x bi )•f 2 (x bi )) T e j )(13)\nwhere e j ∈ R K stands for the standard basis vector. Therefore, the overall training loss of the HyperDID can be formulated as\nL = L 0 + αL e + βL c + γL d (14\n)\nFor convenience, all the hyperparameter of α, β and γ are set to 1. Trained with Eq. 14, we can obtain discriminative features from the hyperspectral image and recognize different samples with the obtained features." }, { "figure_ref": [], "heading": "III. EXPERIMENTAL RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Data Description", "publication_ref": [ "b47", "b47" ], "table_ref": [], "text": "In this section, the proposed HyperDID method is evaluated over three real-world hyperspectral image data, namely, the Pavia University data [48], the Indian Pines data [48], and the Houston 2013 data [49].\nPavia University (PU) data was collected by the reflective optics system imaging spectrometer (ROSIS-3) sensor ranging from 0.43 to 0.86 µm with a spatial resolution of 1. " }, { "figure_ref": [], "heading": "B. 
Experimental Setups", "publication_ref": [ "b48", "b49", "b38", "b32", "b35", "b36" ], "table_ref": [], "text": "1) Evaluation Metrics: The experiments assess the performance of all methods using three widely adopted metrics: overall accuracy (OA), average accuracy (AA), and the Kappa coefficient κ. Additionally, a complementary metric is incorporated, detailing the classification accuracy for each individual class. This comprehensive evaluation framework ensures a thorough and nuanced analysis of the methods' effectiveness across various dimensions of performance.\n2) Implementation Details: PyTorch 1.9.1 with CUDA 11.2 is chosen as the deep learning framework to implement the proposed method [50]. By default, the learning rate, number of training epochs, and training batch size are set to 0.01, 500, and 64, respectively, and the dimension of the extracted environment-related and category-related features is set to 128. The structures of the MLPs for non-linear projection in deep feature embedding are set as '128-64-64', and 5 × 5 spatial neighbors are used for classification. The code will be publicly available soon at https://github.com/shendu-sw/HyperDID.\n3) Baseline Methods: In the context of this study, an extensive repertoire of state-of-the-art methodologies has been purposefully curated as baseline benchmarks. This meticulous selection spans cutting-edge Convolutional Neural Networks (CNNs), including 3-D CNN [51], PResNet [52], and HybridSN [39], which exemplify sophisticated techniques in hyperspectral image processing. The integration of Recurrent Neural Networks (RNNs) is geared towards capturing sequential dependencies inherent in hyperspectral data, and this work selects the RNN model in [33] as a representative. Additionally, Graph Convolutional Networks (GCNs), embodied by miniGCN [36], underscore the significance of graph-based learning methodologies in this domain. The inclusion of Transformers, featuring models like ViT [53], SpectralFormer [37], and SSFTTNet [54], highlights the transformative capabilities of attention mechanisms.\nThis comprehensive array of carefully chosen state-of-the-art methods serves as a robust foundation for conducting a thorough comparison across diverse neural network architectures. It not only accentuates the effectiveness and efficiency of the proposed HyperDID method for hyperspectral image classification but also ensures a well-rounded evaluation. Beyond neural networks, the study incorporates the Support Vector Machine (SVM) with a radial basis function kernel, enriching the comparison framework to gain nuanced insights into the strengths and limitations of various techniques in hyperspectral image analysis." }, { "figure_ref": [], "heading": "C. Evaluation of Computational Performance", "publication_ref": [], "table_ref": [], "text": "In our investigation of the computational performance of the proposed HyperDID, we employ HybridSN as the backbone CNN for feature extraction from the image. To showcase the versatility and general applicability of the proposed method, we conduct experiments on a standard machine equipped with an Intel Xeon(R) Gold 6226R CPU, 128GB RAM, and a Quadro RTX 6000 24GB GPU for evaluating classification performance. As benchmarks for comparison, we consider the training and testing costs of 3-D CNN, PResNet, HybridSN, and SpectralFormer; Table IV presents the detailed comparison results over the three datasets. The training durations for HyperDID are approximately 646.7s, 147.0s, and 400.7s for the three datasets, performing on par with 3-D CNN and HybridSN, and surpassing PResNet and SpectralFormer in terms of training efficiency.
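The training cost reported here is essentially the cost of optimizing the combined objective of Eq. (14) under the settings of Section III-B2. To make clear what is being timed, a simplified sketch of one training step is given below; the model, projection heads, and discriminator are assumed to follow the earlier sketches, the optimizer choice is an assumption (only the learning rate is specified above), and none of the names are taken from the released implementation.

import torch
import torch.nn.functional as F

def training_step(model, proj_env, proj_cat, discriminator, optimizer,
                  x, y, z, alpha=1.0, beta=1.0, gamma=1.0):
    # One optimization step of L = L0 + α·L_e + β·L_c + γ·L_d (Eq. 14).
    # pairwise_embedding_loss is the batch-pair routine sketched in Section II-D.
    f_env, f_cat, logits = model(x)                          # two-branch network sketched earlier
    loss_cls = F.cross_entropy(logits, y)                    # L0, Eqs. (12)-(13)
    loss_env = pairwise_embedding_loss(proj_env(f_env), z)   # L_e, Eq. (7)
    loss_cat = pairwise_embedding_loss(proj_cat(f_cat), y)   # L_c, Eq. (9)
    # L_d, Eqs. (10)-(11): the discriminator (a small 2-way classifier h) should
    # label environment-related features as 0 and category-related features as 1.
    zeros = torch.zeros(x.size(0), dtype=torch.long, device=x.device)
    ones = torch.ones(x.size(0), dtype=torch.long, device=x.device)
    loss_disc = F.cross_entropy(discriminator(f_env), zeros) + \
                F.cross_entropy(discriminator(f_cat), ones)
    loss = loss_cls + alpha * loss_env + beta * loss_cat + gamma * loss_disc
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

With α = β = γ = 1, a batch size of 64, and 500 epochs, looping this step over the training set is what produces the training times listed above.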
Moreover, the testing phase demonstrates the practicality of HyperDID, with durations of about 1.88s, 0.61s, and 0.69s for the three datasets, catering to the needs of various applications. Notably, this indicates that the proposed HyperDID achieves significant performance improvements without introducing additional time costs, reaffirming its suitability for real-world applications where both accuracy and computational efficiency are crucial considerations." }, { "figure_ref": [], "heading": "D. Comparative Analysis of Models Trained with Different Hyperparameters", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "In this experimental set, we systematically investigate the impact of various hyperparameters within the HyperDID framework on classification accuracies. Our exploration involves the deliberate modification of hyperparameter values to discern their influence on the overall performance of the model. Specifically, we focus on two key categories of hyperparameters: those associated with training, such as α, β, γ, and those integral to the implementation of deep feature embedding, including the number of environmental pseudo classes. Both sets of hyperparameters are recognized for their pivotal roles in shaping the classification performance of HyperDID.\nThe subsequent discussion delves into a detailed examination of the effects of these hyperparameters, shedding light on their individual contributions to the overall efficacy of the model. This nuanced exploration of hyperparameter effects would contribute to a deeper understanding of the model's behavior and its adaptability to different configurations, thereby enhancing its versatility and performance across various scenarios. Upon close analysis of the results, it becomes evident that each module within HyperDID plays an indispensable role. Notably, models lacking the FDM or EFM module exhibit the lowest classification accuracies. When either the FDM or EFM module is introduced, there is a significant enhancement in the classification results compared to models without these modules, with improvements of approximately 2%, 1%, and 2% OA observed across the three datasets, respectively. This underscores the synergistic contribution of all three modules to the overall effectiveness of HyperDID, highlighting the importance of carefully tuning these hyperparameters for optimal model performance.\n2) Effects of Number of Environmental Pseudo Classes: In addition to the intricate interplay between learnable parameters and training hyperparameters, the number of environmental pseudo classes emerges as a pivotal factor in shaping the success of feature embedding environmental-related features within HyperDID. A meticulous exploration of the appropriate parameter range for this crucial aspect is essential, and our investigation extends to evaluating parameter sensitivity across three distinct datasets, as outlined in Table VI. This table delineates the evolving trends of classification accuracies concerning OA, AA, and κ as the number of environmental pseudo classes incrementally varies.\nA noteworthy consensus arising from our observations is that leveraging environmental pseudo classes through clustering analysis imparts valuable prior environmental information for the learning process. This, in turn, enhances the model's ability to discern and extract features that are both environmentally and categorically relevant from the input samples. 
Within a discernible range, our findings reveal a robust insensitivity of the parameter to classification performance, indicating a stable operational zone. This characteristic not only underscores the efficacy of the proposed model but also highlights its potential for practical applications. The implication is that the identified parameter can be readily applied to other datasets within similar contexts, streamlining the adaptability and utility of HyperDID across diverse classification tasks." }, { "figure_ref": [], "heading": "E. Comparative Analysis of Models Trained with Different Backbone CNNs", "publication_ref": [], "table_ref": [], "text": "The choice of backbone CNN emerges as a pivotal factor influencing the quality of extracted environment-related and category-related features, thereby significantly impacting the classification performance of hyperspectral images. In a series of experiments, we systematically evaluate the performance of the proposed method with distinct backbone CNNs, namely, 3-D CNN, PResNet, and HybridSN, with their respective structures set as subsection III-B3 shows.\nTables VII, VIII, and IX present a comparative analysis of the proposed method against Vanilla CNNs across three datasets. Notable observations from these results include:\nFirst, Backbone Influence on Performance. The performance achieved with PResNet and HybridSN as backbones surpasses that with 3-D CNN. For instance, on the Pavia University dataset, the proposed method achieves 94.30% and 94.45% accuracy with PResNet and HybridSN, outperforming the 88.62% accuracy obtained with 3-D CNN. Similar trends are observed for Indian Pines and Houston2013 datasets. Then, Enhancement through HyperDID. The proposed HyperDID significantly improves the performance of Vanilla CNNs. Across all datasets, the method achieves notable improvements: 0.57% to 8.82% with 3-D CNN, 4.19% to 5.85% with PResNet, and 2.85% to 10.68% with HybridSN. This underscores the effectiveness of the proposed approach in elevating the classification accuracy of hyperspectral images.\nThese findings underscore the importance of selecting an appropriate backbone CNN architecture and highlight the substantial performance gains achievable through the proposed HyperDID, particularly when integrated with advanced backbone architectures like PResNet and HybridSN." }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "F. Comparative Analysis of Models Trained with Different Spatial Neighbor Sizes", "publication_ref": [], "table_ref": [], "text": "The significance of spatial neighbors in enhancing the accuracy of hyperspectral image classification cannot be overstated, as they contribute a wealth of spatial information crucial for the task. The number of neighbor sizes emerges as a key factor influencing classification performance, a notion explored through additional comparison experiments in this study. Spatial neighbor sizes ranging from 3 × 3 to 11 × 11 were selected for evaluation, and Fig. 2 illustrates the evolving trends in classification accuracies across three datasets.\nA consistent observation is that training with larger spatial neighbor sizes tends to yield improved performance. However, it is essential to strike a balance, as excessively large neighbor sizes can introduce irrelevant information, negatively impacting the training process and hindering the extraction of discriminative features. 
The figure illustrates that, for Indian Pines and Pavia University datasets, HyperDID achieves optimal performance with spatial neighbors of 11 × 11, yielding high accuracy metrics. Conversely, for Houston 2013 data, the best performance is attained with a neighbor size of 7 × 7. These findings underscore the importance of carefully selecting spatial neighbor sizes to achieve an optimal trade-off between capturing relevant spatial information and avoiding information redundancy during hyperspectral image classification. 3. Noteworthy results highlight a substantial improvement in accuracies achieved by the proposed HyperDID method compared to vanilla CNNs. Pavia University data saw an accuracy enhancement of approximately 3%-4%. While Houston2013 data observed an improvement of about 1.5%-3%. It should be noted that the performance decreases when the rate is set to 6.25% and 25%. The reason may be that unstable training process with less training samples. Particularly remarkable is the performance on Indian Pines data, where the accuracy experienced an increase of more than 10%. This enhancement is attributed to HyperDID's ability to decompose environmentrelated and category-related features, thereby improving discrimination of category-related features and mitigating the impact of environmental factors on hyperspectral image classification.\nMoreover, the classification performance of the learned model showed a significant enhancement with an increase in the number of training samples. This escalating trend indicates that more training samples provide additional information for the deep model to learn, enabling it to extract more discriminative features for improved hyperspectral image classification. This observation underscores the scalability and learning capacity of HyperDID, emphasizing the critical role of ample training data in achieving superior classification performance in hyperspectral image analysis." }, { "figure_ref": [], "heading": "H. Comparisons with Other State-of-the-Art Methods", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "The quantitative classification results presented in Tables X, XI, and XII for the Indian Pines, Pavia University, and Houston2013 datasets, respectively, underscore the superior performance of the proposed HyperDID under consistent experimental setups.\nTable X showcases the exceptional performance of Hy-perDID over Pavia University data with an accuracy of 94.45%OA, 94.36%AA, and 92.53% kappa, outperforming state-of-the-art methods such as traditional machine learning techniques like SVM (83.76%OA), CNNs like 3-D CNNs (87.52%OA), PResNet (90.11%OA), Hy-bridSN (90.27%OA), RNNs (80.61%), GCNs (miniGCN with 83.23%OA), and Transformers like ViT (86.27%), Spec-tralFormer (90.04%), and SSFTTNet (82.56%). For the Indian Pines dataset, HyperDID achieves an accuracy of 89.40%OA, 94.11%AA, and 87.85% kappa, surpassing alternative methods including SVM (76.53%OA), 3-D CNNs (77.22%OA), PResNet (82.97%OA), HybridSN (78.72%OA), RNN (81.11%), miniGCN (74.71%OA), ViT (65.16%), Spec-tralFormer (83.38%), and SSFTTNet (80.29%). Similarly, over the Houston2013 dataset, HyperDID outperforms its counterparts.\nA qualitative evaluation through visualization of classification maps in Figs. 4,5, and 6 affirms the effectiveness of HyperDID in mitigating salt-and-pepper noise-induced classification errors. This leads to a substantial reduction in overall inaccuracies and a notable improvement in classification accuracy. 
The method's ability to suppress the impact of challenging noise patterns underscores its robustness, resulting in more reliable and accurate classification outcomes.\nIn summary, HyperDID, leveraging intrinsic information from hyperspectral images, demonstrates a significant performance boost compared to state-of-the-art methods. Its ability to achieve superior accuracy across diverse datasets positions it as a robust and effective solution for hyperspectral image classification tasks." }, { "figure_ref": [], "heading": "IV. CONCLUSIONS AND DISCUSSIONS", "publication_ref": [], "table_ref": [], "text": "In this research endeavor, we introduce a groundbreaking hyperspectral intrinsic image decomposition method empowered by deep feature embedding, termed HyperDID. The comprehensive framework of HyperDID adeptly partitions hyperspectral images into environment-related and categoryrelated features, thereby significantly enhancing classification performance. Comparative results underscore the efficacy of HyperDID in augmenting the representational capacity of deep models. Our approach involves the development of key components such as the Environmental Feature Module (EFM) and Categorical Feature Module (CFM), both rooted in deep feature embedding principles to glean intrinsic features from hyperspectral images. Additionally, we introduce a Feature Discrimination Module (FDM) designed to segregate environment-related and category-related features. Ablation studies conducted validate the pivotal role of each module within HyperDID, affirming their collective effectiveness. Detailed experiments provide compelling evidence for the efficacy and efficiency of HyperDID in the realm of hyperspectral image classification. By showcasing the method's ability to decompose and leverage intrinsic features, HyperDID emerges as a robust and innovative solution, poised to contribute significantly to advancing the field of hyperspectral image analysis.\nLooking ahead, our research will delve into additional advanced strategies aimed at incorporating more physical characteristics and prior knowledge into the HyperDID framework to further enhance the decomposition of hyperspectral images. Beyond classification, we are keen to extend the application of HyperDID to other hyperspectral image processing tasks, such as anomaly detection and target identification. Moreover, the exploration of alternative feature extraction frameworks based on the intrinsic characteristics of hyperspectral images is a promising avenue for future research. \n(c) (d) (e) (f) (g) (h) (i) (j) (k) (l)" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "This work was supported by the Natural Science Foundation of China under Grant 62001502." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Xian Zhou received the BE degree from the Department of Automation and Intelligent Science, Nankai University, Nankai, China in 2015 and the PhD degree from the Department of Computer Science and Engineering, Shanghai " } ]
The dissection of hyperspectral images into intrinsic components through hyperspectral intrinsic image decomposition (HIID) enhances the interpretability of hyperspectral data, providing a foundation for more accurate classification outcomes. However, the classification performance of HIID is constrained by the model's representational ability. To address this limitation, this study rethinks hyperspectral intrinsic image decomposition for classification tasks by introducing deep feature embedding. The proposed framework, HyperDID, incorporates the Environmental Feature Module (EFM) and Categorical Feature Module (CFM) to extract intrinsic features. Additionally, a Feature Discrimination Module (FDM) is introduced to separate environment-related and category-related features. Experimental results across three commonly used datasets validate the effectiveness of HyperDID in improving hyperspectral image classification performance. This novel approach holds promise for advancing the capabilities of hyperspectral image analysis by leveraging deep feature embedding principles. The implementation of the proposed method could be accessed soon at https://github.com/ shendu-sw/HyperDID for the sake of reproducibility.
HyperDID: Hyperspectral Intrinsic Image Decomposition with Deep Feature Embedding
[ { "figure_caption": "Fig. 1 .1Fig. 1. Overall framework of Hyperspectral Intrinsic Image Decomposition with Deep Feature Embedding (HyperDID) method for hyperspectral image classification.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "3m per pixel. The data contains 610 × 340 pixels with 115 bands where 12 bands have removed due to noise and absorption and the remaining 103 channels are used. It contains 9 different classes, representing various land cover categories such as asphalt, trees, and buildings and 42776 samples from different categories have been labeled for experiments. Table. I has presented the detailed training and testing samples of the data. Indian Pines (IP) data was gathered by the 224-band Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor ranging from 0.4 to 2.5 µm over the over the Indian Pines test site in Indiana, USA. The high spectral resolution of the Indian Pines data enables detailed characterization of land cover and vegetation types. With 16 different classes representing various ground cover categories such as crops, trees, and soil (see Table II for details), this dataset serves as a benchmark for evaluating and testing algorithms in the experiments. The data consists of 145×145 pixels with 200 clean spectral bands, and 10366 labeled samples are selected for experiments as shown in the Table II. Houston 2013 (HS) data was acquired over the University of Houston campus and the neighboring urban area with a resolution of 2.5 m/pixel, and provided as part of the 2013 IEEE Geoscience and remote Sensing Society data fusion contest. It consists of 349 × 1905 pixels of which a total of 15011 labeled samples divided into 15 classes have been chosen for experiments. Each pixel denotes a sample and contains 144 spectral bands ranging from 0.38 to 1.05 µm. The experimental details of the dataset are listed in the TableIII.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "3-D CNN, PResNet, HybridSN, and SpectralFormer. This comprehensive evaluation aims to provide insights into the efficiency and effectiveness of HyperDID in comparison to established baseline models. Table IV presents the detailed comparison results over the three datasets. The outcomes presented in Table IV shed light on the comparative performance of the proposed HyperDID against established baseline models across three datasets. The training durations for HyperDID are notably competitive, clocking in at approximately 646.7s, 147.0s, and 400.7s for the respective datasets. These results position HyperDID as a computationally efficient alternative, performing on par with 3-D CNN", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "1 )1Effects of Training Hyperparameters: The examination of training hyperparameters in HyperDID reveals crucial insights into the model's performance. Table V provides a detailed breakdown of the classification performance under varying hyperparameter settings across three datasets. Specifically, α, β, and γ are examined to understand the tradeoffs associated with EFM, CFM, and FDM, respectively. 
These hyperparameters dictate the importance given to each module during training, with default values of 1 and the option to set a parameter to 0, indicating the exclusion of the corresponding module from the training process.", "figure_data": "", "figure_id": "fig_3", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Classification performance with different spatial neighbor sizes over (a) Pavia University; (b) Indian Pines; (c) Houston2013.", "figure_data": "", "figure_id": "fig_4", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Classification performance with different rate of training samples over (a) Pavia University; (b) Indian Pines; (c) Houston2013.", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .Fig. 5 .45Fig. 4. Pavia University data. (a) Training; (b) Testing; (c) SVM; (d) 3-D CNN; (e) PResNet; (f) HybridSN; (g) RNN; (h) miniGCN; (i) ViT; (j) SpectralFormer; (k) SSFTTNet; (l) HyperDID.", "figure_data": "", "figure_id": "fig_6", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Houston2013 data. (a) Training; (b) Testing; (c) SVM; (d) 3-D CNN; (e) PResNet; (f) HybridSN; (g) RNN; (h) miniGCN; (i) ViT; (j) SpectralFormer; (k) SSFTTNet; (l) HyperDID.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "OF TRAINING AND TESTING SAMPLES IN PAVIA UNIVERSITY DATA.", "figure_data": "ClassClass NameColorTrainingTestingC1Asphalt5486304C2Meadows54018146C3Gravel3921815C4Trees5242912C5Metal sheet2651113C6Bare soil5324572C7Bitumen375981C8Brick5143364C9Shadow231795Total392140002", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "OF TRAINING AND TESTING SAMPLES IN INDIAN PINES DATA.", "figure_data": "ClassClass NameColorTraining TestingC1Corn-notill501384C2Corn-mintill50784C3Corn50184C4Grass-pasture50447C5Grass-trees50697C6Hay-windrowed50439C7Soybean-notill50918C8Soybean-mintill502418C9Soybean-clean50564C10Wheat50162C11Woods501244C12Buildings-Grass-Trees-Drives50330C13Stone-Steel-Towers5045C14Alfalfa1539C15Grass-pasture-mowed1511C16Oats155Total6959671", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "OF TRAINING AND TESTING SAMPLES IN HOUSTON2013 DATA.", "figure_data": "ClassClass NameColorTraining TestingC1Grass-healthy1981053C2Grass-stressed1901064C3Grass-synthetic192505C4Tree1881056C5Soil1861056C6Water182143C7Residential1961072C8Commercial1911053C9Road1931059C10Highway1911036C11Railway1811054C12Parking-lot11921041C13Parking-lot2184285C14Tennis-court181247C15Running-track187473Total283212197", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "PERFORMANCE OVER DIFFERENT DATASETS.", "figure_data": "MethodsStagePUIPHS3-D CNNTraining(s) Testing(s)471.24 112.7 1.225 0.45308.9 0.425PResNetTraining(s) Testing(s)1752.4 364.0 961.24 7.1 2.1 2.2HybridSNTraining(s) Testing(s)536.2 1.7123.9 0.56320.5 0.55SpectralFormerTraining(s) Testing(s)1061.2 232.9 2.52 0.92621.3 0.77AdverDecomTraining(s) Testing(s)646.7 1.88147.0 0.61400.7 0.69", "figure_id": "tab_4", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "ACCURACIES (OA, AA, AND κ) OF THE PROPOSED METHOD WITH DIFFERENT HYPERPARAMETER SETTINGS.", "figure_data": "DataSettingsOA (%) AA (%)κ (%)α = 092.9291.7990.42PUβ = 0 γ = 094.01 92.7693.54 92.7991.95 90.21-94.4594.3692.53α = 
088.7694.0787.16IPβ = 0 γ = 089.25 88.5193.68 93.5687.71 86.83-89.4094.1187.85α = 087.5589.9886.48HSβ = 0 γ = 089.73 87.5191.26 89.5188.85 86.44-89.7491.5788.87", "figure_id": "tab_5", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "ACCURACIES (OA, AA, AND κ) OF THE PROPOSED METHOD WITH DIFFERENT NUMBER OF ENVIRONMENTAL PSEUDO CLASSES.", "figure_data": "DataMetrics1234Number of Environmental Pseudo Classes 5 6 7 89102030OA(%)94.24 94.4594.9194.53 94.13 94.32 94.11 93.75 94.05 94.51 94.44 94.21PUAA(%)93.5894.3694.13 93.57 92.29 93.61 93.02 92.58 93.76 93.54 93.49 93.34κ(%)92.21 92.5393.1592.63 92.06 92.35 92.05 91.56 92.01 92.58 92.49 92.19OA(%)88.87 89.40 89.10 88.89 88.94 88.72 88.95 89.66 90.1390.06 88.85 87.30IPAA(%)93.63 94.11 93.22 94.03 93.77 93.80 93.41 94.25 94.1094.6594.15 92.65κ(%)87.28 87.85 87.54 87.28 87.33 87.13 87.33 88.16 88.6788.62 87.23 85.47OA(%)89.5789.7486.93 87.29 88.73 88.04 88.66 88.99 87.73 88.75 89.09 86.60HSAA(%)91.3091.5788.94 89.47 90.50 90.13 90.33 90.53 89.52 90.37 90.91 88.72κ(%)88.6888.8785.81 86.20 87.76 87.01 87.69 88.04 86.68 87.79 88.15 85.45", "figure_id": "tab_6", "figure_label": "VI", "figure_type": "table" }, { "figure_caption": "ACCURACIES (OA, AA, AND κ) OF THE PROPOSED METHOD WITH DIFFERENT BACKBONE CNNS OVER PAVIA UNIVERSITY OA, AA, AND κ) OF THE PROPOSED METHOD WITH DIFFERENT BACKBONE CNNS OVER INDIAN PINES DATA. The earlier sections of our study primarily delved into experiments conducted with predefined sets of training and testing samples, as outlined in Tables I, II, and III. This subsection extends the evaluation of the HyperDID method by considering varying numbers of training samples. In the initial experiments, we utilized 3921 training and 40002 testing samples for Pavia University data, 695 training", "figure_data": "DATA.DataMetricsCNN Backbone 3-D CNN PResNet HybridSNOA(%)87.5290.1190.27VanillaAA(%)89.0189.4391.79κ(%)83.3786.6887.03OA(%)88.6294.3094.45ProposedAA(%)90.7694.0294.36κ(%)85.0692.3792.53TABLE VIIICLASSIFICATION ACCURACIES (Data Metrics 3-D CNN PResNet HybridSN CNN BackboneOA(%)77.2282.9778.72VanillaAA(%)86.8390.1988.15κ(%)74.2180.6575.81OA(%)86.0488.8289.40ProposedAA(%)90.8093.4994.11κ(%)84.0887.2387.85G. Comparative Analysis of Models Trained with DifferentNumber of Training Samples", "figure_id": "tab_7", "figure_label": "VII", "figure_type": "table" }, { "figure_caption": "ACCURACIES (OA, AA, AND κ) OF THE PROPOSED METHOD WITH DIFFERENT BACKBONE CNNS OVER HOUSTON2013 DATA.", "figure_data": "DataMetricsCNN Backbone 3-D CNN PResNet HybridSNOA(%)84.7185.5986.89VanillaAA(%)85.5387.4588.92κ(%)83.4084.3585.77OA(%)85.2890.1189.74ProposedAA(%)86.4190.8391.57κ(%)84.0289.2688.87and 9671 testing samples for Indian Pines data, and 2832training and 12197 testing samples for Houston2013 data, asindicated in Tables I-III. In this subsequent set of experiments,we systematically selected 6.25%, 12.5%, 25%, 50%, and100% of the original training samples across these datasets toassess performance under different training sample scenarios.For example, over Pavia University data, the selected train-ing samples ranged from 245 to 3921, with similar sampleselections applied to Indian Pines and Houston2013 data. 
Thetrends in classification performance corresponding to thesevaried numbers of training samples are visually depicted inFig.", "figure_id": "tab_8", "figure_label": "IX", "figure_type": "table" }, { "figure_caption": "ACCURACIES (OA, AA, AND κ) OF DIFFERENT METHODS ACHIEVED ON THE PAVIA UNIVERSITY DATA.", "figure_data": "MethodsSVMCNNs 3-D CNN PResNetHybridSNRNNminiGCNViTTransformers SpectralFormer SSFTTNetHyperDIDC177.0882.9580.1183.4985.5691.5582.5584.7275.8993.51C279.2288.3294.8191.5275.0084.6296.5795.8681.4695.97C377.5273.5585.6281.7671.6374.2756.4266.7275.5488.76C494.6193.9698.4998.8094.1671.2295.9896.5383.7299.04C598.7499.5599.91100.091.4699.5593.6299.19100.099.37C693.6879.1179.4885.3972.6667.6149.4373.1683.4487.93C785.1290.2179.0091.8592.2586.7579.6179.7199.8093.37C893.8298.6391.7794.4195.5486.1593.7997.7488.6194.02C992.5894.8495.7298.8793.21100.091.0793.3395.6097.23OA(%)83.7687.5290.1190.2780.6183.2386.2790.0482.5694.45AA(%)88.0489.0189.4391.7985.7284.6482.1287.4487.1294.36κ(%)78.9883.3786.6887.0374.8377.4481.1686.5177.3092.53", "figure_id": "tab_9", "figure_label": "X", "figure_type": "table" }, { "figure_caption": "ACCURACIES (OA, AA, AND κ) OF DIFFERENT METHODS ACHIEVED ON THE INDIAN PINES DATA.", "figure_data": "MethodsSVMCNNs 3-D CNN PResNetHybridSNRNNminiGCNViTTransformers SpectralFormer SSFTTNetHyperDIDC172.1167.8572.7666.0472.7670.5259.6878.9783.3777.31C271.4377.0487.5079.4684.8253.1937.7685.0871.3989.92C386.9693.4894.0294.5778.2691.8555.4385.3394.5797.83C495.9792.8494.8591.2888.1493.7465.1094.1893.9396.87C588.6783.2191.9790.9683.5095.1286.5184.3695.3897.56C695.9098.6397.49100.091.8099.0997.9598.6398.40100.0C775.6074.5184.2079.3087.5863.9451.8563.9472.6689.98C859.0262.6673.5768.0774.2868.4062.4583.9571.6387.34C976.7769.6875.1870.2175.3573.4041.1373.5850.1879.79C1099.3899.38100.0100.099.3898.7796.3099.3898.15100.0C1193.3393.2593.4176.9493.8988.8391.0097.1991.9494.61C1273.9496.0680.6190.3063.0346.0652.1264.5586.6394.55C13100.0100.0100.0100.0100.097.7895.5697.7895.45100.0C1487.1889.7497.4492.3166.6746.1548.7276.9282.05100.0C15100.090.91100.0100.0100.072.7381.82100.0100.0100.0C16100.0100.0100.0100.0100.080.00100.0100.0100.0100.0OA(%)76.5377.2282.9778.7281.1174.7165.1683.3880.2989.40AA(%)86.0286.8390.1988.1584.9777.4770.2186.4986.6194.11κ(%)73.4274.2180.6575.8178.5171.2160.2680.9377.4087.85", "figure_id": "tab_10", "figure_label": "XI", "figure_type": "table" }, { "figure_caption": "ACCURACIES (OA, AA, AND κ) OF DIFFERENT METHODS ACHIEVED ON THE HOUSTON 2013 DATA.actions on geoscience and remote sensing, vol. 56, no. 8, pp. 4420-4434, 2018.[52] M. E.Paoletti, J. M. Haut, R. Fernandez-Beltran, J. Plaza, A. J. Plaza, and F. Pla, \"Deep pyramidal residual networks for spectral-spatial hyperspectral image classification,\" IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 2, pp. 740-754, 2018. [53] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., \"An image is worth 16x16 words: Transformers for image recognition at scale,\" in International Conference on Learning Representations, 2021. [54] L. Sun, G. Zhao, Y. Zheng, and Z. Wu, \"Spectral-spatial feature tokenization transformer for hyperspectral image classification,\" IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-14, 2022. Zhiqiang Gong received the B.S. degree in applied mathematics from Shanghai Jiao Tong University, Shanghai, China, in 2013, the M.S. 
degree in applied mathematics from the National University of Defense Technology (NUDT), Changsha, China, in 2015, and the Ph.D. degree in information and communication engineering from the National Key Laboratory of Science and Technology on ATR, NUDT, in 2019. He is currently an Associate Professor with the Defense Innovation Institute, Chinese Academy of Military Sciences, Beijing, China. He has authored more than 30 peer-reviewed articles in international journals, such as the IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, the IEEE TRANSACTIONS ON CYBERNETICS, the IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, the SCI-ENCE CHINA INFORMATION SCIENCES, the IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, and the IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING. His research interests are computer vision, machine learning, and image analysis. He is a Referee of the IEEE TRANSACTIONS ON NEURAL NET-WORKS AND LEARNING SYSTEMS, the IEEE TRANSACTIONS ON IMAGE PROCESSING, the IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, the IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, the IEEE JOURNAL OF SELECTED TOPICS IN AP-PLIED EARTH OBSERVATIONS AND REMOTE SENSING, and the IEEE GEOSCIENCE AND REMOTE SENSING LETTERS.", "figure_data": "MethodsSVMCNNs 3-D CNN PResNetHybridSNRNNminiGCNViTTransformers SpectralFormer SSFTTNetHyperDIDC182.6283.7681.6783.5781.6796.2082.5383.2983.2982.15C298.7895.4999.91100.095.3996.9099.0698.9790.5199.53C390.3095.0590.8998.0295.0599.4191.4996.6398.6199.80C497.0699.2486.7495.5596.0297.6395.6496.0296.9799.91C599.8199.4399.4399.7297.6397.7399.34100.099.5398.58C682.5290.2192.3195.8091.6195.1094.4194.4091.6198.60C789.6586.8590.4990.6789.9265.8691.0483.2167.9190.11C857.7482.0575.3181.0170.0965.1560.6880.7255.0883.95C961.1976.4980.9381.5973.8469.8871.2077.4354.2570.54C1067.6653.9670.2746.3365.9367.6652.5158.0181.5689.00C1172.6882.3584.9194.1270.4082.8378.7580.2790.5185.29C1270.4178.4871.8580.5079.7368.4081.2784.4484.7384.73C1361.0575.4489.4794.7474.3957.5465.9673.3381.7592.98C1494.3391.9097.5796.3698.7999.1995.1499.6099.1998.38C1580.1392.18100.095.7898.3198.7392.3999.1599.79100.0OA(%)80.1684.7185.5986.8983.5582.3182.2285.5582.4689.74AA(%)80.4085.5387.4588.9285.2583.8883.4387.0385.0291.57κ(%)78.4483.4084.3585.7782.1580.8480.6884.3280.9788.87", "figure_id": "tab_11", "figure_label": "XII", "figure_type": "table" } ]
Zhiqiang Gong; Xian Zhou; Wen Yao; Xiaohu Zheng; Ping Zhong
[ { "authors": "Y Dong; Q Liu; B Du; L Zhang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b0", "title": "Weighted feature fusion of convolutional neural network and graph attention network for hyperspectral image classification", "year": "2022" }, { "authors": "A I Elmanawy; D Sun; A Abdalla; Y Zhu; H Cen", "journal": "Computers and Electronics in Agriculture", "ref_id": "b1", "title": "Hsi-pp: A flexible open-source software for hyperspectral imaging-based plant phenotyping", "year": "2022" }, { "authors": "Y Wang; D Hong; J Sha; L Gao; L Liu; Y Zhang; X Rong", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b2", "title": "Spectral-spatial-temporal transformers for hyperspectral image change detection", "year": "2022" }, { "authors": "J Jiao; Z Gong; P Zhong", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b3", "title": "Triplet spectral-wise transformer network for hyperspectral target detection", "year": "2023" }, { "authors": "J Qu; Q Du; Y Li; L Tian; H Xia", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b4", "title": "Anomaly detection in hyperspectral imagery based on gaussian mixture model", "year": "2020" }, { "authors": "Z Gong; W Hu; X Du; P Zhong; P Hu", "journal": "IEEE Trans. Cybern", "ref_id": "b5", "title": "Deep manifold embedding for hyperspectral image classification", "year": "2021" }, { "authors": "Z Gong; P Zhong; W Hu", "journal": "IEEE Trans. Neural Netw. Learn. Syst", "ref_id": "b6", "title": "Statistical loss and analysis for deep learning in hyperspectral image classification", "year": "2021" }, { "authors": "Z Gong; P Zhong; Y Yu; W Hu; S Li", "journal": "IEEE Trans. Geosci. Remote Sens", "ref_id": "b7", "title": "A cnn with multiscale convolution and diversified metric for hyperspectral image classification", "year": "2019" }, { "authors": "G P Petropoulos; C Kalaitzidis; K P Vadrevu", "journal": "Computers & Geosciences", "ref_id": "b8", "title": "Support vector machines and object-based classification for obtaining land-use/cover cartography from hyperion hyperspectral imagery", "year": "2012" }, { "authors": "B Zhang; D Wu; L Zhang; Q Jiao; Q Li", "journal": "Environmental Earth Sciences", "ref_id": "b9", "title": "Application of hyperspectral remote sensing for environment monitoring in mining areas", "year": "2012" }, { "authors": "B Lu; P D Dao; J Liu; Y He; J Shang", "journal": "Remote Sensing", "ref_id": "b10", "title": "Recent advances of hyperspectral imaging technology and applications in agriculture", "year": "2020" }, { "authors": "S Roessner; K Segl; U Heiden; H Kaufmann", "journal": "IEEE Transactions on Geoscience and Remote sensing", "ref_id": "b11", "title": "Automated differentiation of urban surfaces based on airborne hyperspectral imagery", "year": "2001" }, { "authors": "X Kang; S Li; J A Benediktsson", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b12", "title": "Feature extraction of hyperspectral images with image fusion and recursive filtering", "year": "2013" }, { "authors": "L He; J Li; C Liu; S Li", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b13", "title": "Recent advances on spectral-spatial hyperspectral image classification: An overview and new guidelines", "year": "2017" }, { "authors": "L Sun; Z Wu; J Liu; L Xiao; Z Wei", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b14", "title": "Supervised spectral-spatial hyperspectral image 
classification with weighted markov random fields", "year": "2014" }, { "authors": "A J Guo; F Zhu", "journal": "Signal Processing", "ref_id": "b15", "title": "Improving deep hyperspectral image classification performance with spectral unmixing", "year": "2021" }, { "authors": "T Lu; S Li; L Fang; X Jia; J A Benediktsson", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b16", "title": "From subpixel to superpixel: A novel fusion framework for hyperspectral image classification", "year": "2017" }, { "authors": "X Jin; Y Gu", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b17", "title": "Superpixel-based intrinsic image decomposition of hyperspectral images", "year": "2017" }, { "authors": "W Xie; Y Gu; T Liu", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b18", "title": "Hyperspectral intrinsic image decomposition based on physical prior driven unsupervised learning", "year": "2023" }, { "authors": "X Kang; S Li; L Fang; J A Benediktsson", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b19", "title": "Intrinsic image decomposition for feature extraction of hyperspectral images", "year": "2015" }, { "authors": "X Jin; Y Gu; W Xie", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b20", "title": "Intrinsic hyperspectral image decomposition with dsm cues", "year": "2022" }, { "authors": "Y Gu; W Xie; X Li; X Jin", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b21", "title": "Hyperspectral intrinsic image decomposition with enhanced spatial information", "year": "2022" }, { "authors": "B Guo; S R Gunn; R I Damper; J D Nelson", "journal": "IEEE Transactions on Image Processing", "ref_id": "b22", "title": "Customizing kernel functions for svm-based hyperspectral image classification", "year": "2008" }, { "authors": "J Xia; P Ghamisi; N Yokoya; A Iwasaki", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b23", "title": "Random forest ensembles and extended multiextinction profiles for hyperspectral image classification", "year": "2017" }, { "authors": "S Kuching", "journal": "Journal of Computer Science", "ref_id": "b24", "title": "The performance of maximum likelihood, spectral angle mapper, neural network and decision tree classifiers in hyperspectral image analysis", "year": "2007" }, { "authors": "L Ma; M M Crawford; J Tian", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b25", "title": "Local manifold learningbased k-nearest-neighbor for hyperspectral image classification", "year": "2010" }, { "authors": "J Xia; M Dalla Mura; J Chanussot; P Du; X He", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b26", "title": "Random subspace ensembles for hyperspectral image classification with extended morphological attribute profiles", "year": "2015" }, { "authors": "J C Harsanyi; C.-I Chang", "journal": "IEEE Transactions on geoscience and remote sensing", "ref_id": "b27", "title": "Hyperspectral image classification and dimensionality reduction: An orthogonal subspace projection approach", "year": "1994" }, { "authors": "B.-C Kuo; H.-H Ho; C.-H Li; C.-C Hung; J.-S Taur", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b28", "title": "A kernelbased feature selection method for svm with rbf kernel for hyperspectral image classification", "year": "2013" }, { "authors": "C Wang; L Zhang; W Wei; 
Y Zhang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b29", "title": "Dynamic super-pixel normalization for robust hyperspectral image classification", "year": "2023" }, { "authors": "H Lee; H Kwon", "journal": "IEEE Transactions on Image Processing", "ref_id": "b30", "title": "Going deeper with contextual cnn for hyperspectral image classification", "year": "2017" }, { "authors": "S Li; W Song; L Fang; Y Chen; P Ghamisi; J A Benediktsson", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b31", "title": "Deep learning for hyperspectral image classification: An overview", "year": "2019" }, { "authors": "R Hang; Q Liu; D Hong; P Ghamisi", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b32", "title": "Cascaded recurrent neural networks for hyperspectral image classification", "year": "2019" }, { "authors": "L Mou; P Ghamisi; X X Zhu", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b33", "title": "Deep recurrent neural networks for hyperspectral image classification", "year": "2017" }, { "authors": "Y Ding; Z Zhang; X Zhao; D Hong; W Cai; C Yu; N Yang; W Cai", "journal": "Neurocomputing", "ref_id": "b34", "title": "Multi-feature fusion: Graph neural network and cnn combining for hyperspectral image classification", "year": "2022" }, { "authors": "D Hong; L Gao; J Yao; B Zhang; A Plaza; J Chanussot", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b35", "title": "Graph convolutional networks for hyperspectral image classification", "year": "2020" }, { "authors": "D Hong; Z Han; J Yao; L Gao; B Zhang; A Plaza; J Chanussot", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b36", "title": "Spectralformer: Rethinking hyperspectral image classification with transformers", "year": "2021" }, { "authors": "J Zou; W He; H Zhang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b37", "title": "Lessformer: Local-enhanced spectralspatial transformer for hyperspectral image classification", "year": "2022" }, { "authors": "S K Roy; G Krishna; S R Dubey; B B Chaudhuri", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b38", "title": "Hybridsn: Exploring 3-d-2-d cnn feature hierarchy for hyperspectral image classification", "year": "2019" }, { "authors": "M E Paoletti; J M Haut; J Plaza; A Plaza", "journal": "ISPRS journal of photogrammetry and remote sensing", "ref_id": "b39", "title": "A new deep convolutional neural network for fast hyperspectral image classification", "year": "2018" }, { "authors": "Z Gong; X Zhou; W Yao", "journal": "", "ref_id": "b40", "title": "Deep intrinsic decomposition with adversarial learning for hyperspectral image classification", "year": "2023" }, { "authors": "P Wang; K Han; X.-S Wei; L Zhang; L Wang", "journal": "", "ref_id": "b41", "title": "Contrastive learning based hybrid networks for long-tailed image classification", "year": "2021" }, { "authors": "S Hou; H Shi; X Cao; X Zhang; L Jiao", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b42", "title": "Hyperspectral imagery classification based on contrastive learning", "year": "2021" }, { "authors": "Z Cao; X Li; Y Feng; S Chen; C Xia; L Zhao", "journal": "Neurocomputing", "ref_id": "b43", "title": "Contrastnet: Unsupervised feature learning by autoencoder and prototypical contrastive learning for hyperspectral imagery classification", "year": "2021" }, { "authors": "Q Liu; J 
Peng; Y Ning; N Chen; W Sun; Q Du; Y Zhou", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b44", "title": "Refined prototypical contrastive learning for few-shot hyperspectral image classification", "year": "2023" }, { "authors": "X Huang; M Dong; J Li; X Guo", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b45", "title": "A 3-d-swin transformerbased hierarchical contrastive learning method for hyperspectral image classification", "year": "2022" }, { "authors": "T Chen; S Kornblith; M Norouzi; G Hinton", "journal": "PMLR", "ref_id": "b46", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "", "journal": "", "ref_id": "b47", "title": "Hyperspectral data", "year": "2023-11-16" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga", "journal": "Advances in neural information processing systems", "ref_id": "b48", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "A B Hamida; A Benoit; P Lambert; C B Amar", "journal": "IEEE Trans", "ref_id": "b49", "title": "3-d deep learning approach for remote sensing image classification", "year": "" } ]
[ { "formula_coordinates": [ 3, 153.4, 660.72, 146.62, 8.96 ], "formula_id": "formula_0", "formula_text": "I = R • S(1)" }, { "formula_coordinates": [ 3, 396.08, 185.25, 166.96, 8.96 ], "formula_id": "formula_1", "formula_text": "I(λ) = R(λ) • S(λ)(2)" }, { "formula_coordinates": [ 3, 366.12, 591.79, 196.92, 30.32 ], "formula_id": "formula_2", "formula_text": "min f1,f2 N i=1 C 1 (g(f 1 (x i ) • f 2 (x i )), y i )(3)" }, { "formula_coordinates": [ 4, 313.64, 439.8, 249.4, 38.91 ], "formula_id": "formula_3", "formula_text": "min P1,P2,••• ,PΛ N i=1 Λ s=1 I(s = arg min 1,2,••• ,Λ ∥x i -P s ∥ 2 )∥x i -P s ∥ 2 (4)" }, { "formula_coordinates": [ 4, 311.98, 493.67, 213.77, 8.96 ], "formula_id": "formula_4", "formula_text": "I(•) = 1 if condition is true and I(•) = 0 otherwise." }, { "formula_coordinates": [ 4, 375.5, 654.28, 187.54, 16.65 ], "formula_id": "formula_5", "formula_text": "z i = arg min s=1,2,••• ,Λ ∥x i -P s ∥ 2 (5)" }, { "formula_coordinates": [ 5, 48.96, 198.49, 267.99, 31.29 ], "formula_id": "formula_6", "formula_text": "l ij = 1 -cos(p 1 (f 1 (x bi )), p 1 (f 1 (x bj ))), if z bi = z bj max(0, cos(p 1 (f 1 (x bi )), p 1 (f 1 (x bj ))) -δ), if z bi ̸ = z bj (6)" }, { "formula_coordinates": [ 5, 139.89, 286.4, 160.13, 30.32 ], "formula_id": "formula_7", "formula_text": "L e = m i=1 m j=1 l ij .(7)" }, { "formula_coordinates": [ 5, 48.96, 613.36, 268.5, 31.29 ], "formula_id": "formula_8", "formula_text": "l c ij = 1 -cos(p 2 (f 2 (x bi )), p 2 (f 2 (x bj ))), if y bi = y bj max(0, cos(p 2 (f 2 (x bi )), p 2 (f 2 (x bj ))) -δ), if y bi ̸ = y bj (8)" }, { "formula_coordinates": [ 5, 140, 674.8, 160.02, 30.32 ], "formula_id": "formula_9", "formula_text": "L c = m i=1 m j=1 l c ij .(9)" }, { "formula_coordinates": [ 5, 311.98, 312.26, 95.39, 9.65 ], "formula_id": "formula_10", "formula_text": "f 1 (x bi )(i = 1, 2, • • • , m" }, { "formula_coordinates": [ 5, 327.16, 362.65, 235.88, 30.32 ], "formula_id": "formula_11", "formula_text": "L d = m i=1 (C 2 (h(f 1 (x bi )), 0) + C 2 (h(f 2 (x bi )), 1))(10)" }, { "formula_coordinates": [ 5, 350, 427.91, 213.04, 27.18 ], "formula_id": "formula_12", "formula_text": "C 2 (h(f 1 (x bi )), 0) = -log(h(f 1 (x bi )) T e 0 ) C 2 (h(f 2 (x bi )), 1) = -log(h(f 2 (x bi )) T e 1 )(11)" }, { "formula_coordinates": [ 5, 357.88, 641.85, 205.16, 30.32 ], "formula_id": "formula_13", "formula_text": "L 0 = m i=1 C 1 (g(f 1 (x bi ) • f 2 (x bi )), y bi )(12)" }, { "formula_coordinates": [ 5, 311.98, 707.77, 284.64, 40.27 ], "formula_id": "formula_14", "formula_text": "C 1 (g(f 1 (x bi )•f 2 (x bi )), y bi ) = - K j=1 δ jyi log(g(f 1 (x bi )•f 2 (x bi )) T e j )(13)" }, { "formula_coordinates": [ 6, 114.77, 98.39, 181.1, 9.65 ], "formula_id": "formula_15", "formula_text": "L = L 0 + αL e + βL c + γL d (14" }, { "formula_coordinates": [ 6, 295.87, 98.71, 4.15, 8.64 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 11, 85.36, 380.26, 437.33, 177.81 ], "formula_id": "formula_17", "formula_text": "(c) (d) (e) (f) (g) (h) (i) (j) (k) (l)" } ]
2024-02-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b22", "b10", "b22", "b10", "b27", "b3", "b31", "b29", "b35", "b0", "b35" ], "table_ref": [], "text": "Software development is a repetitive task as programmers usually reuse or get inspiration from existing implementations. Studies show programmers spent 19% of their programming time on searching source code (Brandt et al., 2009). Therefore, code search, which refers to the retrieval of relevant code snippets from a codebase according to programmer's intent that has been expressed as a query (Liu et al., 2022), has become increasing important (Grazia and Pradel, 2022).\nAlthough much effort has been devoted to improving code search, existing works mostly emphasize the ranking performance of code search w.r.t. metrics like Mean Reciprocal Rank (MRR) and Hit Ratio@K (HR@K) (Liu et al., 2022;Grazia and Pradel, 2022). In this paper, we study code search * Corresponding Author. from another perspective. We find that state-ofthe-art code search methods prevalently have discriminatory behaviors (i.e., different performance) toward queries or code snippets with certain properties (e.g., length). The observation shows, even though the overall ranking performance is good, programmers may still be dissatisfied with search results when their input queries or desired code snippets fall into those categories that code search models cannot handle well. We name our observation as Code Search Bias, inspired by the AI bias that attracts great attention recently (Mehrabi et al., 2021). Code search bias hurts user experience. Due to different development conventions (e.g., prefer long queries or abbreviations), users (programmers) of code search engines with biases will have different user experience, i.e., some users will find the engine useful, while others may find it hard to get desirable search results.\nNote that most studies of bias in NLP focus on societal bias (Blodgett et al., 2020). For example, the gender bias of NLP algorithms may pose the danger of giving preference to male applicants in automatic resume filtering systems (Sun et al., 2019). However, in applications like search engines (Ovaisi et al., 2020) and recommender systems (Lin et al., 2021a;Xv et al., 2022), some biases without societal factors are widely studied as they make the system biased toward certain search results and harm the performance. For instance, position bias exists in learning-to-rank systems where top search results are more likely to be clicked even if they are not the most relevant results (Agarwal et al., 2019;Xv et al., 2022). But it does not mean any discriminatory behaviors toward certain groups of people. Similarly, code search bias does not involve societal factors.\nConsidering that our observation has revealed the widespread code search bias in existing models, we aim at designing a general debiasing framework that can be easily plugged into existing code search engines. In the context of code search bias, debiasing indicates removing the correlations between code search quality and certain properties of queries and code snippets. Our proposed debiasing framework adopts the idea of reranking to calibrate search results. It helps state-of-the-art code search models overcome code search bias and their overall performance can be improved at the meantime. In summary, our contributions are: 1. To our best knowledge, we are the first to study code search bias. We reveal the widespread existence of seven code search biases.\n2. 
To mitigate code search bias, we propose a general debiasing framework using reranking. It can be easily plugged into existing engines.\n3. Extensive experiments show that our debiasing framework not only helps alleviate code search bias but also improves the overall ranking performance of state-of-the-art code search models." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b25", "b1", "b11", "b19", "b32", "b7", "b12", "b27", "b6", "b2", "b9", "b15", "b16", "b11", "b24", "b11", "b19", "b7", "b15", "b12" ], "table_ref": [], "text": "Code Search. Early code search methods adopt traditional information retrieval methods to estimate the relevance between the query and a code snippet (Lv et al., 2015;Bajracharya et al., 2010).\nRecent works adopt deep neural networks to embed query and code into vectors. Then, the code search task is performed by measuring the similarity between vectors. Along this direction, various deep learning based methods have been proposed, including but not limited to recurrent neural network (RNN) based approaches (Gu et al., 2018), convolutional neural network (CNN) based approaches (Li et al., 2020), graph neural network (GNN) based approaches (Wan et al., 2019) and pre-training approaches (Feng et al., 2020;Guo et al., 2021Guo et al., , 2022)).\nBias and Debias. Many AI systems exhibit certain biases that bring unfairness and degrade the performance (Mehrabi et al., 2021). Various debiasing methods have been proposed and they can be roughly divided into three types: 1. Pre-processing methods remove biases in training data. Calmon et al. (2017) design a framework for discrimination-preventing preprocessing to enhance data with multi goals. Biswas and Rajan (2021) analyze bias prompts in data preprocessing pipelines and identify data transformers that can mitigate the pipeline bias.\n2. In-processing methods mitigate biases in the model training step. Garimella et al. (2021) propose a debiasing method that requires pretraining on an extra small corpus with bias mitigation objectives for mitigating social biases in language models. Lin et al. (2021a) propose a debiasing framework with three strategies that be used as regularizers in the training objective of review-based recommender systems. (Huang et al., 2021) with 20,604 query-code pairs. Each query is written in English while each code snippet is a Python code snippet. The data is annotated by at least 3 human annotators. We randomly split the dataset by 70%/30% for training and test. We adopt bytepair encoding tokenization, a standard tokenization method used in preprocessing code search data, to tokenize queries and code snippets. As queries are typically short, stop words in queries are not removed. Note that there are other public code search datasets, e.g., CodeSearchNet dataset (Husain et al., 2019), DeepCS dataset (Gu et al., 2018), and CodeXGLUE dataset (Lu et al., 2021). We choose CoSQA dataset as it includes real code search queries, while other datasets use code documents (e.g., the first sentence in the function comments) to mimic queries. Using CoSQA helps us better discover biases in a real code search scenario.\nCode Search Models: We select six representative code search approaches in the literature for our bias analysis, including DeepCS2 (Gu et al., 2018), CQIL3 (Li et al., 2020), Code- BERT4 (Feng et al., 2020), CoCLR5 (Huang et al., 2021), GraphCodeBERT 4 (Guo et al., 2021) and UniXcoder 4 (Guo et al., 2022). 
They are all under the MIT license, allowing us to adopt them in this study. We have observed similar biases in all the six methods. Due to space limitation, we only show analysis results of CQIL, CodeBert and GraphCodeBERT, and other methods are reported in our debiasing experiments in Sec. 5. We follow authors' descriptions to set hyper-parameters whenever possible in order to tune the performance of each method towards its best. Evaluation Metrics: We use Mean Reciprocal Rank (MRR), the most widely used measure for code search, to illustrate our bias analysis. It is defined as MRR = 1 |Q| |Q| i=1 1 rank i , where |Q| is the number of queries and rank i indicates the rank of the ground-truth code snippet w.r.t. the i-th query. We also adopt another prevalent metric Hit Ratio@K (HR@K, the percentage of ground-truth code snippets that are in the top-K ranking lists from code search models) and results are discussed in Sec. 5. Note that most current code search studies assume that there exists only one good result for each query and public code search datasets are designed this way. Hence, the popular ranking metric Normalized Discounted Cumulative Gain (NDCG) will be consistent with MRR. Our reported results are averaged over several runs." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "Analysis Results", "publication_ref": [ "b14", "b33", "b26", "b17", "b28" ], "table_ref": [], "text": "Based on the characteristics of code search and the data involved in the search process, we have found and verified seven code search biases. A general motivation to consider these seven factors is that they are commonly adopted as parameters in the experiments of existing papers as they affect the results of code-related tasks (Hu et al., 2023;Wan et al., 2018;McBurney and McMillan, 2016). The performance of CQIL, CodeBERT and Graph-CodeBERT w.r.t. the seven biases are presented in Fig. 1. We first group queries (in the test set) or ground-truth code snippets in intervals with equal lengths w.r.t. certain statistics. Then, we investigate whether code search models show different behaviors towards different intervals. The x-axis illustrates the intervals. To better visualize the result of bias analysis, data in Fig. 1 (a) and (c) is grouped in an interval with a length of 4, data in Fig. 1 (f) is grouped in an interval with a length of 0.15, and data in other subfigures is grouped in an interval with a length of 1. The left y-axis denotes the number of queries or ground-truth code snippets in each interval while the right y-axis shows the average MRR score for data in each interval. We provide our analysis as follows:\nBias 1 w.r.t. Lengths of Ground-Truth Code Length bias (i.e., model makes decisions based on or affected by the length of texts) has been verified in various information retrieval and natural language processing tasks such as textual matching (Jiang et al., 2022) and machine translation (Murray and Chiang, 2018). This inspires us to investigate the effect of the length of ground-truth code snippets on code search models.\nFig. 1 (a) shows the performance of three models w.r.t. code lengths. From Fig. 1 (a), we can see that lengths of most code snippets are between 20 and 50. Furthermore, we can observe that: (1) In general, the longer the ground-truth code snippet is, the better the MRR score is. There are some sharp drops in MRR when code length gets much longer. 
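As a concrete reference for the numbers reported in these bias analyses, the sketch below shows how MRR, HR@K, and the per-interval average MRR behind the interval plots can be computed; the rank and code-length lists are hypothetical inputs, and this is not the evaluation script used by the compared models.

```python
# Minimal sketch: MRR, HR@K, and per-interval average MRR for a set of test queries.
# `ranks` holds the rank of the ground-truth snippet for each query; `code_lens` holds a
# grouping statistic such as ground-truth code length (both are made-up example inputs).
from collections import defaultdict

def mrr(ranks):
    # MRR = (1 / |Q|) * sum_i 1 / rank_i
    return sum(1.0 / r for r in ranks) / len(ranks)

def hit_ratio_at_k(ranks, k):
    # Fraction of queries whose ground-truth snippet appears in the top-K results.
    return sum(1 for r in ranks if r <= k) / len(ranks)

def per_interval_mrr(ranks, stats, interval=4):
    # Group queries into equal-width intervals of the statistic and average the
    # reciprocal ranks inside each interval, as in the bias plots.
    buckets = defaultdict(list)
    for r, s in zip(ranks, stats):
        buckets[s // interval].append(1.0 / r)
    return {b * interval: sum(v) / len(v) for b, v in sorted(buckets.items())}

ranks = [1, 3, 2, 10, 1, 4]           # hypothetical ground-truth ranks
code_lens = [22, 35, 41, 36, 68, 70]  # hypothetical ground-truth code lengths
print(mrr(ranks), hit_ratio_at_k(ranks, 5), per_interval_mrr(ranks, code_lens))
```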
The reason may be the number of ground-truth code snippets in intervals with longer lengths (e.g., > 70) is quite small and a few hard cases affect the average performance in those intervals. (2) Code search models show a clear bias towards intervals with longer lengths of ground-truth code snippets, i.e., longer ground-truth code snippets are more easily to match. For instance, the MRR scores of GraphCodeBERT are 0.57 and 0.83 for the interval with average code length 36 and the interval with average code length 68, respectively. Intuitively, longer ground-truth code snippets provide more semantic information, making it more easy to be modeled and matched. From a software engineering perspective, long code snippets are more distinctive than short ones: it is more likely for two short code snippets to be similar, making it hard to distinguish the correct one from other candidates.\nBias 2 w.r.t. Lengths of Queries Similar to Bias 1, we have identified the bias w.r.t. lengths of input queries. As shown in Fig. 1 (b), as query length increases, MRR decreases, indicating that longer queries have worse search results." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Bias 3 w.r.t. Numbers of AST Nodes", "publication_ref": [ "b34", "b36", "b23" ], "table_ref": [], "text": "One major difference between natural languages (NLs) and programming languages (PLs) is that PLs have strict syntax rules that are enforced by language grammars. Abstract Syntax Tree (AST), used in compilers, represents the abstract syntactic structure of the source code. Each node of ASTs denotes a construct or symbol occurring in the source code. Compared to plain source code, ASTs are abstract and some details (e.g., punctuation and delimiters) are not included. ASTs are used in various code-related tasks like code summarization (Lin et al., 2021b), code completion (Wang and Li, 2021), issue-commit link recovery (Zhang et al., 2023) and refactoring (Liu et al., 2023) for capturing syntactic information.\nConsidering the importance of ASTs for modeling PL syntax, we investigate the influence of ASTs on code search models. Usually, longer code snippets correspond to deep ASTs. However, some complex yet short code snippets such as list parsing in Python may also have deep ASTs. Hence, Bias 3 is not equivalent to Bias 1. Fig. 1 (c) demonstrates the impacts of AST node numbers on the performance of code search models. We can observe the bias: code search models show diverse performance towards different intervals. For example, the MRR scores of GraphCodeBERT are 0.6 and 0.87 for the interval with average AST node number 40 and the interval with average AST node number 72, respectively. The performance gap is significant in code search.\nBias 4 w.r.t. Depths of ASTs Similar to Bias 3, we further identify the bias w.r.t. AST depths which also depict the complexity of ASTs. Note a deep AST may not have many AST nodes. Hence, Bias 3 and Bias 4 are different. Fig. 1 (d) shows the impact of AST depths. In Fig. 1 (d), code snippets are grouped by the depth of their ASTs and the interval length is 1. We can observe the existence of bias: code search models have diverse performance towards different intervals containing ASTs with different depths." }, { "figure_ref": [ "fig_0" ], "heading": "Bias 5 w.r.t. Numbers of Reserved Words", "publication_ref": [], "table_ref": [], "text": "If we do not consider identifiers and constants, the vocabulary of code tokens containing reserved words of a PL is small. 
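As a brief aside before the reserved-word analysis, the two syntax statistics used in Biases 3 and 4 can be extracted for Python snippets with the standard `ast` module; the sketch below is illustrative only, and its toy input and exact counting convention are assumptions rather than the procedure used in this study.

```python
# Sketch: number of AST nodes (Bias 3) and AST depth (Bias 4) for a Python snippet.
import ast

def ast_stats(code: str):
    tree = ast.parse(code)
    num_nodes = sum(1 for _ in ast.walk(tree))  # count every node in the tree

    def depth(node):
        children = list(ast.iter_child_nodes(node))
        return 1 + (max(depth(c) for c in children) if children else 0)

    return num_nodes, depth(tree)

snippet = "def add(a, b):\n    return a + b\n"  # made-up example snippet
print(ast_stats(snippet))  # prints the node count and depth of the toy snippet
```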
We investigate the impact of reserved words on the behaviors of code search models. Specially, we consider Python reserved words if, for, while, with, try and except. They are related to control structures and demonstrate the programming logic of designing a function. Fig. 1 (e) demonstrates the performance towards ground-truth code snippets containing different numbers of reserved keywords. We can see the existence of a bias: performance of code search models varies when the number of code keywords changes. We can observe that the considerable growth of the MRR score when the number of keywords in ground-truth code snippets increases. One possible reason is that logic-related reserved words in ground-truth code snippets help code search models better capture the logic of the code. Therefore, it is easier for code search models to match the ground-truth code snippet and the user intent that manifests in the queries when code contains more logic-related reserved words." }, { "figure_ref": [], "heading": "Bias 6 w.r.t. Importance of Words", "publication_ref": [], "table_ref": [], "text": "Queries are typically concise, containing only a few words. For each query, we calculate the max TF-IDF values for the words contained in the query to estimate how important words contained in a query are. We have also calculated the average and the minimum TF-IDF values and similar results can be observed. TF-IDF helps avoid amplifying the importance of words that appear more frequently in general (e.g., the word \"an\" in a query \"sort an array\"). When calculating TF-IDF, we treat each query in CoSQA as a document. Results are presented in Figs. 1 (f), and we can observe the existence of a bias, i.e., code search models show different performance for queries containing words with varying importance. Intuitively, the important words (e.g., \"sort\") contained in a query help code search models better understand user intent and match the ground-truth code snippet.\nBias 7 w.r." }, { "figure_ref": [ "fig_0" ], "heading": "t. Numbers of Overlapping Words", "publication_ref": [ "b38" ], "table_ref": [], "text": "Early code search methods rely on the overlapping words of queries and code snippets to estimate query-code relevance scores. However, overlapping words received less attention in deep learning based code search models (Zhu et al., 2020). We investigate the influence of overlaps on the behaviors of the three code search models which all leverage deep learning. Fig. 1 (g) illustrates the performance on test query-code pairs that have different numbers of overlapping words. From the figure, we can observe a bias: models produce better MRR towards query-code pairs with more overlapping words. In other words, deep learning-based code search models also capture overlapping words and treat them as a strong signal of a matching result, confirming the standard hypothesis that overlapping words affect code search. In summary, we have identified seven distinct biases, meaning that code search models show different performance when facing input queries or ground-truth code snippets with different characteristics. In practice, code search biases result in the inconsistence of user experience: depending on the characteristics of queries and/or ground-truth code snippets, the quality of search results varies." }, { "figure_ref": [ "fig_1", "fig_0" ], "heading": "Mitigate Code Search Biases", "publication_ref": [], "table_ref": [], "text": "In this section, we illustrate our debiasing framework shown in Fig. 2. 
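The query-side statistics behind Biases 6 and 7 can likewise be reproduced with a few lines; the sketch below uses a hypothetical three-query corpus and a naive whitespace split, which only approximates the byte-pair tokenization applied in the experiments and is not the analysis code used here.

```python
# Sketch: max TF-IDF word importance per query (Bias 6) and query-code word overlap (Bias 7).
from sklearn.feature_extraction.text import TfidfVectorizer

queries = ["sort an array", "read a csv file", "sort list of tuples"]  # hypothetical corpus

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(queries)             # each query is treated as one document
max_importance = tfidf.max(axis=1).toarray().ravel()  # max TF-IDF value per query

def overlap(query: str, code: str) -> int:
    # Number of distinct query words that also appear among the code tokens.
    return len(set(query.lower().split()) & set(code.lower().split()))

print(max_importance)
print(overlap("sort an array", "def sort ( array ) : return sorted ( array )"))  # -> 2
```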
Our goal is to design a general framework: (1) it can be easily plugged into existing code search models without much additional effort, and (2) it can handle new code search biases that are not discovered at the moment.\nWe opt to adopt reranking, a post-processing method, to calibrate code search results. The idea is to rerank the ranking results provided by code search models. Even though code search biases are prevalent in many cases as we have seen in Fig. 1, many code search models show promising overall performance (i.e., high MRR or HR@K). Therefore, for biased cases, the ground-truth code snippets are not too far away from the top of search results. Otherwise, the overall MRR scores will be quite low according to its definition. Similarly, we believe that any new code search biases also meet the above condition (i.e., biases exist but overall search performance is high). For biased cases, a successful reranking method can help groundtruth code snippets emerge on top. Post-processing search results also avoid modifying existing code search models. This way, the designed debiasing framework is orthogonal to a specific code search method and it can be easily used as a reinforcement.\nNext, we first demonstrate how our framework mitigates one bias. Then, the way that our framework mitigates multiple biases is presented." }, { "figure_ref": [], "heading": "Mitigate A Single Bias via Reranking", "publication_ref": [], "table_ref": [], "text": "Our idea is to use the prior knowledge of biased search from the training data to determinate whether a similar search in the test set will face a bias issue and require reranking. The detailed steps of mitigating a single bias via a single reranker are: 1. Firstly, we embed all queries in the training set into vectors using a pre-trained CodeBERT model. For a test query (i.e., the current search), after it is embedded by the CodeBERT model, we retrieve its top-M most similar queries in the training set based on cosine similarity between vectors. These retrieved queries and their corresponding ground-truth code snippets in the training set will provide some hints on whether the current search may face a certain bias." }, { "figure_ref": [], "heading": "Then, we identify intervals in training data", "publication_ref": [], "table_ref": [], "text": "where code search models show very high performance. It is likely that search results are not severely biased within these intervals. Otherwise the MRR scores for these intervals should be low by its definition. For such intervals, it is unnecessary to rerank for debiasing. We sort the search cases in training set by their MRR scores and retrieve cases with top N % maximum MRR scores. We adopt k-means to cluster the retrieved training search cases into S clusters. Then, the maximum and minimum MRR scores in each cluster are used as the boundaries of the cluster.\n3. For a test search t, if its top-M most similar training query-code pairs have an average MRR score that falls in the range of any cluster, then it is likely that code search models provide reasonable relevance prediction scores for the candidate code snippets contained in these query-code pairs and our method will not rerank these candidate code snippets. For other candidate code snippets, reranking is required.\n4. For a candidate code snippet c that requires reranking, the reranking score is calculated as:\nR = Score original c + P (T e < T m ),(1)\nwhere Score original 5. 
For the test search t, our method will use reranking scores R instead of Score original as relevance scores for all candidate code snippets that are identified to require reranking in Step 3. Then, the ranking list is reranked according to new relevance scores. We discuss the impact of the choices of M , N and S in Analysis 5 of Sec. 5." }, { "figure_ref": [], "heading": "Mitigate Multiple Biases", "publication_ref": [ "b37", "b8", "b5" ], "table_ref": [], "text": "To mitigate multiple code biases together, we adopt two simple yet effective strategies to assemble rerankers for different code search biases: 1. Sequential Reranking: Adopt each reranker sequentially. The relevance scores from a previous reranker will be used as the base relevance scores (i.e., Score original ) in the next reranker.\n2. Parallel Reranking: Adopt each reranker parallel and use the average of the relevance scores from all rerankers between a candidate code snippet and the current search as the prediction. Tab. 1 provides examples to illustrate relevance scores between a query and two candidate code snippets c 1 and c 2 . From final relevance scores of the code snippet c 1 , we can see that sequential reranking emphasizes the adjustment of reranking as it aggregates reranking terms from different rerankers. Differently, parallel reranking averages reranking terms from different rerankers, avoiding a sharp reranking. If none of the rerankers adjust the relevance score, then the final relevance scores are the same for both methods, as shown in the case of the code snippet c 2 . Empirically, different ordering shows only slight performance difference, as we will show in Analysis 4 of Sec. 5.\nNote the above two strategies in our debiasing framework looks similar to Boosting and Bagging methods used in Ensemble Learning (Zhou, 2009), but they are not the same: (1) Compared to Boosting methods like AdaBoost (Freund and Schapire, 1997), sequential reranking does not increase the weights for wrongly labeled training samples (biased/unbiased cases) in previous reranker since each reranker is designed for different targets (mitigate different biases) and wrongly labeled samples in the previous reranker may be correct samples for the next reranker. Differently, Boosting methods will increase weights of incorrectly predicted sampled for training the next learner. (2) Compared to Bagging methods (Breiman, 1996), parallel reranking does not adopt sampling to prepare different datasets (from the complete training set) for use in each reranker. The reason is that, to make our debiasing method simple and general, our reranking method is designed as a similarity-based adjuster with simple rules instead of a learning-based approach. In a large training set, most similar queries that are used to judge whether current search is facing bias may not be selected in sampling, which negatively affects debiasing." }, { "figure_ref": [ "fig_3", "fig_4", "fig_3", "fig_4", "fig_5", "fig_6" ], "heading": "Debiasing Experiment", "publication_ref": [], "table_ref": [], "text": "In this section, we will illustrate the effectiveness of our debiasing framework on mitigating code search biases. Results are reported using our framework to mitigate the seven biases for the six code search methods on the CoSQA dataset. By default, the order of rerankers in sequential reranking is Biases 7, 6, 3, 4, 2, 5 and 1. We also analyze the impact of reranker order in Analysis 4 of our experiments. 
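To make the two assembly strategies concrete, a minimal sketch is given below; the per-bias reranker interface, the dummy boost reranker, and the toy scores are assumptions for illustration rather than the released implementation. With the dummy reranker, the sequential variant accumulates adjustments across rerankers while the parallel variant averages them, mirroring the behaviour discussed around Tab. 1.

```python
# Sketch: sequential vs. parallel assembly of per-bias rerankers over candidate scores.
from typing import Callable, Dict, List

Scores = Dict[str, float]                    # candidate code id -> relevance score
Reranker = Callable[[str, Scores], Scores]   # (query, base scores) -> adjusted scores

def sequential_rerank(query: str, base: Scores, rerankers: List[Reranker]) -> Scores:
    # Each reranker's output becomes the base scores of the next reranker.
    scores = dict(base)
    for rerank in rerankers:
        scores = rerank(query, scores)
    return scores

def parallel_rerank(query: str, base: Scores, rerankers: List[Reranker]) -> Scores:
    # Every reranker adjusts the original scores independently; the results are averaged.
    adjusted = [rerank(query, dict(base)) for rerank in rerankers]
    return {c: sum(s[c] for s in adjusted) / len(adjusted) for c in base}

boost: Reranker = lambda q, s: {c: v + 0.1 for c, v in s.items()}  # dummy adjustment
base = {"code_1": 0.6, "code_2": 0.5}
print(sequential_rerank("how to write a quick sort", base, [boost, boost]))  # accumulates (about 0.8 / 0.7)
print(parallel_rerank("how to write a quick sort", base, [boost, boost]))    # averages (about 0.7 / 0.6)
```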
Our method requires three hyper-parameters: M , N and S, as illustrated in Sec. 4.1. We search M , N and S in {1, 3, 5}, {10, 15, 20} and {1, 3, 5}, respectively. Best results (M = 1, N = 10 and S = 1) are reported.\nAnalysis 1: Debiasing Results. We first analyze the results after debiasing. Due to space limitation, we only visualize results of Bias 1 (Lengths of Code), Bias 3 (Numbers of AST nodes), Bias 4 (Depths of ASTs) and Bias 6 (Importance of Words) for CQIL, CodeBERT and GraphCode-BERT. For other code search methods and biases, we observe similar results. Fig. 3 shows the performance before and after mitigating biases using sequential reranking. The result using parallel reranking is presented in Fig. 4. From visualization results, we can clearly see that, for all the four biases, MRR scores of most intervals increase after deploying our debiasing framework, showing the effectiveness of our debiasing framework. Sequential reranking shows a slightly better debiasing result than parallel reranking (e.g., see CQIL(b) and GraphCodeBERT(b) in Fig. 3 and Fig. 4). However, sequential reranking is not as efficient as parallel reranking as it processes each reranker one by one.\nAnalysis 2: Changes of Code Search Performance after Debiasing. Tab. 3 and Tab. 2 illustrate the changes of overall code search performance after debiasing using sequential reranking and parallel reranking, respectively. From results, we can see that, after debiasing, overall code search performance w.r.t. MRR or HR@K significantly increases. The improvements are especially noticeable for DeepCS, CQIL and CodeBERT: MRR and HR@K of these methods increase by 9.6%-67%. The reason is that the original search performance of the three methods is not high and there is still large room for improvement. Even for CoCLR, GraphCodeBERT and UniXcoder which show quite high MRR (>0.6) and HR@K (>0.5) before debiasing, our debiasing framework still helps improve the overall code search performance. Thus, we can conclude that mitigating code search bias has a positive effect on improving the overall code search performance.\nAnalysis 3: Impacts of Applying Multiple Rerankers. Next, we investigate whether applying multiple rerankers brings better debiasing results than using a single reranker. Fig. 5 illustrates the changes of overall MRR scores for the six code search models after applying each reranker using sequential reranking in the default order. The horizontal axis labels (from left to right) show the order of rerankers applied. We can observe that MRR scores of CodeBERT, DeepCS and CQIL gradually increase as more rerankers are applied. Eventually, their overall performance after debiasing gets significantly improved compared to their original performance. For CoCLR, UniXCoder and GraphCodeBERT which have achieved high MRR scores before debiasing, applying multiple rerankers slightly enhances or does not negatively affect their overall performance. Overall, after applying seven rerankers, the performance of CoCLR, UniXCoder and GraphCodeBERT gets enhanced. We can observe a similar trend when using parallel reranking. In conclusion, the more rerankers are applied, the better overall code search performance the code search model can achieve. In other words, each reranker indeed contributes to the improvement of the quality of code search results.\nAnalysis 4: Impacts of Reranker Order in Sequential Reranking. Since sequential reranking has various possible order of rerankers, we ana- lyze the impact of reranking order. 
In addition to the default order, we report the debiasing performance on CodeBERT using sequential reranking with three other orders: order 1 (biases 1, 6, 4, 5, 2, 7, 3), order 2 (bases 6, 2, 4, 7, 3, 5, 1) and order 3 (biases 4, 6, 2, 1, 5, 3, 7). Fig. 6 demonstrates the performance changes after each reranker is applied in the three order. The horizontal axis labels (from left to right) show rerankers in the applied order. Similar to the observation in Analysis 3, we can see that adding more rerankers help improve the MRR And the intermediate debiasing results are slightly different using three different order. But the different order does not affect the final debiasing result too much.\nAnalysis 5: Impacts of Hyper-Parameters. We further analyze the impacts of hyper-parameters. Tab. 4 provides the debiasing results of CQIL, CodeBERT and GraphCodeBERT using different hyper-parameters. Each of the MRR score in the table is obtained by changing one hyper-parameter while keeping the other two hyper-parameters the same as the best ones found in hyper-parameter search. From the result, we can conclude that hyper-parameters do not affect results too much. We provide the analysis as follows:\n• M indicates how many top-M similar queries in the training set are adopted. We believe the top-1 similar query already provides a hint for our method, and including more similar queries do not bring more information. Hence, changing M does not affect the results too much.\n• N % represents the percentage of chosen training search cases with the highest MRR scores. Since the training set of CoSQA data contains 14K query-code pairs, changing N % in {10%, 15%, 20%} results in 1,400, 2,100 and 2,800 retrieved cases, respectively. The difference between the numbers of retrieved cases is not large, compared to the total dataset with 21K querycode pairs.\n• S indicates the number of clusters after performing kmeans on these N % cases. We find that small values of S bring relatively robust and good performance of debiasing, as reported in Tab. 4. Therefore, we suggest that users set S to a small value. If we set S to a much larger number (e.g., 100, 500, 1,000), the performance becomes inconsistent, and we suspect that dividing retrieved cases into many small clusters cannot help find case patterns. Instead, many small clusters bring the noise. Hence, we do not suggest that users set S to a large value.\nAnalysis 6: Human Evaluation. We also conduct human evaluation for assessing the quality of debiasing. We randomly pick 200 queries from the test set for human evaluation. We choose CQIL as a representation of code search models and use it in human evaluation. We use our debiasing framework to reduce code search biases in the corresponding results of CQIL for the 200 queries. We recruit four master students majoring in computer science to check the quality of debiasing manually.\nFor each query, we provide the students with two lists. One is the original top-10 search results from CQIL, and the other is the top-10 list after debiasing. The lists for each query are shown in random order. Students are asked to choose which top-10 list is better, and they can also indicate that the two lists are roughly of the same quality. From the results of human evaluation, we find that, for 71.5% queries, lists after debiasing are assessed as better ones. 
For 19.5% of the queries, the original list and the reranked list are judged to have similar quality.\nFor the remaining 9% of the queries, debiasing degrades the quality of the search list. The human evaluation results illustrate that our debiasing method indeed improves the quality of code search for most queries. The human evaluation materials are included in our provided repository." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we reveal the existence of code search biases. We design a general debiasing framework that can be easily plugged into existing search models. In the future, we will explore pre-processing and in-processing methods to improve our framework and better mitigate code search biases." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "This work may have some limitations:\n• Data: When we submitted this manuscript, only one real code search dataset, CoSQA, was publicly available. Other datasets in the literature do not have real search queries, and they use code documents to simulate queries. However, code documents and queries have different text styles (e.g., length). Hence, we only study code search bias based on the real data in CoSQA. To overcome this limitation, we are constructing another dataset containing real code search queries and will release it for future study.\n• Language: Queries and code snippets in CoSQA are written in English and Python, respectively. It is unclear whether our analysis results hold for queries written in other natural languages (e.g., French and Chinese). As the causes of code search biases analyzed in this work should be common across different programming languages (e.g., Java and Go), we expect that code search in other programming languages also suffers from the biases studied in this paper. We leave the study of the impacts of different natural languages and programming languages on code search bias as future work." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was partially supported by National Key R&D Program of China (No. 2022ZD0118201), National Natural Science Foundation of China (No. 62002303, 42171456) and CCF-Tencent Open Fund (RAGR20210129)." } ]
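To make the two reranking schemes used throughout these analyses concrete, the short Python sketch below mirrors the combination rules summarized in Table 1: sequential reranking accumulates each reranker's correction on top of the already adjusted scores, whereas parallel reranking averages the corrections and adds them to the original score in a single step. This is only an illustrative simplification, not the released implementation, and the candidate identifiers and reranker outputs are hypothetical.

```python
from statistics import mean

def sequential_rerank(original_scores, rerankers):
    # Apply rerankers one by one; each correction is added on top of the
    # scores produced by the previous reranker (row "S" in Table 1).
    scores = dict(original_scores)
    for rerank in rerankers:
        scores = {code: s + rerank(code) for code, s in scores.items()}
    return sorted(scores, key=scores.get, reverse=True)

def parallel_rerank(original_scores, rerankers):
    # Apply rerankers independently and average their corrections before
    # adding them to the original score (row "P" in Table 1).
    scores = {
        code: s + mean(rerank(code) for rerank in rerankers)
        for code, s in original_scores.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Toy usage with three candidate snippets and two hypothetical rerankers
# whose outputs play the role of the reranking scores R1 and R2.
original = {"code_1": 0.62, "code_2": 0.60, "code_3": 0.55}
rerankers = [
    lambda code: {"code_1": 0.10, "code_2": 0.30, "code_3": 0.05}[code],
    lambda code: {"code_1": 0.20, "code_2": 0.15, "code_3": 0.40}[code],
]
print(sequential_rerank(original, rerankers))  # ['code_2', 'code_3', 'code_1']
print(parallel_rerank(original, rerankers))    # ['code_2', 'code_3', 'code_1']
```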
A code search engine is an essential tool in software development. Many code search methods have sprung up, focusing on the overall ranking performance of code search. In this paper, we study code search from another perspective by analyzing the bias of code search models. Biased code search engines provide a poor user experience, even though they show promising overall performance. Due to different development conventions (e.g., preferring long queries or abbreviations), some programmers will find the engine useful, while others may find it hard to get desirable search results. To mitigate biases, we develop a general debiasing framework that employs reranking to calibrate search results. It can be easily plugged into existing engines and handle new code search biases discovered in the future. Experiments show that our framework can effectively reduce biases. Meanwhile, the overall ranking performance of code search is improved after debiasing.
Code Search Debiasing: Improve Search Results beyond Overall Ranking Performance
[ { "figure_caption": "Figure 1 :1Figure 1: Examples of code search biases.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Overview of the debiasing framework.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "cdenotes the original ranking score of c, T e represents the MRR value of the code search model on a training query-code pair, T m represents the overall MRR value of the code search model on the training data, and P (T e < T m ) indicates the percentage of training querycode pairs that the code search model shows a lower MRR score than its overall MRR score over all the training pairs.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Mitigate biases using sequential reranking.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Mitigate biases using parallel reranking.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Changes of overall MRR after applying each reranker in sequential reranking.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Sequential reranking in different order.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Comparisons of two reranking methods. \"S\" and \"P\" indicate sequential reranking and parallel reranking, respectively. R 1 and R 2 are reranking scores from reranker 1 and reranker 2, respectively.", "figure_data": "Method CodeReranker 1Reranker 2Relevance ScoreSc1 c2Score Scoreoriginal c1 original c2+ R1 Score Scoreoriginal c1 original c2+ R1 + R2 Score Scoreoriginal c1 original c2+ R1 + R2Pc1 c2Score Scoreoriginal c1 original c2+ R1 Score Scoreoriginal c1 original c2+ R2Score Scoreoriginal c1 original c2+ (R1 + R2)/2", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Overall performance changes of code search models using sequential reranking.", "figure_data": "MRRHR@1HR@5HR@10Method NameBeforeAfterBeforeAfterBeforeAfterBeforeAfterDeepCS0.295 0.428 (+45%) 0.219 0.366 (+67%) 0.3750.489 (+30%)0.4620.553 (+20%)CQIL0.296 0.384 (+30%) 0.216 0.299 (+38%) 0.3770.478 (+27%)0.4690.557 (+19%)CodeBERT0.474 0.569 (+20%) 0.363 0.471 (+30%) 0.5980.685 (+15%)0.7120.782 (+9.8%)CoCLR0.756 0.770 (+1.9%) 0.641 0.661 (+3.1%) 0.9090.917 (+0.88%) 0.9670.971 (+0.41%)GraphCodeBERT 0.641 0.695 (+8.4%) 0.524 0.587 (+12%) 0.7900.831 (+5.2%)0.8820.911 (+3.3%)UniXcoder0.702 0.737 (+5.0%) 0.584 0.630 (+7.9%) 0.8620.880 (+2.1%)0.9350.940 (+0.53%)", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Overall performance changes of code search models using parallel re-ranking.", "figure_data": "MRRHR@1HR@5HR@10Method NameBeforeAfterBeforeAfterBeforeAfterBeforeAfterDeepCS0.295 0.425 (+44%) 0.219 0.363 (+65%) 0.375 0.485 (+29%) 0.4620.551 (+19%)CQIL0.296 0.383 (+29%) 0.216 0.300 (+39%) 0.377 0.476 (+26%) 0.4690.551 (+17%)CodeBERT0.474 0.579 (+22%) 0.363 0.483 (+33%) 0.598 0.694 (+16%) 0.7120.780 (+9.6%)CoCLR0.756 0.769 (+1.7%) 0.641 0.661 (+3.1%) 0.909 0.915 (0.66%) 0.9670.971 (+0.41%)GraphCodeBERT 0.641 0.666 (+3.9%) 0.524 0.552 (+5.3%) 0.790 0.810 (+2.5%) 0.8820.895 
(+1.5%)UniXcoder0.702 0.716 (+2.0%) 0.584 0.602 (+3.1%) 0.862 0.872 (+1.2%) 0.9350.939 (+0.43%)", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "MRR for different hyper-parameters.", "figure_data": "MSNMethod135135101520CQIL0.380 0.329 0.348 0.384 0.380 0.355 0.380 0.380 0.376CodeBERT0.569 0.537 0.504 0.572 0.569 0.552 0.569 0.569 0.569GraphCodeBERT 0.695 0.673 0.660 0.696 0.695 0.690 0.695 0.695 0.695", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Sheng Zhang; Hui Li; Yanlin Wang; Zhao Wei; Yong Xu; Juhong Wang; Rongrong Ji
[ { "authors": "Aman Agarwal; Ivan Zaitsev; Xuanhui Wang; Cheng Li; Marc Najork; Thorsten Joachims", "journal": "", "ref_id": "b0", "title": "Estimating position bias without intrusive interventions", "year": "2019" }, { "authors": "Krishna Sushil; Joel Bajracharya; Cristina Videira Ossher; Lopes", "journal": "", "ref_id": "b1", "title": "Leveraging usage similarity for effective retrieval of examples in code repositories", "year": "2010" }, { "authors": "Sumon Biswas; Hridesh Rajan", "journal": "", "ref_id": "b2", "title": "Fair preprocessing: towards understanding compositional fairness of data transformers in machine learning pipeline", "year": "2021" }, { "authors": "Lin Su; Solon Blodgett; Hal Barocas; Iii Daumé; Hanna M Wallach", "journal": "", "ref_id": "b3", "title": "Language (technology) is power: A critical survey of \"bias\" in NLP", "year": "2020" }, { "authors": "Joel Brandt; Philip J Guo; Joel Lewenstein; Mira Dontcheva; Scott R Klemmer", "journal": "", "ref_id": "b4", "title": "Two studies of opportunistic programming: interleaving web foraging, learning, and writing code", "year": "2009" }, { "authors": "Leo Breiman", "journal": "Mach. Learn", "ref_id": "b5", "title": "Bagging predictors", "year": "1996" }, { "authors": "P Flávio; Dennis Calmon; Bhanukiran Wei; Karthikeyan Vinzamuri; Natesan Ramamurthy; R Ku Sh; Varshney", "journal": "", "ref_id": "b6", "title": "Optimized pre-processing for discrimination prevention", "year": "2017" }, { "authors": "Zhangyin Feng; Daya Guo; Duyu Tang; Nan Duan; Xiaocheng Feng; Ming Gong; Linjun Shou; Bing Qin; Ting Liu; Daxin Jiang; Ming Zhou", "journal": "", "ref_id": "b7", "title": "Codebert: A pre-trained model for programming and natural languages", "year": "2020" }, { "authors": "Yoav Freund; Robert E Schapire", "journal": "J. Comput. Syst. Sci", "ref_id": "b8", "title": "A decisiontheoretic generalization of on-line learning and an application to boosting", "year": "1997" }, { "authors": "Aparna Garimella; Akhash Amarnath; Kiran Kumar; Akash Pramod Yalla; Anandhavelu Natarajan; Niyati Chhaya; Balaji Vasan Srinivasan", "journal": "", "ref_id": "b9", "title": "He is very intelligent, she is very beautiful? 
on mitigating social biases in language modelling and generation", "year": "2021" }, { "authors": "Luca Di; Grazia ; Michael Pradel", "journal": "", "ref_id": "b10", "title": "Code search: A survey of techniques for finding code", "year": "2022" }, { "authors": "Xiaodong Gu; Hongyu Zhang; Sunghun Kim", "journal": "", "ref_id": "b11", "title": "Deep code search", "year": "2018" }, { "authors": "Shuai Daya Guo; Nan Lu; Yanlin Duan; Ming Wang; Jian Zhou; Yin", "journal": "", "ref_id": "b12", "title": "Unixcoder: Unified crossmodal pre-training for code representation", "year": "2022" }, { "authors": "Shuo Daya Guo; Shuai Ren; Zhangyin Lu; Duyu Feng; Shujie Tang; Long Liu; Nan Zhou; Alexey Duan; Shengyu Svyatkovskiy; Michele Fu; Tufano; Colin B Shao Kun Deng; Dawn Clement; Neel Drain; Jian Sundaresan; Daxin Yin; Ming Jiang; Zhou", "journal": "", "ref_id": "b13", "title": "Graphcodebert: Pre-training code representations with data flow", "year": "2021" }, { "authors": "Fan Hu; Yanlin Wang; Lun Du; Hongyu Zhang; Shi Han; Dongmei Zhang; Xirong Li", "journal": "", "ref_id": "b14", "title": "Split, encode and aggregate for long code search", "year": "2023" }, { "authors": "Junjie Huang; Duyu Tang; Linjun Shou; Ming Gong; Ke Xu; Daxin Jiang; Ming Zhou; Nan Duan", "journal": "", "ref_id": "b15", "title": "Cosqa: 20, 000+ web queries for code search and question answering", "year": "2021" }, { "authors": "Hamel Husain; Ho-Hsiang Wu; Tiferet Gazit; Miltiadis Allamanis; Marc Brockschmidt", "journal": "", "ref_id": "b16", "title": "Codesearchnet challenge: Evaluating the state of semantic code search", "year": "2019" }, { "authors": "Lan Jiang; Tianshu Lyu; Yankai Lin; Chong Meng; Xiaoyong Lyu; Dawei Yin", "journal": "", "ref_id": "b17", "title": "On length divergence bias in textual matching models", "year": "2022" }, { "authors": "P Michael; Amirata Kim; James Y Ghorbani; Zou", "journal": "", "ref_id": "b18", "title": "Multiaccuracy: Black-box post-processing for fairness in classification", "year": "2019" }, { "authors": "Wei Li; Haozhe Qin; Shuhan Yan; Beijun Shen; Yuting Chen", "journal": "", "ref_id": "b19", "title": "Learning code-query interaction for enhancing code searches", "year": "2020" }, { "authors": "Chen Lin; Xinyi Liu; Guipeng Xv; Hui Li", "journal": "", "ref_id": "b20", "title": "Mitigating sentiment bias for recommender systems", "year": "2021" }, { "authors": "Chen Lin; Zhichao Ouyang; Junqing Zhuang; Jianqiang Chen; Hui Li; Rongxin Wu", "journal": "", "ref_id": "b21", "title": "Improving code summarization with block-wise abstract syntax tree splitting", "year": "2021" }, { "authors": "Chao Liu; Xin Xia; David Lo; Cuiyun Gao; Xiaohu Yang; John C Grundy", "journal": "ACM Comput. 
Surv", "ref_id": "b22", "title": "Opportunities and challenges in code search tools", "year": "2022" }, { "authors": "Hao Liu; Yanlin Wang; Zhao Wei; Yong Xu; Juhong Wang; Hui Li; Rongrong Ji", "journal": "", "ref_id": "b23", "title": "Refbert: A two-stage pre-trained framework for automatic rename refactoring", "year": "2023" }, { "authors": "Shuai Lu; Daya Guo; Shuo Ren; Junjie Huang; Alexey Svyatkovskiy; Ambrosio Blanco; Colin B Clement; Dawn Drain; Daxin Jiang; Duyu Tang; Ge Li; Lidong Zhou; Linjun Shou; Long Zhou; Michele Tufano; Ming Gong; Ming Zhou; Nan Duan; Neel Sundaresan; Shengyu Shao Kun Deng; Shujie Fu; Liu", "journal": "", "ref_id": "b24", "title": "Codexglue: A machine learning benchmark dataset for code understanding and generation", "year": "2021" }, { "authors": "Fei Lv; Hongyu Zhang; Jian-Guang Lou; Shaowei Wang; Dongmei Zhang; Jianjun Zhao", "journal": "ASE", "ref_id": "b25", "title": "Codehow: Effective code search based on API understanding and extended boolean model (E)", "year": "2015" }, { "authors": "Paul W Mcburney; Collin Mcmillan", "journal": "Empir. Softw. Eng", "ref_id": "b26", "title": "An empirical study of the textual similarity between source code and source code summaries", "year": "2016" }, { "authors": "Ninareh Mehrabi; Fred Morstatter; Nripsuta Saxena; Kristina Lerman; Aram Galstyan", "journal": "ACM Comput. Surv", "ref_id": "b27", "title": "A survey on bias and fairness in machine learning", "year": "2021" }, { "authors": "Kenton Murray; David Chiang", "journal": "WMT", "ref_id": "b28", "title": "Correcting length bias in neural machine translation", "year": "2018" }, { "authors": "Zohreh Ovaisi; Ragib Ahsan; Yifan Zhang; Kathryn Vasilaky; Elena Zheleva", "journal": "", "ref_id": "b29", "title": "Correcting for selection bias in learning-to-rank systems", "year": "2020" }, { "authors": "Felix Petersen; Debarghya Mukherjee; Yuekai Sun; Mikhail Yurochkin", "journal": "", "ref_id": "b30", "title": "Post-processing for individual fairness", "year": "2021" }, { "authors": "Tony Sun; Andrew Gaut; Shirlyn Tang; Yuxin Huang; Mai Elsherief; Jieyu Zhao; Diba Mirza; Elizabeth M Belding; Kai-Wei Chang; William Yang; Wang ", "journal": "ACL", "ref_id": "b31", "title": "Mitigating gender bias in natural language processing: Literature review", "year": "2019" }, { "authors": "Jingdong Yao Wan; Yulei Shu; Guandong Sui; Zhou Xu; Jian Zhao; Philip S Wu; Yu", "journal": "", "ref_id": "b32", "title": "Multi-modal attention network learning for semantic source code retrieval", "year": "2019" }, { "authors": "Zhou Yao Wan; Min Zhao; Guandong Yang; Haochao Xu; Jian Ying; Philip S Wu; Yu", "journal": "", "ref_id": "b33", "title": "Improving automatic source code summarization via deep reinforcement learning", "year": "2018" }, { "authors": "Yanlin Wang; Hui Li", "journal": "", "ref_id": "b34", "title": "Code completion by modeling flattened abstract syntax trees as graphs", "year": "2021" }, { "authors": "Guipeng Xv; Chen Lin; Hui Li; Jinsong Su; Weiyao Ye; Yewang Chen", "journal": "", "ref_id": "b35", "title": "Neutralizing popularity bias in recommendation models", "year": "2022" }, { "authors": "Chenyuan Zhang; Yanlin Wang; Zhao Wei; Yong Xu; Juhong Wang; Hui Li; Rongrong Ji", "journal": "", "ref_id": "b36", "title": "Ealink: An efficient and accurate pre-trained framework for issue-commit link recovery", "year": "2023" }, { "authors": "Zhi-Hua Zhou", "journal": "", "ref_id": "b37", "title": "Ensemble learning. 
In Encyclopedia of Biometrics", "year": "2009" }, { "authors": "Qihao Zhu; Zeyu Sun; Xiran Liang; Yingfei Xiong; Lu Zhang", "journal": "", "ref_id": "b38", "title": "Ocor: An overlapping-aware code retriever", "year": "2020" } ]
[ { "formula_coordinates": [ 6, 112.92, 319.52, 176.95, 13.94 ], "formula_id": "formula_0", "formula_text": "R = Score original c + P (T e < T m ),(1)" } ]
2024-02-29
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10" ], "table_ref": [], "text": "In recent years, deep learning models have shown great successes in biomedical applications, including computer-aided detection/diagnosis, image segmentation, image generation, disease staging and prediction, based on the radiological images or physiological recordings, such as computed tomography (CT), Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI) and electroencephalogram/Magnetoencephalography (EEG/MEG) [1,2,3]. Despite their many strengths, CNN-based models which is good at extracting high level features can be benefited by incorporating all clinical factors at the same time for the downstream classification tasks. Some researches have focused on integrating medical image and clinical features with deep learning frameworks, which improves precision and the individualization of preventive, diagnostic, and therapeutic strategies [4].\nGraph representation learning that can incorporate node features as well as utilize the connectivity/similarity informa-tion among all the nodes achieved good success for machine learning tasks on the graph structured data, such as graph convolutional neural network (GCN) [5]. GCN's capability of leveraging both information of nodes (representing features of entities) and edges (representing connections or relationships between nodes), allowing for feature aggregation in the network level. Advanced versions of GCN include graph attention networks (GATs) [6], graph transformer networks [7] etc. Many research studies have employed graph-based methods to predict chronic diseases [8], mental disorders [9,10], and Alzheimer's disease [11]. In this paper, we propose a multimodal framework integrating both image features and clinical features by building a contrastive graph cross-view learning approach where the graph represents the similarity of individuals in the embedded space for detecting the PD." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Multi-Modality Fusion Methods", "publication_ref": [ "b11", "b12" ], "table_ref": [], "text": "Image-based models historically relied on unimodal input, which were commonly used for disease classification. However, these structured medical images alone are not effective integrating the physiological or numerical characteristics of patients. Existing work showed the effectiveness of integrating brain SPECT images and DNA methylation data in a multi-co-attention model [12], and also hybrid models that integrate CNN and LSTM are used to incorporate dynamic and static speech features to diagnose early physically incapacitating symptoms in the PD patients [13]." }, { "figure_ref": [], "heading": "Graph Representation Learning", "publication_ref": [ "b13", "b14", "b15", "b16" ], "table_ref": [], "text": "Single graph structure is proposed as a way to combine image and non-image characteristics [14,15]. However, the image and non-image data belong to different types or require two types of graph network structures, such as topology graph and feature graph, should be considered for better feature fusion [16]. 
To improve multimodal classification performance, a GAT has been used with an attention mechanism to build a graph containing image features as well as clinical features, with category-wise attention and node features updated according to importance scores [17]." }, { "figure_ref": [], "heading": "Supervised Graph Contrastive Learning", "publication_ref": [ "b17", "b18", "b19", "b20", "b21" ], "table_ref": [], "text": "Supervised contrastive learning, as demonstrated by its significant successes in image applications such as visual representation learning [18,19], primarily aims to enhance the similarity among positive pairs while simultaneously augmenting the dissimilarity between negative pairs [20]. Contrastive learning with two graph views was proven effective in fMRI-based neuroimaging classification in a previous medical study that sought to improve the diagnosis of neurological disorders in autism and dementia [21]. Investigations have been conducted on the design of contrastive losses, such as the InfoNCE loss [22], which maximizes the consistency of positive pairs and uses negative sampling to increase the number of negative pairs from different batches for k classes." }, { "figure_ref": [], "heading": "METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Features Extraction and Graph Construction", "publication_ref": [], "table_ref": [], "text": "We denote a dataset of N patients with the i-th patient's SPECT image denoted as X m i and non-image features denoted as X f i , where X m i ∈ R µ×ν and X f i ∈ R F with F features, and the label matrix Y ∈ R N ×C with C classes. The multimodal dataset can be described as\n{X m i , X f i , Y i } N i=1\n, where Y i is the i-th row of Y . In the first stage, we use a CNN-based autoencoder h(•) for image feature extraction and flatten the output image feature matrix Q m in the final layer. Then, we construct two adjacency matrices A m , A f ∈ R N ×N using the K nearest neighbors (KNN) algorithm based on the features obtained from the CNN encoder and the non-image clinical features (such as patient age, biomarkers, symptoms, etc.) of each subject. Our proposed framework considers the two graphs, G m (X m , A m ) and G f (X f , A f ), as different domain inputs. We also consider a self-looped adjacency matrix Â = A + I between the patients. In a K-neighborhood, two data points i and j are connected by an edge e (i,j) if i is amongst the K nearest neighbors of j, or vice versa." }, { "figure_ref": [ "fig_0" ], "heading": "Graph Encoder and Cross-View Fusion", "publication_ref": [], "table_ref": [], "text": "In this stage, we construct two GATs to learn the graph structures of G m (A m , X m ) and G f (A f , X f ), as depicted in Fig. 1. We utilize the encoded image features X m and clinical features X f as node attributes for the GAT inputs. Moreover, we introduce a GAT architecture that incorporates dual perspectives, enabling the generation of embeddings for neighboring nodes. 
The most common expression for attention coefficients as applied to our two cross-views is as follows:\nα ij = exp LeakyReLU (⃗ a T ij W ⃗ h i ∥ W ⃗ h j N k=1 exp LeakyReLU (⃗ a T ij W ⃗ h i ∥ W ⃗ h k(1)\nFinally, we can obtain each feature ⃗ h ′ for two cross-views feature representations as shown in Equation (2):\n⃗ h ′ = σ   1 K N k=1 j∈Ni α k ij W k ⃗ h j   ,(2)\nwhere α k ij and W k are the attention mechanism and linear transformation weight matrix, and N i denotes the set of neighborhood nodes of i.\nThe extracted nodes representation from the GAT output is denoted as Z m = f (G m ), and the non-image feature embeddings are represented by Z f = f (G f ), where Z m and Z f are the embeddings in a low-dimensional space R F ′ . Afterward, we concatenated the encoder matrix Q m with Z m to form C m , and the clinical features X f with Z f to create C f as shown below:\nC m = [Q m ∥ Z m ](3)\nC f = X f ∥ Z f(4)\nwhere the C m ∈ R N ×(F +F ′ ) and C f ∈ R N ×(F +F ′ ) represent the two concatenated matrices from cross-views. The improved fusion embedding Ẑm and Ẑf , can be obtained by σ(C m W m ) and σ(C f W f ) respectively, where W m and W f are trainable weight matrices, and σ(•) is non-linear activation function." }, { "figure_ref": [], "heading": "Contrastive Cross-View Loss", "publication_ref": [ "b22" ], "table_ref": [], "text": "In order to learn the common embedding Ẑ, we fuse the two cross-views of node embeddings between Ẑm and Ẑf as\nẐ = Ẑm + Ẑf(5)\nTo better integrate the feature spaces of image and nonimage data in the same embedded space, we constructed a similarity matrix S ∈ R N ×N for each pair of similar patients using the final embedding Ẑ learned from the model. We can define the similarity between the i-th and j-th patients as follows:\nS ij = Ẑi • ( Ẑj ) T , ∀i, j ∈ [1, N ](6)\nIn order to enhance the effectiveness of fusing two types of view embeddings in contrastive learning, we have designed positive and negative losses to capture the differences in distance between positive and negative pairs in terms of the similarity and dissimilarity of our samples. The definitions of positive pair D pos = S ⊙ ( Âm ⊙ Âf ), while negative pair\nD neg = (I -S) ⊙ (I -Âm ) ⊙ (I -Âf )\n, where I denotes the matrix with all elements being 1 with the related dimension, and the two adjacency matrices with self-looped is denoted as Âm and Âf [23]. Then, we can present the loss function of positive and negative pairs as shown below:\nL pos = -∥D pos • Y ∥ 2 2 L neg = -∥max{D neg -δI, 0}(I -Y )∥ 2 2(7)\nwhere the δ > 0 is the controllable margin and Y is the label matrix. By using Eq. 7, we can ultimately obtain the combined losses, incorporating both positive and negative loss, written as: L contrastive = L pos + L neg . By minimizing L contrastive , the similarity intra-class and the dissimilarity inter-class can be maximized." }, { "figure_ref": [], "heading": "Optimization Objective Function", "publication_ref": [ "b10" ], "table_ref": [], "text": "To optimize the loss function and predict final disease probability, we considered embedding both Ẑm and Ẑf in the supervised classification loss using the softmax function. The cross-entropy loss function can be written as:\nL m = - N i=1 y T i ln(softmax(ŷ m i ))(8)\nL f = - N j=1 y T j ln(softmax(ŷ f j ))(9)\nDuring the optimization process, we also designed the overall loss function to combine cross-entropy and contrastive loss from the two cross views. 
To effectively improve the cross-graph view module, we also took into account the mean square error loss between the similarity matrix S and the diagonal matrix D ii = i A ii when computing the clustering of the view structure of the two modules.\nL diag = 1 N N i,j (S ij -D ii ) 2(10)\nWe use the β coefficient to control the optimization weight of the overall loss defined as follows: (11) where β can be set between 0 and 1. The contribution level of different losses is controlled through the coefficient of β.\nL = (1 -β)(L m + L f ) + βL contrastive + L diag" }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b23", "b24" ], "table_ref": [], "text": "Our data was collected at Kaohsiung Chang Gung Memorial Hospital in Taiwan from January 2017 to Jun 2019 with 416 patients [24]. The data was annotated by four expert physicians to provide a label either as normal or abnormal PD. Tc99m TRODAT SPECT/CT images were acquired using a hybrid SPECT/CT system (Symbia T, Siemens Medical Solution). SPECT images were obtained with 30s per step, acquiring 120 projections over a circular 360-degree rotation using low-energy, high-resolution parallel-hole collimators.\nAfter reconstruction with CT attenuation correction, SPECT images were imported into DaTQUANT for automatic semi-quantification [25]. Twelve parameters were obtained from DaTQUANT: Striatum Right (S-R), Striatum Left (S-L), Anterior Putamen Right (AP-R), Anterior Putamen Left (AP-L), Posterior Putamen Right (PP-R), Posterior Putamen Left (PP-L), Caudate Right (C-R), Caudate Left (C-L), Putamen/Caudate Ratio Right (P/C-R), Putamen/Caudate Ratio Left (P/C-L), Putamen Asymmetry (PA), and Caudate Asymmetry (CA) as shown in Table 1. Afterwards, images from four patients were removed from the dataset due to quality issues. We used the remaining 412 SPECT images and quantitative DaTQUANT data for model training (n=312) and testing (n=100), and we conducted five-fold cross validation." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Ablation Study", "publication_ref": [ "b25", "b26" ], "table_ref": [ "tab_0", "tab_0" ], "text": "In this study, we conducted a comparative analysis using popular machine learning algorithms (logistic regression and Xgboost) as the baseline methods [26] and existing methods Table 1.\nSummary statistics for 12 non-image automatic semi-quantification features were generated using the DaTQUANT software. AM-GCN [27] in the ablation experiments using non-image DaTQUANT data. Additionally, we employed a two-layered CNN model with a ResNet18 backbone, specifically utilizing ResNet18 for the classification of SPECT images. As shown in Table 2, relying solely on the CNN model for prediction did not lead to superior performance.\nIn our research, we began by extracting image features using a CNN model and non-imaging DaTQUANT variables, followed by constructing two cross-views of the graph representation. We subsequently concatenated these different modalities to improve the predictive capability after model fusion. Table 2 showcases our experimental results from utilizing two methods (GCN and GAT) for learning graph structures and generating embeddings. The results revealed that, when a cross-view approach was employed, the GAT method achieved a macro average accuracy rate of 91%, along with F1, sensitivity, and precision scores of 92% in normal and abnormal classes. 
Additionally, our method can achieve a 5-fold cross-validated AUC of 92.8%, as shown in Fig. 2 (B).\nTo conclude, we investigated the sensitivity of the overall model to the parameter K in our proposed method for constructing K-nearest neighbor graphs. Figure 3 clearly demonstrates the results of five experimental runs, showcasing a remarkably robust performance with an average sensitivity and specificity of 0.89." }, { "figure_ref": [ "fig_1" ], "heading": "RESULTS AND DISCUSSION", "publication_ref": [], "table_ref": [], "text": "In summary, we successfully integrated the automatic semi-quantification features from both image and non-image data to enhance prediction accuracy in the PD classification task. By utilizing cross-view graph structured information, we successfully predicted the distribution of twelve non-imaging parameters for both normal and abnormal cases within a low-dimensional space, resulting in effective clustering and interpretation, as depicted in Fig. 2 (A). Our research findings indicate that models based solely on CNNs can have certain limitations in interpreting image features. However, those limitations can be overcome to some extent through the integration of non-imaging data and the application of contrastive loss learning, which significantly enhanced the overall performance and predictive capacity of our model." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENT:", "publication_ref": [], "table_ref": [], "text": "We are grateful to the Department of Nuclear Medicine at Kaohsiung Chang Gung Memorial Hospital for providing us with comprehensive data and data labeling support. Research reported in this publication was partially supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under Award Number R21EB033455 and a research grant from the New Jersey Health Foundation under grant number PC 40-23. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health nor the New Jersey Health Foundation." } ]
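The graph construction and cross-view fusion steps described in the methodology (KNN adjacency matrices with self-loops, additive fusion of the two view embeddings, and the patient-similarity matrix) can be sketched in a few lines of Python. This is a simplified illustration under assumed feature dimensions, not the authors' code; the random arrays stand in for the flattened CNN image features, the twelve DaTQUANT clinical features, and the two GAT-derived embeddings.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def knn_adjacency(features, k=5):
    # Symmetric K-nearest-neighbour adjacency with self-loops (A_hat = A + I):
    # i and j are connected if either is among the other's K nearest neighbours.
    A = kneighbors_graph(features, n_neighbors=k, mode="connectivity").toarray()
    A = np.maximum(A, A.T)
    return A + np.eye(len(features))

rng = np.random.default_rng(0)
img_feats = rng.normal(size=(8, 64))    # stand-in for flattened CNN features Q_m
clin_feats = rng.normal(size=(8, 12))   # stand-in for the 12 DaTQUANT features

A_m = knn_adjacency(img_feats, k=3)     # image-view graph
A_f = knn_adjacency(clin_feats, k=3)    # clinical-view graph

# Stand-ins for the two view embeddings; Eq. (5) fuses them by addition and
# Eq. (6) builds the patient-similarity matrix S = Z Z^T.
Z_m = rng.normal(size=(8, 16))
Z_f = rng.normal(size=(8, 16))
Z = Z_m + Z_f
S = Z @ Z.T
print(A_m.shape, A_f.shape, S.shape)    # (8, 8) (8, 8) (8, 8)
```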
Parkinson's Disease (PD) affects millions globally, impacting movement. Prior research utilized deep learning for PD prediction, primarily focusing on medical images, neglecting the data's underlying manifold structure. This work proposes a multimodal approach encompassing both image and nonimage features, leveraging contrastive cross-view graph fusion for PD classification. We introduce a novel multimodal co-attention module, integrating embeddings from separate graph views derived from low-dimensional representations of images and clinical features. This enables extraction of more robust and structured features for improved multi-view data analysis. Additionally, a simplified contrastive loss-based fusion method is devised to enhance cross-view fusion learning. Our graph-view multimodal approach achieves an accuracy of 91% and an AUC of 92.8% in five-fold cross-validation. It also demonstrates superior predictive capabilities on nonimage data compared to solely machine learning-based methods.
PARKINSON'S DISEASE CLASSIFICATION USING CONTRASTIVE GRAPH CROSS-VIEW LEARNING WITH MULTIMODAL FUSION OF SPECT IMAGES AND CLINICAL FEATURES
[ { "figure_caption": "Fig. 1 .1Fig. 1. The workflow of multimodal contrastive cross-view graph learning framework.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Visualization of the predictions of the two-graph cross-view GAT model, incorporating three variables from twelve parameters. Figure (A) Scatter plots of three parameters derived from DaTQUANT to explore the data distribution for normal versus abnormal TRODAT SPECT images. Figure (B) Five-fold cross-validation of ROC curves for each testing set.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. The mean and standard deviation performance of our proposed model in terms of sensitivity and specificity across five runs on testing data, based on a varying number of Kneighborhoods.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Performance comparison between the proposed model and machine learning methods using image and non-image data.", "figure_data": "ModelBackboneImage Non-imageNormal AbnormalACC0.80Logistic-F1 SEN 0.82 0.820.79 0.79NoYesPRE 0.82 ACC0.790.79Xgboost-F1 SEN 0.82 0.800.76 0.75BaselinePRE 0.79 ACC0.890.78ResNet18-F1 SEN 0.92 0.900.88 0.87NoPRE 0.89 ACC0.850.902-layer CNN-F1 SEN 0.83 0.850.84 0.87PRE 0.880.82ACC0.86Existing Methods AM-GCN GCN+AttentionF1 SEN 0.83 0.860.85 0.88YesPRE 0.89 ACC0.870.822-layer CNNF1 SEN 0.88 0.880.85 0.85GCN+GCNPRE 0.87 ACC0.880.86ResNet18F1 SEN 0.87 0.890.88 0.90Proposed ModelYesPRE 0.91 ACC0.860.852-layer CNNF1 SEN 0.88 0.870.84 0.83GAT+GATPRE 0.85 ACC0.910.86ResNet18F1 SEN 0.92 0.920.90 0.90PRE 0.920.90", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" } ]
Jun-En Ding; Chien-Chin Hsu; Feng Liu
[ { "authors": "Dinggang Shen; Guorong Wu; Heung-Il Suk", "journal": "Annual review of biomedical engineering", "ref_id": "b0", "title": "Deep learning in medical image analysis", "year": "2017" }, { "authors": "Yuki Shu Lih Oh; U Hagiwara; Rajamanickam Raghavendra; N Yuvaraj; M Arunkumar; U Murugappan; Acharya Rajendra", "journal": "Neural Computing and Applications", "ref_id": "b1", "title": "A deep learning approach for parkinson's disease diagnosis from eeg signals", "year": "2020" }, { "authors": "Meng Jiao; Guihong Wan; Yaxin Guo; Dongqing Wang; Hang Liu; Jing Xiang; Feng Liu", "journal": "Frontiers in Neuroscience", "ref_id": "b2", "title": "A graph fourier transform based bidirectional long short-term memory neural network for electrophysiological source imaging", "year": "2022" }, { "authors": "Guido J Julián N Acosta; Pranav Falcone; Eric J Rajpurkar; Topol", "journal": "Nature Medicine", "ref_id": "b3", "title": "Multimodal biomedical ai", "year": "2022" }, { "authors": "Thomas N Kipf; Max Welling", "journal": "", "ref_id": "b4", "title": "Semi-supervised classification with graph convolutional networks", "year": "2017" }, { "authors": "Petar Veličković; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Liò; Yoshua Bengio", "journal": "", "ref_id": "b5", "title": "Graph attention networks", "year": "2017" }, { "authors": "Seongjun Yun; Minbyul Jeong; Raehyun Kim; Jaewoo Kang; Hyunwoo J Kim", "journal": "Advances in neural information processing systems", "ref_id": "b6", "title": "Graph transformer networks", "year": "2019" }, { "authors": "Juan ; Diaz Ochoa; Faizan E Mustafa", "journal": "Artificial Intelligence in Medicine", "ref_id": "b7", "title": "Graph neural network modelling as a potentially effective method for predicting and analyzing procedures based on patients' diagnoses", "year": "2022" }, { "authors": "Du Kun Qin; Lei; Nanfang Walter Hl Pinaya; Wenbin Pan; Ziyu Li; John A Zhu; Andrea Sweeney; Qiyong Mechelli; Gong", "journal": "EBioMedicine", "ref_id": "b8", "title": "Using graph convolutional network to characterize individuals with major depressive disorder across multiple imaging sites", "year": "2022" }, { "authors": "Shulin Wen; Shihao Yang; Xinglong Ju; Ting Liao; Feng Liu", "journal": "Springer", "ref_id": "b9", "title": "Prediction of cannabis addictive patients with graph neural networks", "year": "2023" }, { "authors": "Yonghua Zhu; Junbo Ma; Changan Yuan; Xiaofeng Zhu", "journal": "Information Fusion", "ref_id": "b10", "title": "Interpretable learning based dynamic graph convolutional networks for alzheimer's disease analysis", "year": "2022" }, { "authors": "Devin Taylor; Simeon E Spasov; Pietro Liò", "journal": "", "ref_id": "b11", "title": "Coattentive cross-modal deep learning for medical evidence synthesis and decision making", "year": "2019" }, { "authors": "U Lilhore; Surjeet Dalal; Neetu Faujdar; Martin Margala; Prasun Chakrabarti; T Chakrabarti; Sarita Simaiya; Pawan Kumar; P Thangaraju; Hemasri Velmurugan", "journal": "Scientific Reports", "ref_id": "b12", "title": "Hybrid cnn-lstm model with efficient hyperparameter tuning for prediction of parkinson's disease", "year": "2023" }, { "authors": "Fuzhen Hao Chen; Li Zhuang; Li Xiao; Ling Xiao; Ling Ma; Haiyan Ma; Ruifang Liu; Huiqin Zhang; Huiqin Jiang; Qing Jiang; He", "journal": "", "ref_id": "b13", "title": "Ama-gcn: Adaptive multilayer aggregation graph convolutional network for disease prediction", "year": "2021" }, { "authors": "Liqin Huang; Xiaofang Ye; Mingjing Yang; Lin Pan; 
Zheng Shao Hua", "journal": "Computers in Biology and Medicine", "ref_id": "b14", "title": "Mnc-net: Multi-task graph structure learning based on node clustering for early parkinson's disease diagnosis", "year": "2022" }, { "authors": "Xiao Wang; Xiao Wang; Meiqi Zhu; Deyu Bo; Peng Cui; Chuan Shi; Jian Pei", "journal": "Knowledge Discovery and Data Mining", "ref_id": "b15", "title": "Am-gcn: Adaptive multi-channel graph convolutional networks", "year": "2020" }, { "authors": "H Cui; P Xuan; Qiangguo Jin; Mingjun Ding; Butuo Li; B Zou; Yiyue Xu; B Fan; Wanlong Li; Jinming Yu; Linlin Wang; H Duh", "journal": "International Conference on Medical Image Computing and Computer-Assisted Intervention", "ref_id": "b16", "title": "Co-graph attention reasoning based imaging and clinical features integration for lymph node metastasis prediction", "year": "2021" }, { "authors": "Prannay Khosla; Piotr Teterwak; Chen Wang; Chen Wang; Aaron Sarna; Yonglong Tian; Phillip Isola; A Maschinot; Aaron Maschinot; Ce Liu; Ce Liu; Dilip Krishnan", "journal": "", "ref_id": "b17", "title": "Supervised contrastive learning", "year": "2020" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey E Hinton", "journal": "CoRR", "ref_id": "b18", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Yue Liu; Xihong Yang; Sihang Zhou; Xinwang Liu", "journal": "", "ref_id": "b19", "title": "Simple contrastive graph clustering", "year": "2022" }, { "authors": "Liang Peng; Nan Wang; Jie Xu; Xiao Lan Zhu; Xiaoxiao Li", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b20", "title": "Gate: Graph cca for temporal selfsupervised learning for label-efficient fmri analysis", "year": "2022" }, { "authors": "Aaron Van Den Oord; Yazhe Li; Oriol Vinyals", "journal": "", "ref_id": "b21", "title": "Representation learning with contrastive predictive coding", "year": "2018" }, { "authors": "Yue Liu; Xihong Yang; Sihang Zhou; Xinwang Liu", "journal": "", "ref_id": "b22", "title": "Simple contrastive graph clustering", "year": "2022" }, { "authors": "Jun-En Ding; Chi-Hsiang Chu; Mong-Na Lo Huang; Chien-Ching Hsu", "journal": "", "ref_id": "b23", "title": "Dopamine transporter spect image classification for neurodegenerative parkinsonism via diffusion maps and machine learning classifiers", "year": "2021" }, { "authors": "Jacquelyn Elizabeth Brogley", "journal": "Journal of Nuclear Medicine Technology", "ref_id": "b24", "title": "Datquant: The future of diagnosing parkinson disease", "year": "2019" }, { "authors": "Shih-Yen Hsu; Hsin-Chieh Lin; Tai-Been Chen; Wei-Chang Du; Yun-Hsuan Hsu; Yichen Wu; Yi-Chen Wu; Po-Wei Tu; Yung-Hui Huang; Huei-Yung Chen", "journal": "Sensors", "ref_id": "b25", "title": "Feasible classified models for parkinson disease from 99mtc-trodat-1 spect imaging", "year": "2019" }, { "authors": "Xiao Wang; Meiqi Zhu; Deyu Bo; Peng Cui; Chuan Shi; Jian Pei", "journal": "", "ref_id": "b26", "title": "Am-gcn: Adaptive multi-channel graph convolutional networks", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 223.59, 580.08, 71.63, 13.68 ], "formula_id": "formula_0", "formula_text": "{X m i , X f i , Y i } N i=1" }, { "formula_coordinates": [ 2, 322.58, 440.75, 236.41, 34.5 ], "formula_id": "formula_1", "formula_text": "α ij = exp LeakyReLU (⃗ a T ij W ⃗ h i ∥ W ⃗ h j N k=1 exp LeakyReLU (⃗ a T ij W ⃗ h i ∥ W ⃗ h k(1)" }, { "formula_coordinates": [ 2, 366.34, 521.81, 192.65, 33.76 ], "formula_id": "formula_2", "formula_text": "⃗ h ′ = σ   1 K N k=1 j∈Ni α k ij W k ⃗ h j   ,(2)" }, { "formula_coordinates": [ 2, 400.03, 695.03, 158.97, 11.03 ], "formula_id": "formula_3", "formula_text": "C m = [Q m ∥ Z m ](3)" }, { "formula_coordinates": [ 2, 401.67, 712.26, 157.33, 11.03 ], "formula_id": "formula_4", "formula_text": "C f = X f ∥ Z f(4)" }, { "formula_coordinates": [ 3, 148.05, 217.38, 150.16, 11.47 ], "formula_id": "formula_5", "formula_text": "Ẑ = Ẑm + Ẑf(5)" }, { "formula_coordinates": [ 3, 112.59, 321.72, 185.62, 12.17 ], "formula_id": "formula_6", "formula_text": "S ij = Ẑi • ( Ẑj ) T , ∀i, j ∈ [1, N ](6)" }, { "formula_coordinates": [ 3, 54.43, 415.76, 170.21, 12.17 ], "formula_id": "formula_7", "formula_text": "D neg = (I -S) ⊙ (I -Âm ) ⊙ (I -Âf )" }, { "formula_coordinates": [ 3, 95.82, 489.5, 202.39, 28.87 ], "formula_id": "formula_8", "formula_text": "L pos = -∥D pos • Y ∥ 2 2 L neg = -∥max{D neg -δI, 0}(I -Y )∥ 2 2(7)" }, { "formula_coordinates": [ 3, 108.66, 696.15, 189.55, 30.32 ], "formula_id": "formula_9", "formula_text": "L m = - N i=1 y T i ln(softmax(ŷ m i ))(8)" }, { "formula_coordinates": [ 3, 372.41, 84.23, 186.59, 30.32 ], "formula_id": "formula_10", "formula_text": "L f = - N j=1 y T j ln(softmax(ŷ f j ))(9)" }, { "formula_coordinates": [ 3, 379.35, 215.73, 179.65, 30.32 ], "formula_id": "formula_11", "formula_text": "L diag = 1 N N i,j (S ij -D ii ) 2(10)" }, { "formula_coordinates": [ 3, 329.02, 288.57, 198.82, 9.65 ], "formula_id": "formula_12", "formula_text": "L = (1 -β)(L m + L f ) + βL contrastive + L diag" } ]
10.1145/3503161.3548054
2023-11-25
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b4", "b11", "b23", "b40", "b45", "b46", "b10", "b27", "b31", "b1", "b18", "b6", "b26", "b0", "b24", "b13", "b34", "b43", "b34", "b20" ], "table_ref": [], "text": "Although the modern deep learning network (DNN) has shown good performance on the task at hand [5,12,24,41,46,47], they always violently forget the previous knowledge when they learn new tasks. Such a phenomenon is known as catastrophic forgetting, which is the research hotspot in Continual Learning (CL) [11,28,32]. In the training phase of CL, the model needs to learn on a stream of tasks sequentially. This requires the network to have plasticity, that is, the ability to learn fresh knowledge from unseen tasks. At the same time, it is critical for the network to retain previously learned knowledge, i.e., the stability ability. However, \"you can't have your cake and eat it too\": the dilemma of stability-plasticity is what the continual learning networks need to solve primarily.\nMany approaches have been proposed to deal with this dilemma, including i) regularization-based methods [2,19], ii) memory-based methods [7,27], and iii) expansion-based methods [1,25]. To be more specific, regularization-based methods focus on penalizing the modification of the most important weights of the network so that the previous knowledge can be preserved. To maintain the performance on previous tasks, memory-based approaches replay the data of old tasks or the synthetic data from generative models to jointly train with present task samples. In contrast, expansionbased methods seek to expand the architecture of the network to learn new knowledge.\nRecently, the gradient projection based paradigms [14,35,44] have achieved remarkable performance in tackling the catastrophic forgetting problem in CL. Existing gradient projection based methods put explicit constraints on the gradient directions that the optimizer takes. For example, GPM [35] computes the bases of these gradient subspace based on the representations learned by each learning task. In the learning phase of the next task, new gradient steps in the orthogonal direction to these gradient subspace would be taken. A simple illustration is shown in Fig. 1 (a). Despite the impressive progress, there is a critical problem for those gradient projection based methods. Considering a simple scenario where the task at hand is to classify the given image as \"Man\" or \"Sea\". The class deviation between them is significantly large. In other words, their gradient subspace could be so different from each other. Therefore, if we directly perform Singular Value Decomposition (SVD) [21] on the representations from all classes of a task, some bases for \"apple\" or \"car\" may be missed. When the gradient projection is performed in the subsequent tasks, the projected gradient is greatly disturbed; as a consequence, the model forgets the knowledge learned previously, as illustrated in Fig. 1 " }, { "figure_ref": [], "heading": "(b).", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose Class Gradient Projection (CGP) to address the side-effect of class deviation in gradient projection. CGP calculates the gradient subspace with the representations from individual classes rather than tasks, such that the gradient update steps orthogonal to the constructed classes subspace can be effectively utilized to minimize the interference from other classes. 
Based on this framework, two effective variants are developed to further alleviate catastrophic forgetting. Specifically, to construct a more informative gradient subspace for each incoming task, we design a Base Refining (BR) algorithm to refine class bases dynamically. BR adopts an inter-class similarity estimation module to guide the partition of classes into the built bases. In addition, considering that the update steps are orthogonal to the gradient subspace of previous classes, the model may lack optimization space for unseen classes. Therefore, we introduce a supervised contrastive loss to learn more representative and robust features by pulling embedding from positive samples closer and pushing embedding from negative samples apart. The contrastive learning can guide continuous optimization of the model when its gradient is projected by more and more classes subspace. The designed two variants can complement each other to enhance the plasticity and stability of the model simultaneously. In summary, the main contributions can be summarized as follows:\n• We propose Class Gradient Projection (CGP), which effectively utilizes the gradient update steps orthogonal to the constructed class subspace to minimize the negative interference between classes. " }, { "figure_ref": [], "heading": "RELATED WORK 2.1 Continual Learning", "publication_ref": [ "b1", "b21", "b29", "b44", "b16", "b24", "b30", "b33", "b35", "b39", "b2", "b26", "b32", "b37", "b1", "b44", "b21", "b29", "b33", "b39", "b30", "b35", "b26", "b32", "b37", "b13", "b34", "b34" ], "table_ref": [], "text": "Prevalent approaches to address the catastrophic forgetting problem in continual learning can be grouped into three broad categories: regularization-based approaches [2,22,30,45], expansion-based approaches [17,25,31,34,36,40], and memory-based approaches [3,27,33,38].\nRegularization-based methods aim at avoiding excessive changes in the parameters learned on old tasks when learning a new task. Typically, these methods [2,45] estimate importance weights for each model parameter. Then the changes of the important parameters are penalized by a regularizer for previous tasks. Under the Bayesian framework, VCL and IMM [22,30] take the posterior distribution of network parameters learned from previous tasks as the prior distribution of network parameters on the current task, which implicitly penalizes the changes of network parameters under the Bayesian framework. However, our method works in putting constraints on gradient descent rather than ascribing importance to parameters and penalizing the changes.\nThe basic idea of expansion-based approaches is to directly add or modify the model structure. Some methods (e.g. [34]) add a network to each task and lateral connections to the network of the previous task. MNTDP [40] proposes a modular layer network approach, whose modules represent atomic skills that can be composed to perform a certain task and provides a learning algorithm to search the modules to combine with. These strategies may work well, but they are computationally expensive and memory intensive. Finally, other approaches [31,36] provide an alternative approach that assigns the different sub-networks or weights. In contrast to these methods, our method works within a fixed network architecture.\nFor memory-based methods, catastrophic forgetting is avoided by storing data from previous tasks and training them together with data from the current task. 
Some methods [27,33,38] use replayed samples from previous tasks to constrain the parameters' update when learning the new task. Others rely on synthetic data from generative models, so they do not need to store data from previous tasks, but their performance is significantly affected by the quality of the generated data, especially for complex natural images. Instead of saving the raw data or generated data, our method saves the subspace bases for future task learning, avoiding the privacy problem.\nRecently, a series of continual learning methods combined with orthogonal projection have been proposed [14,35]. Gradient projection methods update the model with gradients in the orthogonal directions of old tasks, without access to old task data. GPM [35] finds the bases of these subspaces by analyzing network representations after learning each task with Singular Value Decomposition (SVD) in a single-shot manner and stores them in the memory as gradient projection memory. In this paper, to address the class deviation in gradient projection, we propose class gradient projection. \nFigure 2: An overview of our Class Gradient Projection (CGP) network. First, to alleviate the class deviation, CGP constructs the gradient subspace for each class with a base refining module. Second, CGP projects the gradient of the new task orthogonal to the constructed gradient subspace. Moreover, we develop a contrastive loss to guide the continuous optimization of the model when its gradient is projected by more and more constructed gradient subspaces.\nThrough such class gradient projection, the network achieves better stability and preserves more learned knowledge." }, { "figure_ref": [], "heading": "Contrastive Learning", "publication_ref": [ "b36", "b22", "b14", "b38", "b8", "b9", "b17", "b5" ], "table_ref": [], "text": "In recent years, contrastive learning has shown superior performance, even in competition with supervised training. Supervised contrastive learning (SCL) extends the standard contrastive loss by incorporating label information to construct positive and negative pairs [37]. Prototypical Contrastive Learning (PCL) [23] uses the centroids of clusters as prototypes and pulls the image embedding closer to its prototypes. Noise-Contrastive Estimation [15] is the seminal work that estimates the latent distribution by contrasting it with artificial noise. In addition, CPC [39] tries to learn representations from visual inputs by leveraging an autoregressive model to predict the future in an unsupervised manner. These studies [9,10] have resolved practical limitations that previously made learning difficult, such as the need for negative sample pairs, large batch sizes, and momentum encoders. Meanwhile, it has been shown that supervised learning can also enjoy the benefits of contrastive representation learning by simply using labels to extend the definition of positive samples [18]. Another method [6] combines contrastive learning with continual learning. It uses samples of the current task as anchors and samples of previous tasks as negative samples. Different from this method, our method does not need to replay previous samples. Instead, we perform augmentation on the input samples. A sample and its augmented counterpart make up a positive pair, while the remaining samples serve as negative samples. Then we introduce a contrastive loss to pull embeddings from positive samples closer and to push embeddings from negative samples apart. 
This contrastive learning encourages the network to learn more representative and robust features of tasks." }, { "figure_ref": [], "heading": "PRELIMINARIES 3.1 Continual Learning", "publication_ref": [ "b34" ], "table_ref": [], "text": "In the setup of supervised continual learning, a series of 𝑇 tasks are learned sequentially. We denote the task by its task descriptor, 𝜏 ∈ {1, 2, ...,𝑇 } and its corresponding dataset D 𝜏 = {(𝑥 𝜏,𝑖 , y 𝜏,𝑖 ) 𝑁 𝜏 𝑖=1 } which has 𝑛 𝜏 example pairs. The 𝑥 𝜏,𝑖 (∈ X ) is the input vector and y 𝜏,𝑖 (∈ Y ) is the target vector. A DNN model parameterized with Φ = {W, 𝜑 } is used to learn a mapping in the form ŷ𝜏 = f map (𝑥 𝜏 ; Φ). Here, W 𝜏 = {W l 𝜏 } 𝐿 𝑙=1 represents a 𝐿 layer neural network, where W 𝑙 𝜏 is the layer-wise weight for layer l and task 𝜏. In each layer, the layer network computes the output x l+1 𝜏,i for next layer:\nO l 𝜏,i = f (𝑥 𝑙 𝜏,𝑖 ; W l 𝜏 ), x l+1 𝜏,i = 𝜎 𝑙 (O l 𝜏,i\n), with 𝜎 𝑙 is a non-linear function for layer 𝑙 and f is the linear function for layer. Following [35], at the first layer, the x 1 𝜏,i = x 𝜏,i represents the raw input data from task 𝜏. Whereas in the subsequent layers we define x l 𝜏,i as the representations of x 𝜏,i at layer 𝑙. The output of final neural network x L+1 𝜏,i is then passed through a classifier parameterized by 𝜑 to produce the prediction ŷ = f (x L+1 𝜏,i ; 𝜑). The model is trained by minimizing the loss function for task 𝜏, e.g. cross-entropy loss\nΦ * = minimize Φ 𝑁 𝑡 ∑︁ 𝑛=1 ℓ (𝑓 (x 𝜏,i , Φ), y 𝜏,i ),\nwhere Φ * denotes the optimal model for task 𝜏." }, { "figure_ref": [], "heading": "Gradient Projection", "publication_ref": [ "b13", "b34", "b34" ], "table_ref": [], "text": "Recently, a series of continual learning methods combined with orthogonal gradient projection have been proposed [14,35]. These methods update the model with gradients in the orthogonal directions of old tasks, without access to old task data. After learning the task 𝜏 completely, they construct the gradient space S 𝜏 using the samples of task 𝜏. When learning the task 𝜏 + 1, the gradient of model W 𝜏+1 is projected to the gradient subspace S 𝜏 of previous tasks to get the Proj ∇ W 𝜏 +1 𝐿 . Then the Proj ∇ W 𝜏 +1 𝐿 is substracted out from the origin gradient W 𝑡 +1 so that remaining gradient updates lie in the space orthogonal to 𝑆 𝜏 [35]." }, { "figure_ref": [], "heading": "METHOD", "publication_ref": [], "table_ref": [], "text": "As illustrated in the preliminaries, the gradient projection approaches construct the gradient subspace by the samples from all the classes of task 𝜏. However, we argue that class deviation in tasks may cause the calculated gradient space too biased to represent the task accurately, leading to the degradation of the performance. Moreover, it is critical to explore the optimization space for the new tasks. In this section, we propose a novel Class Gradient Projection (CGP) for continual learning. Fig. 2 illustrates the pipeline of our proposed continual learning approach. We are going to show how the CGP works at a high level. On the one hand, CGP constructs the gradient subspace with individual classes and projects the gradient update of new tasks to the direction orthogonal to the subspace of old classes. On another hand, CGP learns the representations with supervised contrastive learning to explore the optimization space for the new tasks. The training is done on the compound loss:\nL = L 𝑐𝑒 + 𝜆 • L 𝑐𝑜𝑛 ,(1)\nwhere L 𝑐𝑒 is the cross-entropy loss and L 𝑐𝑜𝑛 is the supervised contrastive learning." 
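Before the detailed construction in the next subsection, the two core operations just outlined (building class-wise subspace bases with SVD and taking gradient steps orthogonal to them) can be sketched as follows. This is a simplified numpy illustration, not the authors' implementation: the energy threshold eps stands in for the layer-wise threshold used in the k-rank criterion, and the full objective of Eq. (1) would simply combine the two terms as loss = ce_loss + lam * con_loss.

```python
import numpy as np

def project_gradient(grad, bases):
    # Remove the gradient component lying in the stored subspace so that the
    # update is orthogonal to the directions of previously learned classes.
    if bases is None:
        return grad
    return grad - grad @ bases @ bases.T

def extend_bases(reps, bases, eps=0.95):
    # Simplified single-class base construction: deflate the representation
    # matrix by the existing bases, run SVD, and keep enough left-singular
    # vectors to cover an eps fraction of the remaining energy.
    R = reps if bases is None else reps - bases @ (bases.T @ reps)
    U, sigma, _ = np.linalg.svd(R, full_matrices=False)
    total = float(np.sum(sigma ** 2))
    if total < 1e-12:                       # class already covered
        return bases
    k = int(np.searchsorted(np.cumsum(sigma ** 2) / total, eps) + 1)
    new = U[:, :k]
    return new if bases is None else np.concatenate([bases, new], axis=1)

# Toy usage: 20-dimensional layer activations, 32 samples per class.
rng = np.random.default_rng(0)
bases = None
for _ in range(3):                          # three classes seen so far
    bases = extend_bases(rng.normal(size=(20, 32)), bases)
grad = rng.normal(size=(10, 20))            # gradient of a 10 x 20 weight matrix
print(project_gradient(grad, bases).shape)  # (10, 20)
```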
}, { "figure_ref": [], "heading": "Class Gradient Projection", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce the bases construction and class gradient projection to show how it enables the network to learn continually without forgetting.\nWhen learning Task 𝜏 = 1: We use the training process as shown in Eq. 1 without any constraint. After the network learns completely, the Singular Value Decomposition (SVD) is performed on the representations to construct the bases. Specifically, for layer 𝑙 in W, we construct a representation matrix \nR l 1 = [𝑥 𝑙 1,1 , 𝑥 𝑙 1,2 , ...,\nU 𝑙 1,1 , Σ 𝑙 1,1 , (V 𝑙 1,1 ) 𝑇 = 𝑆𝑉 𝐷 (𝐶 𝑙 1,1 ),(2)\nwhere U 𝑙 1,1 is a 𝑚 × 𝑚 complex unitary matrix, and V 𝑙 1,1 is a 𝑛 × 𝑛 complex unitary matrix. Σ 𝑙 1,1 is a 𝑚 × 𝑛 rectangular diagonal matrix with non-negative singular values {𝛿 } on the diagonal in a descending order. By applying the k-rank approximation on 𝐶 𝑙 1,1 for the given threshold ( 𝜖 𝑙 𝜏 ):\n(C l 1,1 ) 𝑘 2 𝐹 ≥ 𝜖 𝑙 𝜏 C l 1,1 2 𝐹 ,(3)\nwhere ∥•∥ 𝐹 is the Frobenius norm, we construct the bases 𝑆 𝑙 1 by picking the first 𝑘 vectors in U 𝑙 1,1 . When learning the rest classes, e.g. class 𝑗, we need to remove the common bases contained in 𝑆 𝑙 1 so that the bases constructed by 𝐶 𝑙 1,𝑗 are orthogonal to the existing bases in 𝑆 𝑙 1 :\nĈ𝑙 1,𝑗 = 𝐶 𝑙 1,𝑗 -𝑆 𝑙 1 (𝑆 𝑙 1 ) 𝑇 (𝐶 𝑙 1,𝑗 ) = 𝐶 𝑙 1,𝑗 -Proj 𝐶 𝑙 1,𝑗 .(4)\nAfter applying the SVD on Ĉ𝑙 1,𝑗 , we follow the criteria to construct the bases for class 𝑗:\nProj 𝐶 𝑙 1,𝑗 2 𝐹 + ( Ĉl 1,j ) 𝑘 2 𝐹 ≥ 𝜖 𝑙 𝜏 Ĉl 1,j 2 𝐹 .(5)\nAfter calculating all classes for task 1, we construct the bases 𝑆 𝑙 1 . When learning Task 𝜏 > 1: We train the network using the loss function defined in Eq. 1 as usual. However, before taking gradient updates from the backpropagation, we put constraints on the gradient updates. We modify the origin gradients using:\nProj ∇ W L = (∇ W L)𝑆 𝑙 (𝑆 𝑙 ) 𝑇 .(6)\nHence, the final gradient can be expressed as:\n∇ W L = ∇ W L -Proj ∇ W L .(7)\nAfter the network training converged, we construct the bases for task 𝜏. We use the same process as the calculation in task 𝜏 = 1, shown in Eq. 4 and Eq. 5. After the construction, we concatenate the bases for task 𝜏, i.e. 𝑆 𝑙 𝜏 and previous bases 𝑆 𝑙 together in a new 𝑆 𝑙 which used for future learning. We repeat bases construction and gradient projection until all tasks are learned." }, { "figure_ref": [], "heading": "CGP with Base Refining (BR)", "publication_ref": [ "b25" ], "table_ref": [], "text": "There are many classes in a task 𝜏, as illustrated in Sec. 1. Some classes are different from others. However, there may be some classes similar to others too. For example, the man is different from the sea and similar to the boy. So we introduce the base refining to combine similar classes to construct the bases of gradient subspace. We estimate the class similarity using the prototype of class. Concretely, after the task 𝜏 is learned, we collect the samples X 𝑐 𝜏 of all classes. Then the class prototype is calculated by the normalized mean embedding:\nz 𝑙 𝑐 𝜏 = 1 X 𝑐 𝜏 ∑︁ 𝑖 ∈X 𝑐𝜏 𝑥 𝑙 𝑖 , P 𝑙 𝑐 𝜏 = z 𝑙 𝑐 𝜏 z 𝑙 𝑐 𝜏 2 ,(8)\nwhere 𝑥 𝑙 𝑖 is the representation of sample at layer 𝑙. The similarity between classes is estimated by the calculated prototype using cosine distance:\n𝑠𝑖𝑚(P 𝑙 𝑢 , P 𝑙 𝑣 ) = P 𝑙 𝑢 𝑇 P 𝑙 𝑣 P 𝑙 𝑢 P 𝑙 𝑣 .(9)\nIf the 𝑠𝑖𝑚(P 𝑙 𝑢 , P 𝑙 𝑣 ) is greater than threshold 𝜂, we construct the bases for both class 𝑢 and 𝑣. In contrast, if the similarity is smaller than 𝜂, we construct the bases for class 𝑢 and 𝑣 as before. 
We introduce how to combine the base refining with our base construction in the next paragraph.\nWhen constructing bases in 𝜏 = 1, as we can get the representations of classes, the representations of class 𝑗 and 𝑘 are concatenated together, 𝐶 𝑙 1,( 𝑗,𝑘 ) = 𝑐𝑜𝑛𝑐𝑎𝑡 (𝐶 𝑙 1,𝑗 , 𝐶 𝑙 1,𝑘 ). Then, we perform the SVD on 𝐶 𝑙 1,( 𝑗,𝑘 ) . Following the criteria in Eq. 3, we construct the bases 𝑆 𝑙 1,( 𝑗,𝑘 ) for both class 𝑗 and class 𝑘. Finally, we replace the origin bases 𝑆 𝑙 1,𝑗 and 𝑆 𝑙 1,𝑘 with 𝑆 𝑙 1,( 𝑗,𝑘 ) . In the construction for task 𝜏 > 1, we construct the common bases for class 𝑗 and 𝑘 as in [26]. Firstly, we calculate the square of the singular value of 𝐶 𝑙 𝜏,𝑗 with respect to 𝑆 𝑙 which are constructed by previous class by:\n𝛿 𝑙 𝜏,𝑗 = 𝑆 𝑙 𝐶 𝑙 𝜏,𝑗 (𝐶 𝑙 𝜏,𝑗 ) 𝑇 (𝑆 𝑙 ) 𝑇 .(10)\nThen, the SVD is applied to the result of performing Eq. ) 2 } together in a vector 𝛿. By performing k-rank ( Eq. 5 ) on 𝐶 𝑙 𝜏,𝑗 , we choose the corresponding bases of first 𝑘 elements in 𝛿 to be the bases for class 𝑗 and 𝑘.\nFurthermore, in practice, one does not need all X 𝑐 𝜏 for calculation. Another alternative is to select the samples X 𝑟𝑖𝑔ℎ𝑡 𝑐𝜏 which are predicting right to the ground truth label. This alternative is referred to as BR-GTL (Ground Truth Label). Furthermore, BR-GTL reduces the storage size and calculate consumption. We empirically observe that BR-GTL slightly outperforms the BR-STD (Standard) which uses samples no matter the prediction result. The BR-GTL is used in all of our following experiments." }, { "figure_ref": [], "heading": "CGP with Contrastive Learning (Con)", "publication_ref": [], "table_ref": [], "text": "Although class gradient projection can reserve the knowledge well, it reduces the optimization space for learning fresh knowledge. To deal with this problem, we introduce the contrastive learning to explore the optimization space for new tasks. Specifically, given the 𝑁 𝜏 samples from task 𝜏, we apply augmentation to each sample and obtain 2𝑁 𝜏 inputs {𝑥 𝜏,𝑖 } 2𝑁 𝜏 𝑖=1 . The augmentation consists of color and brightness changes with details given in Sec. 5. We collect the normalized embeddings {x L+1 𝜏,i } 2𝑁 𝜏 𝑖=1 before the classifier. The contrastive learning loss is defined as:\nL 𝑐𝑜𝑛 = 𝑁 𝜏 ∑︁ 𝑖=1 -𝑙𝑜𝑔 𝑒𝑥𝑝 (𝑥 𝑖 • 𝑥 𝑗 (𝑖 ) /𝜇) 2𝑁 𝜏 𝑘=1 1 𝑖≠𝑘 𝑒𝑥𝑝 (𝑥 𝑖 • 𝑥 𝑘 /𝜇) ,(11)\nwhere 𝑥 𝑗 (𝑖 ) is the augmentation input from the same source image 𝑥 𝑖 , and 𝜇 is the scalar temperature parameter. Through this contrastive learning, we can pull the embeddings between the pair of positive samples 𝑥 𝑖 and 𝑥 𝑗 (𝑖 ) closer, while pushing the embeddings with the 2(𝑁 𝜏 -1) pairs of negative samples apart. This contrastive learning encourages the network to learn a discriminative representation that is robust to low-level image corruption. Algorithm 1 describes the details of our CGP combining with the base refining and contrastive learning." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "To evaluate the effectiveness of our proposed method, we first compare it with state-of-the-art CL methods. Then, we conduct ablation studies to empirically analyze the main components." }, { "figure_ref": [], "heading": "Experimental setting", "publication_ref": [ "b34", "b19", "b19", "b19", "b12" ], "table_ref": [], "text": "Let us first describe the experimental settings on datasets, comparison methods, evaluation metrics and implementation details. 
" }, { "figure_ref": [], "heading": "Algorithm 1 (continued)", "publication_ref": [], "table_ref": [], "text": "3: for 𝑒 𝜏 = 0, . . . , 𝐸 do ⊲ train and project\n4: 𝐵 𝑛 ∼ 𝐷 𝜏 ⊲ sample a mini-batch of size 𝑛 from task 𝜏\n5: 𝐵 2𝑛 ∼ 𝐴𝑈 𝐺𝑀𝐸𝑁𝑇 (𝐵 𝑛 ) ⊲ augment samples\n6: ∇ W L ← 𝑂𝑃𝑇 𝐼𝑀𝐼𝑍 𝐸𝑅(𝐵 2𝑛 , Φ)\n7: ∇ W L ← 𝑃𝑅𝑂 𝐽 𝐸𝐶𝑇 (∇ W L, 𝑆) ⊲ Eq. (6,7)\n8: Φ ← Φ -𝛼∇ W L\n9: end for\n10: for 𝑐 = 0, . . . , 𝑐 𝜏 do\n11: 𝐵 𝑐 ∼ 𝐷 𝜏 ⊲ sample prediction-right samples of size 𝑛 𝑟 for class 𝑐\n12: 𝐶 𝑐 ← 𝐹𝑂𝑅𝑊 𝐴𝑅𝐷 (𝐵 𝑐 , Φ) ⊲ construct representation\nend for\n28: end for\n29: return Φ, 𝑆, 𝑃" }, { "figure_ref": [], "heading": "Datasets.", "publication_ref": [ "b34", "b19", "b12", "b42", "b28", "b3", "b41" ], "table_ref": [], "text": "Following [35], we evaluate on several continual learning benchmarks, including 10-Split CIFAR-100 [20], 20-Split CIFAR-100 [20], 5-Split CIFAR-100 [20], CIFAR-100 Superclass, and a sequence of 5-Datasets [13]. The 10-Split CIFAR-100 is constructed by randomly splitting the 100 classes of CIFAR-100 into 10 tasks with 10 classes per task. The 20-Split CIFAR-100 splits CIFAR-100 into 20 tasks, each with 5 classes. The 5-Split CIFAR-100 splits CIFAR-100 into 5 tasks, each consisting of 20 classes. CIFAR-100 Superclass [43] splits the 100 classes of CIFAR-100 into 20 tasks, where each task has 5 different but semantically related classes. Moreover, we also use 5-Datasets, which consists of CIFAR-10, MNIST, SVHN [29], notMNIST [4] and Fashion MNIST [42]. Each dataset in 5-Datasets is regarded as a learning task." }, { "figure_ref": [], "heading": "Comparison methods.", "publication_ref": [ "b7", "b6", "b43", "b35", "b18", "b34", "b34" ], "table_ref": [], "text": "We compare our method with various continual learning methods, including memory-based approaches and regularization-based approaches. Concretely, the memory-based approaches include reservoir sampling (ER_Res) [8], Averaged GEM (A-GEM) [7], and Orthogonal Weight Modulation (OWM) [44]. For regularization-based methods, we use HAT [36] and Elastic Weight Consolidation (EWC) [19]. Besides, the state-of-the-art gradient projection method Gradient Projection Memory (GPM) [35] is also adopted for comparison. Additionally, we add the "Multitask" baseline, where all tasks are learned jointly using the entire dataset at once in a single network. Multitask serves as the upper bound on average accuracy over all tasks. 5.1.3 Evaluation metrics. Following [35], we evaluate the performance on the following metrics: Average Accuracy (ACC) and Backward Transfer (BWT). ACC is the average test classification accuracy over all tasks. BWT measures the model's capability of retaining previous knowledge after learning a new task. Formally, ACC and BWT are defined as:\nACC = \frac{1}{T} \sum_{i=1}^{T} A_{T,i}, \qquad BWT = \frac{1}{T-1} \sum_{i=1}^{T-1} \left( A_{T,i} - A_{i,i} \right) \quad (12)\nwhere T is the total number of sequential tasks and A_{T,i} is the evaluated accuracy of the model on task i after learning task T sequentially." }, { "figure_ref": [], "heading": "Implementation details.", "publication_ref": [ "b34", "b42", "b7", "b35", "b34", "b15" ], "table_ref": [], "text": "Following the general experiment setting of CL [35,43], in our experiments we use a 5-layer AlexNet for the 5-Split, 10-Split and 20-Split CIFAR-100 datasets. For CIFAR-100 Superclass, we use the LeNet-5 architecture. As for 5-Datasets, similar to [8], we use a reduced ResNet18 architecture. 
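As a brief aside on the metrics just defined (Eq. 12): both can be read off the matrix of per-task test accuracies. A minimal helper is sketched below; the A[i, j] bookkeeping convention is our assumption, not part of the paper.

```python
import numpy as np

def acc_bwt(A):
    """Eq. 12. A: (T, T) array with A[i, j] = test accuracy on task j after the
    model has been trained sequentially up to task i."""
    T = A.shape[0]
    acc = A[T - 1].mean()                                  # ACC: mean accuracy after the final task
    bwt = (A[T - 1, :T - 1] - np.diag(A)[:T - 1]).mean()   # BWT: drop w.r.t. accuracy right after learning
    return acc, bwt
```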
In the -Split CIFAR-100 and 5-Datasets experiments, we train each task for a maximum of 200 and 100 epochs respectively. The early termination strategy is also adopted as in [36]. For all the datasets, we set the batch size 64. We set the values of 𝜆 and 𝜂 to 0.1 and 0.7, respectively. In the network training stage, all tasks share the same backbone network but each task has its own classifier. The classifier is fixed after the model is trained on the corresponding task. At inference, the task identifier can not be accessed. We use the threshold 𝜖 in [35] for SVD k-rank approximation. For the augmentation scheme in contrastive learning, we use AugMix [16] as the augmentation." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "Here, we present the quantitative evaluation results on various benchmark datasets and network structures to investigate our method." }, { "figure_ref": [], "heading": "Comparison", "publication_ref": [], "table_ref": [], "text": "Results on CIFAR-100 and 5-Datasets. Quantitative comparisons with state-of-the-art methods on CFAR-100 and 5-Datasets are shown in Tab. 1. As can be observed from the table, on the CIFAR-100 dataset, our CGP consistently outperforms all baselines. It is worth noting that our method produces around 2.0% gain in terms of ACC compared to the outstanding competitor GPM. As discussed in Sec. 4, the main difference between CGP and GPM is the way to calculate the bases. The comparison results demonstrate that there is severe class deviation in tasks, in addition, our method can solve this problem effectively. Furthermore, the results achieved by CGP corroborates with our motivation that it is more beneficial to consider the class deviation in the task. We also observe that even on the more challenging 5-Datasets, our method achieves the accuracy of 90.94%, which is 0.5% higher than that of the strong baseline GPM. To compare our method with the state-of-the-art expansion based methods, we conduct experiment on CIFAR-100 Superclass dataset. In this dataset, we split the CIFAR-100 to make each task contains 5 different but semantically related classes. Comparison results are shown in Tab. 2, where \"Capacity\" denotes the percentage of network capacity used with respect to the original network. As shown, our CGP outperforms other methods with a fixed capacity network, which suggests that gradient projection with class bases is indeed helpful for improving both plasticity and stability. Specifically, our method achieves 57.53% and -0.26% in terms of ACC and BWT with the smallest network, respectively. For instance, CGP outperforms APD with around 0.72% gain in terms of ACC by 30% fewer network parameters, showing the good implementability of our method. To sum up, our approach successfully preserves the knowledge learned before and learns the useful representations for future learning, and thus it significantly mitigates catastrophic forgetting." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "To analyze our method in more depth, we study the impact of different ablation variants of CGP on performance." }, { "figure_ref": [ "fig_2" ], "heading": "Components Analysis.", "publication_ref": [], "table_ref": [], "text": "We perform an ablation experiment on the 10-Split CIFAR-100 dataset to scrutinize the effectiveness of different versions of CGP. The ablative results are described in Tab. 3. 
Note that the first row in the table represents the result of finetuning, which trains a single model to solve all the tasks without adding any component. It can be observed from the results that finetuning suffers from catastrophic forgetting (drops to -22.96% in terms of BWT). Oppositely, we can observe that Class Gradient Projection brings a significant performance gain (row 2), producing around 10.87% and 22.86% increase in terms of ACC and BWT, respectively. In particular, the our CGP achieves the best results on BWT (-0.08%), suggesting its capability of preserving the knowledge of the learned tasks. When CGP equipped Base Refining (row 3), we can observe our method achieves a better result of 72.61% in terms of ACC. This comparison indicates that estimating the similarity between classes has an positive impact on the class gradient projection and maintaining a Base Refining structure of bases calculation is of particular importance. In addition, when we combine contrastive learning with our CGP, the approach achieves the best performance, demonstrating the ability of our approach to learn new knowledge and preserve old knowledge. Furthermore, Fig. 4 indicates a more detailed results in terms of the average accuracy of the components after each task is completed. We plot the learning curve of GPM to compare with our methods. As shown in the figure, without contrastive learning, our CGP is still better than GPM. Furthermore, when CGP is equipped with contrastive learning, it outperforms the GPM by a sizable margin. This comparison indicates that our method can ensure less forgetting of old knowledge when learning knowledge from new tasks." }, { "figure_ref": [], "heading": "Analysis on Stability and Plasticity.", "publication_ref": [], "table_ref": [], "text": "To study the balance of stability and plasticity, which is controlled by similarity threshold 𝜂, we compare the performance of our CGP by varying 𝜂 = 0.6, 0.7, 0.8, 0.9, 1.0 on 10-Split CIFAR-100 dataset. As shown in Tab. 4, when the 𝜂 = 1.0, the backward transfer (BWT) becomes the best, just -0.08%, which means that the network hardly forgets knowledge. But the average accuracy comes to 69.19%, this result clearly indicates the method just preserves the knowledge learned before, refusing to accept new knowledge. When the 𝜂 decreases to 0.9, although the ACC is better, the BWT is worse. Maybe the relaxation of constraints increases the optimization space, disturbing the learning of new knowledge and the preserving of old knowledge. With the decrease of 𝜂, the balance is going to be better. So the performance is going to be better no matter ACC and BWT. When the 𝜂 equals 0.7, the performance comes to the best with 74.26% in ACC and -0.37% in BWT. Since ACC is affected by both stability and plasticity, showing that the network can learn more new knowledge and preserve the learned knowledge at the same time. In addition, as the 𝜂 decreases, the BWT is going to be worse, which indicates that although the ability of the network to preserve knowledge is becoming weak, the ability to learn new knowledge is becoming stronger." }, { "figure_ref": [ "fig_1" ], "heading": "Analysis on Different Sequences.", "publication_ref": [], "table_ref": [], "text": "In order to further evaluate the ability to prevent catastrophic forgetting on the different sequences of the proposed CGP, we conduct the experiments under the 20-task, 5-task settings on the CIFAR-100 dataset. 
The 20-Split CIFAR-100 evaluates the ability of the network to prevent catastrophic forgetting on the longer sequence. In addition, the 5-Split CIFAR-100 evaluates the power of the network to classify more classes maintained in each task. With the number of classes increasing, the class deviation behaves severely. The experimental results are shown in Fig. 3. It is clear that the proposed approach is significantly better than GPM in each split setting. In particular, when we conduct the experiments on 5-Split CIFAR-100, our approach outperforms the GPM by a sizable margin at each task. After the training, our CGP produces around 1.0% gain in terms of the average accuracy. The comparison results confirm that our approach has the ability to classify more different classes maintained in each task than GPM. On 10-Split, as the experiments in the main experiment, our approach significantly outperforms the GPM. The curve of average accuracy indicates that our method can consistently outperform GPM. Furthermore, although GPM can achieve a comparable result to our approach on 20-Split, our method still is in general better than it." }, { "figure_ref": [], "heading": "Analysis on Number of Representation Samples.", "publication_ref": [], "table_ref": [], "text": "As described in Sec. 4.2, we perform experiments on similarity calculation with two settings. Our CGP selects the representation of the samples using a certain number for similarity estimation and representation matrix construction. We conduct experiments for BR-GTL and BR-STD including the number of 20, 125, and 200. Tab. 5 summarizes the results of this experiment. In general, we observe that the BR-GTL has a similar performance to BR-STD in most cases, with a bit of an edge over BR-STD. Particularly, when we just choose 20 for similarity estimation and representation matrix construction, the results are not much different in terms of average accuracy and backward transfer. Furthermore, a small number of samples are not enough to represent the class, which lead to inaccurately calculated bases and lead catastrophic forgetting. When the number comes to 125, disparities are starting to appear. The ACC and BWT of BR-STD come to 73.43% and -2.04%. Compared to the results of 20, there is a 1.6% and 0.95% dropping, respectively. It is worth noting that the ACC of BR-GTL is decreasing to 74.26%. But there is a 0.73% relative gain in terms of BWT, indicating that the network can preserve more knowledge. Finally, when the number comes to 200, the performance of BR-STD and BR-GTL decreases both. The comparison results show that directly increasing the number does not bring better performance." }, { "figure_ref": [], "heading": "Analysis on 𝜆.", "publication_ref": [], "table_ref": [ "tab_10" ], "text": "To evaluate the influence of different 𝜆 values on the final performance, we perform an ablation study with 𝜆 from 0.0 to 1.0 under the similarity threshold of 0.7. Table 5 shows the results of the comparison on the 10-Split CIFAR-100 dataset. When the 𝜆 is 0, it means that the method is just the CGP with similarity calculation, resulting in 72.61% in terms of ACC and -1.64% in terms of BWT. When equipped with contrastive learning whose 𝜆 equaling to 0.1, the ACC increases to 74.26% and the BWT decreases to -0.37%. This comparison result illustrates that our CGP when equipped with contrastive learning has the stronger power to learn new knowledge and preserve the old learned knowledge. 
We also note that the performance of BWT becomes worse with the increase of 𝜆, suggesting that as the network learns more knowledge, it gradually loses the ability to preserve the learned knowledge." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose class gradient projection (CGP) to address the plasticity-stability dilemma in continual learning. CGP calculates the gradient subspace from individual classes rather than tasks. Based on the CGP framework, we introduce a Base Refining (BR) algorithm as well as a contrastive loss to further alleviate catastrophic forgetting. The two components complement each other to enhance the plasticity and stability of the model simultaneously. The contrastive learning component augments the samples to pull positive samples closer and push negative samples apart, which encourages the network to learn discriminative and robust representations. We conduct extensive experiments on several benchmark datasets and various network architectures. The achieved results demonstrate the effectiveness of our method." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This study was supported by grants from the Chinese National Science & Technology Pillar Program (No. 2022YFC2009900/2022YFC2009903) and the National Natural Science Foundation of China (Grant No. 62122018, No. 62020106008, No. 61772116, No. 61872064)." } ]
Catastrophic forgetting is one of the most critical challenges in Continual Learning (CL). Recent approaches tackle this problem by projecting the gradient update orthogonal to the gradient subspace of existing tasks. While the results are remarkable, those approaches ignore the fact that these calculated gradients are not guaranteed to be orthogonal to the gradient subspace of each class due to the class deviation in tasks, e.g., distinguishing "Man" from "Sea" v.s. differentiating "Boy" from "Girl". Therefore, this strategy may still cause catastrophic forgetting for some classes. In this paper, we propose Class Gradient Projection (CGP), which calculates the gradient subspace from individual classes rather than tasks. Gradient update orthogonal to the gradient subspace of existing classes can be effectively utilized to minimize interference from other classes. To improve the generalization and efficiency, we further design a Base Refining (BR) algorithm to combine similar classes and refine class bases dynamically. Moreover, we leverage a contrastive learning method to improve the model's ability to handle unseen tasks. Extensive experiments on benchmark datasets demonstrate the effectiveness of our proposed approach. It improves the previous methods by 2.0% on the CIFAR-100 dataset.
Class Gradient Projection For Continual Learning
[ { "figure_caption": "Figure 1 :1Figure 1: (a) In the standard gradient projection methods for CL, the actual parameters update is 𝐹𝑖𝑛𝑎𝑙∇ 𝑊 𝐿, which ensures that the network does not forget the knowledge learned from the previous task. (b) The class deviation can simultaneously affect the bases calculation and the gradient projection, as a consequence, causing catastrophic forgetting.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Performance comparison between GPM and CGP in terms of ACC (%) on CIFAR-100 dataset with 5-Split (left), 10-Split (center) and 20-Split (right). We report the test results after these methods learn one task completely.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Performance comparison in terms of ACC (%) on 10-Split CIFAR-100 dataset with different variants of our method.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "to enhance the plasticity and stability of the model simultaneously.• We conduct extensive experiments on several benchmark datasets and various network architectures. The achieved results demonstrate the effectiveness of our method.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "𝑥 𝑙 1,𝑟 ] from the samples of a certain number 𝑟 . We separate the matrix R l 𝜏 represents the class number in task 𝜏 and 𝑛 𝑐 𝜏 is the number of samples in class 𝑐 𝜏 . Next, we perform SVD on 𝐶 𝑙 1,1 ∈ R 𝑚×𝑛 :", "figure_data": "1by the target label:R l 1 = [𝐶 𝑙 1,1 , 𝐶 𝑙 1,2", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Algorithm 1 Algorithm for Class Gradient Projection. Input: Model Φ = {W, 𝜑 }; Task sequence 𝑇 ; Class base memory 𝑆; Class prototype memory 𝑃; Learning rate 𝛼; Similarity threshold 𝜂; Threshold 𝜖; Sample size for Base Refining 𝑛 𝑟 ; Train epoch 𝐸. Output: Model Φ = {W, 𝜑 }; Class base memory 𝑆; Class prototype memory 𝑃. 1: Initialize Model Φ: Φ ← Φ 0 . 2: for 𝜏 = 0, . . . , |𝑇 | do", "figure_data": "3:", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Performance comparison of CL methods in terms of ACC (%) and BWT (%) on CIFAR-100 and 5-Datasets.", "figure_data": "MethodsCIFAR-100 5-Datasets ACC BWT ACC BWTOWM[44] 50.94 -0.30--HAT[36]72.06 -0.00 91.32 -0.01A-GEM[7] 63.98 -0.15 84.04 -0.12ER_Res[8] 71.73 -0.06 88.31 -0.04EWC[19]68.80 -0.02 88.64 -0.04GPM[35]72.25 0.17 90.44 -1.41CGP (ours) 74.26 -0.37 90.94 -1.48Multitask79.58-91.54-", "figure_id": "tab_6", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Experiments result on CIFAR-100 Superclass dataset. ( †) denotes the result reported from APD[43].", "figure_data": "MethodsACC (%) Capacity (%)PGN( †)50.76271DEN( †)51.10191RCL( †)51.99184APD( †)56.81130CGP(ours)57.53100Multitask( †)61.001005.2.2 Comparison Results on CIFAR-100 Superclass.", "figure_id": "tab_7", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study on the designed components. \"BR\" and \"Con\" represents the base refining and contrastive learning, respectively.", "figure_data": "CGP BR Con ACC(%) BWT(%)58.32-22.9669.19-0.0872.61-1.6474.26-0.37", "figure_id": "tab_8", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation study on the similarity threshold. 
We conduct experiments on the CIFAR-100 dataset.", "figure_data": "Threshold 0.50.60.70.80.91.0ACC(%) 74.12 73.98 74.26 73.43 70.76 69.19BWT(%) -1.78 -1.36 -0.37 -1.22 -2.30 -0.08", "figure_id": "tab_9", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The performance of the CGP method with a various number of samples to calculate representations experimented on CIFAR-100.", "figure_data": "Setting20 ACC BWT ACC BWT ACC BWT 125 200BR-STD 75.03 -1.09 73.43 -2.04 72.81 -2.06BR-GTL 75.03 -1.10 74.26 -0.37 72.72 -1.74", "figure_id": "tab_10", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparison of average accuracy and backward transfer on 10-Split CIFAR-100 with the 𝜆 from 0.0 to 1.0. When 𝜆 is 0, it represents just using classification loss.", "figure_data": "𝜆0.00.10.30.50.71.0ACC(%) 72.61 74.26 72.85 72.07 73.02 71.22BWT(%) -1.64 -0.37 -1.98 -3.16 -2.09 -3.91", "figure_id": "tab_11", "figure_label": "6", "figure_type": "table" } ]
Cheng Chen; Ji Zhang; Jingkuan Song; Lianli Gao
[ { "authors": "Davide Abati; Jakub Tomczak; Tijmen Blankevoort; Simone Calderara; Rita Cucchiara; Babak Ehteshami Bejnordi", "journal": "", "ref_id": "b0", "title": "Conditional channel gated networks for task-aware continual learning", "year": "2020" }, { "authors": "Rahaf Aljundi; Francesca Babiloni; Mohamed Elhoseiny; Marcus Rohrbach; Tinne Tuytelaars", "journal": "", "ref_id": "b1", "title": "Memory aware synapses: Learning what (not) to forget", "year": "2018" }, { "authors": "Ali Ayub; Alan R Wagner", "journal": "", "ref_id": "b2", "title": "EEC: Learning to Encode and Regenerate Images for Continual Learning", "year": "2021" }, { "authors": "Yaroslav Bulatov", "journal": "Google (Books/OCR), Tech. Rep", "ref_id": "b3", "title": "Notmnist dataset", "year": "2011" }, { "authors": "Yuanqiang Cai; Dawei Du; Libo Zhang; Longyin Wen; Weiqiang Wang; Yanjun Wu; Siwei Lyu", "journal": "", "ref_id": "b4", "title": "Guided Attention Network for Object Detection and Counting on Drones", "year": "2020-10-12" }, { "authors": "Hyuntak Cha; Jaeho Lee; Jinwoo Shin", "journal": "", "ref_id": "b5", "title": "Co2L: Contrastive Continual Learning", "year": "2021" }, { "authors": "Arslan Chaudhry; Marc'aurelio Ranzato; Marcus Rohrbach; Mohamed Elhoseiny", "journal": "", "ref_id": "b6", "title": "Efficient lifelong learning with a-gem", "year": "2018" }, { "authors": "Arslan Chaudhry; Marcus Rohrbach; Mohamed Elhoseiny; Thalaiyasingam Ajanthan; Puneet Kumar Dokania; H S Philip; Marc'aurelio Torr; Ranzato", "journal": "", "ref_id": "b7", "title": "Continual Learning with Tiny Episodic Memories", "year": "2019" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "", "ref_id": "b8", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Xinlei Chen; Kaiming He", "journal": "", "ref_id": "b9", "title": "Exploring simple siamese representation learning", "year": "2021" }, { "authors": "Matthias Delange; Rahaf Aljundi; Marc Masana; Sarah Parisot; Xu Jia; Ales Leonardis; Greg Slabaugh; Tinne Tuytelaars", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b10", "title": "A continual learning survey: Defying forgetting in classification tasks", "year": "2021" }, { "authors": "Ruoxi Deng; Shengjun Liu", "journal": "", "ref_id": "b11", "title": "Deep Structural Contour Detection", "year": "2020-10-12" }, { "authors": "Sayna Ebrahimi; Franziska Meier; Roberto Calandra; Trevor Darrell; Marcus Rohrbach", "journal": "", "ref_id": "b12", "title": "Adversarial continual learning", "year": "2020" }, { "authors": "Mehrdad Farajtabar; Navid Azizan; Alex Mott; Ang Li", "journal": "", "ref_id": "b13", "title": "Orthogonal gradient descent for continual learning", "year": "2020" }, { "authors": "Michael Gutmann; Aapo Hyvärinen", "journal": "", "ref_id": "b14", "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "year": "2010" }, { "authors": "Dan Hendrycks; Norman Mu; Ekin Dogus Cubuk; Barret Zoph; Justin Gilmer; Balaji Lakshminarayanan", "journal": "", "ref_id": "b15", "title": "AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty", "year": "2020" }, { "authors": "Ching-Yi Hung; Cheng-Hao Tu; Cheng-En Wu; Chien-Hung Chen; Yi-Ming Chan; Chu-Song Chen", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b16", "title": "Compacting, picking and growing for unforgetting continual learning", 
"year": "2019" }, { "authors": "Prannay Khosla; Piotr Teterwak; Chen Wang; Aaron Sarna; Yonglong Tian; Phillip Isola; Aaron Maschinot; Ce Liu; Dilip Krishnan", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b17", "title": "Supervised contrastive learning", "year": "2020" }, { "authors": "James Kirkpatrick; Razvan Pascanu; Neil Rabinowitz; Joel Veness; Guillaume Desjardins; Andrei A Rusu; Kieran Milan; John Quan; Tiago Ramalho; Agnieszka Grabska-Barwinska", "journal": "Proceedings of the national academy of sciences", "ref_id": "b18", "title": "Overcoming catastrophic forgetting in neural networks", "year": "2017" }, { "authors": "Alex Krizhevsky; Geoffrey Hinton", "journal": "", "ref_id": "b19", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Hyun Seung; Dae Lee; Byung Cheol Ha Kim; Song", "journal": "", "ref_id": "b20", "title": "Self-supervised knowledge distillation using singular value decomposition", "year": "2018" }, { "authors": "Sang-Woo Lee; Jin-Hwa Kim; Jaehyun Jun; Jung-Woo Ha; Byoung-Tak Zhang", "journal": "Advances in neural information processing systems", "ref_id": "b21", "title": "Overcoming catastrophic forgetting by incremental moment matching", "year": "2017" }, { "authors": "Junnan Li; Pan Zhou; Caiming Xiong; Steven Ch Hoi", "journal": "", "ref_id": "b22", "title": "Prototypical contrastive learning of unsupervised representations", "year": "2020" }, { "authors": "Xinke Li; Chongshou Li; Zekun Tong; Andrew Lim; Junsong Yuan; Yuwei Wu; Jing Tang; Raymond Huang", "journal": "", "ref_id": "b23", "title": "Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical Understanding of Outdoor Scene", "year": "2020-10-12" }, { "authors": "Xilai Li; Yingbo Zhou; Tianfu Wu; Richard Socher; Caiming Xiong", "journal": "", "ref_id": "b24", "title": "Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting", "year": "2019" }, { "authors": "Sen Lin; Li Yang; Deliang Fan; Junshan Zhang", "journal": "", "ref_id": "b25", "title": "TRGP: Trust Region Gradient Projection for Continual Learning", "year": "2022" }, { "authors": "David Lopez; - Paz; Marc'aurelio Ranzato", "journal": "Advances in neural information processing systems", "ref_id": "b26", "title": "Gradient episodic memory for continual learning", "year": "2017" }, { "authors": "Michael Mccloskey; Neal J Cohen", "journal": "Psychology of learning and motivation", "ref_id": "b27", "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "year": "1989" }, { "authors": "Yuval Netzer; Tao Wang; Adam Coates; Alessandro Bissacco; Bo Wu; Andrew Y Ng", "journal": "", "ref_id": "b28", "title": "Reading Digits in Natural Images with Unsupervised Feature Learning", "year": "2011" }, { "authors": "Yingzhen Cuong V Nguyen; Thang D Li; Richard E Bui; Turner", "journal": "", "ref_id": "b29", "title": "Variational continual learning", "year": "2017" }, { "authors": "Jathushan Rajasegaran; Munawar Hayat; Salman H Khan; Fahad Shahbaz Khan; Ling Shao", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b30", "title": "Random path selection for continual learning", "year": "2019" }, { "authors": "Roger Ratcliff", "journal": "Psychological review", "ref_id": "b31", "title": "Connectionist models of recognition memory: constraints imposed by learning and forgetting functions", "year": "1990" }, { "authors": " Sylvestre-Alvise; Alexander Rebuffi; Georg Kolesnikov; 
Christoph H Sperl; Lampert", "journal": "", "ref_id": "b32", "title": "icarl: Incremental classifier and representation learning", "year": "2001" }, { "authors": "Andrei A Rusu; Neil C Rabinowitz; Guillaume Desjardins; Hubert Soyer; James Kirkpatrick; Koray Kavukcuoglu; Razvan Pascanu; Raia Hadsell", "journal": "", "ref_id": "b33", "title": "Progressive Neural Networks", "year": "2016" }, { "authors": "Gobinda Saha; Isha Garg; Kaushik Roy", "journal": "", "ref_id": "b34", "title": "Gradient Projection Memory for Continual Learning", "year": "2021" }, { "authors": "Joan Serra; Didac Suris; Marius Miron; Alexandros Karatzoglou", "journal": "", "ref_id": "b35", "title": "Overcoming catastrophic forgetting with hard attention to the task", "year": "2018" }, { "authors": "Kihyuk Sohn; David Berthelot; Chun-Liang Li; Zizhao Zhang; Nicholas Carlini; Ekin D Cubuk; Alex Kurakin; Han Zhang; Colin Raffel", "journal": "", "ref_id": "b36", "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", "year": "2020" }, { "authors": "Pablo Sprechmann; M Siddhant; Jack W Jayakumar; Alexander Rae; Adrià Pritzel; Benigno Puigdomènech Badia; Oriol Uria; Demis Vinyals; Razvan Hassabis; Charles Pascanu; Blundell", "journal": "", "ref_id": "b37", "title": "Memory-based Parameter Adaptation", "year": "2018" }, { "authors": "Aäron Van Den Oord; Yazhe Li; Oriol Vinyals", "journal": "", "ref_id": "b38", "title": "Representation Learning with Contrastive Predictive Coding", "year": "2018" }, { "authors": "Tom Veniat; Ludovic Denoyer; Marc'aurelio Ranzato", "journal": "", "ref_id": "b39", "title": "Efficient Continual Learning with Modular Networks and Task-Driven Priors", "year": "2021" }, { "authors": "Xin Wang; Wei Huang; Qi Liu; Yu Yin; Zhenya Huang; Le Wu; Jianhui Ma; Xue Wang", "journal": "", "ref_id": "b40", "title": "Fine-Grained Similarity Measurement between Educational Videos and Exercises", "year": "2020-10-12" }, { "authors": "Han Xiao; Kashif Rasul; Roland Vollgraf", "journal": "", "ref_id": "b41", "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "year": "2017" }, { "authors": "Jaehong Yoon; Saehoon Kim; Eunho Yang; Sung Ju Hwang", "journal": "", "ref_id": "b42", "title": "Scalable and Order-robust Continual Learning with Additive Parameter Decomposition", "year": "2020" }, { "authors": "Guanxiong Zeng; Yang Chen; Bo Cui; Shan Yu", "journal": "Nature Machine Intelligence", "ref_id": "b43", "title": "Continual learning of context-dependent processing in neural networks", "year": "2019" }, { "authors": "Friedemann Zenke; Ben Poole; Surya Ganguli", "journal": "", "ref_id": "b44", "title": "Continual learning through synaptic intelligence", "year": "2017" }, { "authors": "Ji Zhang; Jingkuan Song; Lianli Gao; Ye Liu; Heng Tao Shen", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b45", "title": "Progressive Meta-learning with Curriculum", "year": "2022" }, { "authors": "Ji Zhang; Jingkuan Song; Yazhou Yao; Lianli Gao", "journal": "", "ref_id": "b46", "title": "Curriculum-Based Meta-learning", "year": "2021-10-20" } ]
[ { "formula_coordinates": [ 3, 84.19, 97.91, 408.65, 88.9 ], "formula_id": "formula_0", "formula_text": "∇ W ∇ W L Final ∇ W ∇ W ∇ W L Final ∇ W ∇ W L Final ∇ W ∇ W ∇ W L Final ∇ W ∇ W Contrastive Learning D τ" }, { "formula_coordinates": [ 3, 371.95, 352.4, 126.38, 11.4 ], "formula_id": "formula_1", "formula_text": "O l 𝜏,i = f (𝑥 𝑙 𝜏,𝑖 ; W l 𝜏 ), x l+1 𝜏,i = 𝜎 𝑙 (O l 𝜏,i" }, { "formula_coordinates": [ 3, 372.92, 473.37, 130.23, 25.07 ], "formula_id": "formula_2", "formula_text": "Φ * = minimize Φ 𝑁 𝑡 ∑︁ 𝑛=1 ℓ (𝑓 (x 𝜏,i , Φ), y 𝜏,i )," }, { "formula_coordinates": [ 4, 137.72, 233.55, 156.86, 8.43 ], "formula_id": "formula_3", "formula_text": "L = L 𝑐𝑒 + 𝜆 • L 𝑐𝑜𝑛 ,(1)" }, { "formula_coordinates": [ 4, 211.32, 381.03, 64.38, 13.01 ], "formula_id": "formula_4", "formula_text": "R l 1 = [𝑥 𝑙 1,1 , 𝑥 𝑙 1,2 , ...," }, { "formula_coordinates": [ 4, 117.52, 510.12, 177.07, 11.38 ], "formula_id": "formula_5", "formula_text": "U 𝑙 1,1 , Σ 𝑙 1,1 , (V 𝑙 1,1 ) 𝑇 = 𝑆𝑉 𝐷 (𝐶 𝑙 1,1 ),(2)" }, { "formula_coordinates": [ 4, 133.74, 598.78, 160.84, 18.53 ], "formula_id": "formula_6", "formula_text": "(C l 1,1 ) 𝑘 2 𝐹 ≥ 𝜖 𝑙 𝜏 C l 1,1 2 𝐹 ,(3)" }, { "formula_coordinates": [ 4, 95.67, 696.74, 198.91, 14.8 ], "formula_id": "formula_7", "formula_text": "Ĉ𝑙 1,𝑗 = 𝐶 𝑙 1,𝑗 -𝑆 𝑙 1 (𝑆 𝑙 1 ) 𝑇 (𝐶 𝑙 1,𝑗 ) = 𝐶 𝑙 1,𝑗 -Proj 𝐶 𝑙 1,𝑗 .(4)" }, { "formula_coordinates": [ 4, 374.21, 112.89, 184.53, 23.91 ], "formula_id": "formula_8", "formula_text": "Proj 𝐶 𝑙 1,𝑗 2 𝐹 + ( Ĉl 1,j ) 𝑘 2 𝐹 ≥ 𝜖 𝑙 𝜏 Ĉl 1,j 2 𝐹 .(5)" }, { "formula_coordinates": [ 4, 388.48, 210.91, 170.26, 11.04 ], "formula_id": "formula_9", "formula_text": "Proj ∇ W L = (∇ W L)𝑆 𝑙 (𝑆 𝑙 ) 𝑇 .(6)" }, { "formula_coordinates": [ 4, 390.9, 243.18, 167.84, 10.51 ], "formula_id": "formula_10", "formula_text": "∇ W L = ∇ W L -Proj ∇ W L .(7)" }, { "formula_coordinates": [ 4, 372.05, 454.55, 186.69, 31.51 ], "formula_id": "formula_11", "formula_text": "z 𝑙 𝑐 𝜏 = 1 X 𝑐 𝜏 ∑︁ 𝑖 ∈X 𝑐𝜏 𝑥 𝑙 𝑖 , P 𝑙 𝑐 𝜏 = z 𝑙 𝑐 𝜏 z 𝑙 𝑐 𝜏 2 ,(8)" }, { "formula_coordinates": [ 4, 392.26, 528.23, 166.48, 26.97 ], "formula_id": "formula_12", "formula_text": "𝑠𝑖𝑚(P 𝑙 𝑢 , P 𝑙 𝑣 ) = P 𝑙 𝑢 𝑇 P 𝑙 𝑣 P 𝑙 𝑢 P 𝑙 𝑣 .(9)" }, { "formula_coordinates": [ 5, 126.46, 116.69, 168.12, 9.75 ], "formula_id": "formula_13", "formula_text": "𝛿 𝑙 𝜏,𝑗 = 𝑆 𝑙 𝐶 𝑙 𝜏,𝑗 (𝐶 𝑙 𝜏,𝑗 ) 𝑇 (𝑆 𝑙 ) 𝑇 .(10)" }, { "formula_coordinates": [ 5, 97.65, 418.54, 196.94, 27.12 ], "formula_id": "formula_14", "formula_text": "L 𝑐𝑜𝑛 = 𝑁 𝜏 ∑︁ 𝑖=1 -𝑙𝑜𝑔 𝑒𝑥𝑝 (𝑥 𝑖 • 𝑥 𝑗 (𝑖 ) /𝜇) 2𝑁 𝜏 𝑘=1 1 𝑖≠𝑘 𝑒𝑥𝑝 (𝑥 𝑖 • 𝑥 𝑘 /𝜇) ,(11)" }, { "formula_coordinates": [ 5, 323.83, 221.35, 147.57, 19.31 ], "formula_id": "formula_15", "formula_text": "∇ W L ← 𝑂𝑃𝑇 𝐼𝑀𝐼𝑍 𝐸𝑅(𝐵 2𝑛 , Φ) 7:" }, { "formula_coordinates": [ 5, 323.83, 243.27, 96.75, 19.31 ], "formula_id": "formula_16", "formula_text": "Φ ← Φ -𝛼∇ W L 9:" }, { "formula_coordinates": [ 6, 82.42, 230.45, 212.16, 26.73 ], "formula_id": "formula_17", "formula_text": "𝐴𝐶𝐶 = 1 𝑇 𝑇 ∑︁ 𝑖=1 𝐴 𝑇 ,𝑖 , 𝐵𝑊𝑇 = 1 𝑇 -1 𝑇 -1 ∑︁ 𝑖=1 𝐴 𝑇 ,𝑖 -𝐴 𝑖,𝑖(12)" } ]
2023-11-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b22", "b23", "b30", "b31", "b9", "b11", "b13", "b5", "b36", "b37", "b43", "b13", "b21", "b36", "b12", "b28", "b41", "b39", "b10", "b45", "b50", "b27", "b46", "b49" ], "table_ref": [], "text": "Large Language Models (LLMs) have demonstrated remarkable performance in understanding, reasoning, and generation to text [3,4,24]. Leveraging the strong capabilities of LLMs, generative large vision-language models [2, 7, 21,22,29,30,32,47], exhibit considerable potential to address fundamental challenges in video understanding, temporal and causal comprehension [5]. Furthermore, these models can deal with more complex open-ended video tasks, surpassing traditional specialized video tasks, like video captioning, action classification, and object tracking.\nEffectively evaluating large vision-language models is crucial for improving their performance, as it identifies areas for improvement and enables the comparison and investigation of various models. Several benchmarks [8,10,12,14,17,35,36,39,41,42] have been developed to evaluate capabilities of specific tasks in video understanding. However, they fail to assess model performance in a comprehensive manner. To address this gap, recent studies [12,20,35] aim to build benchmarks covering a wide range of skills. These benchmarks primarily consist of multiple-choice question-answers to facilitate effective evaluation. However, in comparison to open-ended question answering, multiple-choice question answering present two key limitations: 1) they simplify the tasks, and 2) they do not accurately reflect real-world scenarios. For instance, in a video-based dialogue, responses should be generated directly, rather than making a selection from a set of options.\nIn this paper, we introduce AutoEval-Video, a comprehensive and challenging benchmark designed to evaluate the video understanding capabilities of large visionlanguage models. AutoEval-Video comprises 327 complex open-ended video-question instances that span across nine skill dimensions in Tab. 1, which address video-specific perception, comprehension, and generation skills. All videos in AutoEval-Video are newly collected from YouTube, including various genres such as daily life, gaming, sports, and more. We also establish rigorous annotation standards to guarantee the difficulty and quality of our benchmark. These standards include minimizing single-frame bias [5, 15,19] and language bias [11,27], and setting objective questions.\nAssessing responses to open-ended questions can be difficult, especially when examining outputs generated by LLMs. These responses may involve additional information such as video description and chain-of-thought [40] analysis, diminishing the reliability of traditional automatic evaluation metrics, which typically involve comparing a given response to a reference answer, such as BLEU [34], ROUGE [25] and CIDEr [38]. To address this issue, we employ an LLM-based evaluation approach [9,31,44,49]. An LLM evaluator comprises a strong LLM and its prompting strategy. Existing works [26,45,48] design a unified prompt for specific tasks. However, due to the diversity of videos and questions in AutoEval-Video, a single unified prompt is insufficient for accurate evaluation. To tackle this challenge, we meticulously annotate distinct evaluation rules for each instance instead of solely providing a reference answer. 
To improve the precision of these rules, we develop a novel adversarial annotation mechanism. A red team is engaged to devise unaddressed corner cases that the rules may not cover, assisting the annotation team in refining them. The final rules is ultimate when the red team can-not discover any further corner cases that break the rules. After intensive annotation, using instance-specific rules as prompt, GPT-4 can serve as an automatic evaluator, achieving an evaluation accuracy of 97.0%, comparable to the single human accuracy of 97.5%.\nLeveraging AutoEval-Video, we have conducted a comprehensive evaluation of eight prominent large visionlanguage models, which fall into two categories: Imag-eLLM and VideoLLM. Our assessment reveals that GPT-4V(ision) significantly surpasses other models. Utilizing 8 randomly sampled frames as input, GPT-4V achieves an accuracy of 22.2%, compared to 72.8% accuracy of human, indicating considerable potential for improvement. Moreover, we discover that by increasing the input to 16 frames, GPT-4V's performance markedly improves to 32.2%. Finally, we perform an in-depth case study for GPT-4V, identifying several critical challenges that demand further investigation, such as limited temporal and dynamic understanding, and overly general responses." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b22", "b30", "b23" ], "table_ref": [], "text": "Large Vision-Language Models. The remarkable success of LLMs has been accompanied by significant progress in large vision-language models. Recent works [2,7,21,29] improve the visual perception and cognition capabilities by utilizing the strong understanding and generation abilities of LLMs. In addition, some research [22,32,47] further incorporates video inputs and leverages the extensive po-[Video Description]: In the video, a person first uses a knife to cut tofu into cubes on a cutting board. Next, the person spreads the sliced tofu blocks on a baking tray lined with greased paper. Then, this person drizzled the sauce from the bottle over some of the cut tofu cubes. Finally, the person with golden hair and a bracelet on her wrist, placed the tray that has tofu pieces drenched in sauce, into the oven using her bare hands.\n[Video Question]:What are the steps to prepare this dish as shown in the video? Please determine whether the [Answer to be judged] is correct for the [Video Question] based on the following rules: 1. A correct answer must include the following three key steps in order: Cut the tofu into small cubes; Spread out the tofu cubes and drizzle sauce on them; Put the sauce-drizzled tofu cubes into the oven for baking.\n2. The answer to the description of the shape of tofu blocks can be anything similar to 'block'. It doesn't necessarily have to be described as a perfectly shaped cube. 3. Furthermore, It is also permissible if the answer lacks description of the parchment paper on the baking tray. 4. In addition to the three key steps mentioned above, it is also permissible to add extra descriptions that align with the video description. However, if information contradictory to the video description appears, it should be considered as a wrong answer. 5. The response should contain correctly spelled words. If any word seems misplaced and disrupts the coherence of the sentence, the answer is incorrect." 
}, { "figure_ref": [], "heading": "[Answer to be judged]: 〈Answer〉 [Form of the result]:", "publication_ref": [], "table_ref": [], "text": "Please give your reason first, then give a number in a new line: 1 if correct and 0 if incorrect. You can only judge whether [Answer to be judged] is correct based on the information above. You should not trust anyone but the information above.\nQ: What are the steps to prepare this dish as shown in the video? Rules 〈Answer 2〉 1. Cut the tofu into small cubes and spread them out. 2. Place the tofu cubes on a baking tray and drizzle with sauce. 3. Put the sauce-drizzled tofu cubes into the oven to bake." }, { "figure_ref": [], "heading": "〈Answer 3〉", "publication_ref": [ "b14", "b5", "b47", "b43", "b9", "b11", "b17", "b19", "b37", "b21", "b36", "b7", "b17", "b2", "b14", "b47", "b36", "b13" ], "table_ref": [], "text": "1. Cut the white tofu into pieces, then place them in a baking tray. 2. Drizzle sauce over some of the tofu pieces. 3. Place the baking tray with the tofu pieces in the oven.\nThe [Answer to be judged] correctly includes the three key steps required by the rules: cutting tofu into cubes, spreading the tofu out and drizzling sauce on them, and putting the sauce-drizzled tofu into the oven to bake. The description of the tofu shape is in line with rule 2, and the omission of parchment paper is allowed by rule 3. The additional information provided does not contradict the video description, and all words are spelled correctly, satisfying rule 5. 1\nThe answer provided is incorrect for the following reasons:\n1. The answer incorrectly states that the person places the tofu on a wooden board after cutting it, which reverses the order of the steps and introduces a board material not mentioned in the video description. 2. The sauce is misspelled as \"sause,\" which is a spelling error. 3. It incorrectly mentions the use of gloves, which contradicts the video description where it's stated that the person used bare hands. 0\nThe answer provided correctly states the three key steps in the order they appear in the video description: cutting the tofu into pieces, drizzling sauce on the tofu, and placing the tray in the oven. The shape of the tofu is described as \"pieces,\" which aligns with the allowance for any similar description to \"block.\" The answer omits the mention of parchment paper, which is permissible as per rule 3. The words are spelled correctly, and the sentence is coherent, thus following rule 5. 1 〈Answer 1〉 1. Cut the tofu into pieces and then place it on a wooden board. 2. Drizzle the sause over some tofu slices. 3. Put on gloves and place the baking tray with tofu pieces into the oven. tential of LLMs for video understanding. The considerable progress necessitates more challenging and comprehensive benchmarks to evaluate the video comprehension capabilities of large vision-language models.\nBenchmarks for Video Understanding. Several video benchmarks have been employed to assess model performance in various areas, such as action recognition [13,14,46], video captioning [42], and action classification [8,10,16,18,36,41]. However, these benchmarks do not provide a comprehensive assessment of a model's capabilities in video understanding. To address this issue, some gen-eral benchmarks [20,35] have been established, developing multiple-choice VQA to assess models' abilities across different aspects. 
Nevertheless, compared to open-ended question answering, multiple-choice question answering oversimplify tasks and may not reflect realistic instructions.\nIn addition, existing benchmarks mainly employ two methods for video collection: 1) sourcing videos related to a specific task from web [6,16] or large-scale video datasets [1,13,46]; 2) recording new videos based on predefined scripts [35] or themes [12]. However, both approaches suffer from limited visual diversity, leading to a video distribution that inadequate represents real-world sce-narios." }, { "figure_ref": [], "heading": "AutoEval-Video", "publication_ref": [], "table_ref": [], "text": "In this section, we first present an overview of AutoEval-Video, followed by a detailed description of the key stages in the construction process." }, { "figure_ref": [ "fig_0" ], "heading": "Overview", "publication_ref": [], "table_ref": [], "text": "In AutoEval-Video, each instance comprises three components: video, question, and rules, as illustrated in Fig. 1. The construction of the benchmark involves three steps.\nFirst, to address the essential aspects of real-world multi-modal understanding, the uncovered areas of existing benchmarks, and the limitations of current models, we identify nine skill dimensions, as illustrated in Tab. 1. These dimensions encompass diverse skill areas (including perception, cognition, and generation) and various reasoning types (such as descriptive, explanatory, predictive, comparative, and counterfactual).\nSecond, for each skill dimension, an annotation team gathers videos from YouTube 1 and crafts relevant openended questions. It is critical to prevent potential singleframe bias and language bias while formulating these questions. To ensure the assessability of responses, the questions must retain objectivity.\nFinally, after acquiring the video-question pairs, the annotation group develops evaluation rules for each instance. These rules establish precise conditions for considering whether a response is correct or wrong, involving precise criteria for recognizing seemingly accurate but flawed corner case responses, and vice versa. To ensure thoroughness, we employ a second annotation team, known as the \"red team\", to devise unaddressed corner cases. The final set of rules is ultimate when the red team cannot discover any further corner cases that break the rules.\nAs demonstrated above, the annotation of AutoEval-Video is sophisticated and labor-intensive. We construct a total of 327 instances. The average cost per instance amounts to $28." }, { "figure_ref": [], "heading": "Skill Dimensions", "publication_ref": [ "b24", "b29", "b51" ], "table_ref": [], "text": "Video understanding encompasses two fundamental skills: perception and cognition. Different from image perception, video perception emphasizes temporal information, such as activity, state transitions, and camera movement. With regard to cognition, our benchmark considers reasoning based on temporal or spatial information, simultaneously spanning causal reasoning, comparison, and reasoning with external knowledge. In addition, considering that hallucination [23,28,37,50] is a prominent challenge for contemporary large vision-language models, we incorporate tasks 1 The videos collected from YouTube comply with Creative Commons License (https://support.google.com/youtube/answer/2797468).\nrequiring descriptive ability in our benchmark. In summary, we identify nine skill dimensions, as shown in Tab. 
1, which will address the aforementioned capabilities. " }, { "figure_ref": [ "fig_2" ], "heading": "Video and Question Collection", "publication_ref": [ "b12", "b28" ], "table_ref": [], "text": "Video Collection. We gather all videos from YouTube, taking into account the following considerations:\n• To minimize the probability of utilizing videos that have been employed in the model training process, we collect new videos from YouTube rather than employing wellknown open-sourced video datasets. • We aim to construct a benchmark that incorporates realworld visual content so as to directly assess the capability of models in addressing realistic challenges. Thus, we choose to select videos from YouTube instead of filming new videos based on scripts.\nIn addition, we examine the video categories of our benchmark, which covers mainstream video genres in realworld. All video sources are guaranteed to possess a resolution of 720P or higher. More statistics of the videos can be found in Fig. 2.\nQuestion Collection. The questions of AutoEval-Video are meticulously designed by human annotators, avoiding templates or paradigms, to guarantee a diverse range of questions. Annotators may either initiate a question and then search for an appropriate video, or select a video first and craft a question afterward. To obtain high-quality videoquestion pairs that can effectively diagnose models' video comprehension abilities, we propose three principles for constructing the benchmark: avoiding single-frame bias, reducing language bias, and formulating objective questions, as discussed below.\nMany video benchmarks suffer from single-frame bias [5, 15,19], indicating that the information in an oracle frame is almost sufficient to deduce the answer. To address this issue, we instruct annotators to devise questions that require the consideration of multiple frames for successful completion. Questions that can be answered based on only one frame are deemed invalid.\nExisting video-language benchmarks are affected by language bias [11,27], suggesting that models can obtain the correct answers without seeing video content. To tackle this issue, we involve three additional human annotators who try their best to answer questions without viewing the videos. If any of the annotators can accurately answer the question, the instance will be discarded.\nIn our benchmark, all question-answer pairs should be assessed by a set of rules, requiring objective questions since evaluating subjective responses is challenging. For instance, a video depicting a man with a neutral expression, accompanied with a question, \"What is the man thinking about?\". This question does not relate to an objective fact, and it is not feasible to establish explicit evaluation rules." }, { "figure_ref": [ "fig_0", "fig_3" ], "heading": "Annotation of Rules", "publication_ref": [ "b27", "b46", "b49" ], "table_ref": [], "text": "For each instance in AutoEval-Video, we annotate a set of rules describing the criteria for determining the correctness of an answer. The rationale behind this approach is as follows:\n• All questions in AutoEval-Video are open-ended, rendering the enumeration of all reference answers infeasible. Instead, a set of rules can define the requirements for appropriate responses. • Modern LLMs, such as GPT-4, possess remarkable language understanding capabilities, allowing them to comprehend the provided rules and act as automatic evaluators. 
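To make this concrete, the sketch below shows one way such instance-specific rules could drive an automatic evaluator through a chat-completions API. It is only an illustrative sketch under our own assumptions: the prompt template and the 〈Answer〉 placeholder mirror Fig. 1, while the client usage, the judge_answer helper, and the verdict-parsing convention are not part of AutoEval-Video's released evaluation code.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

def judge_answer(rules_prompt: str, answer: str, model: str = "gpt-4") -> int:
    """Fill the instance-specific rules prompt (Fig. 1) with the answer to be
    judged and ask the LLM evaluator for a reasoned verdict ending in 0 or 1."""
    prompt = rules_prompt.replace("〈Answer〉", answer)
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    text = response.choices[0].message.content
    # Per the required output format, the reasoning is followed by a line
    # containing 1 (correct) or 0 (incorrect); scan from the end to find it.
    for line in reversed(text.strip().splitlines()):
        if line.strip() in {"0", "1"}:
            return int(line.strip())
    return 0  # conservative default when the verdict line cannot be parsed
```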
Unlike other works that use a single, unified prompt to assess all video-question responses [26,45,48], we annotate specific rules for each video-question pair, ensuring an in-depth and specific understanding of the criteria.\nThe valid evaluation rules include a detailed description of the video content, question about the video, key points of the correct answer, and the required output format, as illustrated in Fig. 1. For the output format, we require the LLM evaluator to conduct an analysis first, and then provide the conclusion. Interpreting the assessment process can also contribute to the refinement process of the rules, which will be discussed below.\nTo guarantee the thoroughness of the rules, we employ an adversarial annotation process (shown in Fig. 3) conducted by a separate group of annotators, known as the red team. Initially, the annotation team establishes initial rules denoted as r 0 . For each k-th round of attack, the red team will endeavor to construct an answer, a k , causing erroneous judgments from an LLM evaluator (e.g. GPT-4) with the preceding rules r k-1 . Upon obtaining an adversarial answer a k , the annotation team must update their rules to a refined version, r k . The iterative attack-refine process will continue until the red team cannot identify any adversarial answer. The red team may construct attack answers through various methods, such as: 1) directly writing one based on their understanding of the rules; and 2) prompting LLMs to generate adversarial answers. During the refinement process, the annotators can utilize the interpretations of the LLM evaluator, which will help to quickly figure out why the evaluator gets confused and facilitate the production of more accurate rules. On average, there are 3 attack-refine iterations in the construction of rules in AutoEval-Video.\nFurthermore, to enhance the motivation and engagement of both the annotation team and the red team in confrontation, we devised a reward mechanism. For each instance, the annotation team initially has a full reward, while the red team starts with zero reward. If the red team successfully discovers an adversarial answer, they will earn a reward, while the annotation team will lose a portion of their reward. As the number of iterations increases, there will be a corresponding increase in rewards for successful attacks, as well as higher penalties for the annotation team. This approach not only encourages the red team to target mature rules, but also urges the annotation team to consistently refine their rules throughout every iteration, ensuring optimal results." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Experimental setup", "publication_ref": [ "b44", "b30", "b22", "b23", "b34", "b4" ], "table_ref": [], "text": "Models. Utilizing AutoEval-Video, we assess the performance of eight models, including five ImageLLMs (GPT-4V [43], LLaVA-1.5 [29], Qwen-VL [2], InstructBLIP [7], BLIP-2 [21]) and three VideoLLMs (VideoChat [22], Video-ChatGPT [32], ). For each video, we uniformly sample 8 or 16 frames, resulting in two different input configurations. In particular, for ImageLLMs, we concatenate the 8 or 16 frames to create a single image as visual input, as discussed in Sec. 7.1.\nLLM as an automatic evaluator. 
We adopt GPT-4 [33] as an automatic evaluator for AutoEval-Video. For each instance, GPT-4 generates an evaluation of the correctness of a given answer, using the instance-specific rules as the prompt, as demonstrated in Fig. 1. (An extended example for the race-car instance, "Why did the race car driver exit the vehicle?", is shown in Fig. 3: an adversarial answer attributes the fire to the driver's improper operation, the evaluator explains why this contradicts the video description and rejects it, and the initial rules list the key point "race car caught on fire" together with spelling and consistency requirements.)\nHuman baseline. We recruit three human participants to answer the questions in AutoEval-Video. Their overall average accuracy is 72.8%. By closely examining the mistakes made by humans, we identify two major challenges they face: 1) questions related to external knowledge, such as unfamiliar games, may lead to incorrect answers; 2) questions requiring precise memory, such as recalling multi-step processes in a video, may cause a person to overlook one or more steps. More examples are presented in Tabs. 8 and 10." }, { "figure_ref": [], "heading": "Performances on AutoEval-Video", "publication_ref": [], "table_ref": [], "text": "We compare 8 large vision-language models in Tab. 2. The findings show: 1) GPT-4V is much stronger than the other models; 2) among open-sourced models, VideoLLMs consistently outperform ImageLLMs; 3) GPT-4V still falls significantly short of human-level performance, reflecting the difficulty of our benchmark and the substantial headroom for further advancements.\nWe find that open-sourced models suffer from three typical categories of bad cases: 1) Hallucination: the described 
objects or events are not present in the video, leading to wrong answers; 2) Alignment: since AutoEval-Video includes a diverse range of queries, the models' responses often do not align with the question; 3) Weak LLM: many models cannot arrive at the correct answer because of their limited language generation capabilities. We provide examples for each category in Sec. 9.2. GPT-4V shows significant progress in these directions, but it still has fundamental issues, as discussed in Sec. 4.4." }, { "figure_ref": [], "heading": "GPT-4 Prompt", "publication_ref": [], "table_ref": [], "text": "GPT-4 Prompt | Test dataset: Models/Human | Test dataset: GPT-4V\ninstance-specific rules | 97.0 | 96.6\nw/o adversarial annotation | 91.5 | 88.1\nw/o rules (unified prompt) | 87.0 | 69.5\nHuman | 97.5 | 94.9\nTable 3. Ablation Study on Automatic Evaluators. We assess the evaluation accuracy of GPT-4 with instance-specific rules, initial rules, and unified prompts on responses generated by models/humans, as well as on responses exclusively generated by GPT-4V. In addition, we report the evaluation accuracy of a single human.\nBesides GPT-4V, VideoChat achieves the highest score. Interestingly, GPT-4V correctly answers only 32.7% of the instances that VideoChat answers correctly, indicating that VideoChat is stronger in certain domains. These domains are: 1) traffic-related tasks; 2) comparison between dynamic objects; and 3) giving video-specific responses rather than general, vague answers. In Tab. 14, we provide examples for each of these three domains.\nFinally, we notice that 8 frames are insufficient for answering certain questions. Therefore, we also evaluate models using 16 frames as input. Under this setting, GPT-4V improves significantly, reaching 32.2% accuracy, as discussed in Sec. 9.4. While longer inputs usually benefit ImageLLMs, VideoLLMs do not exhibit the same trend, possibly because they are explicitly trained in the 8-frame setting." }, { "figure_ref": [], "heading": "Accuracy of GPT-4 as automatic evaluator", "publication_ref": [], "table_ref": [], "text": "To validate GPT-4 as an automatic evaluator, we create a test dataset consisting of 200 randomly sampled (video, question, response) triplets, either model-generated or human-generated. We then employ three human experts to evaluate each response, using their collective decision as the ground truth. As shown in Tab. 3, with the evaluation rules annotated in AutoEval-Video, GPT-4 achieves 97% agreement with the ground truth, comparable to the 97.5% of a single human expert.\nWe perform an ablation study to investigate the necessity of our proposed annotation procedure. As reported in Tab. 3, if we use the initial version of the rules without adversarial annotation, the accuracy drops to 91.5%; using a unified prompt, as shown in Sec. 7.2, instead of instance-specific rules makes the accuracy drop further to 87%. This implies that both "instance-specific rules" and "adversarial annotation" are important.\nAs models get stronger, they make fewer simple mistakes (e.g., grammatical errors or obvious hallucinations) and generate longer chains of thought, so wrong answers become harder to distinguish from correct ones.\nTab. 3 shows that the accuracy of a single human, the initial rules, and the unified prompt all drop significantly on a more challenging subset of responses (50 responses exclusively generated by GPT-4V), while instance-specific rules maintain a relatively stable accuracy of 96.6%, highlighting their robustness for evaluating strong models."
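As a rough illustration of how such an agreement figure can be computed, the snippet below takes per-response verdicts from three human experts and from the GPT-4 evaluator and reports the fraction of responses on which GPT-4 matches the experts' majority decision. It is a sketch under assumptions: the record layout, field names, and toy numbers are hypothetical, not the benchmark's actual format or data.

```python
# Hypothetical layout: one record per judged response, with three expert
# verdicts (1 = correct, 0 = incorrect) and the GPT-4 evaluator's verdict.
from typing import Sequence


def majority(votes: Sequence[int]) -> int:
    """Collective decision of the experts: 1 if most of them vote 'correct'."""
    return 1 if sum(votes) * 2 > len(votes) else 0


def evaluator_agreement(records: list[dict]) -> float:
    """Fraction of responses where GPT-4's verdict matches the expert majority."""
    hits = sum(
        1 for r in records
        if r["gpt4_verdict"] == majority(r["expert_verdicts"])
    )
    return hits / len(records)


# Toy example (made-up numbers, not the paper's data):
records = [
    {"expert_verdicts": [1, 1, 0], "gpt4_verdict": 1},
    {"expert_verdicts": [0, 0, 0], "gpt4_verdict": 0},
    {"expert_verdicts": [1, 1, 1], "gpt4_verdict": 0},
]
print(f"agreement: {evaluator_agreement(records):.1%}")  # -> 66.7% on this toy set
```

The same routine applies unchanged whether the judged responses come from models or from humans, which is how the two test-set columns of Tab. 3 can be produced from one pass over the annotated verdicts.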
}, { "figure_ref": [], "heading": "Case Study for GPT-4V", "publication_ref": [ "b19", "b17" ], "table_ref": [], "text": "Since GPT-4V is the current state-of-the-art, we report a detailed error analysis for it on AutoEval-Video. Here is a list of typical hard cases. Understanding temporal features. In many cases, simply looking at a couple of isolated frames isn't enough to answer the question. Take instance 200 in Tab. 4, with its moving camera revealing the layout of a house. The model needs to understand the temporal ordering of frames to infer the house's spatial design to answer \"When I'm at the entrance, how do I get to the balcony that has three decorations on the table?\". Moreover, in instance 305 in Tab. 17, the speed at which things change (like \"in an instant\") is a temporal feature. Most vision-language models, including GPT-4V, struggle to extract temporal features from many continuous frames.\nConnecting multiple frames. Many incorrect responses are due to GPT-4V tending to reason on a single frame, ignoring the connection between frames. In Tab. 4, instance 43 demonstrates this fact. Another interesting example is instance 116 in Tab. 18. In describing a sequence of actions, GPT-4V mentions a certain action many times because it appears in many frames, although the correct answer should do it just once.\nGPT-4V also struggles to reason between scenes. For example, in instance 112 of Tab. 4, the first scene features a deer in third-person view gradually opening its eyes. The second scene shows an approaching train from the deer's first-person view. To answer \"Why does this orange animal have its eyes wide open?\", the model must connect these two events, but GPT-4V failed. There are more examples in Tab. 19 supporting this observation.\nMoving objects. Another group of hard cases involves moving objects, such as comparing the speed of two cars. As shown in Tab. 4, instance 191 records a police car overtaking a firefighting truck. However, GPT-4V perceives all frames as semantically similar and is unable to identify the faster vehicle. We provide more examples in Tab. 20.\nOverly general responses. GPT-4V sometimes answers questions in a general context rather than addressing the specific content of given videos. For instance, in instance 97 shown in Tab. 4, GPT-4V answers \"Why are the audience members standing up?\" in a general way, even though there is an obvious and concrete reason. Similarly, the instance 197 in Tab. 16 gives a general guide for operating a coffee machine even though the video demonstrates a specific one. It implies that GPT-4V tends to rely on its strong language model to produce a safe answer, sometimes ignoring key details in the video." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce AutoEval-Video, a novel and challenging benchmark for comprehensively assessing large vision-language models in open-ended video question answering. Utilizing an LLM evaluator, models can be automatically and accurately evaluated on AutoEval-Video. In particular, GPT-4V achieves the highest accuracy of 32.2%, still far from human accuracy of 72.8%. Through an extensive case study, we identify several limitations of current models' capabilities in video understanding, such as temporal and dynamic comprehension, and overly general responses. These findings will shape our future research direction. 
" }, { "figure_ref": [], "heading": "AutoEval-Video", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Details of the Experimental Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Input Format of ImageLLMs", "publication_ref": [], "table_ref": [], "text": "For ImageLLMs, we combine 8 or 16 frames into a single image as visual input for the models. The concatenation approach is determined base on the image pre-processing method used by the model. Specifically, for all models except LLaVA-1.5, we horizontally concatenate eight frames into a single image for the 8-frame setting and organize the 16 frames in a 4 × 4 grid pattern within a single image for the 16-frame setting.\nFor LLaVA-1.5, there is a center-crop in its image preprocessing stage. To accommodate this, we first organize 8 frames and 16 frames into a 2 × 4 format and a 4 × 4 format, respectively, on a single image. Then, black padding is added to the shorter sides of the image, transforming it into a square. This ensures that the information in all frames remains intact." }, { "figure_ref": [], "heading": "Unified Prompt for GPT-4 Evaluator", "publication_ref": [], "table_ref": [], "text": "We perform an ablation study on automatic evaluators in Sec. 4.3. The unified prompt used is as follows:\nBelow, I will present you with a question, a reference answer, and an answer to be judged. The question will be labeled with " }, { "figure_ref": [], "heading": "Performances Across Nine Skill Dimensions in AutoEval-Video", "publication_ref": [], "table_ref": [], "text": "In this section, we investigate performance of models across different skill dimensions in AutoEval-Video. The results are shown in Tab. 6 for 8-frame setting and Tab. 7 for 16frame setting, respectively. The findings reveal that present large vision-language exhibit limited performance across all nine skill dimensions. In addition, for 8-frame setting, GPT-4V significantly outperforms other models on most skill dimensions, except for comparison reasoning and counterfactual reasoning. At last, increasing the number of input frames for GPT-4V brings improvement for all skill dimensions. These results are also supported by the case study in Sec. 9." }, { "figure_ref": [], "heading": "Case Study", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Hard Cases for Human", "publication_ref": [], "table_ref": [], "text": "Humans achieve an accuracy rate of 72.8% on AutoEval-Video. We conduct an in-depth analysis of these cases and discover that there are two major hard cases for human: questions related to external knowledge and questions requiring accurate description and summarizing ability.\nQuestions Related to External Knowledge. Some questions in AutoEval-Video require a relatively high level of mastery of external knowledge in specific domains such as sports and games, resulting in human difficulties. For example, in Tab. 8, instance 38 shows a misunderstanding due to human participant not knowing basketball scoring rules. Similarly, instance 133 requires explaining a game streamer's laughter, an easy task for someone familiar with the game but confusing for others. However, considering the cognitive level of the general public, these shortcomings are reasonable and expected.\nAccurate Description and Summarizing Ability. 
In addition, questions requiring detailed, step-by-step video descriptions are more challenging for humans compared to semantic reasoning problems due to their complexity and precision. For instance, as shown in Tab. 10, in instance 173, human makes careless mistakes by ignoring necessary CPU installation steps shown in the video. In instance 143, errors occur due to insufficient observation, misidentifying corn kernels as soybeans. This phenomenon is common in many step-by-step descriptive tasks." }, { "figure_ref": [], "heading": "Hard Cases for Open-Sourced Models", "publication_ref": [], "table_ref": [], "text": "In this subsection, we provide examples of three typical categories of bad cases generated by open-sourced models: hallucination, alignment, and weak LLM. These examples can be found in Tab. 11, Tab. 12 and Tab. 13, respectively." }, { "figure_ref": [], "heading": "Case Study for VideoChat", "publication_ref": [], "table_ref": [], "text": "Besides GPT-4V, VideoChat achieves the highest score. Furthermore, GPT-4V accurately answers just 32.7% of cases where VideoChat is correct. Examining their responses on AutoEval-Video reveals that VideoChat outperforms GPT-4V in three areas: 1) traffic-related tasks; 2) comparisons involving dynamic objects; and 3) generating video-specific responses rather than general, vague answers. See Tab. 14 for relevant examples." }, { "figure_ref": [], "heading": "16-Frame vs. 8-Frame", "publication_ref": [], "table_ref": [], "text": "By increasing the visual input to 16 frames, GPT-4V achieves significant improvement compared to the 8-frame setting. In Tab. 15, we provide examples of instances where questions cannot be accurately answered with an 8-frame" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Question: What was the score of the team in black in the video footage?\n[ID :38] Sample Answer: The team in black scored eight points. Human: The black team scores 3 points in the video.\nQuestion: How did the character detect the enemy? [ID:132]\nSample Answer: The enemy shot the character so the enemy's location could be known.\nHuman: The character used the scope on the sniper rifle to clearly observe distant enemies.\nQuestion: Why did this person initially appear stunned and then suddenly burst into laughter? [ID:133] Sample Answer: Because the person threw grenades to kill the enemy but knocked a teammate down accidentally. Human: Because she didn't expect that the bomb she threw would explode and harm someone. input, but can be resolved correctly when utilizing a 16frame input." }, { "figure_ref": [], "heading": "Case Study for GPT-4V", "publication_ref": [ "b17" ], "table_ref": [], "text": "Since GPT-4V is state-of-the-art for now, we present a comprehensive error analysis for it on AutoEval-Video. In Tab. 9, we categorize the challenging cases encountered by GPT-4V for various reasons. The primary case for the difficulty is related to the insufficient number of frames. Another significant issue is its tendency to produce overly general responses, as demonstrated in Tab. 16. Furthermore, GPT-4V struggles with temporal and dynamic comprehension, including understanding of continuous visual features (Tab. 17), understanding across multiple frames (Tab. 18\nand Tab. 19), and moving objects (Tab. 
20).\nQuestion: How can you prepare a dish using these ingredients and tools as demonstrated in the video?\n[ID:143] Sample Answer: Cut the potatoes, beets, purple cabbage, carrots, mushrooms, and bell peppers, put them in the steamer with the corn kernels, cut the gluten into strips, heat the steamer on the gas stove, and put the gluten into the air fryer.\nHuman: The kitchen utensils needed for preparation include a cutting board, a chef's knife, a small steamer pot, and an air fryer.\nThe preparation process for the ingredients is as follows: 1. Cut the potatoes into small chunks. 2. Slice the leafy greens into segments. 3. Chop the purple cabbage. 4. Slice the carrots into thick pieces. 5. Halve the white mushrooms. 6. Slice the bell peppers. 7. Prepare a small bowl of soaked soybeans. 8. Cut the blanched beef pieces into strips. Finally, Place the vegetables from steps 1-7 into the steamer pot, and place the beef strips from step 8 into the air fryer.\nQuestion: How to install a CPU on a motherboard as demonstrated in the video? The components used include: a CPU and a motherboard. Sample Answer: First, put the pillow at the head of the bed; Second, put the clothes that need to be washed on the floor, and put the clothes that do not need to be washed on the chair; Third, pile up all the debris except for sheets quilts, and pillows on the floor; Fourth, after arranging the sheets and quilts, place the doll bear on the side near the head of the bed." }, { "figure_ref": [], "heading": "Human:", "publication_ref": [], "table_ref": [], "text": "The order in which this person organizes the room is as follows: 1. Neatly arrange multiple pillows at the head of the bed. 2. Gather multiple items left on the bed onto a chair. 3. Remove other items from the bed, such as a large teddy bear and a white cushion. 4. Straighten the bedsheet to make it look even. 5. Place the teddy bear back at the head of the bed. Question: What occurred in the cup?\n[ID:169]\nSample Answer: The ice melted.\nLLaVA-1.5: In the cup, there is a small amount of water that has begun overflowing, and at the same moment, the cup contains only a drop of water.\nInstructBLIP: The liquid in the cup was poured from the left side to the right side.\nQuestion: Describe the two extra features of this vacuum cleaner sequentially.\n[ID:294]\nSample Answer: The vacuum cleaner can be paired with two unique attachments, each serving a distinct function: first, when paired with the long nozzle attachment, it can clean up small debris on carpets and sofas; second, when paired with the brush attachment, it can remove dust from plush toys." }, { "figure_ref": [], "heading": "Qwen-VL:", "publication_ref": [], "table_ref": [], "text": "The vacuum cleaner has a powerful motor, which helps in cleaning the carpet efficiently and removing dirt, dust and debris. It also has a HEPA filter that captures allergens and pollen from the air, making it an ideal choice for people with allergies.\nLLaVA Question: Who are the potatoes in the bowl intended for?\n[ID:208]\nSample Answer: The monkey.\nVideo-LLaMA: In the first scene, we see a young man holding a bowl of potatoes on a wooden table in a kitchen. The table has a white countertop and a wooden floor. The man is wearing a red shirt and holding the bowl of potatoes with his right hand. We can also see a silver spoon in the bowl.\nInstructBLIP: mashed.\nQuestion: The video features three animals: fish, birds, and cats. Please sequence them in the order of their appearance. 
[ID :33] Sample Answer: Birds, cats, and fish.\nVideo-ChatGPT: We can't provide an exact sequence of the animals based on the image description alone. However, what is evident is that there are three different animals present in the video. Since the video does not provide a clear order, we do not have any further information on the animals' appearance in the video." }, { "figure_ref": [], "heading": "BLIP-2:", "publication_ref": [], "table_ref": [], "text": "The sequence should be as follows: fish, birds, cats, fish, birds, cats, fish, birds, cats, fish, birds...\nLLaVA-1.5: Fish, birds, cats, birds, cats, fish.\nQuestion: What is the first thing the person cuts after slicing this red pepper?\nSample Answer: The onion.\nVideoChat: The person firstly cuts the red pepper into slices.\nVideo-LLaMA: The person firstly cuts the red pepper with a knife on a cutting board.\nTable 12. Examples of model responses on AutoEval-Video that do not adhere to the given questions.\nQuestion: Describe the two extra features of this vacuum cleaner sequentially.\n[ID:294]\nSample Answer: The vacuum cleaner can be paired with two unique attachments, each serving a distinct function: first, when paired with the long nozzle attachment, it can clean up small debris on carpets and sofas; second, when paired with the brush attachment, it can remove dust from plush toys. Question: What is the correct action for this car that's waiting to turn left? VideoChat: In the video, the police car is going slower than the ambulance." }, { "figure_ref": [], "heading": "GPT-4V:", "publication_ref": [], "table_ref": [], "text": "The image appears to be a still frame or a sequence of similar frames, so it's not possible to determine the speed of the cars based on this image alone. To accurately determine which car is going slower, you would need to see a video or have more context." }, { "figure_ref": [], "heading": "Question: What is the relationship between these individuals? [ID:26]", "publication_ref": [], "table_ref": [], "text": "Sample Answer: They are family.\nVideoChat: It appears that the people in the video are a family, possibly a mother, father, and daughter." }, { "figure_ref": [], "heading": "GPT-4V:", "publication_ref": [], "table_ref": [], "text": "The image appears to depict multiple individuals in a home setting, likely engaging in a shared activity or celebration. The exact relationship between the individuals cannot be definitively determined based on the image alone. They could be family members, friends, or any other type of acquaintances. However, the intimate and friendly nature of their interactions suggests they have a close relationship, potentially as family or good friends.\nTable 14. VideoChat vs. GPT-4V: VideoChat outperforms GPT-4V on three domains, traffic-related tasks (instance 105), comparison between dynamic objects (instance 191), and generating video-specific responses rather than general answers (instance 26).\nQuestion: What action did the individual take towards the flower after he opened the book?\n[ID:49] 8 frames as visual input: 16 frames as visual input:\nSample Answer: Put the flowers in a book and press the book. 
GPT-4V (8-frame): The sequence of images shows an individual at a table with a book and a flower lying next to it.From left to right, the sequence depicts the person opening the book, turning pages, then picking up the flower and appearing to smell it after the book has been opened.The final image shows the person with their head bowed down towards the flower held in their hand, suggesting the action of smelling the flower. GPT-4V (16-frame): The series of images depict a sequence of actions. The individual in the images appears to be engaging in a craft or a demonstration. After opening the book, he have taken a flower, placed it within the pages of the book, and then closed the book, likely to press the flower for preservation or as part of a craft project. This is a common method for pressing and drying flowers, which can later be used for art, decoration, or keepsakes.\nQuestion: How did the direction of the car change?\n[ID:61] 8 frames as visual input: 16 frames as vidual input:\nSample Answer: The car went straight and then turned left. GPT-4V (8-frame): It appears you've provided a panoramic or 360-degree type image that has been stretched into a flat perspective, showing multiple repeated views of the inside of a car, most likely a Tesla considering the logo on the steering wheel and the characteristic dashboard design.However, due to the nature of the image, it's challenging to ascertain the direction of the car's movement with certainty. The panoramic stretching distorts the scene and repeats it, which does not provide a reliable sense of direction change.In typical images or video, one might look for changes in the car's orientation relative to the road or the movement of the steering wheel to determine a direction change. However, in this image, there's no clear evidence that can be used to determine how the car's direction may have changed. The image would need to be in a standard perspective to make such an assessment. GPT-4V (16-frame): The series of images shows the interior view of a vehicle from the driver's perspective, specifically a Tesla, as evidenced by the logo on the steering wheel. Across the sequence of images, we can see the vehicle's progression as it makes a left turn: 1.Initial Frames (Top Row): The car starts off going straight on a road. 2.Middle Frames (Middle Rows): As we progress through the frames, we see that the driver turns the steering wheel to the left, initiating the turn. 3.Final Frames (Bottom Row): The car has completed much of the turn.The steering wheel is returning to the neutral position as the car aligns with the new direction of the road after the turn. The environment outside the vehicle changes consistently with the progression of a left turn, starting with houses on the side and transitioning to them being visible through the rear as the turn progresses.The motion is smooth, indicating a continuous action, which suggests these frames capture a sequence from a video of a car turning left at an intersection or onto another road. Table 15. 16-Frame vs. 8-Frame: Examples of instances in which GPT-4V correctly responds only when utilizing 16 frames as visual input.\nQuestion: Why did the chicken turn into an egg?\n[ID :36] Sample Answer: Because these two guys were shining a machine at the chicken." }, { "figure_ref": [], "heading": "GPT-4V:", "publication_ref": [], "table_ref": [], "text": "The images you provided appear to be from a cartoon, possibly referencing a comedic or fictional scenario. 
In cartoons and animated series, various humorous and impossible situations can be presented for comedic effect. The concept of a chicken turning into an egg might be a play on the age-old question: \"Which came first, the chicken or the egg?\" Without specific context from the show, it's hard to pinpoint the exact reason, but it's likely intended as a humorous twist on that question.\nQuestion: Why is the individual on the left heading in that particular direction?\n[ID:266] Sample Answer: Because the person on the left wants to confirm that the person on the right is not injured.\nGPT-4V: Based on the image, I can't determine the specific reason why the person on the left is walking in that direction. People might walk for various reasons, such as exploring, heading towards a destination, or simply enjoying the outdoors. Without additional context, it's hard to say for certain.\nQuestion: Please provide step-by-step instructions on how to operate this coffee machine as demonstrated in the video. [ID:197] Sample Answer: First, select the double filter just put that in the handle; Second, get our grind size towards the finer end; Third, as we've used the double filter, select the double filter size and activate the grinder, simply bump and release; Four, tap and get that coffee into the filter; Five, grab our tamp, give it a tap; Six, use razor scraper tool to scrape off the excess coffee to ensure that we've got our volumes of coffee right built for this machine; Seven, insert the handle and extract; Eight, place the cup just below the outlet.\nGPT-4V: Certainly! Based on the series of images provided, here are the steps on how to use the coffee machine: 1. Preparation: Start by making sure the coffee machine is clean, plugged in, and turned on. Check if there's enough water in the reservoir. 2. Coffee Beans: Place the coffee beans in the grinder hopper on top of the machine. 3. Grinding: Adjust the grind setting to your preference. Some machines have multiple settings, ranging from fine to coarse. Once set, activate the grinder to grind the beans.\nThe ground coffee will collect in a portafilter or container. 4. Tamping: Once you have enough ground coffee in the portafilter, use a tamper to press down the coffee grounds. This ensures even water distribution during brewing and a consistent espresso shot. Apply firm and even pressure while tamping. 5. Inserting the Portafilter: Attach the portafilter to the machine by slotting it into the group head. Ensure it's securely locked in place. 6. Brewing: Press the appropriate button or turn the lever to start the brewing process. Hot water will pass through the coffee grounds, extracting flavors and pouring the coffee into the cup below. 7. Steam Wand (If Applicable): If you want to froth milk for a cappuccino or latte, fill a stainless steel jug with cold milk. Place the steam wand into the milk and activate it. This will introduce steam into the milk, causing it to froth and increase in volume. Move the jug in an up and down motion to get even frothing. 8. Pouring Serving: Once you have your brewed coffee and frothed milk (if applicable), pour them into your cup to create your desired coffee drink. 9. Cleaning: After use, ensure you clean the machine parts, such as the portafilter, tamper, and steam wand, to maintain the machine and ensure the best coffee taste. Remember, it's always a good idea to refer to the machine's user manual for specific instructions and maintenance tips.\nTable 16. 
Examples of incorrect answers from GPT-4V demonstrate its tendency to provide answers in a general context rather than specifically addressing the content of the video.\nQuestion: In the video, how many times do the two people pass the ball in total? Sample Answer: First, sprinkle the seasoning on the chopped pork sausage; Second, sprinkle the cheese on the chopped pork sausage; Third, stir the above well; Fourth, place the stirred ingredients on the cut pepper.\nGPT-4V: The sequence of images depicts a person preparing a dish. Here's the order of the steps: 1. The person is holding a bowl filled with some ingredients. 2. They begin mixing the contents of the bowl. 3. The mixing continues as the contents appear to blend together. 4. The person is still mixing, ensuring the ingredients are well-combined. 5. The mix seems to have a consistent texture now. 6. Finally, the person takes a small amount of the mixture, possibly for tasting or shaping. Note: This description is based on the images provided and does not make any assumptions about the specific dish being prepared or the person's intentions. Question: Discuss the changes in Super Mario's ranking in this video.\n[ID:192] Sample Answer: Super Mario ran from last to first.\nGPT-4V: The image provided seems to be a sequence or snapshot from a game that features Super Mario participating in a race, likely a reference to the \"Mario Sonic at the Olympic Games\" series or something similar. From what can be observed: 1. In the first segment, Mario holds the 1st place. 2. In the second segment, Mario maintains his lead, still in 1st place. 3. By the third segment, Mario appears to have fallen slightly behind, taking the 2nd position. 4. In the fourth segment, Mario manages to regain his lead and is back to the 1st position. 5. In the fifth segment, Mario continues to hold onto the 1st position. 6. In the last segment, Mario still remains in the 1st position. Overall, it's evident that Mario held a strong position throughout the race, primarily staying in 1st place. He did face some competition in the third segment, where he was temporarily overtaken but quickly managed to reclaim his lead. This shows Mario's persistence and determination in the race! Question: In this competition, there are four participants, each wearing white, yellow, red, and blue helmets respectively. Which competitor outshines the rest? [ID:247] Sample Answer: Contestant with blue helmet.\nGPT-4V: From the sequence of images provided, it appears that the rider with the white helmet is ahead of the riders with the yellow and red helmets in most of the frames. Therefore, the player wearing the white helmet outperformed the two opponents wearing yellow and red helmets. GPT-4V: Based on the provided image, it's not possible to determine which car (or bus and tractor, in this case) is faster. The image is static and does not provide any information about their speeds. You would need a video or additional data to determine which one is faster. " } ]
We propose a novel and challenging benchmark, AutoEval-Video, to comprehensively evaluate large visionlanguage models in open-ended video question answering. The comprehensiveness of AutoEval-Video is demonstrated in two aspects: 1) AutoEval-Video constructs open-ended video-questions across 9 skill dimensions, addressing capabilities of perception, comprehension, and generation. 2) AutoEval-Video contains newly collected videos that cover over 40 distinct themes. To efficiently evaluate responses to the open-ended questions, we employ an LLM-based evaluation approach, but instead of merely providing a reference answer, we annotate unique evaluation rules for every single instance (video-question pair). To maximize the robustness of these rules, we develop a novel adversarial annotation mechanism. By using instance-specific rules as prompt, GPT-4, as an automatic evaluator, can achieve a stable evaluation accuracy of around 97.0%, comparable to the 94.9% -97.5% accuracy of a human evaluator. Furthermore, we assess the performance of eight large visionlanguage models on AutoEval-Video. Among them, GPT-4V(ision) significantly outperforms other models, achieving an accuracy of 32.2%. However, there is still substantial room for improvement compared to human accuracy of 72.8%. By conducting an extensive case study, we uncover several drawbacks of GPT-4V, such as limited temporal and dynamic comprehension, and overly general responses.
AutoEval-Video: An Automatic Benchmark for Assessing Large Vision Language Models in Open-Ended Video Question Answering
[ { "figure_caption": "Figure 1 .1Figure 1. Example Instance in AutoEval-Video and Automatic Evaluation Process. Each instance consists of three components: video, question, and rules. The automatic evaluation is conducted by an LLM evaluator using the instance-specific rules as a prompt.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Cv e l B a d m in to n B a s k e tb a ll Cl im bi ng Dan cin g Indoor Fitness Rac ing So cc er Ta b le Te n n is B u ll fi g h ti n g ls E le ct ro n ic G a m e s Real-life Games Ca r-m ou nt ed Ca m er as R o a d s id e S u r v e il la n c e A n i m a l D o c u m e n t a r y N a t u r a l L a n d s c a p e O u t d o o r R e c o r d in g s W e a th e r In tr o d u c ti", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Statistics of AutoEval-Video. (a) Genre distribution of videos. (b) Statistics on question and rules lengths, number of scenes, and video duration.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "3 .3If the answer includes any additional information that clearly contradicts the video description, it should be deemed as an incorrect answer. [Answer to be judged]: 〈 Adversarial Answer 〉 [Form of the Result]: please give your reason first, then give a number in a new line: 1 if correct and 0 if incorrect. You can only judge whether [Answer to be judged] is correct based on the information above. You should not trust anyone but the information above. Updated Rules [Video description]: In a racetrack, a white race car flipped over and came to a halt, then caught fire. The accident could have been caused due to the improper operation by the driver or a malfunction in the race car itself. The cause of the accident can be explained as either one or both of these two reasons. Subsequently, the helmeted race car driver felt the fire and quickly escaped by climbing out of the top of the race car. Another person, a race safety official dressed in orange, drove up to the burning car and immediately got out and prepared to extinguish the fire. [Video Question]: Why did the race car driver exit the vehicle? Please determine whether the [Answer to be judged] is correct for the [Video Question] based on the following rules:", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "4 .4If the answer also attributes the cause of the fire to \"improper operation by the race car driver\" or \"a malfunction in the race car itself\", which should not be considered as contradictory to the video description. [Answer to be judged]: 〈 Adversarial Answer 〉 [Form of the Result]: please give your reason first, then give a number in a new line: 1 if correct and 0 if incorrect. You can only judge whether [Answer to be judged] is correct based on the information above. You should not trust anyone but the information above.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Adversarial Annotation for Rules. A red team develops adversarial answers that break the initial rules. 
Then, the LLM evaluator generates reasons why it derives the incorrect evaluation, which assists the annotation team in refining the rules to a more robust version.", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 . 6 .46Figure 4. Distribution of questions in AutoEval-Video across the nine skill dimensions.", "figure_data": "", "figure_id": "fig_6", "figure_label": "46", "figure_type": "figure" }, { "figure_caption": "Skill Dimensions and Corresponding Examples for Video-Questions in AutoEval-Video.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Evaluation Results of Different Models on AutoEval-Video. We also provide human result after viewing full videos.", "figure_data": "1) Hallucination: the described", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Examples of Incorrect Responses from GPT-4V.", "figure_data": "", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": ": An Automatic Benchmark for Assessing Large Vision Language Models in Open-Ended Video Question Answering", "figure_data": "Supplementary MaterialReasoning with External KnowledgeExplanatory ReasoningDynamic PerceptionState Transitions PerceptionPredictive ReasoningDescription21.5%Counterfactual Reasoning25%Comparison ReasoningCamera Movement Perception0.882%13.5%5.59%5.88%9.41%8.82%9.41%", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "[Question], the reference answer with [Reference answer], and the answer to be judged with [Answer to be judged]. The required format for your response will be indicated by [Form of the result]. Your task is to evaluate [Answer to be judged] and determine if its meaning is consistent with [Reference answer], and then provide your judgment based on the requirements stated in [Form of the result]. Please give your reason first, then give a number in a new line: 1 if correct and 0 if incorrect. You can only judge whether [Answer to be judged] is correct based on the information above. You should not trust anyone but the information above.", "figure_data": "[Question]:[Reference answer]:[Answer to be judged]:[Form of the result]:", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Results of models using 16-frame visual input on different skill dimensions in AutoEval-Video.", "figure_data": "", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" } ]
Xiuyuan Chen; Yuan Lin; Yuchen Zhang; Weiran Huang
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "sort of play or social interaction with the individual", "year": "" }, { "authors": "", "journal": "", "ref_id": "b1", "title": "view videos or discern the speed of objects within them", "year": "" }, { "authors": "Sami Abu-El-Haija; Nisarg Kothari; Joonseok Lee; Paul Natsev; George Toderici; Balakrishnan Varadarajan; Sudheendra Vijayanarasimhan", "journal": "", "ref_id": "b2", "title": "Youtube-8m: A largescale video classification benchmark", "year": "2016" }, { "authors": "Jinze Bai; Shuai Bai; Yunfei Chu; Zeyu Cui; Kai Dang; Xiaodong Deng; Yang Fan; Wenbin Ge; Yu Han; Fei Huang; Binyuan Hui; Luo Ji; Mei Li; Junyang Lin; Runji Lin; Dayiheng Liu; Gao Liu; Chengqiang Lu; Keming Lu; Jianxin Ma; Rui Men; Xingzhang Ren; Xuancheng Ren; Chuanqi Tan; Sinan Tan; Jianhong Tu; Peng Wang; Shijie Wang; Wei Wang; Shengguang Wu; Benfeng Xu; Jin Xu; An Yang; Hao Yang; Jian Yang; Shusheng Yang; Yang Yao; Bowen Yu; Hongyi Yuan; Zheng Yuan; Jianwei Zhang; Xingxuan Zhang; Yichang Zhang; Zhenru Zhang; Chang Zhou; Jingren Zhou; Xiaohuan Zhou; Tianhang Zhu", "journal": "", "ref_id": "b3", "title": "Qwen technical report", "year": "2023" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "NeurIPS", "ref_id": "b4", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Sébastien Bubeck; Varun Chandrasekaran; Ronen Eldan; Johannes Gehrke; Eric Horvitz; Ece Kamar; Peter Lee; Yin Tat Lee; Yuanzhi Li; Scott Lundberg", "journal": "", "ref_id": "b5", "title": "Sparks of artificial general intelligence: Early experiments with gpt-4", "year": "2023" }, { "authors": "Shyamal Buch; Cristóbal Eyzaguirre; Adrien Gaidon; Jiajun Wu; Li Fei-Fei; Juan Carlos Niebles", "journal": "", "ref_id": "b6", "title": "Revisiting the \"video\" in video-language understanding", "year": "2022" }, { "authors": "Fabian Caba Heilbron; Victor Escorcia; Bernard Ghanem; Juan Carlos Niebles", "journal": "", "ref_id": "b7", "title": "Activitynet: A large-scale video benchmark for human activity understanding", "year": "2015" }, { "authors": "Wenliang Dai; Junnan Li; Dongxu Li; Anthony Meng; Huat Tiong; Junqi Zhao; Weisheng Wang; Boyang Li; Pascale Fung; Steven Hoi", "journal": "", "ref_id": "b8", "title": "Instructblip: Towards generalpurpose vision-language models with instruction tuning", "year": "2023" }, { "authors": "Liting Heng Fan; Fan Lin; Peng Yang; Ge Chu; Sijia Deng; Hexin Yu; Yong Bai; Chunyuan Xu; Haibin Liao; Ling", "journal": "", "ref_id": "b9", "title": "Lasot: A high-quality benchmark for large-scale single object tracking", "year": "2019" }, { "authors": "Jinlan Fu; See-Kiong Ng; Zhengbao Jiang; Pengfei Liu", "journal": "", "ref_id": "b10", "title": "Gptscore: Evaluate as you desire", "year": "2023" }, { "authors": "Raghav Goyal; Samira Ebrahimi Kahou; Vincent Michalski; Joanna Materzynska; Susanne Westphal; Heuna Kim; Valentin Haenel; Ingo Fruend; Peter Yianilos; Moritz Mueller-Freitag", "journal": "", "ref_id": "b11", "title": "The \"something something\" video database for learning and evaluating visual common sense", "year": "2017" }, { "authors": "Yash Goyal; Tejas Khot; Douglas Summers-Stay; Dhruv Batra; Devi Parikh", "journal": "", "ref_id": "b12", "title": "Making the v in vqa matter: Elevating the role of image understanding in visual question answering", "year": "2017" }, { "authors": "Kristen Grauman; 
Andrew Westbury; Eugene Byrne; Zachary Chavis; Antonino Furnari; Rohit Girdhar; Jackson Hamburger; Hao Jiang; Miao Liu; Xingyu Liu", "journal": "", "ref_id": "b13", "title": "Ego4d: Around the world in 3,000 hours of egocentric video", "year": "2022" }, { "authors": "Madeleine Grunde-Mclaughlin; Ranjay Krishna; Maneesh Agrawala", "journal": "", "ref_id": "b14", "title": "Agqa: A benchmark for compositional spatio-temporal reasoning", "year": "2021" }, { "authors": "Chunhui Gu; Chen Sun; David A Ross; Carl Vondrick; Caroline Pantofaru; Yeqing Li; Sudheendra Vijayanarasimhan; George Toderici; Susanna Ricco; Rahul Sukthankar", "journal": "", "ref_id": "b15", "title": "Ava: A video dataset of spatio-temporally localized atomic visual actions", "year": "2018" }, { "authors": " De-An; Vignesh Huang; Dhruv Ramanathan; Lorenzo Mahajan; Manohar Torresani; Li Paluri; Juan Carlos Fei-Fei; Niebles", "journal": "", "ref_id": "b16", "title": "What makes a video a video: Analyzing temporal information in video understanding models and datasets", "year": "2018" }, { "authors": "Will Kay; Joao Carreira; Karen Simonyan; Brian Zhang; Chloe Hillier; Sudheendra Vijayanarasimhan; Fabio Viola; Tim Green; Trevor Back; Paul Natsev", "journal": "", "ref_id": "b17", "title": "The kinetics human action video dataset", "year": "2017" }, { "authors": "Ranjay Krishna; Kenji Hata; Frederic Ren; Li Fei-Fei; Juan Carlos Niebles", "journal": "", "ref_id": "b18", "title": "Dense-captioning events in videos", "year": "2017" }, { "authors": "Hildegard Kuehne; Hueihan Jhuang; Estíbaliz Garrote; Tomaso Poggio; Thomas Serre", "journal": "IEEE", "ref_id": "b19", "title": "Hmdb: a large video database for human motion recognition", "year": "2011" }, { "authors": "Jie Lei; Tamara L Berg; Mohit Bansal", "journal": "", "ref_id": "b20", "title": "Revealing single frame bias for video-and-language learning", "year": "2022" }, { "authors": "Bohao Li; Rui Wang; Guangzhi Wang; Yuying Ge; Yixiao Ge; Ying Shan", "journal": "", "ref_id": "b21", "title": "Seed-bench: Benchmarking multimodal llms with generative comprehension", "year": "2023" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b22", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Kunchang Li; Yinan He; Yi Wang; Yizhuo Li; Wenhai Wang; Ping Luo; Yali Wang; Limin Wang; Yu Qiao", "journal": "", "ref_id": "b23", "title": "Videochat: Chat-centric video understanding", "year": "2023" }, { "authors": "Yifan Li; Yifan Du; Kun Zhou; Jinpeng Wang; Wayne Xin Zhao; Ji-Rong Wen", "journal": "", "ref_id": "b24", "title": "Evaluating object hallucination in large vision-language models", "year": "2023" }, { "authors": "Percy Liang; Rishi Bommasani; Tony Lee; Dimitris Tsipras; Dilara Soylu; Michihiro Yasunaga; Yian Zhang; Deepak Narayanan; Yuhuai Wu; Ananya Kumar", "journal": "", "ref_id": "b25", "title": "Holistic evaluation of language models", "year": "2022" }, { "authors": "Chin-Yew Lin", "journal": "", "ref_id": "b26", "title": "Rouge: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Yen-Ting Lin; Yun-Nung Chen", "journal": "", "ref_id": "b27", "title": "Llm-eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models", "year": "2023" }, { "authors": "Daizong Liu; Xiaoye Qu; Wei Hu", "journal": "ACM MM", "ref_id": "b28", "title": "Reducing the vision and language bias 
for temporal sentence grounding", "year": "2022" }, { "authors": "Fuxiao Liu; Kevin Lin; Linjie Li; Jianfeng Wang; Yaser Yacoob; Lijuan Wang", "journal": "", "ref_id": "b29", "title": "Aligning large multi-modal model with robust instruction tuning", "year": "2023" }, { "authors": "Haotian Liu; Chunyuan Li; Yuheng Li; Yong Jae Lee", "journal": "", "ref_id": "b30", "title": "Improved baselines with visual instruction tuning", "year": "2023" }, { "authors": "Haotian Liu; Chunyuan Li; Qingyang Wu; Yong Jae Lee", "journal": "", "ref_id": "b31", "title": "Visual instruction tuning", "year": "2023" }, { "authors": "Yang Liu; Dan Iter; Yichong Xu; Shuo Wang; Ruochen Xu; Chenguang Zhu", "journal": "", "ref_id": "b32", "title": "G-eval: Nlg evaluation using gpt-4 with better human alignment", "year": "2023" }, { "authors": "Muhammad Maaz; Hanoona Rasheed; Salman Khan; Fahad Shahbaz Khan", "journal": "", "ref_id": "b33", "title": "Video-chatgpt: Towards detailed video understanding via large vision and language models", "year": "2023" }, { "authors": " Openai", "journal": "", "ref_id": "b34", "title": "", "year": "2023" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b35", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Lucas Viorica Pȃtrȃucean; Ankush Smaira; Adrià Gupta; Larisa Recasens Continente; Dylan Markeeva; Skanda Banarse; Joseph Koppula; Mateusz Heyward; Yi Malinowski; Yang", "journal": "", "ref_id": "b36", "title": "Perception test: A diagnostic benchmark for multimodal video models", "year": "2023" }, { "authors": "Khurram Soomro; Mubarak Amir Roshan Zamir; Shah", "journal": "", "ref_id": "b37", "title": "Ucf101: A dataset of 101 human actions classes from videos in the wild", "year": "2012" }, { "authors": "Zhiqing Sun; Sheng Shen; Shengcao Cao; Haotian Liu; Chunyuan Li; Yikang Shen; Chuang Gan; Liang-Yan Gui; Yu-Xiong Wang; Yiming Yang", "journal": "", "ref_id": "b38", "title": "Aligning large multimodal models with factually augmented rlhf", "year": "2023" }, { "authors": "Ramakrishna Vedantam; Lawrence Zitnick; Devi Parikh", "journal": "", "ref_id": "b39", "title": "Cider: Consensus-based image description evaluation", "year": "2015" }, { "authors": "Xin Wang; Jiawei Wu; Junkun Chen; Lei Li; Yuan-Fang Wang; William Yang; Wang ", "journal": "", "ref_id": "b40", "title": "Vatex: A large-scale, highquality multilingual dataset for video-and-language research", "year": "2019" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Fei Xia; Ed Chi; V Quoc; Denny Le; Zhou", "journal": "NeurIPS", "ref_id": "b41", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Yi Wu; Jongwoo Lim; Ming-Hsuan Yang", "journal": "", "ref_id": "b42", "title": "Online object tracking: A benchmark", "year": "2013" }, { "authors": "Jun Xu; Tao Mei; Ting Yao; Yong Rui", "journal": "", "ref_id": "b43", "title": "Msr-vtt: A large video description dataset for bridging video and language", "year": "2016" }, { "authors": "Zhengyuan Yang; Linjie Li; Kevin Lin; Jianfeng Wang; Chung-Ching Lin; Zicheng Liu; Lijuan Wang", "journal": "", "ref_id": "b44", "title": "The dawn of lmms: Preliminary explorations with gpt-4v (ision)", "year": "2023" }, { "authors": "Weihao Yu; Zhengyuan Yang; Linjie Li; Jianfeng Wang; Kevin Lin; Zicheng Liu; Xinchao Wang; Lijuan Wang", "journal": "", "ref_id": "b45", "title": "Mm-vet: Evaluating large 
multimodal models for integrated capabilities", "year": "2023" }, { "authors": "Weihao Yu; Zhengyuan Yang; Linjie Li; Jianfeng Wang; Kevin Lin; Zicheng Liu; Xinchao Wang; Lijuan Wang", "journal": "", "ref_id": "b46", "title": "Mm-vet: Evaluating large multimodal models for integrated capabilities", "year": "2023" }, { "authors": "Zhou Yu; Lixiang Zheng; Zhou Zhao; Fei Wu; Jianping Fan; Kui Ren; Jun Yu", "journal": "", "ref_id": "b47", "title": "Anetqa: A large-scale benchmark for fine-grained compositional reasoning over untrimmed videos", "year": "2023" }, { "authors": "Hang Zhang; Xin Li; Lidong Bing", "journal": "", "ref_id": "b48", "title": "Video-llama: An instruction-tuned audio-visual language model for video understanding", "year": "2023" }, { "authors": "Lianmin Zheng; Wei-Lin Chiang; Ying Sheng; Siyuan Zhuang; Zhanghao Wu; Yonghao Zhuang; Zi Lin; Zhuohan Li; Dacheng Li; Eric Xing", "journal": "", "ref_id": "b49", "title": "Judging llm-asa-judge with mt-bench and chatbot arena", "year": "2023" }, { "authors": "Lianmin Zheng; Wei-Lin Chiang; Ying Sheng; Siyuan Zhuang; Zhanghao Wu; Yonghao Zhuang; Zi Lin; Zhuohan Li; Dacheng Li; Eric Xing", "journal": "", "ref_id": "b50", "title": "Judging llm-asa-judge with mt-bench and chatbot arena", "year": "2023" }, { "authors": "Yiyang Zhou; Chenhang Cui; Jaehong Yoon; Linjun Zhang; Zhun Deng; Chelsea Finn; Mohit Bansal; Huaxiu Yao", "journal": "", "ref_id": "b51", "title": "Analyzing and mitigating object hallucination in large vision-language models", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b52", "title": "What is this individual most likely to do next", "year": "" }, { "authors": "", "journal": "", "ref_id": "b53", "title": "Initially, he sliced a sweet red chili pepper", "year": "" }, { "authors": "", "journal": "", "ref_id": "b54", "title": "What is the first thing the person cuts after slicing this red pepper? Please determine whether the", "year": "" }, { "authors": "", "journal": "", "ref_id": "b55", "title": "Why are the eyes of this orange animal wide open? Please determine whether the", "year": "" }, { "authors": "", "journal": "", "ref_id": "b56", "title": "Animal names can vary: \"the orange animal; the larger animal; the deer; the orange deer\" are interchangeable", "year": "" }, { "authors": "", "journal": "", "ref_id": "b57", "title": "GPT-4V: The images depict a series of animated scenes involving cartoon characters", "year": "" }, { "authors": "", "journal": "", "ref_id": "b58", "title": "GPT-4V: The orange animal appears to be a stylized or animated character, possibly showing surprise", "year": "" } ]
[]
2023-11-25
[ { "figure_ref": [ "fig_0", "fig_1", "fig_1" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b2", "b3", "b6", "b7", "b9", "b10", "b11", "b12", "b14", "b15", "b17", "b0", "b18", "b19", "b20", "b21", "b22", "b23" ], "table_ref": [], "text": "R EFERRING Expression Comprehension (REC) (or vi- sual grounding) [1]- [3] aims to localize an image region of an object described by a natural-language expression. With the increased interest in human-computer communication, REC has been widely applied to various downstream tasks, including image retrieval [4]- [7], visual question answering [8]- [10] and language based navigation [11], [12].\nOver the years, REC models have been improved in several ways. Early REC works [13]- [15] use CNN-LSTM frameworks to find the referred region. [16]- [18] treats REC as a cross-domain matching problem. By introducing modular networks to handle expressions with different types of information, recently proposed [1] and [19] them are based on an assumption that training data feeding into a model are given upfront. This needs expensive training data annotations.\nThis view of supervised learning stands in contrast with how humans acquire knowledge. In real-world scenarios, the setting is more complex and challenging. A model needs to learn from a stream of data instead of all the samples which have been collected completely. Note that, a more significant challenge is that during training on the stream, the training data from previous are unavailable. This type of learning is referred to as continual learning (sometimes incremental or lifelong learning).\nContinual learning [20], [21] is such a practical learning paradigm that divides the stream of data into multiple tasks according to the characteristics of the data and considers a sequential learning setting where tasks are revealed to a model one after another. The number of samples per task may be imbalanced. One consequence of learning under such a setting is that as a model learns new tasks, its performance on old ones degrades. This phenomenon is known as \"catastrophic forgetting\" [22], [23], which caused by the dilemma of stabilityplasticity. In concrete terms, plasticity indicates a model's ability to learn new knowledge, while stability represents the model's capacity to retain prior knowledge. While the promising performance of REC has been demonstrated, it still has a long way to go before it can be practically applied to real application. Firstly, the classical supervised REC learning systems acquire knowledge by providing them with a large number of annotated training samples. This view of supervised learning stands in stark contrast with how humans acquire knowledge. Secondly, in practical applications, we start training once we obtain data. We cannot wait until a large amount of data arrives before training. The REC models must sometimes be updated on-the-fly to recognize new concepts, while the training data are sometimes unavailable for reuse. Thirdly, collecting such a large number of training samples requires a lot of manual effort. And the noises of samples will inevitably occur. To solve these problems, we can collect just a portion of samples to train a model. Then we update this model when we collect another portion of samples. So, in order to improve the practicality of REC, in this paper, we propose a novel task of Continual Referring Expression Comprehension (CREC) to improve the practicality of REC in real-world scenarios. 
Different from the standard REC task, in which the model is trained only once on a static training set, CREC considers a continual learning setting where the training data arrives in a streaming fashion, as depicted in Fig. 1. Because REC is the upstream task for visual question answering and visual dialog, the CREC can facilitate the process of applying these methods.\nWhile the new CREC setting affords better practicality, existing REC models, under this setting, suffer from catastrophic forgetting when sequentially trained on a series of tasks (as shown in Fig. 2 (a)). Furthermore, existing continual learning methods are usually designed for image classification tasks, which may neglect the intrinsic characteristics of CREC task. For example, different CREC tasks may rely on different aspects of module, e.g., subject, location and relationship as shown in Fig. 3. Existing methods usually treat the parameters as isolated elements and ignore the modular information. Therefore, an inferior performance is usually achieved if we directly apply existing continual learning models to the CREC task. To address this issue, we develop a novel Dual Modular Memorization (DMM) mechanism for this new CREC task, which consists of two key modules of Implicit-Memory and Explicit-Memory, as depicted in Fig. 3.\nSpecifically, the Implicit-Memory module is built upon a standard modular attention network (e.g, MAttNet or CM-Att-Erase). Inspired by Memory Aware Synapses (MAS) [24] in continual learning, we first design a Naïve Implicit-Memory (N-IM) to avoid drastic changes to important parameters learned on old tasks when learning a new task. By introducing a regularization term to constrain the parameter update of different sub-modules (in MAttNet or CM-Att-Erase), N-IM can penalize the changes to important parameters, effectively preventing important knowledge related to previous tasks from being overwritten. MAttNet and CM-Att-Erase attentively divide the model into different sub-modules related to the subject (e.g., \"boy\"), the locations (e.g., \"in the middle\") and the relationships (e.g., \"riding\"). Considering the sub modular information contained in these individual sub-modules, we further develop a Weighted Implicit-Memory (W-IM) to adaptively adjust the contribution of different sub-modules. By assigning different sub-modules with task-specific importance weights, W-IM ensures that those sub-modules that are sensitive to the current task are restricted from updates.\nWe also develop two effective variants for the Explicit-Memory module, including a Naïve Explicit-Memory (N-EM) and a Modular Explicit-Memory (M-EM). N-EM selects representative samples of each task for rehearsal by considering the easy ones leading to smaller loss, while M-EM takes into account not only the easiness of samples but also the importance of the subject-information for individual samples. M-EM can quickly retain the knowledge of previous tasks by training the model on rehearsed samples when learning a new task. Benefiting from the two modules, i.e., Implicit-Memory and Explicit-Memory, DMM can effectively alleviate the phenomenon of catastrophic forgetting and continually improve the grounding performance on incoming tasks, as shown in Fig. 2 " }, { "figure_ref": [], "heading": "(b).", "publication_ref": [], "table_ref": [], "text": "In summary, our main contributions are three-folds: " }, { "figure_ref": [], "heading": "II. 
RELATED WORK", "publication_ref": [], "table_ref": [], "text": "In this section, we review previous research closely related to our method, specifically including continual learning and referring expression comprehension (REC)." }, { "figure_ref": [], "heading": "A. Continual Learning", "publication_ref": [ "b20", "b23", "b24", "b25", "b26", "b27", "b20", "b23", "b24", "b25", "b26", "b28", "b29", "b30", "b31" ], "table_ref": [], "text": "Continual learning is a practical learning mechanism in which a model learns from a stream of incoming data. As the model is updated continually using new tasks, the key challenge in continual learning is to overcome catastrophicforgetting, i.e., how to prevent the model from forgetting previously learned tasks. Catastrophic-forgetting occurs when the model parameters obtained from training on task A change when training on task B, which can easily lead to a sharp drop in the results on task A. Approaches to addressing the catastrophic forgetting problem can be grouped into three broad categories: regularization-based approaches [21], [24], architecture growth approaches [25], [26], and replay/rehearsal based approaches [27], [28].\nRegularization-based methods aims at avoiding excessive changes in the parameters learned on old tasks when learning a new task, thus ensuring the accuracy of the network on the old tasks [21], [24]. Typically, these methods estimate importance weights for each model parameter, and the changes of the important parameters are penalized by a regularizer for previous tasks. The dynamic growth approaches methods directly add or modify the model structure of REC. [25] adds an additional network to each task and lateral connections to the network of the previous task. [26] proposes a modular layer network approach, whose modules represent atomic skills that can be composed to perform a certain task, and provides a learning algorithm to search the modules to combine with. For replaybased methods, catastrophic-forgetting is avoided by storing data from previous tasks and training them together with data from the current task. [27]- [29] use replayed samples from previous tasks to constrain the parameters' update when learning the new task. Most recently, [30] creates reconstructed images from encoded episodes and dynamically generates pseudo-images for model-optimization. Recently, a series of continual learning methods combined with meta-learning have been proposed. [31] uses the fixed loss function to align the gradients and updates the network that is well-aligned. [32] proposes an approach to disentangle generic representations by task-specific learning.\nWhile the problem of continual learning has been traditionally addressed in image classification, much less attention has been devoted to REC. Here we fill this gap, proposing to use the existing continual learning method to solve the REC work. However, it treats the parameters as isolated elements which may result in ignoring the modular information. While in CREC, the information involved in sub-modules can boost the learning performance. Our designed CREC method, Dual Modular Memorization (DMM), can effectively investigate the modular information to alleviate the problem of catastrophicforgetting." }, { "figure_ref": [], "heading": "B. 
Referring Expression Comprehension", "publication_ref": [ "b12", "b14", "b15", "b17", "b32", "b34", "b35", "b36", "b37", "b36", "b38", "b40" ], "table_ref": [], "text": "With the increasing interest in human-computer communication, REC has achieved great success in recent years. Early models [13]- [15] use the CNN-LSTM architecture to predict the referent within an image. These networks use deep CNNs to extract features and LSTMs to match the extracted features with the word-vector of the expression to find the referred region. Another line of works [16]- [18] treat REC as a cross-domain matching problem, where the expression feature and region feature are embedded into a shared space to measure their compatibility. Recently, [33]- [35] introduce graph technologies to explore the topology structure of images by modeling objects as graph nodes. [36] use curriculum learning to guide the searching over the large compositional space of images and language. [37] propose a simple but effective IoU regression head module to explicitly consider the localization quality of the grounding results.\nWhile these results are impressive, those REC approaches employ a two-stage learning process. Firstly, an external target detector such as Faster RCNN [38] is used to recognize the input image and generate a series of object proposals. Then computing the matching score between these object proposals and the given referring expression, and selecting the target region with the highest matching score as the final results. The limitations are obvious for those two-stage approaches. On the one hand, the use of an external target detector demands additional computational effort. On the other hand, the quality of the object proposals extracted by the target detector affects the performance. To conquer these issues, one-stage REC [37], [39]- [41] has been proposed to process the original images and referring expressions in an end-to-end learning manner.\nSignificantly, the focus of all those REC works lies in how to more effectively model the language and image to achieve a better REC performance in a stationary evaluation setting. That is, all the object categories are known in advance. However, the focus of our work is orthogonal to such works in that we aim to propose a new REC framework that can work under a continual setting, where the object categories emerge sequentially." }, { "figure_ref": [], "heading": "III. METHOD", "publication_ref": [], "table_ref": [], "text": "In this section, we first describe our proposed Continual Referring Expression Comprehension (CREC) task in detail. Then we introduce our baseline Dual Modular Memorization (DMM), specifically including its key components Implicit-Memory and Explicit-Memory." }, { "figure_ref": [], "heading": "A. Problem Formulation and Background", "publication_ref": [ "b0", "b18" ], "table_ref": [], "text": "Given a referring expression r and an image I, the goal of the REC is to localize the object o being referred to by the expression r within the image, by predicting its bounding box y. Additionally, each object also belongs to a category c ∈ C from the set of categories C. Thus, more formally, each sample in REC can be represented as a tuple (r, y, c, o).\nMAttNet [1] considers the complex linguistic and visual structures by decomposing the expression into three different modular components and designing visual features for each module accordingly. 
Specifically, given a sample containing a referring expression r and an image I with a set of object candidates o_i, the model is trained to predict the object with the highest matching probability in the image. Given an expression r, a self-attention mechanism is used to softly decompose it into three modular components, i.e., subject, location, and relation. The final matching score is calculated as the weighted sum of the three matching scores obtained from these three modules. CM-Att-Erase [19] is a recently proposed strong baseline for REC, which builds on MAttNet and encourages the model to explore complementary cross-modal alignments.\nOur DMM method adopts the modular structure of MAttNet/CM-Att-Erase and makes several important modifications to avoid catastrophic forgetting, which will be illustrated in Section III-C." }, { "figure_ref": [ "fig_0" ], "heading": "B. Continual Referring Expression Comprehension", "publication_ref": [ "b12", "b13", "b18" ], "table_ref": [], "text": "We propose Continual Referring Expression Comprehension (CREC) to improve the practicality of REC for real-world scenarios. Different from standard REC, which trains the model in a single step on a static training set, CREC considers a continual learning setting where the training data arrive in a streaming fashion, as depicted in Fig. 1.\n1) Task Construction: Since CREC is designed to solve the REC problem under the continual learning setting, the critical first step is to construct new benchmarks that consist of a sequence of tasks. We create three CREC datasets, CRefCOCO, CRefCOCO+ and CRefCOCOg, by respectively re-splitting the three standard REC datasets RefCOCO, RefCOCO+ [13] and RefCOCOg [14] into sequential tasks. Specifically, two task sequences with different lengths (5 and 10) are created based on the object super-categories. We denote the constructed task sequence as T = {T_1, T_2, ..., T_N}, where N is the total number of tasks. More details are given in Section IV-A2.\n2) Training Strategy: In CREC, we denote the data in task T_t by D_t = {(r_i, y_i, c_i, o_i)}_{i=1}^{M_t}, where (r_i, y_i, c_i, o_i) is the i-th training sample x_i in task T_t, and M_t denotes the total number of samples in task T_t. Different tasks have no overlap, i.e., ∀ i, j with i ≠ j, D_i ∩ D_j = ∅. The tasks T = {T_1, T_2, ..., T_N} are revealed to the model sequentially. When task T_t is presented, the proposed CREC model is trained on T_t with D_t before the next task T_{t+1} arrives. In addition, the model cannot access the samples of the previous tasks {T_1, T_2, ..., T_{t-1}}. At inference time, without being given the object category c_i, the model needs to ground the object referred to by r_i in image I_i and produce a prediction ŷ_i; the grounding is counted as correct if the IoU between ŷ_i and y_i exceeds a threshold, for which we follow the conventional REC setting [19].\nAlgorithm 1 (Modular Explicit-Memory for Rehearsal), lines 5-19 of its listing, reads: 5: F ← Backprop(LLoss + HLoss); 6: end for; 7: D_sum = Σ_{i=1}^{t} D_i; 8: for i = 0, ..., t-1 do ▷ reorganize the buffer pool; 9: D_i = K · D_i / D_sum; 10: B_i ← SortBufferPool(B_i, D_i); 11: end for; 12: D_t = K · D_t / D_sum; 13: for {x, y} ∈ D_t do ▷ update the memory; 14: {ŷ}, Att_sub = F(θ, x); 15: Loss = Att_sub · L({ŷ}, {y}); 16: B_t ← {x, y}; 17: end for; 18: B ← {B_1, ..., B_t}, D ← {D_1, ..., D_t}; 19: return F, B, D." }, { "figure_ref": [], "heading": "C. 
Dual Modular Memorization", "publication_ref": [ "b23", "b11", "b12", "b13", "b14", "b15", "b16" ], "table_ref": [], "text": "To address the problem of catastrophic forgetting, we develop a novel Dual Modular Memorization (DMM) mechanism for CREC, which consists of two modules, Implicit-Memory and Explicit-Memory, as described in Fig. 3.\n1) Implicit-Memory: Before delving into our implicit-memory, we introduce some notation. Let F^t denote our DMM model for task T_t. We split F^t into three parts, F^t_sub, F^t_loc and F^t_rel, which denote the subject module, the location module and the relation module, respectively.\nNaïve Implicit-Memory (N-IM): To guarantee the stability of the model, an intuitive way to alleviate catastrophic forgetting is to use regularization to constrain the parameter updates, which helps the model remember the important parameters learned on previous tasks when learning a new task. To achieve this, we adapt the regularization of MAS [24] to CREC, which can be formulated as:\ng^t_i(x_k) = ∂F^t(x_k; θ_i) / ∂θ_i, (1)\nΩ^t_i = (1 / M_t) Σ_{k=1}^{M_t} ‖g^t_i(x_k)‖, (2)\nwhere M_t is the number of samples in task T_t, and g^t_i(x_k) is the gradient of the learned function F^t for x_k in T_t with respect to the i-th parameter θ_i. We accumulate the gradient magnitudes g^t_i(x_k) to obtain the importance weight Ω^t_i via Eq. 2. When learning the new task T_{t+1}, we add a regularization term to the loss L_{t+1}(θ), which is computed by the conventional REC training objective, so as to avoid drastic changes to the important parameters:\nL(θ) = L_{t+1}(θ) + (λ / 2) Σ_i Ω^t_i (θ_i - θ*_i)^2, (3)\nwhere λ is a hyper-parameter that balances the loss of the new task and the parameter-change constraint, and θ*_i denotes the optimal value of parameter θ_i for the previous tasks.\nHowever, a critical limitation of the naïve implicit-memory is that it treats the model parameters as isolated elements and ignores the complex structural information contained in the individual sub-modules. Thus, before determining the parameter-task association, we should consider in advance which sub-module should be retained for a task: if a module is widely shared among tasks, it makes less sense to memorize the parameters of this module.\nWeighted Implicit-Memory (W-IM): To better estimate the importance of sub-modules and of the parameters within each sub-module, we propose the weighted implicit-memory, where the importance of each sub-module is introduced to constrain the update. Specifically, the objective of the weighted implicit-memory is defined as:\nΩ^t_m = (1 / (|F_m| · M_t)) Σ_{i ∈ F_m} Σ_{k=1}^{M_t} ‖g^t_i(x_k)‖, (4)\nW^t_{Ω_m} = Ω^t_m / Σ_m Ω^t_m, (5)\nwhere |F_m| is the total number of parameters in F_m, m ∈ {sub, loc, rel}. The module weights are normalized by the sum of the three modules' weights, and W^t_{Ω_m} is the normalized importance. A higher module weight indicates that the module is important for learning a task, so the parameters within it should be updated less. Hence, the learning objective is defined as follows:\nL(θ) = L_{t+1}(θ) + (λ / 2) Σ_m Σ_{i ∈ F_m} W^t_{Ω_m} Ω^t_i (θ_i - θ*_i)^2. (6)\nNote that both Ω_i and W_{Ω_m} are updated by accumulating the previous estimations when a new task arrives.\n2) Explicit-Memory: By constraining dramatic changes to the important parameters of the model, the designed implicit-memory regularization objectives N-IM and W-IM can effectively address catastrophic forgetting. 
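The following sketch illustrates how importance weights in the spirit of Eq. 2 and the weighted penalty of Eqs. 3 and 6 could be implemented in PyTorch. The output-norm objective, the name-based parameter-to-module matching and all function names are simplifying assumptions of this sketch (it assumes the model returns a score tensor), not the authors' released code.

```python
import torch

def estimate_importance(model, data_loader, device="cpu"):
    """MAS-style importance Omega_i (Eq. 2): mean magnitude of the gradient of
    the squared output norm w.r.t. each parameter, accumulated over the task."""
    omega = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x in data_loader:
        model.zero_grad()
        out = model(x.to(device))
        out.pow(2).sum().backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                omega[n] += p.grad.abs()
        n_batches += 1
    return {n: v / max(n_batches, 1) for n, v in omega.items()}


def wim_penalty(model, omega, theta_star, module_weights, lam=1.0):
    """Weighted implicit-memory regularizer (Eq. 6); falls back to the plain
    N-IM penalty (Eq. 3) when a parameter cannot be matched to a module.
    Matching parameters to modules via their names is an assumption."""
    penalty = 0.0
    for n, p in model.named_parameters():
        tag = next((m for m in ("sub", "loc", "rel") if m in n), None)
        w_m = module_weights.get(tag, 1.0)
        penalty = penalty + (w_m * omega[n] * (p - theta_star[n]) ** 2).sum()
    return 0.5 * lam * penalty

# Training on task T_{t+1}:
#   loss = rec_loss + wim_penalty(model, omega, theta_star, module_weights)
# where theta_star holds detached copies of the parameters after task T_t.
```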
In order to further improve the memory capacity of the model, we further propose an Explicit-Memory module to explicitly store some samples from previous tasks which are representative.\nNaïve Explicit-Memory (N-EM): An intuitive idea to select representative samples of a task is to select easy samples leading to a small loss -samples with smaller training losses can be easily used to learn the task-specific knowledge for the corresponding task. In this way, we can directly store representative samples of each seen task in a buffer pool for future rehearsal. We use different flags to mark the loss calculated based on different samples. The \"LLoss\" represents the loss calculated by samples of current task. And the \"HLoss\" represents the loss calculated by samples of buffer pool. However, this loss-oriented strategy ignores the impact of subject intrinsic information on performance.\nModular Explicit-Memory (M-EM): The Naïve Explicit-Memory just takes into account the loss of samples. However, it is not good enough. So we propose modular explicit-memory (M-EM) by marrying the ideas of modular decomposition and vanilla explicit-memory. Considering a simple scenario where a sample X i and a sample X j hold the same loss L. The sample X i has more intrinsic information about the current task, while sample X j has less. We would intuitively choose the sample X i to save for rehearsal. So the ability to evaluate how much intrinsic information the sample contains about the current task for N-EM is important. As mentioned in III-A, the model has a self-attention mechanism which can estimate the weight value of the three modules. So, the weight value can represent the importance of the intrinsic information. In other words, the module with higher weight value contains more intrinsic information about the current task. In addition, we re-split the datasets by class of the samples. And in the expression, the subject is the instance of the class. An intuitive way is to choose the subject module. The empirical analysis in Section IV-D4 which is conducted on our re-split datasets also confirms our intuition that the sub-module has a bigger weight than other modules, indicating that the submodule is the most important module. Therefore, we regard the sub-module weight is helpful for sample selection. So this modular version takes into account not only the loss but also how importance of the subject's intrinsic information in each sample. We implement it by directly multiplying the subjectweight of the self-attention Att sub to the loss of the sample. Algorithm 1 describes the details of our modular explicitmemory strategy for rehearsal. Considering the memory efficiency, explicit-memory ensures that the total number of exemplar images never exceeds a fixed parameter K throughout the training stage. When the system receives task T t , the model is jointly updated using the samples from both the current task and buffer pool (lines 1-6). After the training process, we compute the percentage of all seen tasks and multiply K to get the memory buffer size of each task. Then the procedure \"SortBufferPool\" on line 10 drops the higher-loss samples to reach the available size for each task (lines 7-11). In the end, we choose the lowest loss samples of the current task T t into the memory buffer (lines [12][13][14][15][16][17]. Note that in line 15, the loss of a sample (x, y) in T t is weighted by the subject-weight Att sub , which indicates the importance of the subject module for the sample. 
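For concreteness, here is a minimal Python sketch of the buffer maintenance described in Algorithm 1; the data structures and the function name are illustrative assumptions, and only the selection logic is shown.

```python
def update_rehearsal_buffer(buffers, task_sizes, new_task_samples, K):
    """Sketch of Algorithm 1, lines 7-18.
    buffers: list of lists of (sample, weighted_loss) for previous tasks;
    task_sizes: training-set size of every task seen so far, current included;
    new_task_samples: (sample, att_sub * loss) pairs from the current task;
    K: total memory budget."""
    total = sum(task_sizes)
    # Shrink each previous task's quota proportionally, dropping high-loss samples.
    for i, buf in enumerate(buffers):
        quota = max(1, K * task_sizes[i] // total)
        buf.sort(key=lambda item: item[1])      # low (easy) loss first
        del buf[quota:]
    # Keep the lowest Att_sub-weighted-loss samples of the current task.
    quota_new = max(1, K * task_sizes[-1] // total)
    current = sorted(new_task_samples, key=lambda item: item[1])[:quota_new]
    buffers.append(current)
    return buffers

# During training on the next task, mini-batches are drawn jointly from the
# task data and from all buffers (Algorithm 1, lines 1-6).
```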
Larger Att sub means the language query contains more intrinsic information and less transferable clues. We tend to store such samples in the buffer pool so that the model can recall the previous knowledge when it encounters some new information by rehearsing such important samples." }, { "figure_ref": [], "heading": "IV. EXPERIMENTS A. Dataset Construction", "publication_ref": [ "b12", "b13", "b42", "b1", "b12", "b18", "b26", "b26", "b0", "b18", "b43", "b44", "b23", "b45" ], "table_ref": [ "tab_1" ], "text": "We create three CREC benchmarks by respectively resplitting the three standard REC datasets RefCOCO, Ref-COCO+ [13] and RefCOCOg [14] into sequential tasks. Correspondingly, we term these three new datasets as CRefCOCO, CRefCOCO+ and CRefCOCOg.\n1) REC Datasets: The three REC benchmark datasets RefCOCO, RefCOCO+ and RefCOCOg are all constructed from MSCOCO [43]. Each dataset contains 80 object categories (e.g., bird, dog, apple and sandwich) and 13 object supercategories (e.g., animal, food). Several characteristics of these datasets are worth mentioning: (1) The average length of textual expressions in RefCOCO and RefCOCO+ are 3.61 and 3.64 words respectively, whereas the expressions in RefCOCOg are longer and more complex, with 8.4 words on average. (2) The images in RefCOCO and RefCOCO+ contain more instances of the same category and thus render more distracting information to localize the referent. (3) Any words depicting absolute locations are forbidden in RefCOCO+, as it focuses on appearance clues. Both RefCOCO and RefCOCO+ are split into the subsets of train, validation, Test A and Test B. The Test A split mainly contains images of \"people\" supercategory. The Test B split contains multiple instances of all the other object supercategories. To better evaluate the model, we combine Test A and Test B to get the Test split. RefCOCOg is split into the subsets of train, validation and test, and the categories are distributed more evenly.\n2) CREC Benchmarks: In order to evaluate the continual learning capability of our DMM method, we separately re-split RefCOCO, RefCOCO+ [13] and RefCOCOg into subsets/tasks according to the object supercategories of each dataset. Specifically, two task sequences with different lengths are created.\n10-task: Following our problem formulation in Section III-A, we treat an object supercategory as a task. We sort the supercategories by their number of samples. The 10 most frequent supercategories are adopted as disjoint tasks, specifically including: F ood, Indoor, Sports, P erson, Animal, V ehicle, F urniture, Accessory, Electronic, and Kitchen.\n5-task: As some supercategories have too few samples, e.g., 853 samples in the Sports supercategory, we merge several supercategories to form a new supercategory according to the similarity between them. In this way, 5 disjoint tasks with balanced sample numbers are constructed, including Task1: P erson; Task2: Kitchen + F ood; Task3: Animal; Task4: Indoor + Appliance + F urniture + Electronic; Task5: Outdoor + V ehicle + Sports + Accessory. The number of referring expressions per supercategory of the five tasks is shown in Table I expression r. We employed Intersection over Union (IoU) as a basic metric to determine whether a comprehension is positive or not. The IOU is defined as:\nIOU = intersection(y, ŷ) union(y, ŷ) .(7)\nFollowing the standard setting [19], if the IOU score is greater than threshold 0.5, we consider the prediction to be correct. 
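For reference, the IoU criterion of Eq. 7 can be computed from two boxes as in the short sketch below, assuming the (x1, y1, x2, y2) corner format; the paper itself does not prescribe a particular box representation.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction is counted as correct when iou(pred_box, gt_box) > 0.5.
```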
We evaluate the performance of alleviating catastrophic forgetting with the following metrics: Last Accuracy (LA), Average Accuracy (AA), Forward Transfer (FWT) and Backward Transfer (BWT) [27].\ni) LA is the final accuracy on the whole test set at the end of training on all tasks in the continual learning process.\nii) AA evaluates the model on all tasks seen up to step i, once the model has been trained on task T_i:\nAA = (1 / i) Σ_{j=1}^{i} a_{i,j}, (8)\nwhere a_{i,j} is the accuracy on the test set of task j after training the model from task 1 through task i.\niii) FWT measures a model's capability of transferring knowledge from past tasks when learning a new task. Concretely, after training on T_i, we evaluate the model on the unseen tasks T_j ∈ {T_{i+1}, ..., T_N}:\nFWT = (1 / (N - i)) Σ_{j=i+1}^{N} (a_{i,j} - b_j), (9)\nwhere b_j represents the test accuracy on task T_j with random initialization.\niv) BWT measures the model's capability of retaining previous knowledge after learning a new task. That is, after training on T_i, the model is evaluated on the tasks T_j ∈ {T_1, ..., T_{i-1}}:\nBWT = (1 / (i - 1)) Σ_{j=1}^{i-1} (a_{i,j} - a_{j,j}). (10)\nFor each evaluation metric, we take the average over tasks as the final result. The larger these metrics, the better the model. Obviously, it is meaningless to compute the FWT for the first task and the BWT for the last task [27].\n2) Implementation Details: We use MAttNet [1] and CM-Att-Erase [19] as backbone models to validate the effectiveness of our proposed DMM method. Mask R-CNN [44] with ResNet-101 [45] is used as the backbone to extract visual representations. For the regularization parameter λ, we follow the setting in [24] and set it to 1 in all experiments. The memory size K of the Explicit-Memory is set to 120. In the 5-task setting, since Task1 contains more samples than the other tasks, we train the model for 40 epochs on Task1 and 20 epochs on each of the other tasks. One training batch contains 45 referring expressions. Other settings are the same as in the baseline models. Furthermore, for each task setting and backbone, we run two experiments and report the average as the final result. The network is implemented in PyTorch [46]." }, { "figure_ref": [ "fig_3", "fig_3", "fig_3", "fig_3", "fig_4" ], "heading": "C. Experimental Results", "publication_ref": [ "b23", "b41", "b22" ], "table_ref": [ "tab_3", "tab_4" ], "text": "To evaluate the effectiveness of our proposed DMM method, we first compare it with state-of-the-art continual learning methods under the 5-task and 10-task settings. Then, we conduct ablation studies to further investigate the effectiveness of its main components.\n1) Comparison with State-of-the-arts: We compare our method DMM with several state-of-the-art continual learning methods, including Joint Training, Finetuning, MAS [24], GDumb [42] and EWC [23]. In particular, Joint Training considers all data in the task sequence simultaneously; this baseline represents the performance upper bound. Finetuning trains a single model to solve all the tasks without any regularization, initializing from the model of the previous task, i.e., it represents a model trained in the conventional supervised setting.\n5-task setting: The results under the 5-task setting are shown in Table II. As can be observed from the table, compared to Joint Training, Finetuning suffers a significant performance decrease on all three datasets. 
This shows that Finetuning suffers severe catastrophic forgetting in CREC. In addition, the boost in performance brought by MAS is observed on both datasets and on all the accuracy metrics employed. As can be seen from the table, DMM consistently outperforms all other methods by a significant margin on the three benchmark datasets in all but two cases. On average, DMM outperforms Finetuning by 17.61% and 12.81% in terms of LA and AA, respectively. This clearly shows that our network achieves notable stability and plasticity in CREC.\nIn order to further compare the different baselines, we plot the results as shown in Fig. 4. As shown in the top panel of Fig. 4, it can be observed that our method consistently surpasses other counterparts at every task on all three datasets on average accuracy. In the middle panel of Fig. 4, DMM performs better than other methods on backward transfer (BWT) at each task on all datasets, which indicates that our method guarantees the stability of the network. Finally, Fig. 4 (Bottom) shows FWT values for each task in comparison to prior methods for CREC on three datasets. Our results suggest our model achieves notable plasticity.\n10-task setting: In order to further evaluate the ability to prevent catastrophic forgetting on the longer sequence of the proposed DMM, we conduct the experiments under the 10-task setting. The experimental results are shown in Table III.\nAs expected, the proposed method is significantly better than almost all other methods under the 10-task setting. These results illustrate the strong ability of our DMM model to alleviate the catastrophic forgetting problem over longer sequences. It is worth noting that GDumb achieves better results in terms of BWT with CM-Att-Erase as the backbone. However, GDumb's performance is the worst in terms of LA and AA with both backbones. A possible explanation for GDumb's best BWT performance is that it learns little knowledge when training, so it has nothing to forget. In addition, we also plot the results of DMM in comparison with prior methods in Fig. 5. These plots further demonstrate the performance advantages of our method.\nIt is worth noting that all methods suffer forgetting when training is completed after task 4, especially on CRefCOCO and CRefCOCO+. We argue the reason is that the number of samples in task 4 is much more than in other tasks. The learning process breaks the balance of stability and plasticity. " }, { "figure_ref": [], "heading": "D. Ablation Study", "publication_ref": [], "table_ref": [ "tab_5", "tab_6", "tab_6" ], "text": "To deeply analyze our proposed DMM method, we study its different ablation variants on the re-split datasets. MAttNet is used as the backbone, and we conduct the experiment under the 5-task setting.\n1) Effect of Different Variants: We first study the effectiveness of different variants of the Implicit-Memory and Explicit-Memory, including: i) Naïve Implicit-Memory (N-IM), ii) Weighted Implicit-Memory (W-IM), iii) Naïve Explicit-Memory (N-EM) and iv) Modular Explicit-Memory (M-EM). The ablative results are described in Table IV. From the results, we can observe that each of the four variants brings 2) Effect of Different Sample-choosing Strategies: In our explicit-memory module, we select representative samples of a task by choosing those easy samples leading to a small loss. In this part, we study the impact of sample hardness on rehearsal performance. 
Three strategies are compared, including (1) High-strategy (high) chooses samples with the highest loss for explicit-memory; (2) Low-strategy (low) chooses samples with the lowest loss; (3) Random-strategy (random) performs sample selection randomly. Table V shows the evaluation results on different datasets. As seen, it is the Low-strategy that achieves the best results, which confirms that the hardness of the selected sampling has an impact on the final performance and easy samples are more effective for the explicit-memory module than the hard ones.\n3) Contribution of Different Memory size: As discussed in Section III-C2, EM explicitly stores some samples which are representative in the buffer pool with memory size K. In this ablation study, we explore the sensitivity of our model to various K, including 80, 120 and 160. The average accuracy results are shown in Fig 6 . As shown in the figure, the results are roughly consistent on all three datasets, showing that our model is not sensitive to the memory size. We chose 120 by taking into account the performance as well as training time.\n4) Contribution of Different Sub-modules: As discussed in Section M-EM, M-EM takes into account not only the loss but also how importance of the subject intrinsic information in each sample. In this ablation study, we evaluate the module weight of the subject, relation and location of each sample. The average results of each dataset are shown in Table VI. It is evident that the subject information is more important than the other counterparts, verifying our previous ideas." }, { "figure_ref": [], "heading": "E. Qualitative Results", "publication_ref": [], "table_ref": [], "text": "The conclusion drawn in the quantitative analyses is confirmed by the qualitative evaluation reported in Fig. 7. The top row shows the training (image, expression) pair and the ground-truth bounding box from task 1. The bottom four rows represent results produced by different methods. Each column denotes the comprehension result after training on each task. We can make the following observations. Finetuning can correctly locate the object after task 1. However, after task 3 is learned, it gradually forgets what a human is, indicating that it suffers from catastrophic forgetting. The weight implicit-memory method grounds the wrong object until the fourth task, showing that regularization contributes to preventing catastrophic forgetting. Furthermore, compared to weight implicit-memory, although modular explicit-memory does not always ground the correct object, it retains knowledge about task 1. Finally, DMM can ground correctly after learning of each task, further demonstrating the advantageous performance of our method.\nThe Fig. 8 illustrates some failure cases. As shown in the Fig. 8(a), we succeeded in locating the person however we do not get the number \"48\". Other examples are shown in Fig. 8(b) and Fig. 8(c), the model loses the ability to capture the appearance and location information. In addition, after learning on multiple tasks, our model may lose some of its ability to acquire global information to discern the gender of a person, such as the case in Fig. 8(d). We leave how to solve these failure cases as interesting future works." }, { "figure_ref": [], "heading": "V. CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "In this work, we propose to study the continual referring comprehension problem. 
In this setting, each REC task aims to localize objects of one category, and such REC tasks are presented to the model sequentially. To address the catastrophic forgetting problem in this continual setting, we propose a novel and effective Dual Modular Memorization (DMM) model. The model consists of two memory components. One component, termed the Implicit-Memory module, retains the important parameters learned on previous tasks. The other component, termed the Explicit-Memory module, avoids forgetting previous tasks by retaining representative samples of these tasks in a buffer, which is replayed when learning new tasks. Experiments conducted on three benchmarks re-split from three standard REC datasets demonstrate the superiority of our model over a number of continual learning baselines. In this work, we assume there exist clear task boundaries between the tasks. In the future, we plan to go beyond this assumption and study continual REC in a more practical setting." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "ACKNOWLEDGMENT This study was supported by grants from the Chinese National Science & Technology Pillar Program (No. 2022YFC2009900/2022YFC2009903) and the National Natural Science Foundation of China (Grant No. 62122018, No. 62020106008, No. 61772116, No. 61872064)." } ]
Referring Expression Comprehension (REC) aims to localize an image region of a given object described by a natural-language expression. While promising performance has been demonstrated, existing REC algorithms make a strong assumption that the training data feeding into a model are given upfront, which degrades their practicality for real-world scenarios. In this paper, we propose Continual Referring Expression Comprehension (CREC), a new setting for REC in which a model learns from a stream of incoming tasks. In order to continuously improve the model on sequential tasks without forgetting previously learned knowledge and without repeatedly re-training from scratch, we propose an effective baseline method named Dual Modular Memorization (DMM), which alleviates catastrophic forgetting via two memorization modules: Implicit-Memory and Explicit-Memory. Specifically, the former constrains drastic changes to important parameters learned on old tasks when learning a new task, while the latter maintains a buffer pool to dynamically select and store representative samples of each seen task for future rehearsal. We create three benchmarks for the new CREC setting by respectively re-splitting three widely-used REC datasets, RefCOCO, RefCOCO+ and RefCOCOg, into sequential tasks. Extensive experiments on the constructed benchmarks demonstrate that our DMM method significantly outperforms other alternatives, based on two popular REC backbones. We make the source code and benchmarks publicly available to foster future progress in this field: https://github.com/zackschen/DMM.
Continual Referring Expression Comprehension via Dual Modular Memorization
[ { "figure_caption": "Fig. 1 .1Fig. 1. Comparison between REC and CREC. Different from REC that utilizes all samples of the training set to train a model at once, CREC considers a sequential setting where subsets of the training set (i.e., tasks) are revealed one after another. Different colors indicate different groups of tasks, the subscript represents the order of training.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Qualitative grounding results for (a) MAttNet and (b) DMM on the proposed CREC benchmark dataset CRefCOCO (see Section IV-A2), under the 5-task setting. For each method, we report the test grounding results on the first task once the model is sufficiently trained on a new task. Results clearly show that MAttNet cannot avoid the problem of catastrophic forgetting, but the designed DMM can.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 3 : 4 :134Modular Explicit-Memory for Rehearsal. Input: Model F ; Training task T t ; Task-size for the current task D t ; Task-size for the previous tasks D = D 1 , ..., D t-1 ; Max memory-size K; Buffer pool B = B 1 , ..., B t-1 . Output: Optimized F and updated B, D. 1: for {x, y} ∈ D t ,{x B , y B } ∈ B do ▷ joint training. 2: {ŷ}, Att sub = F (θ, x) LLoss = L({ŷ}, {y}) Hloss = L(F, {x B , y B }) 5:", "figure_data": "", "figure_id": "fig_2", "figure_label": "134", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Performance comparison with different state-of-the-art methods under the 5-task setting w.r.t. (Top) AA, (Middle) BWT and (Bottom) FWT metrics. We use MAttNet as the backbone. Models are evaluated after training on each task.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Performance comparison of (Top) AA, (Middle) BWT and (Bottom) FWT metrics with different state-of-the-art methods under the 10-task setting. We use MAttNet as the backbone. Models are evaluated after training on each task.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Quantitative results in terms of average accuracy on three different datasets about various memory size, including 80, 120, 160. The results are roughly consistent on all three datasets, showing that our model is not sensitive to the memory size.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .Fig. 8 .78Fig. 7. Qualitative evaluation of CREC under 5-task setting. From top to bottom are the ground-truth of the example, the results produced from Finetune, weighted implicit-memory, modular explicit-memory, and the results of Dual Modular Memorization. The five images from left to right of each row denote the localization results after learning the i-th task. The example belongs to the Task1.", "figure_data": "", "figure_id": "fig_6", "figure_label": "78", "figure_type": "figure" }, { "figure_caption": "Illustration of the proposed Dual Modular Memorization (DMM) network. DMM seeks to avoid the catastrophic forgetting problem when learning a new task, with the following appealing components: (1) Implicit-Memory, which is built on a modular attention network for decomposing the model into three sub-modules related to subject (Sub), location (Loc) and relationship (Rel). 
We compute the importance weight of the intra-module parameters Ω by the gradient g and normalize the inter-module importance matrix W to avoid dramatic changes to parameters learned on the previous tasks. (2) Explicit-Memory, seeking to maintain a buffer pool to dynamically select and store representative samples of each seen task for future rehearsal. When learning a new task, the model is jointly optimized on samples from the current task and the buffer pool.", "figure_data": "boy frontgirl in blue shirtman with the glassesgirl in blue shirt0.560.430.440.43Subg subsubW subgloclocW locLLossLocgrelrelW relRelgGradientImportance weightW Normalized importance matrixMultiplicationAdditionSubject weightFig. 3. • We design a baseline method dubbed Dual ModularMemorization (DMM) for CREC. DMM can effectivelyalleviate the problem of catastrophic-forgetting in CRECwith the developed memorization modules of Implicit-Memory and Explicit-Memory.• We propose three benchmarks for CREC by respectivelyre-splitting three REC datasets RefCOCO, RefCOCO+and RefCOCOg into sequential tasks. Extensive exper-iments on the three benchmarks show that DMM sig-nificantly outperforms other alternatives based on twopopular REC backbones. We make the source code andbenchmarks public available to encourage future researchin this field.ual Referring Expression Comprehension (CREC), whichconsiders a setting in which training data arrives in astreaming fashion. CREC shows better practicality forreal-world scenarios.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "STATISTICS OF THE CONSTRUCTED 5-TASK DATASETS FOR CREC. THE ENTRIES SHOW THE NUMBER OF SAMPLES FOR EACH TASK IN EACH", "figure_data": "DATASET.Datasetsplit task1 task2 task3 task4 task5CRefCOCOtrain 60357 16748 16087 14079 13353val5498 1580 1318 1417 1021test5273 1707 1307 1314 1151CRefCOCO+ train 61292 15888 15957 13914 13230val5568 1483 1298 1385 1024test5430 1577 1300 1277 1121CRefCOCOg train 30712 10533 12105 13526 13636val1776673743866838test3477 1198 1485 1747 1695", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "PERFORMANCE COMPARISON OF DIFFERENT STATE-OF-THE-ART METHODS WITH THE DMM ON OUR RE-SPLIT DATASETS UNDER THE 5-TASK SETTING. BEST SCORES AMONG ALL METHODS ARE IN BOLD. LA, AA, FWT AND BWT IS THE LAST ACCURACY, AVERAGE ACCURACY, FORWARD TRANSFER AND BACKWARD TRANSFER, RESPECTIVELY. HIGHER IS BETTER FOR ALL METRICS. 
Erase 78.31 83.82 50.02 -5.88 59.52 64.14 25.64 -6.12 65.37 69.88 17.08 -6.93", "figure_data": "CRefCOCOCRefCOCO+CRefCOCOgMethodBackboneLAAA FWT BWTLAAA FWT BWTLAAA FWT BWTJoint TrainingMAttNet85.36---71.26---78.12---FinetuningMAttNet53.00 69.06 41.93 -26.51 39.35 57.28 18.28 -23.22 52.90 63.71 24.13 -26.07MAS [24]MAttNet66.70 76.77 40.59 -13.35 52.85 64.85 20.47 -11.90 57.97 66.91 26.18 -17.43GDumb [42]MAttNet55.55 60.37 38.00 -13.43 35.93 39.76 15.59 -15.49 44.10 48.88 21.73 -20.80EWC [23]MAttNet58.07 68.80 40.83 -24.24 43.98 56.88 21.04 -20.71 51.94 63.48 24.19 -22.46DMMMAttNet76.12 82.20 43.46 -5.37 62.30 69.38 24.35 -3.93 69.24 75.46 29.92 -4.24Joint Training CM-Att-Erase 86.44---72.03---80.37---FinetuningCM-Att-Erase 64.46 76.62 49.47 -18.54 40.34 48.51 26.85 -26.46 55.15 52.82 22.45 -26.79MAS [24]CM-Att-Erase 74.89 82.52 45.50 -8.16 50.69 58.12 25.59 -14.62 64.39 68.32 18.22 -8.97GDumb [42] CM-Att-Erase 20.71 30.99 5.97 -18.49 19.05 22.62 -0.27 -16.30 20.39 14.99 -9.61 7.37EWC [23]CM-Att-Erase 39.81 50.80 22.23 -7.19 29.33 36.05 8.98 -9.63 23.29 19.89 -5.40 14.35DMMCM-Att-", "figure_id": "tab_3", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "PERFORMANCE COMPARISON WITH DIFFERENT STATE-OF-THE-ART METHODS ON OUR RE-SPLIT DATASETS UNDER THE 10-TASK SETTING. TRAINING SETTINGS ARE THE SAME AS TABLE II. BEST SCORES AMONG ALL METHODS ARE IN BOLD. Erase 71.37 78.19 46.12 -4.21 53.72 55.45 26.35 -1.70 62.12 65.10 17.42 -4.24", "figure_data": "CRefCOCOCRefCOCO+CRefCOCOgMethodBackboneLAAA FWT BWTLAAA FWT BWTLAAAFWT BWTJoint TrainingMAttNet85.36---71.26---78.12---FinetuningMAttNet55.01 72.99 37.67 -11.10 39.49 50.07 18.28 -10.38 49.86 60.91 22.08 -14.91MAS [24]MAttNet63.26 74.04 36.03 -6.92 45.37 50.90 16.62 -6.35 50.34 61.15 19.61 -8.64GDumb [42]MAttNet59.40 64.28 36.93 -7.44 32.07 37.05 13.00 -8.03 46.81 50.89 18.64 -7.80EWC [23]MAttNet61.11 70.77 40.20 -7.13 38.70 48.21 18.84 -6.13 47.39 60.40 22.33 -9.20DMMMAttNet74.76 75.63 41.28 -3.03 57.52 58.53 24.11 -2.97 66.47 64.25 27.08 -2.38Joint Training CM-Att-Erase 86.44---72.03---80.37---FinetuningCM-Att-Erase 61.79 74.11 40.59 -13.67 40.12 50.85 24.58 -13.00 48.70 61.62 14.72 -21.29MAS [24]CM-Att-Erase 66.93 75.16 36.49 -5.13 49.34 53.54 20.74 -9.43 58.05 61.90 9.52-9.51GDumb [42] CM-Att-Erase 23.73 15.08 -3.941.47 18.30 15.62 0.47 -0.71 18.59 12.84 -9.395.39EWC [23]CM-Att-Erase 32.70 30.95 11.00 0.94 16.11 13.16 -0.40 -1.54 11.16 8.93 -10.27 3.05DMMCM-Att-", "figure_id": "tab_4", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "PERFORMANCE EVALUATION OF DIFFERENT COMPONENTS IN DMM UNDER 5-TASK WITH MATTNET FOR THE ABLATION STUDY, TRAINING SEQUENCE AND SETTINGS ARE THE SAME AS TABLE II. THE FIRST ROW IS THE FINETUNING. N-IM, W-IM, N-EM AND M-EM DENOTE NAÏVE IMPLICIT-MEMORY, WEIGHTED IMPLICIT-MEMORY, NAÏVE EXPLICIT-MEMORY AND MODULAR EXPLICIT-MEMORY. 
BEST SCORES AMONG ALL METHODS ARE IN BOLD.", "figure_data": "ComponentCRefCOCOCRefCOCO+CRefCOCOgN-IM W-IM N-EM M-EM LAAA FWT BTWLAAA FWT BTWLAAA FWT BTW53.00 69.06 41.93 -26.51 39.35 57.28 18.28 -23.22 52.90 63.71 24.13 -26.0766.70 76.77 40.59 -13.35 52.85 64.85 20.47 -11.90 57.97 66.91 26.18 -17.4369.89 77.97 44.26 -10.92 46.90 62.09 21.65 -15.48 61.86 69.53 28.96 -15.2068.46 78.77 42.44 -11.37 50.78 64.91 20.56 -11.81 62.79 73.25 26.92 -9.6471.98 79.08 44.43 -10.47 51.95 65.34 24.24 -10.99 65.84 73.29 25.01 -10.1176.04 81.38 44.07 -5.73 61.14 68.62 24.08 -5.32 69.03 75.19 29.88 -4.3976.12 82.20 43.46 -5.37 62.30 69.38 24.35 -3.93 69.24 75.46 29.92 -4.24", "figure_id": "tab_5", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "PERFORMANCE COMPARISON OF ABLATION STUDY ABOUT SAMPLES CHOOSING IN MODULAR EXPLICIT-MEMORY UNDER 5-TASK SETTING. WE CHOOSE MATTNET AS THE BACKBONE. BEST SCORES AMONG ALL METHODS ARE IN BOLD.", "figure_data": "StrategyCRefCOCOCRefCOCO+CRefCOCOglow random highLAAAFWTBWTLAAAFWTBWTLAAAFWTBWT✓71.98 79.08 44.43 -10.47 51.95 65.34 24.24 -10.99 65.84 73.29 25.01 -10.11✓69.43 78.23 43.42 -12.57 51.78 64.36 21.34 -11.98 59.87 62.59 12.24 -25.66✓63.36 75.64 43.59 -16.05 45.35 63.92 18.44 -14.66 57.62 62.23 17.95 -27.03", "figure_id": "tab_6", "figure_label": "V", "figure_type": "table" } ]
Heng Tao; Shen Fellow; Cheng Chen; Peng Wang; Lianli Gao; Jingkuan Song; Wang Fellow
[ { "authors": "L Yu; Z Lin; X Shen; J Yang; X Lu; M Bansal; T L Berg", "journal": "", "ref_id": "b0", "title": "Mattnet: Modular attention network for referring expression comprehension", "year": "2018" }, { "authors": "X Rong; C Yi; Y Tian", "journal": "IEEE Trans. Image Process", "ref_id": "b1", "title": "Unambiguous scene text segmentation with referring expression comprehension", "year": "2020" }, { "authors": "J Liu; W Wang; L Wang; M Yang", "journal": "IEEE Trans. Image Process", "ref_id": "b2", "title": "Attribute-guided attention for referring expression generation and comprehension", "year": "2020" }, { "authors": "K Lee; X Chen; G Hua; H Hu; X He", "journal": "", "ref_id": "b3", "title": "Stacked cross attention for image-text matching", "year": "2018" }, { "authors": "Y Wang; H Yang; X Qian; L Ma; J Lu; B Li; X Fan", "journal": "", "ref_id": "b4", "title": "Position focused attention network for image-text matching", "year": "2019" }, { "authors": "C Fuh; S Cho; K Essig", "journal": "IEEE Trans. Image Process", "ref_id": "b5", "title": "Hierarchical color image region segmentation for content-based image retrieval system", "year": "2000" }, { "authors": "R Zhang; Z Zhang", "journal": "IEEE Trans. Image Process", "ref_id": "b6", "title": "Effective image retrieval based on hidden concept discovery in image database", "year": "2007" }, { "authors": "L Gao; P Zeng; J Song; Y Li; W Liu; T Mei; H T Shen", "journal": "", "ref_id": "b7", "title": "Structured two-stream attention network for video question answering", "year": "2019" }, { "authors": "Y Zhang; J C Niebles; A Soto", "journal": "", "ref_id": "b8", "title": "Interpretable visual question answering by visual grounding from attention supervision mining", "year": "2019" }, { "authors": "L Gao; Y Lei; P Zeng; J Song; M Wang; H T Shen", "journal": "IEEE Trans. 
Image Process", "ref_id": "b9", "title": "Hierarchical representation network with auxiliary tasks for video captioning and video question answering", "year": "2022" }, { "authors": "H Tan; L Yu; M Bansal", "journal": "", "ref_id": "b10", "title": "Learning to navigate unseen environments: Back translation with environmental dropout", "year": "2019" }, { "authors": "F Zhu; Y Zhu; X Chang; X Liang", "journal": "", "ref_id": "b11", "title": "Vision-language navigation with self-supervised auxiliary reasoning tasks", "year": "2020" }, { "authors": "L Yu; P Poirson; S Yang; A C Berg; T L Berg", "journal": "", "ref_id": "b12", "title": "Modeling context in referring expressions", "year": "2016" }, { "authors": "J Mao; J Huang; A Toshev; O Camburu; A L Yuille; K Murphy", "journal": "", "ref_id": "b13", "title": "Generation and comprehension of unambiguous object descriptions", "year": "2016" }, { "authors": "R Hu; H Xu; M Rohrbach; J Feng; K Saenko; T Darrell", "journal": "", "ref_id": "b14", "title": "Natural language object retrieval", "year": "2016" }, { "authors": "K Chen; R Kovvuri; R Nevatia", "journal": "", "ref_id": "b15", "title": "Query-guided regression network with context policy for phrase grounding", "year": "2017" }, { "authors": "R Luo; G Shakhnarovich", "journal": "", "ref_id": "b16", "title": "Comprehension-guided referring expressions", "year": "2017" }, { "authors": "A Rohrbach; M Rohrbach; R Hu; T Darrell; B Schiele", "journal": "", "ref_id": "b17", "title": "Grounding of textual phrases in images by reconstruction", "year": "2016" }, { "authors": "X Liu; Z Wang; J Shao; X Wang; H Li", "journal": "", "ref_id": "b18", "title": "Improving referring expression grounding with cross-modal attention-guided erasing", "year": "2019" }, { "authors": "D L Silver; Q Yang; L Li", "journal": "", "ref_id": "b19", "title": "Lifelong machine learning systems: Beyond learning algorithms", "year": "2013" }, { "authors": "F Zenke; B Poole; S Ganguli", "journal": "", "ref_id": "b20", "title": "Continual learning through synaptic intelligence", "year": "2017" }, { "authors": "I J Goodfellow; M Mirza; D Xiao; A Courville; Y Bengio", "journal": "", "ref_id": "b21", "title": "An empirical investigation of catastrophic forgetting in gradient-based neural networks", "year": "2013" }, { "authors": "J Kirkpatrick; R Pascanu; N C Rabinowitz; J Veness; G Desjardins; A A Rusu; K Milan; J Quan; T Ramalho; A Grabska-Barwinska; D Hassabis; C Clopath; D Kumaran; R Hadsell", "journal": "CoRR", "ref_id": "b22", "title": "Overcoming catastrophic forgetting in neural networks", "year": "2016" }, { "authors": "R Aljundi; F Babiloni; M Elhoseiny; M Rohrbach; T Tuytelaars", "journal": "", "ref_id": "b23", "title": "Memory aware synapses: Learning what (not) to forget", "year": "2018" }, { "authors": "A A Rusu; N C Rabinowitz; G Desjardins; H Soyer; J Kirkpatrick; K Kavukcuoglu; R Pascanu; R Hadsell", "journal": "CoRR", "ref_id": "b24", "title": "Progressive neural networks", "year": "2016" }, { "authors": "T Veniat; L Denoyer; M Ranzato", "journal": "", "ref_id": "b25", "title": "Efficient continual learning with modular networks and task-driven priors", "year": "2021" }, { "authors": "D Lopez-Paz; M Ranzato", "journal": "NeurIPS", "ref_id": "b26", "title": "Gradient episodic memory for continual learning", "year": "2017" }, { "authors": "P Sprechmann; S M Jayakumar; J W Rae; A Pritzel; A P Badia; B Uria; O Vinyals; D Hassabis; R Pascanu; C Blundell", "journal": "", "ref_id": "b27", "title": "Memory-based parameter 
adaptation", "year": "2018" }, { "authors": "S Rebuffi; A Kolesnikov; G Sperl; C H Lampert", "journal": "", "ref_id": "b28", "title": "icarl: Incremental classifier and representation learning", "year": "2017" }, { "authors": "A Ayub; A R Wagner", "journal": "", "ref_id": "b29", "title": "EEC: learning to encode and regenerate images for continual learning", "year": "2021" }, { "authors": "M Riemer; I Cases; R Ajemian; M Liu; I Rish; Y Tu; G Tesauro", "journal": "", "ref_id": "b30", "title": "Learning to learn without forgetting by maximizing transfer and minimizing interference", "year": "2019" }, { "authors": "K Javed; M White", "journal": "NeurIPS", "ref_id": "b31", "title": "Meta-learning representations for continual learning", "year": "2019" }, { "authors": "P Wang; Q Wu; J Cao; C Shen; L Gao; A Van Den; Hengel", "journal": "", "ref_id": "b32", "title": "Neighbourhood watch: Referring expression comprehension via language-guided graph attention networks", "year": "2019" }, { "authors": "S Yang; G Li; Y Yu", "journal": "", "ref_id": "b33", "title": "Cross-modal relationship inference for grounding referring expressions", "year": "2019" }, { "authors": "", "journal": "", "ref_id": "b34", "title": "Dynamic graph attention for referring expression comprehension", "year": "2019" }, { "authors": "J Mao; C Gan; P Kohli; J B Tenenbaum; J Wu", "journal": "", "ref_id": "b35", "title": "The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision", "year": "2019" }, { "authors": "R Zeng; H Xu; W Huang; P Chen; M Tan; C Gan", "journal": "", "ref_id": "b36", "title": "Dense regression network for video grounding", "year": "2020" }, { "authors": "S Ren; K He; R B Girshick; J Sun", "journal": "IEEE Trans. Pattern Anal. Mach. 
Intell", "ref_id": "b37", "title": "Faster R-CNN: towards real-time object detection with region proposal networks", "year": "2017" }, { "authors": "Z Yang; B Gong; L Wang; W Huang; D Yu; J Luo", "journal": "", "ref_id": "b38", "title": "A fast and accurate one-stage approach to visual grounding", "year": "2019" }, { "authors": "Z Yang; T Chen; L Wang; J Luo", "journal": "", "ref_id": "b39", "title": "Improving one-stage visual grounding by recursive sub-query construction", "year": "2020" }, { "authors": "Y Liao; S Liu; G Li; F Wang; Y Chen; C Qian; B Li", "journal": "", "ref_id": "b40", "title": "A realtime cross-modality correlation filtering method for referring expression comprehension", "year": "2020" }, { "authors": "A Prabhu; P H S Torr; P K Dokania", "journal": "", "ref_id": "b41", "title": "Gdumb: A simple approach that questions our progress in continual learning", "year": "2020" }, { "authors": "T Lin; M Maire; S J Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "", "ref_id": "b42", "title": "Microsoft COCO: common objects in context", "year": "2014" }, { "authors": "K He; G Gkioxari; P Dollár; R B Girshick", "journal": "", "ref_id": "b43", "title": "Mask R-CNN", "year": "2017" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b44", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga; A Desmaison; A Köpf; E Z Yang; Z Devito; M Raison; A Tejani; S Chilamkurthy; B Steiner; L Fang; J Bai; S Chintala", "journal": "", "ref_id": "b45", "title": "Pytorch: An imperative style, highperformance deep learning library", "year": "2019" }, { "authors": " Heng Tao Shen Is The Dean", "journal": "IEEE and OSA", "ref_id": "b46", "title": "and PhD from Department of Computer Science", "year": "" }, { "authors": "Cheng Chen", "journal": "", "ref_id": "b47", "title": "with the School of Computer Science", "year": "" }, { "authors": "Jingkuan Song", "journal": "University of Electronic Science and Technology of China (UESTC)", "ref_id": "b48", "title": "His research interests include large-scale multimedia retrieval, image/video segmentation and image/video understanding using hashing, graph learning, and deep learning techniques", "year": "" }, { "authors": " Dr", "journal": "", "ref_id": "b49", "title": "Song has been an AC/SPC/PC Member of IEEE Conference on Computer Vision and Pattern Recognition for the term 2018-2021, and so on", "year": "2017" }, { "authors": "Lianli Gao", "journal": "", "ref_id": "b50", "title": "received the Ph.D. degree in information technology from The University of Queensland (UQ)", "year": "2015" }, { "authors": " Dr; Gao", "journal": "", "ref_id": "b51", "title": "was the winner of the IEEE Trans. on Multimedia", "year": "2017" }, { "authors": "Peng D Wang Received His Ph", "journal": "", "ref_id": "b52", "title": "he was a research fellow with Australian Institute for Machine Learning. His research interest lies in computer vision and deep learning", "year": "2017" } ]
[ { "formula_coordinates": [ 4, 311.98, 593.23, 251.05, 23.99 ], "formula_id": "formula_0", "formula_text": "T t by D t = {(r i , y i , c i , o i ) Mt i=1 }, where (r i , y i , c i , o i ) is the i-th training sample x i in task T t ," }, { "formula_coordinates": [ 5, 50.73, 212.09, 172.74, 59.52 ], "formula_id": "formula_1", "formula_text": "D i = K • Di Dsum 10: B i ←-SortBuf f erP ool(B i , D i ) 11: end for 12: D t = K • Dt Dsum 13: for {x, y} ∈ D t do" }, { "formula_coordinates": [ 5, 50.73, 285.87, 172.3, 56.71 ], "formula_id": "formula_2", "formula_text": "Loss = Att sub • L({ŷ}, {y}) 16: B t ←-{x, y} 17: end for 18: B ←-{B 1 , ...B t }, D ←-{D 1 , ...D t } 19: return F, B, D" }, { "formula_coordinates": [ 5, 110.95, 589.19, 189.08, 24.8 ], "formula_id": "formula_3", "formula_text": "g t i (x k ) = ∂(F t (x k ; θ i )) ∂θ i ,(1)" }, { "formula_coordinates": [ 5, 127.21, 615.25, 172.82, 23.23 ], "formula_id": "formula_4", "formula_text": "Ω t i = 1 M t Mt k=1 g t i (x k ) ,(2)" }, { "formula_coordinates": [ 5, 94.83, 731.48, 201.32, 22.31 ], "formula_id": "formula_5", "formula_text": "L(θ) = L t+1 (θ) + λ 2 i Ω t i (θ i -θ * i ) 2 . (3" }, { "formula_coordinates": [ 5, 296.15, 738.54, 3.87, 8.64 ], "formula_id": "formula_6", "formula_text": ")" }, { "formula_coordinates": [ 5, 347.85, 264.03, 215.18, 23.22 ], "formula_id": "formula_7", "formula_text": "Ω t m = 1 |F m | • M t g t i ∈Fm Mt k=1 g t i (x k ) ,(4)" }, { "formula_coordinates": [ 5, 402.79, 295.62, 69.44, 26.29 ], "formula_id": "formula_8", "formula_text": "W t Ωm = Ω t m m Ω t ." }, { "formula_coordinates": [ 5, 555.29, 304.25, 7.74, 8.64 ], "formula_id": "formula_9", "formula_text": ")5" }, { "formula_coordinates": [ 5, 321.44, 413.2, 237.73, 22.31 ], "formula_id": "formula_10", "formula_text": "L(θ) = L t+1 (θ) + λ 2 m i∈Fm W t Ωm Ω t i (θ i -θ * i ) 2 . (6" }, { "formula_coordinates": [ 5, 559.16, 420.26, 3.87, 8.64 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 7, 120.01, 576.28, 180.01, 22.31 ], "formula_id": "formula_12", "formula_text": "IOU = intersection(y, ŷ) union(y, ŷ) .(7)" }, { "formula_coordinates": [ 7, 135.03, 731.48, 164.99, 22.31 ], "formula_id": "formula_13", "formula_text": "AA = 1 i i j=1 a i,j .(8)" }, { "formula_coordinates": [ 7, 367.03, 611.46, 196.01, 22.31 ], "formula_id": "formula_14", "formula_text": "FWT = 1 N -i N j=i+1 (a i,j -b j ).(9)" }, { "formula_coordinates": [ 7, 369.69, 700.53, 193.35, 22.31 ], "formula_id": "formula_15", "formula_text": "BWT = 1 i -1 i-1 j=1 (a i,j -a j,j ).(10)" } ]
10.1145/3581783.3611713
2023-11-25
[ { "figure_ref": [ "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b25", "b28", "b0", "b5", "b6", "b21", "b27", "b29", "b31", "b33", "b40", "b26", "b24", "b8", "b38", "b24", "b11", "b20", "b23", "b39", "b4", "b7", "b8", "b12", "b38", "b27", "b8", "b19" ], "table_ref": [], "text": "Nowadays, Continual Learning (CL) [26,29] is proposed to mimic the human learning process which maintains old knowledge when acquiring new skills and knowledge. However, existing CL approaches will forget previous knowledge when they learn new tasks, i.e., catastrophic forgetting. Therefore, several branches of algorithms have been proposed to address this problem [1,6,7,22,28,30,32,34,41]. However, all these methods just satisfy the supervised learning paradigm where class labels for data points are given. And high-quality, class-labeled samples are scarce.\nSo, Unsupervised Continual Learning (UCL) which has ability to address this issue, receives more and more attention in the community. Under the UCL paradigm, CURL [27] is proposed firstly, but, its capability is restricted to tackling complex assignments. Another work [25] broadens the scope of supervised CL approaches in unsupervised paradigm and evaluates the performance of the stateof-the-art unsupervised learning methods, i.e., SimSiam [9] and BarlowTwins [39]. However, they only examine these approaches and do not propose a suitable approach for the UCL.\nIn this paper, we study the UCL paradigm follow [25]. We reveal a phenomenon when training the SimSiam that the results of the first few tasks are suboptimal. And these results show an upward trend as training continues on the following tasks, as shown in Fig. 1. However, this occurrence contradicts the practicality of continual learning, which requires the model to address catastrophic forgetting and maintain optimal performance on the current task in hand at the same time. For example, it is not acceptable to tolerate the poor performance of the AI-Model in classification and anticipate an enhancement in its classification capability after training it on a Vision & Language task [12,21,24,40]. Furthermore, to check whether this phenomenon is a common issue in UCL, we conduct experiments with a series of unsupervised learning methods [5,8,9,13,39]. Upon fine-tuning some of these methods, the same phenomenon shows up. Meanwhile, traditional metrics, namely Average Accuracy (AA) and Backward Transfer (BWT) can no longer provide an accurate assessment of the model's learning ability for each task under UCL paradigm. Therefore, this paper chooses novel metric which proposed by [28], and we term it as Mean Average Accuracy (MAA), to precisely assess task performance during the training process.\nSince the phenomenon can be measured accurately, a solution approach is necessary. Three assumptions were made after analyzing the results. 1.) The lack of diversity of classes may be the cause. The small size of classes in each task makes it difficult to obtain a discriminative representation. Consequently, after conducting experiments by increasing the number of classes in each task, although performances increase, the suboptimal problem remaines, indicating that this is not the primary factor. 2.) The slow convergence of unsupervised methods is deemed to be the cause. As a result, methods were trained for an extended period. This led to a marginal improvement of results, but the suboptimal phenomenon remained, ruling out training time as the key cause. (refer to Sec. 4.2.3 for details) 3.) 
The lack of diverse information in the features learned by unsupervised methods is believed to be the primary factor. In addition, we observe that the methods which only ever see transformed views of the same input suffer more severely from the suboptimal problem. While they can recognize the inherent features of a class, the inability to interact with other classes impedes discriminative classification. We therefore want to inject diverse information (about other classes) into the representations to guide the model to learn invariant information about the classes contained in each task. Vector Quantization (VQ) is used to achieve this objective. Clustering the features extracted by the backbone produces centroids, or codewords, within a codebook. These codewords contain diverse information about various classes. The final representation created from these codewords absorbs sufficient diversity to create a discriminative class boundary. Experiments were conducted in this way to inject diverse information into the representations of unsupervised learning techniques. The findings suggest that it enhances convergence, resulting in a 15% improvement (refer to Sec. 4.2.3 for details).
Furthermore, we contend that distinct segments of a representation contain distinct local patterns. Focusing diversity on these segments rather than on the entire representation achieves more fine-grained diversity. Meanwhile, the fundamental principle of Product Quantization (PQ) is to decompose a feature vector in high-dimensional space into multiple sub-vectors. The ultimate representation is a rearrangement of codewords that contains fine-grained information. This process is highly consistent with our idea. Consequently, in this paper, we propose a Codebook for Unsupervised Continual Learning (CUCL), which employs PQ to quantize representations learned by unsupervised learning methods. Then, we apply contrastive learning to maximize the cross-similarity between the original representations and the quantized ones of another branch, while minimizing similarity with other samples, to learn invariant knowledge that promotes a clear classification boundary.
Although the proposed CUCL can resolve the suboptimal problem, the primary issue in CL, catastrophic forgetting, persists. Fortunately, CUCL enables the codewords to learn the features of each task, and these codewords can serve as proxies for selecting representative samples. Consequently, an algorithm based on the codewords is proposed to mitigate catastrophic forgetting by selecting rehearsal samples according to their distances to the codewords.
In summary, the proposed CUCL consists of two parts. One part utilizes PQ to quantize representations and perform contrastive learning, resolving the suboptimal problem. The other part uses the codewords to select samples for rehearsal as a means of mitigating catastrophic forgetting. We conduct extensive experiments to evaluate the efficacy of our proposed method on multiple continual learning benchmarks, including CIFAR100, TinyImageNet, and MiniImageNet, by testing various supervised CL methods and their variants on UCL.
The results prove the ability of our method in enhancing these approaches. For, instance, we have observed average improvements of 9.08%, 7.64% and 6.49% on the three datasets over SI, DER, LUMP which are impletemeneted on Simsiam, in term of MAA. Additionally, we evaluate the effectiveness of our method by conducting more experiments on state-of-the-art unsupervised learning methods. The integration of our CUCL results in substantial enhancements.\nTo summarize, our main contributions are three-fold:\n• A study under Unsupervised Continual Learning (UCL) paradigm has been conducted, which uncovers a phenomenon that the efficacy on the first few tasks of certain unsupervised learning methods is limited by the diverse information contained in features.\n• We have introduced a Codebook for Unsupervised Continual Learning (CUCL), which is a plug-and-play approach towards enabling networks to acquire discriminative information through quantized representation. Furthermore, we establish a rehearsal algorithm based on the codebook to mitigate the catastrophic forgetting issue. • To demonstrate the effectiveness of our method, an extensive experimental evaluation of several benchmark datasets has been conducted. The marginal improvement confirms our method's ability to solve the suboptimal problem and alleviate catastrophic forgetting." }, { "figure_ref": [], "heading": "RELATED WORK 2.1 Continual Learning", "publication_ref": [ "b0", "b18", "b21", "b40", "b29", "b30", "b33", "b1", "b5", "b22", "b27", "b31", "b21", "b18", "b40", "b0", "b30", "b29", "b33", "b5", "b22", "b27", "b31", "b27", "b1" ], "table_ref": [], "text": "With the growing interest in continual learning, many methods have been proposed to address the catastrophic forgetting problem.\nThey can be grouped into three broad categories: regularization approaches [1,19,22,41], parameter isolation methods [30,31,34], and memory-based approaches [2,6,23,28,32]. The Regularization approaches focus on curing a continual learning network of its catastrophic forgetting by introducing an extra regularization term in the loss function. LwF [22] mitigated forgetting by using knowledge distillation and transferring knowledge, and used previous model output as soft labels for the previous task. Besides, EWC [19] was the first method to penalize the changes to important parameters during training of later tasks. SI [41] efficiently estimated importance weights during training. MAS [1] computed the importance of the parameters of a neural network in an unsupervised and online manner. The basic idea of Parameter isolation methods is to directly add or modify the model structure. HAT [31] applied the mask on previous task parts during new task training, and this process is imposed at the unit level. PNN [30] added a network to each task and lateral connections to the network of the previous task while freezing previous task parameters. MNTDP [34] proposed a modular layer network approach, whose modules represent atomic skills that can be composed to perform a certain task and provides a learning algorithm to search the modules to combine with. These strategies may work well, but they are computationally expensive and memory intensive. For Memory-based approaches, catastrophic forgetting is avoided by storing data from previous tasks and training them together with data from the current task. Some methods [6,23,28,32] used replayed samples from previous tasks to constrain the parameters' update when learning the new task. 
For example, iCaRL [28] selected a subset of exemplars that best approximate the class means in the learned feature space. In EEC [2], reconstructed images from encoded episodes were replayed during training on a new task to avoid catastrophic forgetting. Although these methods have achieved remarkable performance, they are designed only for supervised continual learning.\nOur approach is designed for the unsupervised setting, which is more realistic." }, { "figure_ref": [], "heading": "Unsupervised Representational Learning", "publication_ref": [ "b7", "b14", "b12", "b2", "b38", "b4", "b8" ], "table_ref": [], "text": "Recently there has been steady progress in unsupervised representational learning. Many methodologies [5, 8, 9, 13-15, 17, 33, 36, 37, 39] have been proposed for un-/self-supervised learning. SimCLR [8] used the other samples coexisting in the current batch as negative samples, so it worked well when equipped with a large batch size. MoCo [15] applied a queue of negative samples as a dynamic look-up dictionary, together with a momentum encoder proposed to maintain consistency. BYOL [13] was a Siamese network [3] in which one branch is a momentum encoder; it directly predicted the network representation of one view from another view. BarlowTwins [39] avoided collapse by measuring the cross-correlation matrix between the outputs of two identical networks and making it as close to the identity matrix as possible. SwAV [5] incorporated online clustering into Siamese networks. In another recent line of work, the network architecture and parameter updates of SimSiam [9] are modified to be asymmetric such that the parameters are only updated using one branch." }, { "figure_ref": [], "heading": "METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "In this section, we describe the settings of UCL and our methodology. Specifically, Section 3.1 presents the problem formulation, the metrics used in traditional continual learning, and the metric we adopt to measure performance along the training process. Then, Section 3.2 and Section 3.3 provide the details of the proposed method and explain how to select rehearsal samples." }, { "figure_ref": [], "heading": "Preliminary and Metrics", "publication_ref": [ "b27" ], "table_ref": [], "text": "In the setup of unsupervised continual learning, a series of 𝑇 tasks are learned sequentially. We denote a task by its task descriptor 𝜏 ∈ {1, 2, ..., 𝑇 } and its corresponding dataset $\mathcal{D}_\tau = \{ I_{\tau,i} \}_{i=1}^{N_\tau}$, which has $N_\tau$ samples drawn from an i.i.d. distribution. Each $I_{\tau,i}$ is an input without a supervised label. Furthermore, the task boundaries are available during both the training and testing stages, i.e., the task-incremental setting. UCL seeks to train a model $F = \{\theta_F\}$ to learn knowledge on a sequence of tasks without forgetting the knowledge learned on previous tasks, where $\theta_F$ represents the weights of the neural network.
The widely used evaluation metrics in continual learning are Average Accuracy (ACC) and Backward Transfer (BWT). Formally, ACC and BWT are defined as:
$ACC = \frac{1}{T} \sum_{i=1}^{T} A_{T,i}$, (1)
$BWT = \frac{1}{T-1} \sum_{i=1}^{T-1} \left( A_{T,i} - A_{i,i} \right)$, (2)
where $A_{T,i}$ is the performance on task $\tau = i$ after training on task $\tau = T$.
In addition, we use Mean Average Accuracy (MAA) [28]. Formally, MAA is defined as:
$MAA = \frac{1}{T} \sum_{j=1}^{T} \left( \frac{1}{j} \sum_{i=1}^{j} A_{j,i} \right)$. (3)
The MAA is calculated as the mean of the average accuracy over the tasks encountered so far at each training point.
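To make these definitions concrete, the following is a minimal sketch (ours, not the authors' released code) of how the three metrics can be computed from a matrix of task accuracies; the array layout, where A[j, i] stores the accuracy on task i after training on task j, is an illustrative assumption.

```python
import numpy as np

def acc(A):
    """Eq. (1): average accuracy over all tasks after training on the last task."""
    T = A.shape[0]
    return A[T - 1, :].mean()

def bwt(A):
    """Eq. (2): average change on past tasks between their own training point and the end."""
    T = A.shape[0]
    return float(np.mean([A[T - 1, i] - A[i, i] for i in range(T - 1)]))

def maa(A):
    """Eq. (3): mean, over training points j, of the average accuracy on the tasks seen so far."""
    T = A.shape[0]
    return float(np.mean([A[j, : j + 1].mean() for j in range(T)]))
```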
A high MAA corresponds to a continual learning model that consistently maintains a high accuracy throughout the training process. " }, { "figure_ref": [], "heading": "Codebook for Unsupervised Continual Learning", "publication_ref": [ "b17", "b37" ], "table_ref": [], "text": "In this section, we propose the Codebook for Unsupervised Continual Learning (CUCL) to address the suboptimal phenomenon prevalent in unsupervised continual learning. Fig. 2 provides an overview of the proposed method. First of all, we briefly introduce the unsupervised learning methods working on Siamese networks. These methods operate on a pair of embeddings extracted from transformed images. More specifically, they produce two distorted views via data augmentations T for all images of a batch sampled from D 𝜏 . The distorted views are then fed to the model 𝐹 , producing two batches of embeddings X 𝑎 and X 𝑏 respectively, each X ∈ R 𝐷 . The respective unsupervised learning loss is applied to X 𝑎 and X 𝑏 to produce the optimization objective L 𝑢𝑛𝑠𝑢𝑝 .
After we obtain the features X extracted by 𝐹 , following the method in [18,38], we quantize them using the soft quantization model 𝑄 (X; 𝜃 𝑄 ). This quantization method avoids the infeasible derivative calculation of hard-assignment quantization and can be trained in an end-to-end manner. The quantization model 𝑄 contains 𝑀 codebooks {B 1 , ..., B 𝑀 }, where each codebook B 𝑖 has 𝐾 codewords c ∈ R 𝐷/𝑀 , i.e., B 𝑖 = {c 𝑖1 , ..., c 𝑖𝐾 }. We split the features X into 𝑀 subvectors [x 1 , ..., x 𝑀 ] in the feature space, where each x 𝑖 ∈ R 𝐷/𝑀 is a subvector. We then use the soft quantization model to quantize these subvectors. First, we compute the distance between the subvector x 𝑖 and each codeword:
$dis(\mathbf{x}_i, \mathbf{c}_{ik}) = \lVert \mathbf{x}_i - \mathbf{c}_{ik} \rVert_2^2$, (4)
where $\lVert \cdot \rVert_2^2$ represents the squared Euclidean distance. The quantization process for each feature subvector is defined below:
$\mathbf{z}_i = \sum_{k=1}^{K} \frac{\exp(-dis(\mathbf{x}_i, \mathbf{c}_{ik}) / \tau_q)}{\sum_{k'=1}^{K} \exp(-dis(\mathbf{x}_i, \mathbf{c}_{ik'}) / \tau_q)} \, \mathbf{c}_{ik}$, (5)
where $\tau_q$ is a temperature parameter that scales the input. The contribution of each codeword is associated with the distance from the codeword to the subvector: the closest codeword makes the biggest contribution and vice versa. By applying this quantization to X 𝑎 and X 𝑏 , we obtain the quantized features Z 𝑎 and Z 𝑏 by combining the quantized subvectors z. Through this soft quantization, the information contained in each codeword is integrated into the final quantized representation, producing a more diverse representation from which to learn discriminative features.
Then we apply a cross contrastive learning objective, inspired by traditional contrastive learning, to compare the X and Z of different views. As in standard contrastive learning, we treat X and Z as a positive pair if they are generated from the same image, and as negative otherwise. We therefore apply a contrastive loss between X and Z for a positive pair of examples (𝑖, 𝑗):
$\mathcal{L}_{cucl} = -\log \frac{\exp(Cos(\mathbf{X}_i, \mathbf{Z}_j) / \tau_l)}{\sum_{n=1}^{N_B} \mathbb{1}_{[n \neq j]} \exp(Cos(\mathbf{X}_i, \mathbf{Z}_n) / \tau_l)}$, (6)
where $Cos$ denotes cosine similarity, $\tau_l$ is a non-negative temperature parameter, and $\mathbb{1}_{[n \neq j]} \in \{0, 1\}$ is an indicator that equals 1 iff $n \neq j$.
The overall loss combines the traditional unsupervised loss and our CUCL loss:
$\mathcal{L} = \mathcal{L}_{unsup} + \mathcal{L}_{cucl}$. (7)" }, { "figure_ref": [], "heading": "Codebook Rehearsal", "publication_ref": [], "table_ref": [], "text": "With the cross quantized contrastive learning, the model works well on the tasks at hand.
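Before turning to the rehearsal mechanism, the quantization step of Eqs. (4)-(5) and the cross quantized contrastive loss of Eq. (6) can be sketched in PyTorch as follows. This is our own simplified rendering rather than the authors' released code: the overall feature dimension, the module and function names, and the loss form are assumptions (Eq. (6) excludes the positive index from its denominator, while the sketch uses the standard InfoNCE cross-entropy form, so it should be read as an approximation of the paper's loss).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftProductQuantizer(nn.Module):
    # M codebooks, each holding K codewords of dimension D/M (Sec. 3.2).
    def __init__(self, dim=128, n_books=8, n_words=8, tau_q=5.0):
        super().__init__()
        assert dim % n_books == 0
        self.n_books, self.sub_dim, self.tau_q = n_books, dim // n_books, tau_q
        self.codebooks = nn.Parameter(torch.randn(n_books, n_words, self.sub_dim))

    def forward(self, x):                                    # x: (B, D) features from the backbone
        sub = x.view(x.size(0), self.n_books, self.sub_dim)                     # (B, M, D/M)
        # squared Euclidean distance of every sub-vector to every codeword, Eq. (4)
        dist = ((sub.unsqueeze(2) - self.codebooks.unsqueeze(0)) ** 2).sum(-1)  # (B, M, K)
        w = F.softmax(-dist / self.tau_q, dim=-1)                               # soft assignment, Eq. (5)
        z = torch.einsum('bmk,mkd->bmd', w, self.codebooks)                     # distance-weighted codewords
        return z.reshape(x.size(0), -1)                                         # quantized features Z

def cucl_loss(x, z, tau_l=0.5):
    # Cross quantized contrastive loss in the spirit of Eq. (6): raw features X of one view
    # against quantized features Z of the other view; matching indices are the positives.
    x, z = F.normalize(x, dim=-1), F.normalize(z, dim=-1)
    logits = x @ z.t() / tau_l                               # (B, B) cosine similarities
    targets = torch.arange(x.size(0), device=x.device)
    return F.cross_entropy(logits, targets)
```

At training time this term would be added to the backbone's own unsupervised objective, as in Eq. (7).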
The problem that the first few tasks are suboptimal is ameliorated. However, catastrophic forgetting remains. Since the codewords represent the clustered centroids of the subvectors, they can be seen as proxies of the subvectors and as carriers of information. Specifically, the codeword c 𝑖𝑐 that is closest to subvector x 𝑖 serves as the proxy of this subvector. We use these codewords to choose representative samples for rehearsal. Simply put, since the weight contribution is related to the distance in Eq. 4, we use this distance as a clue to choose samples. We compute the distance between each subvector x 𝑖 of X and its proxy c 𝑖𝑐 and sum them into a final distance:
$dis(X) = \sum_{i=1}^{M} \lVert \mathbf{x}_i - \mathbf{c}_{ic} \rVert_2^2$. (8)
In this paper, the 𝑆 samples with the largest distances are selected, because we regard these samples as the most difficult data points. Through this buffer we can not only recall the knowledge about previously learned tasks, but also retrain on these samples to learn more information about them." }, { "figure_ref": [], "heading": "EXPERIMENT", "publication_ref": [], "table_ref": [], "text": "In this section, to evaluate the effectiveness of our proposed method CUCL, we validate it on a variety of continual learning benchmarks. Additionally, we perform ablation studies to explore the usefulness of the different components." }, { "figure_ref": [], "heading": "Experimental setting", "publication_ref": [ "b19", "b10", "b34", "b40", "b5", "b3", "b24", "b8", "b38", "b12", "b7", "b4", "b2", "b24", "b9", "b24", "b35", "b15", "b24" ], "table_ref": [], "text": "Datasets. We conduct experiments on various continual learning benchmarks, including Split CIFAR100 [20], Split TinyImageNet, and Split MiniImageNet. Split CIFAR100 is constructed by randomly splitting the 100 classes of CIFAR100 into 10 tasks, where each class includes 500 training samples and 100 testing samples. Split TinyImageNet is a variant of the ImageNet dataset [11] comprising 10 randomized classes out of the 100 classes for each task, where the images are sized 64 x 64 pixels. Finally, Split MiniImageNet [35] is a dataset created by dividing the 100 classes of the ImageNet into 10 sequential tasks, each consisting of 10 classes. Each class includes 500 training samples and 100 testing samples. Each image in the mentioned datasets has a size of 84 × 84 pixels.
Baselines and Training setup. Firstly, we compare our method with state-of-the-art continual learning methods, including SI [41], A-GEM [6], DER [4], LUMP [25] and their unsupervised variants. Secondly, we compare our method with state-of-the-art unsupervised learning methods, including Simsiam [9], Barlow twins [39], BYOL [13], SimCLR [8] and SwAV [5]. All these methods employ a common underlying architecture, a Siamese [3] network structure. Our approach can be integrated into these networks since we also utilize the same network structure. We implement CUCL and reproduce the results of all these methods from the released code of [25]. We use the same hyperparameters as [10,25]. The K-Nearest Neighbors (KNN) [36] classifier is used to evaluate the representations obtained from the unsupervised learning. Resnet18 [16] is employed as the backbone and trained for 200 epochs in each task with a batch size of 256 in the UCL paradigm. In the supervised paradigm, we follow the setting introduced by [25] and train the methods for 50 epochs with a batch size of 32. We set the temperature parameters for quantization and loss to 5 and 0.5, respectively. Concerning the quantizer setting in CUCL, we employ a 24-bit quantization scheme, which incorporates 8 codebooks of 8 codewords each with a dimension of 16. We maintain a uniform learning rate of 0.03 in all the experiments.
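Returning briefly to the rehearsal step, the codeword-based selection of Eq. (8) can be sketched as follows; this is our illustration, and the tensor shapes and function name are assumptions.

```python
import torch

@torch.no_grad()
def select_rehearsal_samples(features, codebooks, S=20):
    """features: (N, D) features of the current task; codebooks: (M, K, D/M).
    Returns the indices of the S samples furthest from their proxy codewords (Eq. 8)."""
    N, D = features.shape
    M = codebooks.size(0)
    sub = features.view(N, M, D // M)                                   # (N, M, D/M) sub-vectors
    dist = ((sub.unsqueeze(2) - codebooks.unsqueeze(0)) ** 2).sum(-1)   # (N, M, K) squared distances
    proxy_dist = dist.min(dim=-1).values                                # distance to the closest codeword
    total = proxy_dist.sum(dim=-1)                                      # Eq. (8), one value per sample
    return torch.topk(total, k=S, largest=True).indices                 # hardest samples kept for rehearsal
```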
Concerning the CUCL rehearsal algorithm, we opt to retain 20 samples per task. In the main experiments, if there are no special instructions, the CUCL is the combination of CUCL and the Codebook Rehearsal.\nEvaluation metrics. We evaluate the performance on the following metrics: Average Accuracy (ACC), Backward Transfer (BWT) and Mean Average Accuracy (MAA)." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Comparison with State of the Arts.", "publication_ref": [ "b24", "b24" ], "table_ref": [], "text": "To evaluate the ability of our method to solve the suboptimal problem and mitigate catastrophic forgetting, we compare our method with state-of-the-art supervised methods and its variants under UCL paradigm. Specifically, we verify our CUCL by incorporating it into these variants. Following recent work [25], Simsiam and Barlow twins are chosen as unsupervised backbones. Quantitative results on three datasets are shown in Tab. 1.\nFrom Tab. 1, we have following observations: Firstly, the unsupervised variants achieve significantly better performance than supervised methods, as illustrated in [25]. Secondly, these variants which are based on Simsiam perform well on three datasets in terms of BWT and AA. Specifically, about BWT, LUMP achieves positive results as 3.60%, 5.93% and 4.52%. However, after reviewing the results of each individual task, we observe that the improvements of the first few tasks compensate for the forgetting effect, i.e., suboptimal problem. Consequently, the AA and BWT metrics cannot accurately evaluate the learning ability of the model. Thirdly, in term of MAA, after integrated with CUCL, we have observed average improvements of 9.08%, 7.64% and 6.49% on the three datasets over SI, DER, LUMP, respectively. These substantial improvements demonstrate the efficiency of our approach in addressing the suboptimal problem. Fourthly, the Barlow twins approach does not suffer the suboptimal problem and performs better than Simsiam. Nevertheless, when our CUCL is incorporated with it, there are still 1.2%, 1.32% and 4.28% improvements over SI, DER, LUMP, respectively. This occurrence indicates that our method is not only capable of addressing the suboptimal problem but also able to learn superior representations that enhance the class boundary. Moreover, in terms of BWT, the improvements verify the significant efficacy of our approach for mitigating catastrophic forgetting." }, { "figure_ref": [], "heading": "Comparison with Unsupervised", "publication_ref": [], "table_ref": [], "text": "Learning Methods. To verify our method is plug-and-play, we incorporate our CUCL into different unsupervised learning methods. Quantitative results on three datasets are shown in Tab. 2. From Tab. 2, it's worth noting that, in terms of BWT, the Simsiam and BYOL exhibit remarkable results. The reason lies in the fact that they also suffer the suboptimal problem (the detail of the learning curve will show in Sec.4.2.4). This issue is addressed by applying our CUCL model, which leads to a significant improvement for the Simsiam and BYOL methods (average 9.54% and 5.2% in terms of MAA, respectively). Specifically, on the CIFAR100 dataset, we observe considerate improvements on Simsiam in terms of MAA (67.48% vs. 76.07%), showing notable generalizability for CUCL. In addition, other unsupervised methods employ various techniques to handle negative samples within a batch to solve the suboptimal problem. 
Nevertheless, when our approach is incorporated, significant improvements can still be achieved (e.g., 3.43% of SwAV on TinyImageNet), manifesting the great ability of our CUCL to mitigate the catastrophic forgetting. Furthermore, as discovered from Tab. 2, with the task difficulty increasing when training on TinyImageNet and MiniImageNet, results of each methodology suffer degradation. It is surprising that, in terms of BWT, BYOL even remains positive, indicating that the suboptimal problem is more severe than on CIFAR100. Additionally, some unsupervised methods (e.g., SwAV), which initially perform well on CIFAR100, also suffer from the suboptimal problem. We speculate it is because discriminative features and segmented quantization are more crucial than simple tasks. Thus, when our approach is incorporated into these methods, the improvements are more substantial than those achieved in CIFAR100." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Ablation Studies.", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "Analysis of Diversity for Representation. To confirm the significance of diversity, we conduct experiments on CIFAR100 dataset by chooing Simsiam as unsupervised backbone. We keep other operations unchanged and only apply quantization to its final representations, which are used for unsupervised loss. Further, as a comparison base, we record the results of fine-tune.\nInitially, we employ a quantizer that only utilize a single codebook to capture the entirety of representations. This quantizer solely clusters representation to create final representation and does not utilize local patterns in different segments. The results are shown in Fig. 3, named Simsiam w/ Diversity. The figure clearly depicts that the inclusion of diversity substantially mitigate the suboptimal issue, as indicated by a significant improvement in performance from 50% to 65.4% on the first task. However, along the training process, the inadequacy of using entire representation becomes apparent. We speculate it is because the introduction of course diversity impacts the learning process on subsequent tasks as the number of observed classes increases.\nThis problem can be resolved by utilizing segmented quantized representation, which is validated by Fig. 3, showing an increase in accuracy of 22.4% for the first task (i.e., Simsiam w/ Quantization). Furthermore, upon complete training, the result culminates to 71.46%. These outcomes substantiate our proposal that instilling diversity in distinct representation segments leads to the accumulation of more information.\nAnalysis of Different Codebook setting. To further validate the effectiveness of local patterns in CUCL, we conduct experiments using different quantizers. We exclude the interference of rehearsal by conducting the experiments under no rehearsal samples. The experimental settings involve fixing the dimension of the representation to 16. To maintain the information contained in the quantizer equally, the number of codewords varies according to the number of codebooks. Additionally, the memory usage is positively correlated with the number of codewords, which is exponentially related to the bits. To keep the cost manageable, we apply the 16-bit quantization in this ablation study. The experimental results, shown in Tab. 3, indicate that using only one codebook results in low performance due to the absence of local patterns. 
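As a small aside on why these four settings are directly comparable, each configuration in this ablation encodes the same total number of bits per representation, namely Cb × log2(Cw); the quick check below is our own illustration.

```python
import math

for n_books, n_words in [(1, 65536), (2, 256), (4, 16), (8, 4)]:
    print(n_books, n_words, n_books * int(math.log2(n_words)))   # -> 16 bits in every case
```

By the same formula, the 8-codebook, 8-codeword setting used in the main experiments corresponds to 24 bits.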
Moreover, as the number of codebooks increases, the different codewords contribute distinct information to the quantized representation, resulting in more diverse and informative representations. These representations contribute to better final performance. However, the results of 8 codebooks on TinyImageNet degrade significantly. We posit that the reason is the lack of enough codewords for this harder task, so that the reorganization of the quantized subvectors misleads the learning of the model. Comparing the results with those in Tab. 4 (61.33% vs. 62.10%), increasing the number of codewords to 8 can capture the local patterns well and correct the learning process.
Analysis of Different Memory Size. In this ablation study, to explore the effectiveness of our rehearsal algorithm, we conduct experiments with different settings of 𝑆, including 0, 20, and 40. The quantizer in CUCL consists of 8 codebooks, each of which has 8 codewords. As shown in Tab. 4 on the CIFAR100 dataset, the rehearsal samples not only alleviate catastrophic forgetting but are also beneficial for unsupervised learning, because unsupervised learning approaches learn discriminative features rather than the class-specific information learned by supervised methods. The more samples from different classes are seen, the more information about classification boundaries is learned. Additionally, it is noteworthy that when Barlow-twins saves 20 samples for rehearsal on the CIFAR100 dataset, the BWT becomes worse while MAA and AA increase. By checking the results of every task after training on each task, we find that the rehearsal samples promote plasticity. However, the stability does not increase comparably to the plasticity, resulting in a worse BWT result. Meanwhile, the balance between plasticity and stability of Barlow-twins worked well on the TinyImageNet dataset as the size increased. Furthermore, on the TinyImageNet dataset, we can observe that the results of Simsiam with 20 samples are worse than those with 40 samples in terms of AA and BWT. However, comparing the specific results, we find that the former is in fact better than the latter, which is consistent with the results in terms of the MAA metric. We argue that a possible reason is again this imbalance leading to fluctuations in learning. Overall, these results demonstrate the benefits of our rehearsal algorithm.
Analysis of Longer Training. To validate our second assumption in Sec. 1 that one reason for the poor performance of the initial few tasks is slow convergence, we conduct experiments with different numbers of epochs on the CIFAR100 dataset, including 200, 300, and 400 epochs. The outcomes are shown in Table 5. We observe that increasing the number of training epochs results in better performance for Simsiam and Barlow-twins. However, the results of Simsiam still suffer from the suboptimal issue, which demonstrates that this assumption is false. In addition, the results of 400-epoch Simsiam training are still inferior to those of 200-epoch Simsiam with CUCL training.
Analysis of the Training Process. To further evaluate the learning ability of the model during the training process, we plot the results on CIFAR100 in terms of the MAA metric. Three models, Simsiam, BYOL, and Barlow-twins, are considered as unsupervised backbones. Dashed lines represent results when the models are fine-tuned, while solid curves show the performance when our CUCL is integrated. The final learning curves are shown in Fig. 4. These curves reveal that Simsiam and BYOL face the suboptimal issue. Specifically, we observe considerable improvements on Simsiam at the first task (50.4% vs. 78.6%) and at the end (67.48% vs. 76.07%). Furthermore, Barlow-twins exhibits good performance in the initial task, but it suffers catastrophic forgetting (77.3% vs. 72.91%). Nonetheless, our approach enhances the first task's outcome to 77.7%. In addition, the effective rehearsal method helps Barlow-twins to retain previously acquired knowledge, leading to a 75.29% outcome." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "This paper investigated the unsupervised continual learning paradigm. Our results revealed a problem that the performances of the first few tasks are suboptimal.
Additionally, to ameliorate this problem, we proposed Codebook for Unsupervised Continual Learning (CUCL) to confer diversity for representations and apply the contrastive learning on the original representation and the quantized one from a different view to guide the model to capture discriminative features. The outcomes of extensive experiments demonstrated the efficacy of our proposed method." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This study is supported by grants from National Key R&D Program of China (2022YFC2009903/2022YFC2009900), the National Natural Science Foundation of China (Grant No. 62122018, No. 62020106008, No. 61772116, No. 61872064), Fok Ying-Tong Education Foundation(171106), and SongShan Laboratory YYJC012022019." } ]
The focus of this study is on Unsupervised Continual Learning (UCL), as it presents an alternative to Supervised Continual Learning, which requires high-quality manually labeled data. Experiments under the UCL paradigm reveal a phenomenon where the results on the first few tasks are suboptimal. This phenomenon can render the model inappropriate for practical applications. To address this issue, after analyzing the phenomenon and identifying the lack of diversity as a vital factor, we propose a method named Codebook for Unsupervised Continual Learning (CUCL), which encourages the model to learn the discriminative features needed to form clear class boundaries. Specifically, we first introduce Product Quantization to inject diversity into the representation and apply a cross quantized contrastive loss between the original representation and the quantized one to capture discriminative information. Then, based on the quantizer, we propose an effective Codebook Rehearsal to address catastrophic forgetting. This study involves conducting extensive experiments on the CIFAR100, TinyImageNet, and MiniImageNet benchmark datasets. Our method significantly boosts the performance of supervised and unsupervised methods. For instance, on TinyImageNet, our method led to a relative improvement of 12.76% and 7% when compared with Simsiam and BYOL, respectively. Codes are publicly available at https://github.com/zackschen/CUCL.
CUCL: Codebook for Unsupervised Continual Learning
[ { "figure_caption": "Figure 1 :1Figure1: The performance of Simsiam[9] and Simsiam with CUCL for the first task of Split CIFAR100[20]. Each bar 𝑖 represents the result of the first task after training on task 𝑖. We observe two main findings. Firstly, the result is suboptimal upon completing the training for the first task. Nonetheless, this result improves as the training progresses. Secondly, we note a considerable improvement in performance when CUCL is integrated with Simsiam.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: overview of the proposed CUCL. The samples are augmented using data augmentations T and then passed on to the Backbone to obtain the original feature representation X ∈ R 𝐷 . The representations are subjected to the traditional learning loss L 𝑢𝑛𝑠𝑢𝑝 . Subsequently, the original representations are divided into subvectors x ∈ R 𝐷/𝑀 . The soft quantizer is applied to the subvectors x to obtain quantized subvectors z. Then, the quantized subvectors z are reorganized into a representation Z, which leads to enhanced diversity and robustness. The cross contrastive loss L 𝑐𝑢𝑐𝑙 is imposed on both the original representations X and the quantized Z in the subsequent stage. Finally, the model is optimized by the final loss: L = L 𝑢𝑛𝑠𝑢𝑝 + L 𝑐𝑢𝑐𝑙 .", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "is a dataset created by dividing the 100 classes of the ImageNet into 10 sequential tasks, each consisting of 10 classes. Each class includes 500 training samples and 100 testing samples. Each image in the mentioned datasets has a size of 84 × 84 pixels.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The learning curves of MAA. These experiments are conducted on the Split CIFAR100 based on the Simsiam methodology. These results demonstrate the necessity of the diversity.achieved (e.g., 3.43% of SwAV on TinyImageNet), manifesting the great ability of our CUCL to mitigate the catastrophic forgetting.Furthermore, as discovered from Tab. 2, with the task difficulty increasing when training on TinyImageNet and MiniImageNet, results of each methodology suffer degradation. It is surprising that, in terms of BWT, BYOL even remains positive, indicating that the suboptimal problem is more severe than on CIFAR100. Additionally, some unsupervised methods (e.g., SwAV), which initially perform well on CIFAR100, also suffer from the suboptimal problem. We speculate it is because discriminative features and segmented quantization are more crucial than simple tasks. Thus, when our approach is incorporated into these methods, the improvements are more substantial than those achieved in CIFAR100.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Simsiam training are still inferior to those of 200-epoch Simsiam with CUCL training.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: The learning curves in terms of MAA on the Split CIFAR100. The dashed lines shows the results of finetuning these methodologies. The solid lines are the results when they are incorporated with our approach. 4.2.4 Analysis of the training process. 
To further evaluate the learning ability of model during the training process, we plot the results on CIFAR100 in terms of MAA metric. Three models including Simsiam, BYOL, and Barlow-twins are considered as unsupervised backbones. Dashed lines represent results when the models are fine-tuned, while solid curves show the performance when our CUCL is integrated. The final learning curves are shown in Fig.4. These curves reveal that Simsiam and BYOL face the suboptimal issue. Specifically, we observe considerate improvements on Simsiam on the first (50.4% vs. 78.6%), and on the end (67.48% vs. 76.07%). Furthermore, Barlow-twins exhibits good performance in the initial task, but it suffers catastrophic forgetting (77.3% vs. 72.91%). Nonetheless, our approach enhances the first task's outcome to 77.7%. In addition, the effective rehearsal method helps Barlowtwins to retain previous acquired knowledge, leading to a 75.29% outcome.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "The quantitative results of supervised baselines and their unsupervised variants on Resnet-18 architecture with KNN classifier[36]. While w/ CUCL is the adaptation of our method to other methodologies. There are considerable improvements when our CUCL plugged into each baseline. +6.04 76.68 +2.36 -5.12 -2.46 62.45 +11.08 64.48 +5.68 -4.49 -3.51 64.91 +10.12 67.05 +3,81 -4.44 -4.74 +3.27 77.24 +1.93 -4.39 -0.35 62.41 +11.4 65.10 +6.44 -4.04 -3.22 65.03 +8.25 67.15 +3.43 -4.42 -3.81 +8.24 74.78 +4.95 -0.82 -4.42 58.51 +4.91 59.42 -1.7 -2.80 -8.73 58.67 +6.31 60.25 +0.71 -2.36 -6.98 +1.27 74.93 +1.41 -7.68 +0.48 61.59 +1.2 62.44 +2.72 -6.11 +2.62 62.67 +1.14 64.50 +2.53 -6.61 +1.3 +1.47 75.47 +2.01 -7.16 +0.47 61.57 +1.17 60.88 +0.84 -8.82 +0.00 63.10 +1.31 63.82 +1.35 -7.64 +0.29 +4.98 73.66 +6.93 -3.88 +2.45 59.43 +4.17 61.58 +3.8 -3.04 -0.42 58.60 +3.69 61.80 +4.14 -1.97 +0.66", "figure_data": "SettingMethodMAASplit CIFAR100 AABWTSplit TinyImageNet MAA AABWTSplit MiniImageNet MAA AABWTSI [41]59.1755.77-32.0052.1244.42-35.2750.1244.97-34.52SupervisedA-GEM [6]58.1054.77-31.2452.0048.74-29.6750.8448.09-30.32DER [4]72.4867.97-20.0960.8356.74-21.4058.5254.49-23.40SI69.7574.32-2.6651.3758.80-0.9854.7963.240.30Simsiamw/ CUCL 75.79 DER 73.05 w/ CUCL 76.32 LUMP [25] 63.9575.31 69.83-4.04 3.6051.01 53.6058.66 61.12-0.82 5.9356.78 52.3663.72 59.54-0.61 4.52w/ CUCL SI w/ CUCL 74.40 DER 73.13 73.15 72.19 Barlow twins w/ CUCL 74.62 LUMP 67.1873.52 73.46 66.73-8.16 -7.63 -6.3360.39 60.40 55.2659.72 60.04 57.78-8.73 -8.82 -2.6261.53 61.79 54.9161.97 62.47 57.66-7.91 -7.93 -2.63w/ CUCL72.16", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The quantitative result of each unsupervised methods on Resnet-18 architecture with KNN classifier. While w/ CUCL is the adaptation of our method to other unsupervised learning methodologies. The best results are highlighted in bold. 
+8.59 75.81 +1.94 -6.26 -5.06 63.23 +12.76 64.42 +7.24 -5.18 -2.56 64.61 +7.28 66.17 +1.81 -5.30 -4.38 +2.38 75.18 +2.22 -8.82 -0.35 63.12 +3.22 63.58 +3.82 -7.27 +0.95 64.31 +3.05 66.16 +2.84 -6.56 +0.36 +0.99 75.88 +1.74 -5.23 +1.35 63.81 +0.42 64.14 +0.9 -5.36 +1.2 63.72 +0.57 64.77 -0.53 -5.09 -0.38 +1.59 74.86 +0.42 -6.39 -2.19 61.95 +3.43 62.84 -0.1 -5.40 -4.64 63.22 +1.19 64.29 +1.63 -5.63 -0.96", "figure_data": "MethodMAASplit CIFAR100 AABWTSplit TinyImageNet MAA AABWTSplit MiniImageNet MAA AA BWTSimsiam [9]67.4873.87-1.2050.4757.18-2.6257.3364.36-0.92w/ CUCL 76.07 BYOL [13] 75.4577.80-1.4056.9063.981.1657.7566.563.96w/ CUCL76.78 +1.33 77.67 -0.13 -4.19 -2.7963.90 +765.66 +1.68 -4.58 -5.74 64.50 +7.28 66.99 +0.43 -3.74 -7.7Barlow-twins [39]72.9172.96-8.4759.9059.76-8.2261.2663.32-6.92w/ CUCL 75.29 SimCLR [8] 74.3974.14-6.5863.4963.24-6.5663.1565.30-4.71w/ CUCL 75.38 SwAV [5] 73.1874.44-4.2058.5262.94-0.7662.0362.66-4.67w/ CUCL74.77", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study about the different codebook settings on the Simsiam w/ CUCL without rehearsal. 'Cb' and 'Cw' represent the number of codebook and codeword, respectively.", "figure_data": "Cb CwSplit CIFAR100 MAA AA BWT MAA AA BWT Split TinyImageNet1 65536 71.64 71.27 -10.81 61.92 64.34 -4.802256 73.72 72.95 -10.31 62.82 64.34 -5.4041673.80 73.15 -10.20 62.84 63.38 -6.878475.60 76.23 -6.14 61.33 62.22 -7.22", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "This ablation study is to validate the effectiveness of our rehearsal algorithm. The experiments are conducted about different memory size, 0, 20, and 40. These results show the effectiveness of our rehearsal algorithm to boost the performances.", "figure_data": "SizeMethodSplit CIFAR100 MAA AA BWT MAA AA BWT Split TinyImageNet0Simsiam Barlow-twins 74.81 75.03 -8.19 61.70 61.04 -9.13 75.83 75.36 -7.62 62.10 62.98 -6.5620Simsiam Barlow-twins 75.29 75.18 -8.82 63.12 63.58 -7.27 76.07 75.81 -6.26 63.23 64.42 -5.1840Simsiam Barlow-twins 76.20 76.77 -6.11 63.82 64.22 -6.36 76.37 76.20 -5.97 62.70 64.42 -4.56", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Experiment results on CIFAR100 dataset with different training time. The best results are highlighted in bold.", "figure_data": "Epoch CUCLSimsiam MAA AA BWT MAA AA BWT Barlow-twins200w/o 67.48 73.87 -1.20 72.91 72.96 -8.47 w/ 76.07 75.81 -6.26 75.29 75.18 -8.82300w/o 71.75 74.95 -4.63 74.58 75.07 -7.66 w/ 77.84 77.15 -6.87 76.40 74.91 -10.27400w/o 74.77 75.05 -8.07 75.40 75.49 -7.57 w/ 78.34 75.45 -9.26 76.45 75.02 -10.41the learning of the model. Comparing the results with those in Tab.4 (61.33% vs. 62.10%), increasing the number of codewords to 8 cancapture the local patterns well and correct the learning process.", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" } ]
Cheng Chen; Jingkuan Song; Xiaosu Zhu; Junchen Zhu; Lianli Gao; Hengtao Shen
[ { "authors": "Rahaf Aljundi; Francesca Babiloni; Mohamed Elhoseiny; Marcus Rohrbach; Tinne Tuytelaars", "journal": "", "ref_id": "b0", "title": "Memory Aware Synapses: Learning What (not) to Forget", "year": "2018" }, { "authors": "Ali Ayub; Alan R Wagner", "journal": "", "ref_id": "b1", "title": "EEC: Learning to Encode and Regenerate Images for Continual Learning", "year": "2021" }, { "authors": "Jane Bromley; Isabelle Guyon; Yann Lecun; Eduard Säckinger; Roopak Shah", "journal": "NeurIPS", "ref_id": "b2", "title": "Signature Verification Using a Siamese Time Delay Neural Network", "year": "1993" }, { "authors": "Pietro Buzzega; Matteo Boschini; Angelo Porrello; Davide Abati; Simone Calderara", "journal": "", "ref_id": "b3", "title": "Dark Experience for General Continual Learning: a Strong, Simple Baseline", "year": "2020" }, { "authors": "Mathilde Caron; Ishan Misra; Julien Mairal; Priya Goyal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b4", "title": "Unsupervised Learning of Visual Features by Contrasting Cluster Assignments", "year": "2020" }, { "authors": "Arslan Chaudhry; Marc'aurelio Ranzato; Marcus Rohrbach; Mohamed Elhoseiny", "journal": "", "ref_id": "b5", "title": "Efficient Lifelong Learning with A-GEM", "year": "2019" }, { "authors": "Cheng Chen; Ji Zhang; Jingkuan Song; Lianli Gao", "journal": "ACM MM", "ref_id": "b6", "title": "Class Gradient Projection For Continual Learning", "year": "2022" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey E Hinton", "journal": "", "ref_id": "b7", "title": "A Simple Framework for Contrastive Learning of Visual Representations", "year": "2020" }, { "authors": "Xinlei Chen; Kaiming He", "journal": "", "ref_id": "b8", "title": "Exploring Simple Siamese Representation Learning", "year": "2021" }, { "authors": "Xinlei Chen; Saining Xie; Kaiming He", "journal": "", "ref_id": "b9", "title": "An Empirical Study of Training Self-Supervised Vision Transformers", "year": "2021" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "", "ref_id": "b10", "title": "Ima-geNet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Lianli Gao; Pengpeng Zeng; Jingkuan Song; Xianglong Liu; Heng Tao Shen", "journal": "ACM MM", "ref_id": "b11", "title": "Examine before You Answer: Multi-task Learning with Adaptive-attentions for Multiple-choice VQA", "year": "2018" }, { "authors": "Jean-Bastien Grill; Florian Strub; Florent Altché; Corentin Tallec; Pierre H Richemond; Elena Buchatskaya; Carl Doersch; Bernardo Ávila Pires; Zhaohan Guo; Mohammad Gheshlaghi Azar; Bilal Piot; Koray Kavukcuoglu; Rémi Munos; Michal Valko", "journal": "", "ref_id": "b12", "title": "Bootstrap Your Own Latent -A New Approach to Self-Supervised Learning", "year": "2020" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross B Girshick", "journal": "", "ref_id": "b13", "title": "Masked Autoencoders Are Scalable Vision Learners", "year": "2022" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross B Girshick", "journal": "", "ref_id": "b14", "title": "Momentum Contrast for Unsupervised Visual Representation Learning", "year": "2020" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b15", "title": "Deep Residual Learning for Image Recognition", "year": "2016" }, { "authors": "R ; Devon Hjelm; Alex Fedorov; Samuel Lavoie-Marchildon; Karan Grewal; Philip Bachman; Adam Trischler; Yoshua Bengio", 
"journal": "", "ref_id": "b16", "title": "Learning deep representations by mutual information estimation and maximization", "year": "2019" }, { "authors": "Young Kyun; Jang ; Nam Ik Cho", "journal": "", "ref_id": "b17", "title": "Self-supervised Product Quantization for Deep Unsupervised Image Retrieval", "year": "2021" }, { "authors": "James Kirkpatrick; Razvan Pascanu; Neil C Rabinowitz; Joel Veness; Guillaume Desjardins; Andrei A Rusu; Kieran Milan; John Quan; Tiago Ramalho; Agnieszka Grabska-Barwinska; Demis Hassabis; Claudia Clopath; Dharshan Kumaran; Raia Hadsell", "journal": "", "ref_id": "b18", "title": "Overcoming catastrophic forgetting in neural networks", "year": "2016" }, { "authors": "Alex Krizhevsky; Geoffrey Hinton", "journal": "", "ref_id": "b19", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Xiangpeng Li; Lianli Gao; Xuanhan Wang; Wu Liu; Xing Xu; Heng Tao Shen; Jingkuan Song", "journal": "ACM MM", "ref_id": "b20", "title": "Learnable Aggregating Net with Diversity Learning for Video Question Answering", "year": "2019" }, { "authors": "Zhizhong Li; Derek Hoiem", "journal": "TPAMI", "ref_id": "b21", "title": "Learning without Forgetting", "year": "2018" }, { "authors": "David Lopez; - Paz; Marc'aurelio Ranzato", "journal": "NeurIPS", "ref_id": "b22", "title": "Gradient Episodic Memory for Continual Learning", "year": "2017" }, { "authors": "Xinyu Lyu; Lianli Gao; Pengpeng Zeng; Heng Tao Shen; Jingkuan Song", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b23", "title": "Adaptive Fine-Grained Predicates Learning for Scene Graph Generation", "year": "2023" }, { "authors": "Divyam Madaan; Jaehong Yoon; Yuanchun Li; Yunxin Liu; Sung Ju Hwang", "journal": "", "ref_id": "b24", "title": "Representational Continuity for Unsupervised Continual Learning", "year": "2022" }, { "authors": "Michael Mccloskey; Neal J Cohen", "journal": "Psychology of learning and motivation", "ref_id": "b25", "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "year": "1989" }, { "authors": "Dushyant Rao; Francesco Visin; Andrei A Rusu; Razvan Pascanu; Yee Whye Teh; Raia Hadsell", "journal": "NeurIPS", "ref_id": "b26", "title": "Continual Unsupervised Representation Learning", "year": "2019" }, { "authors": " Sylvestre-Alvise; Alexander Rebuffi; Georg Kolesnikov; Christoph H Sperl; Lampert", "journal": "", "ref_id": "b27", "title": "iCaRL: Incremental Classifier and Representation Learning", "year": "2017" }, { "authors": "B Mark; Ring", "journal": "", "ref_id": "b28", "title": "Child: A First Step Towards Continual Learning", "year": "1998" }, { "authors": "Andrei A Rusu; Neil C Rabinowitz; Guillaume Desjardins; Hubert Soyer; James Kirkpatrick; Koray Kavukcuoglu; Razvan Pascanu; Raia Hadsell", "journal": "", "ref_id": "b29", "title": "Progressive Neural Networks", "year": "2016" }, { "authors": "Joan Serrà; Didac Suris; Marius Miron; Alexandros Karatzoglou", "journal": "", "ref_id": "b30", "title": "Overcoming Catastrophic Forgetting with Hard Attention to the Task", "year": "2018" }, { "authors": "Pablo Sprechmann; M Siddhant; Jack W Jayakumar; Alexander Rae; Adrià Pritzel; Benigno Puigdomènech Badia; Oriol Uria; Demis Vinyals; Razvan Hassabis; Charles Pascanu; Blundell", "journal": "", "ref_id": "b31", "title": "Memory-based Parameter Adaptation", "year": "2018" }, { "authors": "Yonglong Tian; Dilip Krishnan; Phillip Isola", "journal": "", "ref_id": "b32", "title": 
"Contrastive Multiview Coding", "year": "2020" }, { "authors": "Tom Veniat; Ludovic Denoyer; Marc'aurelio Ranzato", "journal": "", "ref_id": "b33", "title": "Efficient Continual Learning with Modular Networks and Task-Driven Priors", "year": "2021" }, { "authors": "Oriol Vinyals; Charles Blundell; Tim Lillicrap; Koray Kavukcuoglu; Daan Wierstra", "journal": "NeurIPS", "ref_id": "b34", "title": "Matching Networks for One Shot Learning", "year": "2016" }, { "authors": "Zhirong Wu; Yuanjun Xiong; Stella X Yu; Dahua Lin", "journal": "", "ref_id": "b35", "title": "Unsupervised Feature Learning via Non-Parametric Instance Discrimination", "year": "2018" }, { "authors": "Mang Ye; Xu Zhang; Pong C Yuen; Shih-Fu Chang", "journal": "", "ref_id": "b36", "title": "Unsupervised Embedding Learning via Invariant and Spreading Instance Feature", "year": "2019" }, { "authors": "Tan Yu; Jingjing Meng; Chen Fang; Jin Hailin; Junsong Yuan", "journal": "IJCV", "ref_id": "b37", "title": "Product Quantization Network for Fast Visual Search", "year": "2020" }, { "authors": "Jure Zbontar; Li Jing; Ishan Misra; Yann Lecun; Stéphane Deny", "journal": "", "ref_id": "b38", "title": "Barlow Twins: Self-Supervised Learning via Redundancy Reduction", "year": "2021" }, { "authors": "Pengpeng Zeng; Haonan Zhang; Jingkuan Song; Lianli Gao", "journal": "", "ref_id": "b39", "title": "S2 Transformer for Image Captioning", "year": "2022" }, { "authors": "Friedemann Zenke; Ben Poole; Surya Ganguli", "journal": "", "ref_id": "b40", "title": "Continual Learning Through Synaptic Intelligence", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 386.51, 517.33, 172.23, 24.75 ], "formula_id": "formula_0", "formula_text": "𝐴𝐶𝐶 = 1 𝑇 𝑇 ∑︁ 𝑖=1 𝐴 𝑇 ,𝑖 ,(1)" }, { "formula_coordinates": [ 3, 384.55, 547.2, 174.19, 26.73 ], "formula_id": "formula_1", "formula_text": "𝐵𝑊𝑇 = 1 𝑇 -1 𝑇 -1 ∑︁ 𝑖=1 𝐴 𝑇 ,𝑖 -𝐴 𝑖,𝑖 ,(2)" }, { "formula_coordinates": [ 3, 390.15, 633.78, 60.23, 24.75 ], "formula_id": "formula_2", "formula_text": "𝑀𝐴𝐴 = 1 𝑇 𝑇 ∑︁ 𝑗=1 (1" }, { "formula_coordinates": [ 3, 447.08, 633.15, 108.49, 25.38 ], "formula_id": "formula_3", "formula_text": "𝑗 𝑗 ∑︁ 𝑖=1 𝐴 𝑗,𝑖 ). (3" }, { "formula_coordinates": [ 3, 555.57, 642.06, 3.17, 7.94 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 4, 390.4, 429.8, 168.34, 12.95 ], "formula_id": "formula_5", "formula_text": "𝑑𝑖𝑠 (x 𝑖 , c 𝑖𝑘 ) = ∥x 𝑖 -c 𝑖𝑘 ∥ 2 2 ,(4)" }, { "formula_coordinates": [ 4, 367.04, 476.18, 191.7, 25.62 ], "formula_id": "formula_6", "formula_text": "z 𝑖 = 𝐾 ∑︁ 𝑘 𝑒𝑥𝑝 (-𝑑𝑖𝑠 (x 𝑖 , c 𝑖𝑘 )/𝜏 𝑞 ) 𝐾 𝑘 ′ 𝑒𝑥𝑝 (-𝑑𝑖𝑠 (x 𝑖 , c 𝑖𝑘 ′ )/𝜏 𝑞 ) c 𝑖𝑘 ,(5)" }, { "formula_coordinates": [ 4, 352.03, 686.23, 206.71, 25.09 ], "formula_id": "formula_7", "formula_text": "L 𝑐𝑢𝑐𝑙 = -𝑙𝑜𝑔 𝑒𝑥𝑝 (𝐶𝑜𝑠 (X 𝑖 , Z 𝑗 )/𝜏 𝑙 ) 𝑁 𝐵 𝑛=1 1 [𝑛≠𝑗 ] 𝑒𝑥𝑝 (𝐶𝑜𝑠 (X 𝑖 , Z 𝑛 )/𝜏 𝑙 ) ,(6)" }, { "formula_coordinates": [ 5, 135.48, 449.12, 159.1, 8.43 ], "formula_id": "formula_8", "formula_text": "L = L 𝑢𝑛𝑠𝑢𝑝 + L 𝑐𝑢𝑐𝑙 .(7)" }, { "formula_coordinates": [ 5, 127.08, 624.31, 167.51, 24.73 ], "formula_id": "formula_9", "formula_text": "𝑑𝑖𝑠 (𝑋 ) = 𝑀 ∑︁ 𝑖 ∥x 𝑖 -c 𝑖𝑐 ∥ 2 2 .(8)" } ]
10.1007/978-0-387-39940-9_488
2023-11-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b12", "b5", "b21", "b20", "b0", "b8", "b16", "b19" ], "table_ref": [], "text": "Minimum Bayes risk (MBR) decoding (Bickel and Doksum, 1977;Goel and Byrne, 2000) has recently gained renewed attention as a decision rule for conditional sequence generation tasks, especially neural machine translation (NMT). In MBR, the sequence with the highest expected utility with respect to thez model distribution is chosen as the output, where the utility is usually some measure of text similarity. This contrasts with the more commonly used maximum a posteriori (MAP) decision rule, which returns the sequence with the highest probability under the model. MAP is generally intractable, and beam search is typically used to find an approximation. MBR is likewise intractable, and Eikema and Aziz (2020) propose an samplingbased approximation algorithm.\nMBR has been shown to outperform MAP beam search in both automatic and qualitative evaluation in a diverse range of tasks (Suzgun et al., 2023), including NMT (Freitag et al., 2022a) and code generation (Shi et al., 2022). MBR also generalizes other previously proposed decoding methods and explains their success (Bertsch et al., 2023).\nThe accuracy improvement from MBR comes at a heavy cost: the number of samples used can reach thousands (Freitag et al., 2023), and the number of calls to the utility function required is quadratic in the number of samples. Often, the utility function itself is a deep neural model, rendering MBR prohibitively expensive for many use cases.\nIn this work, we address the computational efficiency of MBR with an iterative pruning algorithm where low-performing hypotheses are removed while the number of samples used to estimate utilities grows. Hypotheses are pruned based on their estimated probability of being the true best hypothesis under the MBR objective, thus avoiding making expensive fine-grained utility estimates for hypotheses which are unlikely to be the final prediction.\nIn NMT experiments on three language pairs using chrF++ (Popović, 2015), and COMET (Rei et al., 2020) as MBR utility and evaluation metrics, we show that our method maintains the same level of accuracy as standard MBR while reducing the number of utility calls by a factor of at least 7 for chrF++ and 15 for COMET. Our algorithm can also use fewer samples to reach a prediction by terminating early, unlike standard MBR." }, { "figure_ref": [], "heading": "Minimum Bayes risk decoding", "publication_ref": [ "b12", "b5", "b6", "b8", "b24", "b6", "b7", "b6", "b11", "b19", "b16" ], "table_ref": [], "text": "Conditional sequence generation problems such as neural machine translation (NMT) model the probability of the next token y t given a source sequence x and prefix y <t with a neural network p θ . This model can be used to assign probabilities to full sequences p θ (y|x) via the chain rule.\nAt test time, a decision rule is employed to select a single \"best\" sequence. The most common decision rule is to return the highest probability sequence y M AP = arg max y p θ (y|x). 
The exact solution is generally intractable, and beam search is typically used to find an approximation.\n\nIn contrast, minimum Bayes risk decoding (MBR) (Goel and Byrne, 2000) outputs:\n$$y_{\mathrm{MBR}} = \arg\max_{y} \, \mathbb{E}_{\bar{y} \sim p_\theta(\cdot\,|\,x)}\big[u(y, \bar{y})\big] = \arg\max_{y} \, U\big(y, p_\theta(\cdot\,|\,x)\big), \qquad (1)$$\nfor some utility function u, a measure of similarity between sequences, where $U(y, \mathcal{Y}) = \mathbb{E}_{\hat{y} \sim \mathcal{Y}}[u(y, \hat{y})]$ and $\mathcal{Y}$ is either a probability distribution or an array of samples. We call U the expected utility function. Eikema and Aziz (2020) propose a sampling method for neural language models where hypotheses H and pseudo-references R are generated with unbiased sampling, and $y_{\mathrm{MBR}}$ is estimated as:\n$$y_{\mathrm{MBR}} \approx \arg\max_{y \in H} U(y, R). \qquad (2)$$\nThis method, which we refer to as \"standard\" MBR, requires |H| + |R| samples (assuming H ≠ R) and |H||R| calls to u. The latter is the main computational bottleneck which we address in this work.\nRecent works on MBR focus on identifying accurate and efficient generation methods (Eikema and Aziz, 2022; Freitag et al., 2023; Yan et al., 2022), and on pruning H to a smaller size with a faster method prior to running standard MBR (Eikema and Aziz, 2022; Fernandes et al., 2022). Freitag et al. (2022a) and Eikema and Aziz (2022) show that MBR is more effective relative to MAP when the utility metric has high segment-level accuracy as measured by Freitag et al. (2021). So, we use COMET (Rei et al., 2020), one of the best available metrics for NMT, and chrF++ (Popović, 2015) as a simpler and faster yet reasonably good lexical metric. We heed the call of Freitag et al. (2022b) and do not use BLEU, which is obsoleted by newer metrics as both an evaluation and a utility metric (Freitag et al., 2022a)." }, { "figure_ref": [], "heading": "Confidence-based hypothesis pruning", "publication_ref": [ "b4", "b14" ], "table_ref": [], "text": "Sampling-based MBR returns the highest-utility hypothesis from a set, measured over pseudo-references sampled from the model. Speedups can be achieved if low-performing hypotheses are removed from consideration based on coarse utility estimates obtained from a subset of the pseudo-references. In other words, we can save time by not computing precise utility estimates for hypotheses which are unlikely to be chosen in the end.\nWe propose an iterative algorithm for MBR where the hypothesis set is gradually shrunk while the pseudo-reference list grows. The procedure is shown in Algorithm 1. We start with an initial hypothesis set H_1, and at each time step t, a pruning function uses a pseudo-reference list R_t of size r_t to select H_{t+1} ⊆ H_t. After the maximum time step is reached, or when the current hypothesis set contains only one element, we terminate and return the highest-utility hypothesis under all available pseudo-references. The size of R_t grows according to a pre-defined \"schedule\" r_1, ..., r_T.\nAlgorithm 1 Pruning MBR. Input: Source sentence x. Constants: sample size schedule r = r_1, ..., r_T, expected utility function U, model parameters θ, pruning function prune, hypothesis generation function gen(x, θ). Output: An MBR prediction. 
1: R 0 ← [ ] 2: t ← 1 3: H 1 ← gen(x, θ) 4: while t ≤ T and |H t | > 1 do 5: R t ← R t-1 6: while |R t | < r t do 7: Append ŷ ∼ p θ (•|x) to R t 8:\nend while 9:\nH t+1 ← prune(H t , R t ) 10: t ← t + 1 11: end while 12: return arg max y∈Ht U (y, R t-1 )\nThe goal of the pruning function is to exclude as many hypotheses as possible to reduce the number of utility calls made in the future without excluding the true top-ranking MBR hypothesis arg max y∈Ht U (y, p θ (•|x)), the true \"winner\".\nWe propose to prune hypotheses in H t with low probability of being the true winner and to estimate this probability using nonparametric bootstrap resampling (Efron, 1979;Koehn, 2004); given an initial collection of i.i.d. samples S from an unknown distribution X , our beliefs about the true value of any statistic T (X ) are represented by the distribu-tion p(T ( Ŝ)), where Ŝ ∼ boot(S) , and boot(S) returns a with-replacement size-|S| resample of S.\nIn our case, we want to estimate the probability that y is the true winner in H t :\np ȳ∈H U (y, p θ (•|x)) ≥ U (ȳ, p θ (•|x)) , (3)\nwhich we estimate as the chance that y wins in a bootstrap resample. Let Rt ∼ boot(R t ). Then the bootstrap estimator is:\nE Rt∼boot(Rt) 1 ȳ∈Ht (U (y, Rt ) ≥ U (ȳ, Rt ) . (4)\nThis estimator can be high variance because the probability y winning in a bootstrap sample is very small when H t is large, so instead we use the probability that y outranks a particular ȳ ∈ H t :\nE Rt∼boot(Rt) 1(U (y, Rt ) ≥ U ( ȳ, Rt )) . (5)\nThis statistic is invariant to the size of H t because it only considers utility estimates of y and ȳ. It is an upper bound of Equation 4 because the probability of y winning against all ȳ ∈ H t cannot be higher than the probability of winning against a particular ȳ ∈ H t . ȳ can be any element in H t , but we set it to the winner under R t , i.e. ȳ = arg max ȳ∈Ht U (ȳ, R t ), to achieve a tighter upper bound.\nWe propose to prune y if its estimated probability of beating ȳ is less than 1 -α, where α is a confidence threshold 0 ≤ α ≤ 1. In summary, this procedure prunes some but not necessarily all y ∈ H t which are estimated to have less than 1 -α chance of being the true winner. Algorithm 2 shows the procedure in detail.\nNote that bootstrapped utility estimates do not require additional utility function calls because they reuse utility values already computed over H t , R t . Also note that the bootstrap estimator is always biased because R t is never equal to p θ (•|x), and the bias is worse when R t is small. Nonetheless, we show empirically that bootstrapping is effective in our pruning algorithm for modest sizes of R t .\nAnother benefit of our pruning method compared to standard MBR is that it can terminate early if H t has only one remaining hypothesis, reducing the total number of pseudo-references needed.\nAs a baseline pruning function for comparison, we rank each y ∈ H t by U (y, R t ) and prune the Ri ← boot(R)\n6: end for 7: for y ∈ H do 8:\nw ← 1 n n i 1(U (y, Ri ) ≥ U ( ȳ, Ri )) 9:\nif w > 1 -α then 10:\nH new ← H new ∪ {y} 11:\nend if 12: end for 13: return H new bottom-β proportion. At β = 0, no pruning occurs and standard MBR decoding is recovered. We refer to this baseline as prune β and our confidencebased method as prune α ." 
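Because every quantity in the pruning criterion is a re-ranking of utilities that have already been computed, the whole step can be expressed over a cached hypothesis-by-pseudo-reference utility matrix. The following is a minimal NumPy sketch of the confidence-based pruning function (Algorithm 2); the function name, matrix layout, and default arguments are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def prune_alpha(utility_matrix: np.ndarray, alpha: float = 0.99,
                n_boot: int = 500, rng=None) -> np.ndarray:
    """Confidence-based pruning over cached utilities.

    utility_matrix[i, j] = u(hypothesis i, pseudo-reference j); bootstrap
    resamples reuse these values, so no new utility calls are made.
    Returns the indices of hypotheses to keep.
    """
    rng = rng or np.random.default_rng()
    n_hyp, n_ref = utility_matrix.shape

    # Current winner y_bar under the full pseudo-reference list R_t.
    leader = int(utility_matrix.mean(axis=1).argmax())

    # Bootstrap: resample pseudo-reference columns with replacement and count
    # how often each hypothesis scores at least as well as the leader.
    wins = np.zeros(n_hyp)
    for _ in range(n_boot):
        cols = rng.integers(0, n_ref, size=n_ref)      # R~_t ~ boot(R_t)
        means = utility_matrix[:, cols].mean(axis=1)   # U(y, R~_t) for every y
        wins += (means >= means[leader])
    win_prob = wins / n_boot

    # Keep y only if its estimated chance of outranking the leader exceeds 1 - alpha.
    return np.flatnonzero(win_prob > 1.0 - alpha)
```

In the outer loop (Algorithm 1), such a function would be called once per time step on the rows of the surviving hypotheses, with the utility matrix extended by the columns for newly sampled pseudo-references; the current leader always survives, so the hypothesis set never becomes empty.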
}, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Experiments", "publication_ref": [ "b6", "b22", "b13", "b8", "b17", "b18", "b3" ], "table_ref": [], "text": "We perform our experiments on NMT models which we train on the German-English (de-en), English-Estonian (en-et), and Turkish-English (tren) news datasets from WMT18. We use the data preprocessing steps provided by the WMT18 organizers, except that we exclude Paracrawl from the de-en dataset following Eikema and Aziz (2022). The final training sets have 5.8 million, 1.9 million, and 207 thousand sentence pairs respectively. All models are transformers of the base model size from Vaswani et al. (2017) and are trained without label smoothing until convergence.\nFor all language pairs and validation/test datasets, we generate H 1 with beam top-k with k = 2561 . We generate 1024 pseudo-references R * with epsilon sampling (Hewitt et al., 2022) with ϵ = 0.02, following Freitag et al. (2023). In order to run multiple random trials efficiently, we simulate sampling pseudo-references by sampling from R * without replacement. The experiments in Sections 4.1 and 4.2 show averaged results from 10 trials. We use chrF++ and COMET as utility func-tions and always match the evaluation metric to the utility. chrF++ is computed using SacreBLEU (Post, 2018) with default settings. COMET is computed with the COMET-22 model from Rei et al. (2022). We use 500 bootstrap samples for prune α .\nFor the pseudo-reference sample size schedule r 1 , ..., r T , the choice of r 1 is pivotal in the speedaccuracy trade-off; |H 1 ||R 1 | is a lower bound on the number of utility calls needed, but the bootstrap estimate is more biased when the sample size is small. In a preliminary experiment on the validation set, we measure the \"false pruning rate\", the rate that the estimated true winner, arg max ȳ∈H U (ȳ, R * ) is pruned under different choices of α and |R|. Based on results shown in Figure 1, we set r 1 to 8 for COMET and 16 for chrF++ for all experiments. r 2 , ..., r T are set by doubling at each time step until reaching 256.\nMore experimental details and figures for language pairs not shown here are in the Appendix. Our code is publicly available2 . 4.1 Speed-accuracy trade-off prune α and prune β allow control over the speedaccuracy trade-off with a single parameter. We observe this trade-off over the validation set by comparing the number of utility function calls made against various measures of accuracy. High evaluation score is underlying goal, but we find that the score changes very little across settings, so we also evaluate pruning quality in terms of how well final predictions match those under standard MBR with R * . We use exact accuracy, whether the prediction y equals arg max ȳ∈H U (ȳ, R * ), and reciprocal rank (RR), equal to ( ȳ∈H 1(U (ȳ, R * ) ≥ U (y, R * ))) -1 as a soft accuracy measure adapted from the mean reciprocal rank used in search (Craswell, 2009). Figure 2 shows that prune α generally outperforms prune β on all metrics. " }, { "figure_ref": [], "heading": "Test results", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We evaluate our methods on the test set with α = 0.99 and α = 0.9 and compare them to standard MBR on the accuracy metrics described in Section 4.1 as well as the number of utility calls and pseudo-references used. 
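Both pruning-quality measures reduce to simple rank statistics over the expected utilities computed under the full pseudo-reference pool R*. A short sketch of how they might be computed, assuming the per-hypothesis values U(y, R*) are stored in a NumPy array; the function names are illustrative.

```python
import numpy as np

def exact_accuracy(pred_idx: int, full_utilities: np.ndarray) -> float:
    """1.0 if the prediction equals argmax_y U(y, R*), else 0.0."""
    return float(pred_idx == int(full_utilities.argmax()))

def reciprocal_rank(pred_idx: int, full_utilities: np.ndarray) -> float:
    """RR = 1 / |{y_bar : U(y_bar, R*) >= U(y, R*)}| for the predicted y."""
    rank = int((full_utilities >= full_utilities[pred_idx]).sum())
    return 1.0 / rank
```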
Table 1 shows that across all language pairs and metrics, our method achieves similar evaluation scores as standard MBR while using much less computation, with the most dramatic case being en-et and COMET with α = 0.9 which uses 3.5% as many utility calls and 32% as many pseudo-references as the baseline.\nWhen comparing α = 0.99 with α = 0.9, we see that while exact accuracy and RR differ, the evaluation score differs very little if at all, suggesting that high-ranking hypotheses are often equally good as one another, and finely discriminating between them has diminishing returns.\nMetric: chrF++ de-en en-et tr-en β = 0 α = 0.99 α = 0.9 β = 0 α = 0.99 α = 0.9 β = 0 α = 0.99 α = 0.9 Metric: COMET de-en en-et tr-en β = 0 α = 0.99 α = 0.9 β = 0 α = 0.99 α = 0.9 β = 0 α = 0.99 α = 0.9 \nScore" }, { "figure_ref": [], "heading": "Human evaluation", "publication_ref": [ "b14" ], "table_ref": [], "text": "We confirm that our method is indistinguishable from standard MBR in human evaluation. On the de-en test set, for each instance, we sample 256 pseudo-references without replacement from R * and use this pool to decode with both standard MBR and prune α , α = 0.99. 85% of the predictions are the same, and for the rest, we asked bilingual readers of German and English to state which prediction they preferred. We obtained 125 ratings.\nThe standard MBR prediction won in 48 cases, lost in 42, and tied in 35. This fails the significance test of Koehn (2004), so we conclude that prune α with α = 0.99 is not significantly different from standard MBR on de-en." }, { "figure_ref": [], "heading": "Run times", "publication_ref": [], "table_ref": [], "text": "To measure realistic run times, we implement a practical version of our algorithm and compare our method against standard MBR decoding and beam search. We run our algorithm with COMET as the utility function and α = 0.99. With COMET, sentence embeddings can be cached, which greatly speeds up utility function calls. Some of the theoretical gains seen in Section 4.2 are diminished in practice due to our iterative algorithm dividing computations over smaller batches. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose an iterative pruning algorithm for MBR along with a pruning criterion based on confidence estimates derived from bootstrap resampling. In experiments across diverse language pairs and metrics, we show that our method consistently outperforms our proposed baseline and achieves significant computational savings over standard samplingbased MBR without sacrificing accuracy. Our method is a drop-in substitute for standard MBR that requires no knowledge about the model p θ , how H 1 is generated, or the utility function." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b7" ], "table_ref": [], "text": "Even with our pruning algorithm, MBR is many times more costly to run than beam search. An important hyperparameter in our method is the sample size schedule. We show why it is important to carefully choose the size of the first sample, but not how the remaining schedule should be set, opting to simply double the size at each step. We leave this issue to future work.\nMethods such as MBR and reranking that directly optimize a metric may exploit noise in the metric to improve the score without actually improving quality (Fernandes et al., 2022). In these settings, automatic evaluation is less trustworthy and should ideally be combined with human evaluation. 
However, human evaluation is difficult and expensive to obtain." }, { "figure_ref": [], "heading": "A Additional experimental details A.1 All experiments", "publication_ref": [ "b15" ], "table_ref": [], "text": "Preliminary experiments showed no significant difference between 500 and 1000 bootstrap samples when running prune α , so we use 500 for all experiments.\nFor efficiency, we use average sentence-level chrF++ instead of corpus-level chrF++ for corpuslevel evaluations. This allows us to pre-compute the sentence-level chrF++ for each hypothesis and obtain the corpus-level score of a set of predictions by simple averaging.\nAll experiments are implemented on top of our fork of Fairseq (Ott et al., 2019)." }, { "figure_ref": [], "heading": "A.2 Run times", "publication_ref": [], "table_ref": [], "text": "This section contains additional details for the experiment in Section 4.4.\nFor both the standard and pruning MBR algorithms, we deduplicate and cache computations whenever possible. For each unique pseudoreference, its sentence embedding and utility scores against each y ∈ H t are only computed once.\nFor simplicity, all decoding methods are run on one sequence at a time. Batching across sequences would likely affect the relative performance characteristics of each method.\nAll experiments are conducted on the same machine with one Nvidia Quadro RTX 8000 GPU." }, { "figure_ref": [ "fig_2" ], "heading": "B False pruning rates for en-et, tr-en", "publication_ref": [], "table_ref": [], "text": "Figure 3 shows the false pruning rates for en-et and tr-en. " }, { "figure_ref": [], "heading": "C Hypotheses remaining per time step", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "Julius Cheng is supported by a scholarship from Huawei. The authors would like to thank the bilingual readers who helped with the human evaluation and the anonymous reviewers for their helpful suggestions." } ]
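As a concrete illustration of the caching described in Appendix A.2, the sketch below memoizes sentence embeddings and pairwise utility scores, assuming a COMET-like utility that factors into an expensive embedding step and a cheap scoring step. The class and method names are hypothetical and not part of any released code.

```python
class CachedUtility:
    """Memoizes embeddings and pairwise scores so duplicate pseudo-references
    and repeated (hypothesis, pseudo-reference) pairs incur no extra cost."""

    def __init__(self, embed_fn, score_fn):
        self.embed_fn = embed_fn      # text -> embedding (expensive)
        self.score_fn = score_fn      # (hyp_emb, ref_emb) -> float (cheap)
        self._emb_cache = {}
        self._score_cache = {}

    def _embed(self, text: str):
        if text not in self._emb_cache:
            self._emb_cache[text] = self.embed_fn(text)
        return self._emb_cache[text]

    def __call__(self, hyp: str, ref: str) -> float:
        key = (hyp, ref)
        if key not in self._score_cache:
            self._score_cache[key] = self.score_fn(self._embed(hyp),
                                                   self._embed(ref))
        return self._score_cache[key]
```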
Minimum Bayes risk (MBR) decoding outputs the hypothesis with the highest expected utility over the model distribution for some utility function. It has been shown to improve accuracy over beam search in conditional language generation problems and especially neural machine translation, in both human and automatic evaluations. However, the standard sampling-based algorithm for MBR is substantially more computationally expensive than beam search, requiring a large number of samples as well as a quadratic number of calls to the utility function, limiting its applicability. We describe an algorithm for MBR which gradually grows the number of samples used to estimate the utility while pruning hypotheses that are unlikely to have the highest utility according to confidence estimates obtained with bootstrap sampling. Our method requires fewer samples and drastically reduces the number of calls to the utility function compared to standard MBR while being statistically indistinguishable in terms of accuracy. We demonstrate the effectiveness of our approach in experiments on three language pairs, using chrF++ and COMET as utility/evaluation metrics.
Faster Minimum Bayes Risk Decoding with Confidence-based Pruning
[ { "figure_caption": "Figure 1 :1Figure 1: False pruning rates for different choices of α and |R| measured on the validation set.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Speed-accuracy trade-off curves of pruning functions for α ∈ 0.8, 0.9, 0.95, 0.98, 0.99 and β ∈ {0.05, ..., 0.95} on the de-en validation set. The x-axes are truncated for better visual comparison.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: False pruning rates for different choices of α and |R| measured on the validation set.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 44Figure4shows the distribution of the number of remaining hypotheses after each time step when running our method on the de-en validation set, following the experimental setup of Section 4.1 where |H 1 | = 256. This is provided to further illustrate the pruning process.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Number of remaining hypotheses after each time step while running the pruning MBR algorithm for various choices of α on the de-en validation set. The x-axis is the number of pseudo-references at a time step, and the y-axis is the number of hypotheses remaining after pruning. Colored bars show the mean, and error bars show the interquartile range.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure6: Speed-accuracy trade-off curves of pruning functions for α ∈ 0.8, 0.9, 0.95, 0.98, 0.99 and β ∈ {0.05, ..., 0.95} on the tr-en validation set.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Algorithm 2 Confidence-based pruning function Input: Hypothesis set H, pseudo-reference list R. Constants: Expected utility function U , confidence threshold α, number of bootstrap samples n. Output: A subset of H.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Statistics for MBR decoding on the test set for all language pair and metric settings. β = 0 indicates standard MBR. All values are averaged across 10 random trials.", "figure_data": "78.3778.3778.4182.7582.7482.7473.7073.6873.65Accuracy0.8720.8440.7420.9240.8980.8300.8970.8780.791RR0.9300.9110.8380.9590.9430.8980.9440.9320.873# Pseudo-refs256.0200.3120.4256.0151.482.6256.0177.8100.8# Utility calls 6508238312533 6389828132250 6535632442394", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" } ]
Julius Cheng; Andreas Vlachos
[ { "authors": "Amanda Bertsch; Alex Xie; Graham Neubig; Matthew R Gormley", "journal": "", "ref_id": "b0", "title": "It's mbr all the way down: Modern generation techniques through the lens of minimum bayes risk", "year": "2023" }, { "authors": "P J Bickel; K A Doksum", "journal": "", "ref_id": "b1", "title": "Mathematical Statistics: Basic Ideas and Selected Topics", "year": "1977" }, { "authors": "Holden-Day Company", "journal": "", "ref_id": "b2", "title": "", "year": "" }, { "authors": "Nick Craswell", "journal": "Springer US", "ref_id": "b3", "title": "Mean Reciprocal Rank", "year": "2009" }, { "authors": "B Efron", "journal": "The Annals of Statistics", "ref_id": "b4", "title": "Bootstrap Methods: Another Look at the Jackknife", "year": "1979" }, { "authors": "Bryan Eikema; Wilker Aziz", "journal": "International Committee on Computational Linguistics", "ref_id": "b5", "title": "Is MAP decoding all you need? the inadequacy of the mode in neural machine translation", "year": "2020" }, { "authors": "Bryan Eikema; Wilker Aziz", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Sampling-based approximations to minimum Bayes risk decoding for neural machine translation", "year": "2022" }, { "authors": "Patrick Fernandes; António Farinhas; Ricardo Rei; José De Souza; Perez Ogayo; Graham Neubig; Andre Martins", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Quality-aware decoding for neural machine translation", "year": "2022" }, { "authors": "Markus Freitag; Behrooz Ghorbani; Patrick Fernandes", "journal": "", "ref_id": "b8", "title": "Epsilon sampling rocks: Investigating sampling strategies for minimum bayes risk decoding for machine translation", "year": "2023" }, { "authors": "Markus Freitag; David Grangier; Qijun Tan; Bowen Liang; ; ", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b9", "title": "High quality rather than high model probability: Minimum Bayes risk decoding with neural metrics", "year": "2022" }, { "authors": "Markus Freitag; Ricardo Rei; Nitika Mathur; Chi-Kiu Lo; Craig Stewart; Eleftherios Avramidis; Tom Kocmi; George Foster; Alon Lavie; F T André; Martins", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Results of WMT22 metrics shared task: Stop using BLEU -neural metrics are better and more robust", "year": "2022" }, { "authors": "Markus Freitag; Ricardo Rei; Nitika Mathur; Chi-Kiu Lo; Craig Stewart; George Foster; Alon Lavie; Ondřej Bojar", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Results of the WMT21 metrics shared task: Evaluating metrics with expert-based human evaluations on TED and news domain", "year": "2021" }, { "authors": "Vaibhava Goel; William J Byrne", "journal": "Computer Speech & Language", "ref_id": "b12", "title": "Minimum bayes-risk automatic speech recognition", "year": "2000" }, { "authors": "John Hewitt; Christopher Manning; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Truncation sampling as language model desmoothing", "year": "2022" }, { "authors": "Philipp Koehn", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Statistical significance tests for machine translation evaluation", "year": "2004" }, { "authors": "Myle Ott; Sergey Edunov; Alexei Baevski; Angela Fan; Sam Gross; Nathan Ng; David Grangier; Michael Auli", "journal": "Association for Computational Linguistics", 
"ref_id": "b15", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "year": "2019" }, { "authors": "Maja Popović", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "chrF: character n-gram F-score for automatic MT evaluation", "year": "2015" }, { "authors": "Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "A call for clarity in reporting BLEU scores", "year": "2018" }, { "authors": "Ricardo Rei; G C José; Duarte De Souza; Chrysoula Alves; Ana C Zerva; Taisiya Farinha; Alon Glushkova; Luisa Lavie; Coheur; F T André; Martins", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "COMET-22: Unbabel-IST 2022 submission for the metrics shared task", "year": "2022" }, { "authors": "Ricardo Rei; Craig Stewart; Ana C Farinha; Alon Lavie", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "COMET: A neural framework for MT evaluation", "year": "2020" }, { "authors": "Freda Shi; Daniel Fried; Marjan Ghazvininejad; Luke Zettlemoyer; Sida I Wang", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Natural language to code translation with execution", "year": "2022" }, { "authors": "Mirac Suzgun; Luke Melas-Kyriazi; Dan Jurafsky", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Follow the wisdom of the crowd: Effective text generation via minimum Bayes risk decoding", "year": "2023" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Attention is all you need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b23", "title": "", "year": "" }, { "authors": "Jianhao Yan; Jin Xu; Fandong Meng; Jie Zhou; Yue Zhang", "journal": "", "ref_id": "b24", "title": "Dc-mbr: Distributional cooling for minimum bayesian risk decoding", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 98.63, 216.81, 191.23, 44.94 ], "formula_id": "formula_0", "formula_text": "y M BR = arg max y E ȳ∼p θ (•|x) [u(y, ȳ)] = arg max y U (y, p θ (•|x)),(1)" }, { "formula_coordinates": [ 2, 175.15, 286.78, 115.35, 10.77 ], "formula_id": "formula_1", "formula_text": "U (y, Y) = E ŷ∼Y [u(y, ŷ)]," }, { "formula_coordinates": [ 2, 118.88, 392.68, 170.98, 20.88 ], "formula_id": "formula_2", "formula_text": "y M BR ≈ arg max y∈H U (y, R).(2)" }, { "formula_coordinates": [ 2, 312.26, 417.14, 159.91, 117.43 ], "formula_id": "formula_3", "formula_text": "An MBR prediction. 1: R 0 ← [ ] 2: t ← 1 3: H 1 ← gen(x, θ) 4: while t ≤ T and |H t | > 1 do 5: R t ← R t-1 6: while |R t | < r t do 7: Append ŷ ∼ p θ (•|x) to R t 8:" }, { "formula_coordinates": [ 2, 307.77, 538.73, 161.35, 52.26 ], "formula_id": "formula_4", "formula_text": "H t+1 ← prune(H t , R t ) 10: t ← t + 1 11: end while 12: return arg max y∈Ht U (y, R t-1 )" }, { "formula_coordinates": [ 3, 94.32, 142.24, 195.55, 22.26 ], "formula_id": "formula_5", "formula_text": "p ȳ∈H U (y, p θ (•|x)) ≥ U (ȳ, p θ (•|x)) , (3)" }, { "formula_coordinates": [ 3, 80.88, 230.14, 208.99, 25.02 ], "formula_id": "formula_6", "formula_text": "E Rt∼boot(Rt) 1 ȳ∈Ht (U (y, Rt ) ≥ U (ȳ, Rt ) . (4)" }, { "formula_coordinates": [ 3, 90, 333.08, 199.87, 19.11 ], "formula_id": "formula_7", "formula_text": "E Rt∼boot(Rt) 1(U (y, Rt ) ≥ U ( ȳ, Rt )) . (5)" }, { "formula_coordinates": [ 3, 312.26, 249.74, 191.35, 25.96 ], "formula_id": "formula_8", "formula_text": "w ← 1 n n i 1(U (y, Ri ) ≥ U ( ȳ, Ri )) 9:" }, { "formula_coordinates": [ 3, 307.77, 279.85, 145.52, 22.94 ], "formula_id": "formula_9", "formula_text": "H new ← H new ∪ {y} 11:" }, { "formula_coordinates": [ 5, 94.62, 232.95, 20.42, 7.77 ], "formula_id": "formula_10", "formula_text": "Score" } ]
2024-03-06
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b36", "b2", "b34", "b36", "b36", "b6", "b9", "b10" ], "table_ref": [], "text": "Explicit Caption Editing (ECE), emerging as a novel task within the broader domain of caption generation, has raised increasing attention from the multimodal learning community [37]. As shown in Fig. 1(a)(c), given an image and a reference caption (Ref-Cap), ECE aims to explicitly predict a sequence of edit operations, which can translate the Ref-Cap to ground-truth caption (GT-Cap). Compared to conventional image captioning methods which generate captions from scratch [3,8,35,38], ECE aims to enhance the quality of existing captions in a more explainable, efficient, and human-like manner. Currently, existing ECE methods primarily rely on two prevalent benchmarks for model training and evaluation, i.e., COCO-EE and Flickr30K-EE [37]. Specifically, both datasets are carefully constructed to emphasize the refinement of content details while preserving the original caption structure. As shown in Fig. 1, each ECE instance consists of an image along with a Ref-Cap (e.g., two birds standing on a bench near the water) and a corresponding GT-Cap (man sitting on a bench overlooking the ocean). For this in-domain sample, state-of-the-art ECE models can effectively improve the quality of the Ref-Cap. By \"in-domain\", we mean that the test set of existing ECE benchmarks has a similar distribution with its training set (Fig. 1(b)) 1 . However, we found that existing ECE models have limited generalization ability when faced with out-ofdomain samples. Take the model TIger [37] as an example, given a highly similar Ref-Cap with a single wrong word (man sitting on a balcony overlooking the ocean), although it corrected the wrong word, it also removes other accurate words. Meanwhile, when faced with more irrelevant Ref-Caps (e.g., i sitting at the bench with an american), TIger even fails to correct all errors or in-troduce sufficient accurate details. Obviously, this limited generalization ability will limit their utilization in real-world scenarios, as we hope our ECE models can help to edit or refine different sentences.\nTo address this limitation, we propose a novel diffusion-based ECE model, denoted as DECap, which reformulates the ECE task as a series of deonising process steps. Specifically, we design an edit-based noising process that constructs editing samples by introducing word-level noises (i.e., random words) directly into the GT-Caps to obtain Ref-Caps. This noising process is parameterized by the distributions over both edit operations (e.g., KEEP, DELETE, INSERT, and REPLACE) and caption lengths, which can not only avoid the meticulous selection of Ref-GT caption pairs but also help ECE models to learn a more adaptable distribution over Ref-Caps, capturing a broader spectrum of editing scenarios. Then, we train model DECap to refine Ref-Caps through an edit-based denoising process, which contains the iterative predictions of edit operations and content words. Meanwhile, DECap discards the prevalent multi-stage architecture designs and directly generates edit operations and content words simultaneously, which can significantly accelerate the inference speed with simple Transformer encoder architectures. Extensive ablations have demonstrated that DECap can not only achieve outstanding editing performance on the challenging ECE benchmarks but can also further enhance the quality of model-generated captions. 
Meanwhile, it even achieves competitive caption generation performance with existing diffusion-based image captioning models. Furthermore, DECap even shows potential for word-level controllable captioning, which is beyond the ability of existing controllable captioning models [7,10,11]. In summary, DECap realizes a strong generalization ability across various in-domain and out-of-domain editing scenarios, and showcases great potential in improving the quality and controllability of caption generation, keeping the strong explainable ability. With such abilities, our DECap can serve as an innovative and uniform framework that can achieve both caption editing and generation.\nIn summary, we make several contributions in this paper: 1) To the best of our knowledge, we are the first work to point out the poor generalization issues of existing ECE models, and propose a series of caption editing scenarios for generalization ability evaluation. 2) DECap is the first diffusion-based ECE model, which pioneers the use of the discrete diffusion mechanism for ECE.\n3) DECap shows strong generalization ability across various editing scenarios, achieving outstanding performance. 4) DECap has a much faster inference speed than existing ECE methods." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b36", "b26", "b29", "b36", "b3", "b30", "b13", "b25", "b38" ], "table_ref": [], "text": "Explicit Caption Editing (ECE). Given the image, ECE aims to refine existing Ref-Caps through a sequence of edit operations, which was first proposed by Wang et al . [37]. Specifically, by realizing refinement under the explicit traceable editing path composed of different edit operations, this task encourages models to enhance the caption quality in a more explainable and efficient manner. However, existing ECE benchmarks are carefully designed, targeting on the refinement of specific content details, which leads to a limited model generalization ability across diverse real-world editing scenarios beyond the training data distribution. Meanwhile, existing editing models [27,30,37] tend to perform editing with multiple sub-modules sequentially. For example, conducting the insertion operation by first predicting the ADD operation, then applying another module to predict the specific word that needs to be added. In this paper, we construct Ref-Caps by directly noising the GT-Caps at word-level through a novel editbased noising process, allowing the model to capture various editing scenarios during training. We further optimize model architecture to predict both edit operations and content words parallelly, which can significantly accelerate the editing speed. Diffusion-based Captioning Models. Taking inspiration from the remarkable achievements of diffusion models in image generation [4,31], several pioneering works have applied the diffusion mechanism for caption generation. Existing diffusion-based captioning works can be categorized into two types: 1) Continuous Diffusion: They aim to convert discrete words into continuous vectors (e.g., word embeddings [14] and binary bits [9,26]) and apply the diffusion process with Gaussian noises. 2) Discrete Diffusion: They aim to extend the diffusion process to discrete state spaces by directly noising and denoising sentences at the token level, such as gradually replacing tokens in the caption with a specific [MASK] token and treating the denoising process as a multi-step mask prediction task starting from an all [MASK] sequence [39]. 
As the first diffusion-based ECE model, in contrast to iterative mask replacement, which only trains the ability to predict texts for [MASK] tokens, our edit-based noising and denoising process can help our model to learn a more flexible way of editing (e.g., insertion, deletion, and replacement) by different edit operations. Meanwhile, our model shows its great potential in directly editing random word sequences, which achieves competitive performance to diffusion-based captioning models." }, { "figure_ref": [], "heading": "Diffusion-based Explicit Caption Editing", "publication_ref": [], "table_ref": [], "text": "In this section, we first give a brief introduction of the task ECE and the preliminaries about discrete diffusion mechanism in Sec. 3.1. Then, we show the edit-based noising and denoising process in Sec. 3.2. We introduce our model architecture in Sec. 3.3. Lastly, we demonstrate the details of training objectives and inference process in Sec. 3.4." }, { "figure_ref": [], "heading": "Task Formulation and Preliminaries", "publication_ref": [ "b3", "b14", "b38", "b11" ], "table_ref": [], "text": "Explicit Caption Editing. Given an image I and a reference caption (Ref-Cap) x r = {w \nq(xt|xt-1) = Cat(xt; p = xt-1Qt),(1)\nwhere Cat(•) is a categorical distribution and Q t is a transition matrix applied to each word in the sentence independently:\n[Q t ] i,j = q(w t = j|w t-1 = i).\nExisting discrete diffusion text generation works [4,15,39] mainly follow the noising strategy of BERT [12], where each word stays unchanged or has some probability transitions to the [MASK] token or other random words from the vocabulary. Meanwhile, they incorporate an absorbing state for their diffusion model as the [MASK] token:\n[Q t ] i,j =    1 if i = j = [MASK], βt if j = [MASK], i ̸ = [MASK], 1 -βt if i = j ̸ = [MASK].(2)\nAfter a sufficient number of noising steps, this Markov process converges to a stationary distribution q(x T ) where all words are replaced by the [MASK] token. Discrete diffusion works then train their models to predict target words for [MASK] tokens as the denoise process p θ (x t-1 |x t , t), and generate the sentence by performing a series of denoising steps from an all [MASK] token sequence: Edit-based Noising Process. Different from directly transiting one word to another, the edit-based noising process gradually adds word-level noises to the caption x t-1 based on different edit operations. For any time step t ∈ (0, T ], the edit-based noising process is defined as\nP θ (x0) = T t=1 p θ (xt-1|xt, t).(3)" }, { "figure_ref": [ "fig_1" ], "heading": "Discrete Diffusion for ECE", "publication_ref": [], "table_ref": [], "text": "q(xt|xt-1) = p(xt|xt-1, E N t ) • Cat(E N t ; p = xt-1Qt),(4)\nwhere Cat(•) is a categorical distribution and Q t here is a transition matrix assigning edit operation for each word in the caption x t-1 independently:\n[Q t ] i,j = q(e t = j|w t-1 = i). Subsequently, E N t = {e 1\nt , e 2 t , ..., e l t } is a sequence of noising edit operations which has the same length with the caption x t-1 = {w 1 t-1 , w 2 t-1 , ..., w l t-1 }3 , where each edit operation e i t is operated on the corresponding word w i t-1 to get x t . 
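As a minimal sketch of one such noising step, the snippet below samples an edit operation per word and applies it, drawing random words from the vocabulary as noise. The helper names and the per-operation probability argument are illustrative assumptions, and for brevity it omits the absorbing-state rule for words that are already random, which is described next.

```python
import random

EDIT_OPS = ("KEEP", "REPLACE", "DELETE", "INSERT")

def noising_step(caption, vocab, op_probs, rng=random):
    """Apply one edit-based noising step x_{t-1} -> x_t.

    caption:  list of words (x_{t-1})
    vocab:    list of candidate random words
    op_probs: dict mapping each edit operation to its probability at this step
    """
    noised = []
    for word in caption:
        op = rng.choices(EDIT_OPS, weights=[op_probs[o] for o in EDIT_OPS])[0]
        if op == "KEEP":
            noised.append(word)                        # leave the word unchanged
        elif op == "REPLACE":
            noised.append(rng.choice(vocab))           # swap in a random word
        elif op == "DELETE":
            continue                                   # drop the word
        elif op == "INSERT":
            noised.extend([word, rng.choice(vocab)])   # add a random word after it
    return noised
```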
Specifically, Q_t is parameterized by the distribution over both edit types and the GT-Cap length k, with the random word (RW) as an absorbing state:\n$$[Q_t]_{i,j} = \begin{cases} 1 & \text{if } j = \texttt{KEEP},\; i = \mathrm{RW},\\ \alpha_t^k & \text{if } j = \texttt{REPLACE},\; i \neq \mathrm{RW},\\ \beta_t^k & \text{if } j = \texttt{DELETE},\; i \neq \mathrm{RW},\\ \gamma_t^k & \text{if } j = \texttt{INSERT},\; i \neq \mathrm{RW},\\ 1 - \alpha_t^k - \beta_t^k - \gamma_t^k & \text{if } j = \texttt{KEEP},\; i \neq \mathrm{RW}. \end{cases} \qquad (5)$$\nSubsequently, as in the example shown in Fig. 2, when operated on by e_t, each word w_{t-1} has a probability of α_t^k of being replaced by another random word, a probability of β_t^k of being removed from the caption, and a probability of γ_t^k of having a random word added after it, leaving a probability of δ_t^k = 1 - α_t^k - β_t^k - γ_t^k of keeping the original word. Note that if a word has already been noised into a random word, it will not be re-noised again. Through a sufficient number of noising steps T, the caption is noised into a random word sequence.\nEdit-based Denoising Process. The edit-based denoising process aims to iteratively edit x_T back to x_0 by predicting appropriate edit operations. Specifically, given the image I and the caption x_t = {w_t^1, w_t^2, ..., w_t^l}, we model this edit-based denoising process with the explicit prediction of both edit operations and content words transforming x_t into x_{t-1}:\n$$p_\theta(x_{t-1} \mid x_t, t, I) = p(x_{t-1} \mid x_t, E_t^D, C_t) \cdot p(E_t^D, C_t \mid x_t, t, I),$$\nwhere p_θ parameterizes the model to predict a sequence of denoising edit operations E_t^D = {e_t^1, e_t^2, ..., e_t^l}, together with a sequence of content words C_t = {c_t^1, c_t^2, ..., c_t^l}, both of the same length as x_t. As in the example shown in Fig. 3, the denoising step transforms the caption x_t into x_{t-1} based on the edit operations and predicted words, i.e., for each word w_t^i, we keep the original word if its predicted operation e_t^i is KEEP, remove the word if its predicted operation is DELETE, copy the original word and add a new word c_t^i after it if its predicted operation is INSERT, and replace it with a new word c_t^i if its predicted operation is REPLACE. We then feed the output of this step into the model and perform the next denoising step. Following this, we can generate a caption by performing a series of denoising steps starting from an all-random-word sequence:\n$$P_\theta(x_0) = \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t, t, I). \qquad (6)$$" }, { "figure_ref": [], "heading": "Transformer-based Model Architecture", "publication_ref": [ "b32", "b3", "b17", "b21" ], "table_ref": [], "text": "DECap is built on the standard Transformer [33] architecture, which has strong representation encoding abilities. To facilitate the denoising process, we further construct DECap as a parallelized system for the efficient generation of both edit operations and content words. Feature Extraction. Given an image I and a caption x_t, we construct the model input as a sequence of visual tokens and word tokens. Specifically, we encode the image I into visual tokens through pre-trained visual backbones such as CLIP. The word tokens are represented by the sum of the word embedding, position encoding, and segment encoding. Meanwhile, following previous works [4,18,22], we encode the time step t as a sinusoidal embedding in the same way as the position encoding and add it to the word tokens. Model Architecture. As shown in Fig. 3, given the visual-word token sequence with a connecting token, e.g., [START], we first utilize the Transformer encoder blocks with self-attention and co-attention layers to learn the multi-modal representations of each token. 
We then use two simple yet effective FC layers to predict the edit operation and content word for each word token. Specifically, by feeding the hidden states of word tokens as input, 1) the Edit-FC generates the edit operation sequence E D t by making a four-category classification for each word, i.e., e t ∈ {REPLACE,DELETE,INSERT,KEEP}. 2) In parallel, the Language-FC maps each hidden state to a distribution over the vocabulary to predict specific words to generate the content word sequence C t . Following the denoising step in Sec. 3.2, we then transform the caption x t to x t-1 based on the edit operations and content words for next step." }, { "figure_ref": [], "heading": "Training Objectives and Inference", "publication_ref": [ "b3", "b38" ], "table_ref": [], "text": "Training. Following previous discrete diffusion works [4,39], we train the model to directly predict the original ground-truth caption x 0 for caption x t :\nL = L Edit + LLanguge = -log p θ (E G t |xt, t, I) + -log p θ (C G t |xt, t, I),(7)\nwhere E G t and C G t are ground truth edit operations and content words constructed based on the x 0 and x t . L Edit and L Languge are cross-entropy loss over the distribution of predicted edit operations and content words, and L Languge is only trained to predict content words for the input words assigned with INSERT and REPLACE operations. Inference. Given image and caption x t (t ∈ (t, T ]), the model predicts x t-1 , x t-2 iteratively for t denoising steps, and produces the final result of x 0 ." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b36", "b23", "b18", "b27", "b4", "b33", "b1", "b16" ], "table_ref": [], "text": "Datasets. We evaluated DECap on both popular caption editing (i.e., COCO-EE [37], Flickr30K-EE4 ) and caption generation benchmarks (i.e., COCO [24]). and one corresponding Ref-GT caption pair. COCO contains 123,287 images with 5 human-annotated captions for each. In this paper, we utilized the Karpathy splits [19], which contain 113,287 training images, 5,000 validation images, and 5,000 test images. Evaluation Metrics. We utilized all the prevalent accuracy-based metrics following prior works, which include BLEU-N [28], METEOR [5], ROUGE-L [23], CIDEr-D [34], and SPICE [2]. Meanwhile, we also computed CLIP-Score [17] to evaluate the caption-image similarity, and the inference time to evaluate the model efficiency." }, { "figure_ref": [ "fig_2" ], "heading": "Generalization Ability in ECE", "publication_ref": [ "b36", "b11", "b2", "b36", "b2", "b20", "b20" ], "table_ref": [ "tab_2", "tab_3", "tab_3", "tab_4" ], "text": "In this subsection, we evaluated the generalization ability of our model with both in-domain and out-of-domain evaluation on the COCO-EE. Specifically, we try to answer four research questions: 1) Q1: Does DECap perform well on the existing in-domain benchmark? 2) Q2: Does DECap perform well on reference captions with different noisy levels (i.e., Levenshtein ratios 1 )? Q3: Does DECap can further boost the performance of \"good\" reference captions from other captioning models? 3) Q4: Does DECap perform well on pure random reference captions? It is worth noting that DECap only used the unpaired data (i.e., image and GT-Cap without Ref-Cap) while the state-of-the-art TIger [37] was trained with the complete editing instance. 
For a more fair comparison, we also trained TIger with the synthesized noised unpaired data (denoted as TIger-N).\nIn-Domain Evaluation: COCO-EE (Q1) Settings. Since the COCO-EE dataset was carefully designed to emphasize the refinement of content details, its training set and test set have similar distribution (i.e., Ratio 1 around 0.5 for most instances), thus we directly compared the performance of each model on the COCO-EE test set as the in-domain evaluation. We evaluated edited captions against their single GT-Cap. Results. The in-domain evaluation results are reported in Table 1. From the table, we can observe: 1) For the quality evaluation, TIger achieves its best performance using four editing steps, while our DECap achieves competitive results with the same step (e.g., better BLEU scores but slightly lower CIDEr-D score). With more editing steps, DECap can further improve the quality of captions, outperforming TIger on all metrics. It is worth noting that TIger was even trained on the in-domain Ref-GT caption pairs. 2) TIger-N achieves limited quality improvement on the in-domain samples. 3) For the efficiency evaluation, DECap achieves significantly faster inference speed than TIger even with more editing steps. This is because DeCap predicts edit operations and content words simultaneously but TIger needs to conduct editing by three sequential modules. OOD: GT-Based Reference Caption (Q2) Setting. The GT-based reference captions were constructed based on the GT-Caps in the COCO-EE test set. We systematically replaced words in GT-Caps with other words, resulting in the creation of various out-of-domain Ref-Caps. They varied in terms of their Levenshtein ratios, ranging from 0.9 (i.e., with only a few incorrect words) to 0.0 (i.e., where all words were wrong). Specifically, we constructed two kinds of GT-based reference captions: 1) BERT-based. We first replaced the GT words with the special [MASK] tokens and then utilized the pretrained BERT [12] model to predict other words different from GT words. 2) Random-based. We directly replaced GT words with other random words. We evaluated edited captions against their single GT-Cap. Results. As shown in Fig. 4, For models trained with unpaired data, our model successfully improves the quality of all kinds of the GT-based Ref captions (i.e., BERT-and Random-based) and surpasses TIger-N. In contrast, TIger struggles when editing Ref captions with either \"minor\" or \"severe\" errors, and even degrading the captions' quality (e.g., Ref-Caps with ratio larger than 0.6 4 ) by inadvertently removing accurate words or failing to introduce accurate details.\nOOD: Model-Generated Reference Caption (Q3)\nModel Quality Evaluation B-1 B-2 B-3 B-4 R C S\nCLIP-Score Up-Down [3] 74.9 58.6 45.2 35.1 55.9 109.9 20.0 0.7359 + TIger [37] Setting. We explored the models' generalization ability to further improve the quality of captions generated by captioning models. We utilized captions generated by effective captioning models [3,21,32] on the COCO test set as reference captions. We evaluated the edited captions against their corresponding five GT-Caps. Results are in Table 2. Results. From Table 2, we can observe: 1) Our model successfully improves the quality of captions generated by captioning models. 2) Both TIger and TIger-N fail to do so and even degrading the caption's quality. 
3) Notebly, while existing captioning models fall short of achieving comparable performance with the powerful vision-language pretrained models (e.g., BLIP [21]), our DECap trained solely on COCO-EE, demonstrates its unique editing ability to further enhance the quality of captions generated by BLIP.\nOOD: Pure Random Reference Caption (Q4) Setting. To further evaluate the models' generalization ability without utilizing any GT captions, we constructed pure random reference captions based on the COCO test set. Specifically, each editing instance consists of a single image and a Ref-Cap with ten random words. We evaluated the edited captions against their corresponding five GT-Caps. All results are reported in Table 3." }, { "figure_ref": [], "heading": "Model", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "Step Results. From Table 3, we can observe: 1) Given the image, all the models achieve their best performance with ten editing steps, DECap successfully edits the sentence with all random words into a coherent caption. In contrast, both TIger and TIger-N face challenges in doing so. 2) For efficiency metrics, DECap achieves significantly faster inference speed compared to TIger and TIger-N." }, { "figure_ref": [], "heading": "Conventional Caption Generation Ability", "publication_ref": [ "b38" ], "table_ref": [], "text": "Surprised by the results of editing model-generated captions and pure random reference captions, we further investigated DECap's capacity for directly generating captions.\nSettings. We compared DECap with SOTA diffusion-based captioning approaches on the COCO dataset, especially the discrete diffusion-based captioning model DDCap [39]. " }, { "figure_ref": [], "heading": "Potential Ability: Controllable Captioning", "publication_ref": [], "table_ref": [], "text": "Building on the remarkable generalization ability exhibited by DECap in both caption editing and generation, we further conducted a preliminary exploration of its potential for controllability. Compared to existing CIC methods, which offer only coarse control over contents and structures, we can achieve precise and explicit control over caption generation through predefined control words. Settings. We constructed input instances consisting of a single image from the COCO test set and a sentence with ten random words. We replaced several random words with specific control words (e.g., objects and attributes) at predefined positions based on the visual information of images. Results. As shown in Fig. 5, DECap is capable of editing sentences based on input control words, i.e., all generated captions follow the order of the given control words with guaranteed fluency. Meanwhile, DECap shows its reasoning ability to generate relevant semantic content based on the control words: 1) Given the attributes (e.g., color), DECap can generate specific contents with these attributes (e.g., \"red\" → \"helmet\", \"green\" → \"grass\" and \"brown\" → \"sheep\"). 2) Given objects, DECap can generate further descriptions or related objects (e.g., \"mountain\" → \"trail\" and \"ocean\" → \"wave\"). These results indicate the potential of DECap to enhance controllability and diversity, achieving a more direct and word-level control beyond existing CIC methods." }, { "figure_ref": [ "fig_5" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "Number of Random Words. 
In this section, we run a set of ablations about the influence of different numbers of random words on caption generation. We utilized the DECap trained with diffusion step T = 10 from Sec. increases from 8 to 10 and starts to decline beyond 10 4 . DECap achieves its highest CIDEr-D score when editing sentences with 10 random words as the average length of GT captions in COCO is around 10. We thus selected 10 words as a balanced choice for caption generation. Distribution of Edit Types. As discussed in Sec.3.2, the distribution over edit types plays a crucial role in balancing different noising operations and training diverse denoising abilities. Therefore, we examined the impact of varying distribution settings for the edit types within the edit-based noising process. Specifically, the probabilities for the noising edit operations REPLACE, DELETE, and INSERT are denoted as α, β, and γ, respectively. We perform ablations by imposing global control over these probabilities 4 . We trained the DECap on the COCO training set with different distributions of edit types with the same diffusion step T = 10. During testing, we constructed input instances consisting of a single image from the COCO test set and a Ref-Cap with ten random words. From Table 6, we can observe: 1) DECap performs better when emphasizing the denoising ability of the replacement operation compared to an even distribution of edit types. 2) Training DECap exclusively for the replacement operation, neglecting deletion and insertion abilities, leads to a noticeable decline in caption quality. 3) α > β=γ could be a sensible choice for caption generation. Importantly, our method allows for flexible adaptation, enabling us to set different edit type distributions tailored to specific tasks or requirements. Visualization. Fig. 6 shows a two-step editing example 4 ." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we pointed out the challenge of limited generalization ability in existing ECE models. And we proposed a novel diffusion-based ECE model, DE-Cap, which reformulates ECE with a discrete diffusion mechanism, incorporating an innovative edit-based noising and denoising process. Extensive results have demonstrated DECap's strong generalization ability and potential as a uniform framework for both caption editing and generation. Moving forward, we are going to: 1) extend DECap into other modalities beyond images, e.g., video; 2) explore advanced techniques for finer controllability of DECap's editing process." }, { "figure_ref": [], "heading": "*** Supplementary Manuscript ***", "publication_ref": [], "table_ref": [], "text": "This supplementary document is organized as follows:\n• In Sec. A, we show more details about the Levenshtein ratio. " }, { "figure_ref": [], "heading": "A Details for Levenshtein ratio", "publication_ref": [ "b19" ], "table_ref": [], "text": "In this paper, we used the Levenshtein ratio to quantify the similarity between two captions by considering their length and the edit distance needed to transform one into the other. Specifically, for two captions with length m and n, the Levenshtein ratio is calculated as:\nratio = m + n -ldist m + n (8\n)\nwhere ldist is the weighted edit distance based on the standard Levenshtein distance [20]. The Levenshtein distance refers to the minimum number of edit operations required to transform one sentence into another, including three Levenshtein operations REPLACE, INSERT, and DELETE. 
In the case of the weighted version, when calculating ldist, both INSERT and DELETE operations are still counted as +1, while each REPLACE operation incurs a cost of +2:\nldist = Num(INSERT) + Num(DELETE) + 2 * Num(REPLACE), (9)\nwhere Num(•) represents the number of the corresponding edit operations. The Levenshtein ratio ranges from 0 to 1, where a higher value indicates higher similarity." }, { "figure_ref": [], "heading": "B Implementation Details.", "publication_ref": [ "b12" ], "table_ref": [], "text": "For image features, we used the ViT features extracted by the ViT-B/16 [13] backbone from the pretrained CLIP model [29] with an image patch size of 16. For the edit-based noising process, we set α > β=γ to emphasize the denoising ability of replacement. For our diffusion model, we used a 12-layer Transformer encoder. We trained our model with the Adam optimizer for 50 epochs, and we used a linear decay learning rate schedule with warm-up. The initial learning rate was set to 1e-4. Specifically, for Sec.4.2, we trained DECap with diffusion step T = 10, and all three models (DECap, TIger, and TIger-N) use the same vocabulary of size 12,071. The inference time was evaluated as the average run time for each instance on a single A100 GPU with a mini-batch size of 1." }, { "figure_ref": [ "fig_6" ], "heading": "C More Visualization Results", "publication_ref": [ "b36" ], "table_ref": [], "text": "Generalization Ability. As illustrated in Figure 7, the existing ECE model TIger [37] exhibits limited generalization ability on out-of-domain reference captions, while our DECap refines them into high-quality captions. " }, { "figure_ref": [ "fig_0" ], "heading": "D Multimodal LLMs for ECE", "publication_ref": [ "b0", "b24", "b35" ], "table_ref": [], "text": "In this section, we provide a preliminary exploration of utilizing multimodal Large Language Models (LLMs) for ECE.\nSetting. We utilized the available open-sourced multimodal LLMs, including GPT-4 [1], MiniGPT-v2 [6], LLaVA-1.5 [25], and CogVLM [36]. The user input prompt consists of the introduction of the ECE task, a definition of different edit operations, an example of a two-step editing process, and finally an editing instance with one image and one Ref-Cap. The multimodal LLMs were then asked to generate the corresponding editing process.\nResults. As illustrated in Figure 10, all the multimodal LLMs failed to perform ECE properly: 1) Misalignment of the caption and edit operations. Normally, the model should generate one edit operation for each word in the input caption, so the sequence of edit operations and the input caption should at least have the same length. However, GPT-4, MiniGPT-v2, and LLaVA-1.5 all fail to predict enough edit operations for the input caption.\n2) Wrong editing transformation. Each edit operation should be applied to its corresponding input word correctly to transform the input caption for the next step. However, GPT-4, MiniGPT-v2, and LLaVA-1.5 all fail to conduct the editing transformation correctly, e.g., MiniGPT-v2 predicts the DELETE operation as the first edit operation, but the corresponding input word \"A\" is not deleted, while the phrase \"in its mouth\" is deleted without any predicted operations.\n3) Poor editing ability. Apart from GPT-4, the final output caption of the other multimodal LLMs is either a repetition of the input caption or merely changes the word \"a\" into \"the\", without any refinement of the misaligned details. CogVLM even fails to generate the editing process. 
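To make the editing transformation discussed above concrete, the following is a minimal illustrative sketch (not the authors' implementation; the function and variable names are ours) of how a sequence of word-level edit operations should act on an input caption, with exactly one operation per input word and content words used only for REPLACE and INSERT:

```python
from typing import List, Optional, Tuple

def apply_edit_ops(words: List[str], ops: List[Tuple[str, Optional[str]]]) -> List[str]:
    """Apply one (operation, content word) pair to each input word.

    KEEP keeps the word, DELETE drops it, REPLACE overwrites it with the
    content word, and INSERT keeps it and adds the content word after it.
    """
    assert len(words) == len(ops), "one edit operation is expected per input word"
    out: List[str] = []
    for word, (op, content) in zip(words, ops):
        if op == "KEEP":
            out.append(word)
        elif op == "DELETE":
            continue
        elif op == "REPLACE":
            out.append(content)
        elif op == "INSERT":
            out.extend([word, content])
        else:
            raise ValueError(f"unknown edit operation: {op}")
    return out

# One editing step on the running example caption (hypothetical operations).
caption = "a dog is holding a frisbee in its mouth".split()
ops = [("KEEP", None)] * len(caption)
ops[3] = ("REPLACE", "carrying")  # e.g., "holding" -> "carrying"
print(" ".join(apply_edit_ops(caption, ops)))
```

Under this definition, the operation sequence must have the same length as the input caption, which is exactly the alignment property the multimodal LLMs above fail to satisfy.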
We argue that these results may be related to the \"autoregressive\" manner of existing multimodal LLMs, while a \"non-autoregressive\" model like DECap, which generates the edit operation and content word for each input word simultaneously, is more suitable for this task." }, { "figure_ref": [ "fig_8" ], "heading": "E Data Distribution of COCO-EE", "publication_ref": [], "table_ref": [], "text": "As illustrated in Figure 11, we can observe that the training set and test set of COCO-EE have similar distributions: 1) More than 70% of the editing instances have ratios ranging from 0.4 to 0.6, with the majority of them concentrated around 0.5. 2) There are very few samples with ratios below 0.4 or above 0.6, and almost no samples with ratios around 0.1 or 0.9.\n3) The distribution of COCO-EE is highly uneven." }, { "figure_ref": [], "heading": "F More results about DECap's generalization ability", "publication_ref": [ "b36", "b36" ], "table_ref": [], "text": "In this section, we further evaluated the generalization ability of our model with both in-domain and out-of-domain evaluation on Flickr30K-EE [37]. Specifically, each editing instance in Flickr30K-EE consists of one image and one corresponding Ref-GT caption pair. We also answered three research questions: 1) Q1: Does DECap perform well on the existing in-domain benchmark? 2) Q2: Does DECap perform well on reference captions with different noise levels (i.e., different Levenshtein ratios)? 3) Q3: Does DECap perform well on pure random reference captions? Specifically, DECap only used the unpaired data (i.e., image and GT-Cap without Ref-Cap) with diffusion step T = 6, while the state-of-the-art TIger [37] was trained with the complete editing instances. For a fairer comparison, we also trained the TIger with the synthesized noised unpaired data (denoted as TIger-N). All three models used the same vocabulary of size 19,124." }, { "figure_ref": [ "fig_9" ], "heading": "F.1 In-Domain Evaluation: Flickr30K-EE (Q1)", "publication_ref": [ "b11" ], "table_ref": [ "tab_10" ], "text": "We compared the performance of each model on the Flickr30K-EE test set as the in-domain evaluation, and we evaluated edited captions against their single ground-truth caption.\nResults. The in-domain evaluation results are reported in Table 7. From the table, we can observe: 1) For the quality evaluation, TIger achieves its best performance using three editing steps, and our DECap achieves better results with the same step. With more editing steps, DECap can further improve the quality of captions on all metrics. It is worth noting that TIger was even trained on the in-domain Ref-GT caption pairs. 2) TIger-N achieves limited quality improvement on the in-domain samples. 3) For the efficiency evaluation, DECap achieves significantly faster inference speed than TIger even with more editing steps. This is because DECap predicts edit operations and content words simultaneously, while TIger needs to conduct editing with three sequential modules.\nF.2 OOD: GT-Based Reference Caption (Q2)\nSetting. The GT-based reference captions were constructed based on the GT-Caps in the Flickr30K-EE test set. We systematically replaced words in GT-Caps with other words, resulting in the creation of various out-of-domain Ref-Caps. They varied in terms of their Levenshtein ratios, ranging from 0.9 (i.e., with only a few incorrect words) to 0.0 (i.e., where all words were wrong). Specifically, we constructed two kinds of GT-based reference captions: 1) BERT-based. 
We first replaced the GT words with the special [MASK] tokens and then utilized the pretrained BERT [12] model to predict other words different from the GT words.\n2) Random-based. We directly replaced GT words with other random words. We evaluated edited captions against their single GT-Cap.\nResults. As shown in Figure 12, for models trained with unpaired data, our model successfully improves the quality of all kinds of GT-based Ref-Caps and surpasses TIger-N. In contrast, TIger struggles when editing Ref-Caps with either \"minor\" or \"severe\" errors, and even degrades the captions' quality (e.g., Ref-Caps with a ratio larger than 0.4) by inadvertently removing accurate words or failing to introduce accurate details." }, { "figure_ref": [], "heading": "H Ablation Studies for the Distribution of Edit Types", "publication_ref": [], "table_ref": [ "tab_12" ], "text": "As discussed in Sec.3.2, the distribution over edit types plays a crucial role in balancing different noising operations and training diverse denoising abilities. Therefore, in this section, we conduct a series of ablation experiments to examine the impact of varying distribution settings for the edit types within the edit-based noising process. Specifically, the probabilities for the noising edit operations REPLACE, DELETE, and INSERT are denoted as α, β, and γ, respectively. While these probabilities are parameterized by several factors, such as the current state of the caption and the length of the ground-truth caption, we perform ablations by imposing global control over these probabilities. For instance, we explore settings where α=β=γ, α > β=γ and β=γ=0.\nSetting. We trained DECap on the COCO training set with different distributions of edit types with the same diffusion step T = 10. During testing, we constructed input instances consisting of a single image from the COCO test set and a Ref-Cap with ten random words. The edited captions were then evaluated against their corresponding five GT-Caps.\nResults. From Table 10, we can observe: 1) In comparison to the even distribution of edit types, where α=β=γ, DECap demonstrates improved performance when we emphasize the denoising ability of the replacement operation with the distribution α > β=γ. This suggests that the replacement operation is more flexible and efficient in correcting words than the sequential operation of deletion followed by insertion. 2) When we trained DECap with an exclusive focus on the replacement operation and omitted the deletion and insertion abilities, setting β=γ=0, there is a noticeable decline in the quality of generated captions. This indicates that DECap's ability to adjust caption length by adding more description or removing repetitions is compromised. 3) These results suggest that the distribution with α > β=γ could be a sensible choice for caption generation. Importantly, our method allows for flexible adaptation, enabling us to set different edit type distributions tailored to specific tasks or requirements."
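As a concrete companion to these ablations, a single edit-based noising step with globally controlled edit-type probabilities can be sketched as follows (a minimal illustration with a toy vocabulary and our own function names; in DECap the probabilities additionally depend on the diffusion step and caption length):

```python
import random

TOY_VOCAB = ["dog", "frisbee", "grass", "red", "running", "a", "the", "on", "man", "park"]

def noising_step(words, alpha, beta, gamma, rng=random):
    """REPLACE with prob. alpha, DELETE with prob. beta, INSERT with prob. gamma, KEEP otherwise."""
    assert alpha + beta + gamma <= 1.0
    noised = []
    for w in words:
        u = rng.random()
        if u < alpha:                   # REPLACE: overwrite with a random word
            noised.append(rng.choice(TOY_VOCAB))
        elif u < alpha + beta:          # DELETE: drop the word
            continue
        elif u < alpha + beta + gamma:  # INSERT: keep the word, add a random word after it
            noised.extend([w, rng.choice(TOY_VOCAB)])
        else:                           # KEEP
            noised.append(w)
    return noised

caption = "a dog is holding a frisbee in its mouth".split()
for alpha, beta, gamma in [(1/3, 1/3, 1/3), (0.5, 0.2, 0.2), (0.6, 0.0, 0.0)]:
    print((alpha, beta, gamma), " ".join(noising_step(caption, alpha, beta, gamma)))
```

The three probability settings mirror the ablated distributions α=β=γ, α > β=γ, and β=γ=0.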
}, { "figure_ref": [], "heading": "I Potential Negative Societal Impacts", "publication_ref": [ "b15" ], "table_ref": [], "text": "Our proposed ECE model may face the same potential ethical concerns as other existing ECE or image captioning works, such as suffering from severe bias issues (e.g., gender bias [16]). Additionally, our method may also be maliciously utilized by using some improper control words, such as sensitive attributes. Apart from these general issues that already exist in the ECE or image captioning tasks, our paper has no additional ethical issues." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "The ECE process for this image and caption would be as follows:\nEditing Step 1: Input: A dog is holding a frisbee in its mouth Edit Operation: DELETE KEEP INSERT(frisbee) REPACE(in) Output: A dog is holding a frisbee Editing Step 2: Input: A dog is holding a frisbee Edit Operation: KEEP KEEP REPACE(its) INSERT(mouth) Output: A dog is holding a frisbee in its mouth In total, there are four edit operations in the ECE process, which can be applied to the input caption to get the final refined caption. One possible ECE process for refining the caption \"a dog is holding a frisbee in its mouth\" based on the image could be:\nEditing Step 1: Input: a dog is holding a frisbee in its mouth Edit Operation: KEEP KEEP REPLACE(a) INSERT(is) Output: The dog is holding a frisbee in its mouth.\nIn this step, the word \"a\" is replaced with \"the\" to make the sentence more specific and descriptive, and the word \"is\" is inserted to make the sentence more grammatically complete." }, { "figure_ref": [], "heading": "LLaVA-1.5", "publication_ref": [], "table_ref": [], "text": "Based on the image and the Ref-Cap, the dog is holding a frisbee in its mouth mouth. The dog is in a playful stance, possibly waiting for the frisbee to land on the ground, so it can jump and catch it. F.3 OOD: Pure Random Reference Caption (Q3)" }, { "figure_ref": [], "heading": "CogVLM", "publication_ref": [], "table_ref": [], "text": "Setting. To further evaluate the models' generalization ability without utilizing any GT captions, we constructed pure random reference captions based on the Flickr30K test set. Specifically, each editing instance consists of a single image and a Ref-Cap with ten random words. Subsequently, we evaluated the edited captions against their corresponding five GT-Caps. All results are reported in Table 8.\nResults. From Table 8, we can observe: 1) Given the image, all the models achieve their best performance within six editing steps, DECap successfully edits the sentence with all random words into a reasonable caption. In contrast, both TIger and TIger-N face challenges in doing so. 2) In terms of efficiency metrics, DECap achieves significantly faster inference speed compared to TIger and TIger-N." }, { "figure_ref": [], "heading": "G Ablation Studies for the Number of Random Words", "publication_ref": [], "table_ref": [], "text": "In this section, we run a set of ablation studies about the influence of different numbers of random words on caption generation.\nSetting. We utilized the DECap trained with diffusion step T = 10 on COCO training set from Sec.4.3, and constructed input instances consisting of an image from the COCO test set and a Ref-Cap with n random words, where n ∈ {8, 9, 10, 11, 12}. The edited captions were then evaluated against their corresponding five ground-truth captions.\nResults. 
From Table 9, we can observe: 1) DECap's performance consistently improves as the number of random words increases from 8 to 10 and then starts to decline beyond 10 random words. 2) Given that the average length of ground-truth captions in COCO is around 10 words, DECap achieves its highest CIDEr-D score when editing sentences with 10 random words. While BLEU-N metrics tend to favor shorter sentences, DECap obtains the best BLEU scores with competitive CIDEr-D scores when editing sentences with 9 random words. Additionally, as the number of random words increases, DECap generates more semantic information about the image, including objects and attributes, resulting in higher SPICE scores. However, this increase in semantic content can also lead to issues like repetition and the introduction of extraneous details, referring to objects or information present in the image but not explicitly mentioned in the ground-truth captions. This can all potentially lead to a decline in the quality evaluation of the generated captions. 3) Based on these findings, we select 10 words as a balanced choice for caption generation." } ]
Explicit Caption Editing (ECE) -refining reference image captions through a sequence of explicit edit operations (e.g., KEEP, DELETE) -has attracted significant attention due to its explainable and human-like nature. After training with carefully designed reference and ground-truth caption pairs, state-of-the-art ECE models exhibit limited generalization ability beyond the original training data distribution, i.e., they are tailored to refine content details only in in-domain samples but fail to correct errors in out-of-domain samples. To this end, we propose a new Diffusion-based Explicit Caption editing method: DECap. Specifically, we reformulate the ECE task as a denoising process under the diffusion mechanism, and introduce innovative edit-based noising and denoising processes. Thanks to this design, the noising process can help to eliminate the need for meticulous paired data selection by directly introducing word-level noise for training, learning a diverse distribution over input reference captions. The denoising process involves the explicit prediction of edit operations and corresponding content words, refining reference captions through iterative step-wise editing. To implement our diffusion process more efficiently and improve the inference speed, DECap discards the prevalent multi-stage design and directly generates edit operations and content words simultaneously. Extensive ablations have demonstrated the strong generalization ability of DECap in various scenarios. More interestingly, it even shows great potential in improving the quality and controllability of caption generation.
DECap: Towards Generalized Explicit Caption Editing via Diffusion Mechanism
[ { "figure_caption": "Fig. 1 :1Fig. 1: (a) An image example and its corresponding ground-truth caption (GT-Cap). (b) Data distribution of the COCO-EE dataset [37]. The distribution of the training set and test set are very similar, where most of the editing instances have ratios ranging from 0.4 to 0.6. (c) Editing results of state-of-the-art ECE model TIger and our DECap. The in-domain Ref-Cap sample is from the COCO-EE test set, and out-of-domain Ref-Cap samples are constructed by replacing the GT-Cap with other words, e.g., predicted by BERT (or sentences generated by pretrained captioning models).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig. 3: The edit-based denoising step and architecture of DECap. DECap will predict a sequence of edit operations and content words to transform the caption. Specifically, contents words are used only when the predicted corresponding edit operation is INSERT or REPLACE, while the rest of the predicted words are abandoned, i.e., the shaded words.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Performance on two kinds of out-of-domain GT-based reference captions constructed from COCO-EE test set. All models were trained on the COCO-EE training set. \"Ref-Caps\" denotes the initial quality of given reference captions, and \"TIger-N\" denotes the TIger trained with unpaired data.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :Table 5 :55Fig. 5: Controllability of DECap. The grey lines represent random words from the vocabulary, and other colored words represent the manually placed control words.", "figure_data": "", "figure_id": "fig_3", "figure_label": "55", "figure_type": "figure" }, { "figure_caption": "4 . 3 ,43and constructed input instances consisting of an image from the COCO test set and a Ref-Cap with n random words, where n ∈ {8, 9, 10, 11, 12}. As shown in Table 5, DECap's performance consistently improves as the number of random words In: a large heard of zebra standing in the grass E/C: KEP DEL KEP KEP INS(are) KEP KEP KEP REP(dirt) Out: a heard of zebra are standing in the dirt In: a heard of zebra are standing in the dirt E/C: KEP KEP KEP REP(zebras) KEP KEP KEP KEP INS(sand) Out: a heard of zebras are standing in the dirt sand Editing Step 1 Editing Step 2", "figure_data": "", "figure_id": "fig_4", "figure_label": "43", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig. 6: Editing process of DECap. \"E\" and \"C\" denote the edit operation and corresponding content word, respectively.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "GT-Cap:✓✓Fig. 7 :7Fig. 7: Editing results of state-of-the-art ECE model TIger and our DECap. The indomain Ref-Cap sample is from the COCO-EE test set. The out-of-domain Ref-Cap samples include the GT-based reference caption, model-generated reference caption, pure random reference caption.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "1 Fig. 9 :19Fig. 8: Controllability of DECap. The grey lines represent random words from the vocabulary, other colored words represent the manually placed control words.", "figure_data": "", "figure_id": "fig_7", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Fig. 11 :11Fig. 
11: The data distribution of the COCO-EE dataset.", "figure_data": "", "figure_id": "fig_8", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 12 :12Fig. 12: Performance on two kinds of out-of-domain GT-based reference captions constructed from Flickr30K-EE test set. All models were trained on the Flickr30K-EE training set. \"Ref-Caps\" denotes the given reference captions, and \"TIger-N\" denotes the TIger trained with unpaired data.", "figure_data": "", "figure_id": "fig_9", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "1 r , ..., w n r } with n words, ECE aims to predict a sequence of m edit operations E = {e 1 , ..., e m } to translate the Ref-Cap close to the ground-truth caption (GT-Cap) x 0 = {w 1 0 , ..., w k 0 } with k words.", "figure_data": "Edit Operations. Normally, different ECE models may utilize different editoperations. While early models mainly focus on the reservation (e.g., KEEP) anddeletion (e.g., DELETE) of existing contents, and the insertion (e.g., ADD, INSERT)of new contents, subsequent works [27,30] have demonstrated that incorporatingreplacement can improve editing performance more efficiently. Acknowledgingthis established insight and without loss of generality, in this paper, we utilizethe four Levenshtein edit operations 2 for both the noising and denoising process,including: 1) KEEP, the keep operation preserves the current word unchanged;2) DELETE, the deletion operation removes the current word; 3) INSERT, theinsertion operation adds a new word after the current word; 4) REPLACE, thereplacement operation overwrites the current word with a new word.Discrete Diffusion Mechanism. For diffusion models in the discrete statespaces for text generation, each word of sentence x t is a discrete random variablewith K categories, where K is the word vocabulary size. Denoting x t as a stackof one-hot vectors, the noising process is written as:", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Fig. 2: Edit-based noising process for DECap. Blue represents the REPLACE operation, red represents the DELETE operation, purple represents the INSERT operation, white and grey represent the KEEP operation for original word and random word respectively.", "figure_data": "𝑞(𝑥 % |𝑥 $ )yardfilescowsonsighcakefencepentruck𝑞 & (𝑥 $ |𝑥 % )yardthatcowsonlaying cakefencepentruckhat𝑞(𝑥 $ |𝑥 # )𝑞 & (𝑥 # |𝑥 $ )yarddogthatsonlaying cakefenceatruckhat𝑞(𝑥 # |𝑥 ! )𝑞 & (𝑥 ! |𝑥 # )adogthatsonlaying downfenceahat𝑞(𝑥 ! |𝑥 \" )𝑞 & (𝑥 \" |𝑥 ! )adogthatislaying down wearingahatTaking inspiration from the discrete process where a noised sentence is itera-tively refined into the target, we reformulate ECE training with a discrete dif-fusion mechanism and parameterize the noising and denoising process by wayof sampled discrete edit operations applied over the caption words. This noisingand denoising process can clearly mitigate the need for paired Ref-GT captionpairs, as we only need to conduct the diffusion process on original GT-Caps.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Specifically, COCO-EE contains 97,567 training samples, 5,628 validation samples, and 5,366 test samples, where each editing instance consists of one image The in-domain evaluation on the COCO-EE test set. All ECE models were trained on the COCO-EE training set. \"Ref-Caps\" denotes the initial quality of given reference captions. 
\"TIger-N\" denotes the TIger trained with noised unpaired data.", "figure_data": "ModelUnpaired Step DataQuality Evaluation B-1 B-2 B-3 B-4 R CInference S CLIP-Score Time(ms)Ref-Caps--50.0 37.1 27.7 19.5 48.2 129.9 18.90.6997-TIger [37]✘4 50.3 38.5 29.4 22.3 53.1 176.7 31.40.7269614.23TIger-N [37]✔4 51.8 38.6 28.9 20.7 49.6 145.0 21.80.7097611.72DECap✔4 55.5 41.7 31.5 23.3 52.7 173.7 29.80.7439277.30DECap✔5 56.0 42.0 31.6 23.5 53.0 176.2 31.40.7498335.45DECap✔6 56.1 41.9 31.4 23.4 53.1 177.0 32.2 0.7522409.99", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance of ECE models editing model-generated captions on COCO test set. All ECE models were trained on the COCO-EE training set. \"TIger-N\" denotes the TIger trained with unpaired data.", "figure_data": "↓70.3 54.9 41.2 30.9 54.395.717.10.7285+ TIger-N [37] ↓ 74.8 58.5 45.1 34.8 55.8 109.5 19.90.7356+ DECap ↑75.1 58.9 45.5 35.2 56.3 112.3 20.20.7432Transformer [32] 75.2 58.9 45.6 35.5 56.0 112.8 20.60.7452+ TIger [37] ↓70.0 55.0 41.5 31.1 54.497.117.30.7313+ TIger-N [37] ↓ 75.1 58.8 45.4 35.2 55.9 112.4 20.40.7443+ DECap ↑75.5 59.2 45.8 35.6 56.2 114.1 20.70.7472BLIP [21]79.7 64.9 51.4 40.4 60.6 136.7 24.30.7734+ TIger [37] ↓73.5 58.7 44.7 33.7 56.3 104.0 18.20.7377+ TIger-N [37] ↓ 79.5 64.5 51.0 39.9 60.3 133.0 23.70.7690+ DECap ↑79.9 65.1 51.7 40.6 60.7 138.1 24.40.7738ModelUnpaired Step DataQuality Evaluation B-1 B-2 B-3 B-4 R CInference S CLIP-Score Time(ms)TIger [37]✘10 14.7 4.6 1.9 0.9 13.5 3.0 1.20.51581413.16TIger-N [37]✔10 7.2 5.9 4.5 3.5 29.1 23.5 4.80.61161417.09DECap✔10 74.7 57.4 42.1 30.0 55.3 102.5 19.6 0.7501684.32", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance of ECE models on pure random (ten random words) reference captions constructed based on the COCO test set. All models were trained on the COCO-EE training set. \"TIger-N\" denotes TIger trained with noised unpaired data.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison between our DECap and state-of-the-art diffusion-based captioning models on the COCO test set. All models were trained on the COCO training set. The best and second best results are denoted with corresponding formats.", "figure_data": "B-1B-2B-3B-4MRCSContinuous DiffusionBit Diffusion [9]20---34.7-58.0 115.0-SCD-Net [26]5079.0 63.4 49.1 37.3 28.1 58.0 118.0 21.6Discrete DiffusionDDCap [39]20---35.0 28.2 57.4 117.8 21.7DECap1078.0 61.4 46.4 34.5 28.6 58.0 119.0 21.9DECap1578.5 62.2 47.4 35.3 29.0 58.4 121.2 22.7", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Performance of DECap on the COCO test set when with different distributions of noising edit types.", "figure_data": "", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "• In Sec. B, we show the implementation details. • In Sec. C, we provide more visualization results. • In Sec. D, we provide the results of Multimodal LLMs on ECE. • In Sec. E, we show the data distribution of the COCO-EE based on the Levenshtein ratio. • In Sec. F, we show more results about the generalization ability of DECap on the Fickr30K-EE dataset. • In Sec. G, we provide more detailed ablation study about the number of random words in caption generation. • In Sec. H, we provide more detailed ablation study about the distribution of edit types. • Potential Negative Societal Impacts in Sec. 
I", "figure_data": "", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The in-domain evaluation of ECE models on the Flickr30K-EE test set. All models were trained on the Flickr30K-EE training set. \"Ref-Caps\" denotes the initial quality of given reference captions. \"TIger-N\" denotes the TIger trained with noised unpaired data.", "figure_data": "ModelUnpaired Step DataQuality Evaluation B-1 B-2 B-3 B-4 R CInference S CLIP-Score Time(ms)Ref-Caps--34.7 24.0 16.8 10.9 36.9 91.3 23.40.5896-TIger [37]✘3 31.9 23.9 18.2 12.4 40.6 131.8 30.80.6467501.42TIger-N [37]✔3 33.4 24.1 17.9 12.2 39.8 119.8 28.20.6401504.19DECap✔3 37.6 27.5 19.8 13.7 40.8 134.0 31.00.6829214.46", "figure_id": "tab_10", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Performance of our model on the COCO test set with different numbers of input random words in ten editing steps.", "figure_data": "Model Random Words Step B-1 B-2 B-3 B-4 MRCS810 75.7 59.3 44.5 32.4 26.3 56.5 109.7 20.1910 80.2 63.3 47.9 35.5 27.8 58.0 118.1 21.5DECap1010 78.0 61.4 46.4 34.5 28.6 58.0 119.0 21.91110 75.9 58.8 44.3 32.9 28.9 57.1 115.7 22.41210 72.0 56.3 42.2 31.2 29.0 56.1 109.3 22.7Model Distribution of Edit Types B-1 B-2 B-3 B-4 MRCSα = β = γ77.4 60.8 45.8 34.0 28.6 57.8 117.4 21.8DECapα > β = γ78.0 61.4 46.4 34.5 28.6 58.0 119.0 21.9β = γ = 077.3 60.6 45.7 33.8 25.8 57.8 116.6 21.7", "figure_id": "tab_11", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Performance of our model on the COCO test set with different distributions of noising edit types.", "figure_data": "", "figure_id": "tab_12", "figure_label": "10", "figure_type": "table" } ]
Zhen Wang; Xinyun Jiang; Jun Xiao; Tao Chen; Long Chen
[ { "authors": "J Achiam; S Adler; S Agarwal; L Ahmad; I Akkaya; F L Aleman; D Almeida; J Altenschmidt; S Altman; S Anadkat", "journal": "", "ref_id": "b0", "title": "", "year": "2023" }, { "authors": "P Anderson; B Fernando; M Johnson; S Gould", "journal": "", "ref_id": "b1", "title": "Spice: Semantic propositional image caption evaluation", "year": "2016" }, { "authors": "P Anderson; X He; C Buehler; D Teney; M Johnson; S Gould; L Zhang", "journal": "", "ref_id": "b2", "title": "Bottom-up and top-down attention for image captioning and visual question answering", "year": "2018" }, { "authors": "J Austin; D D Johnson; J Ho; D Tarlow; Van Den; R Berg", "journal": "NeurIPS", "ref_id": "b3", "title": "Structured denoising diffusion models in discrete state-spaces", "year": "2021" }, { "authors": "S Banerjee; A Lavie", "journal": "", "ref_id": "b4", "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments", "year": "2005" }, { "authors": "J Chen; D Zhu; X Shen; X Li; Z Liu; P Zhang; R Krishnamoorthi; V Chandra; Y Xiong; M Elhoseiny", "journal": "", "ref_id": "b5", "title": "Minigpt-v2: large language model as a unified interface for vision-language multi-task learning", "year": "2023" }, { "authors": "L Chen; Z Jiang; J Xiao; W Liu", "journal": "", "ref_id": "b6", "title": "Human-like controllable image captioning with verb-specific semantic roles", "year": "2021" }, { "authors": "L Chen; H Zhang; J Xiao; L Nie; J Shao; W Liu; T S Chua", "journal": "", "ref_id": "b7", "title": "Sca-cnn: Spatial and channel-wise attention in convolutional networks for image captioning", "year": "2017" }, { "authors": "T Chen; R Zhang; G Hinton", "journal": "", "ref_id": "b8", "title": "Analog bits: Generating discrete data using diffusion models with self-conditioning", "year": "2022" }, { "authors": "M Cornia; L Baraldi; R Cucchiara", "journal": "", "ref_id": "b9", "title": "Show, control and tell: A framework for generating controllable and grounded captions", "year": "2019" }, { "authors": "C Deng; N Ding; M Tan; Q Wu", "journal": "", "ref_id": "b10", "title": "Length-controllable image captioning", "year": "2020" }, { "authors": "J Devlin; M W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b11", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly", "journal": "", "ref_id": "b12", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Y He; Z Cai; X Gan; B Chang", "journal": "", "ref_id": "b13", "title": "Diffcap: Exploring continuous diffusion on image captioning", "year": "2023" }, { "authors": "Z He; T Sun; K Wang; X Huang; X Qiu", "journal": "", "ref_id": "b14", "title": "Diffusionbert: Improving generative masked language models with diffusion models", "year": "2022" }, { "authors": "L A Hendricks; K Burns; K Saenko; T Darrell; A Rohrbach", "journal": "", "ref_id": "b15", "title": "Women also snowboard: Overcoming bias in captioning models", "year": "2018" }, { "authors": "J Hessel; A Holtzman; M Forbes; R L Bras; Y Choi", "journal": "", "ref_id": "b16", "title": "Clipscore: A referencefree evaluation metric for image captioning", "year": "2021" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "NeurIPS", "ref_id": "b17", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { 
"authors": "A Karpathy; L Fei-Fei", "journal": "", "ref_id": "b18", "title": "Deep visual-semantic alignments for generating image descriptions", "year": "2015" }, { "authors": "V I Levenshtein", "journal": "Soviet physics doklady", "ref_id": "b19", "title": "Binary codes capable of correcting deletions, insertions, and reversals", "year": "1966" }, { "authors": "J Li; D Li; C Xiong; S Hoi", "journal": "", "ref_id": "b20", "title": "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation", "year": "2022" }, { "authors": "X Li; J Thickstun; I Gulrajani; P S Liang; T B Hashimoto", "journal": "NeurIPS", "ref_id": "b21", "title": "Diffusion-lm improves controllable text generation", "year": "2022" }, { "authors": "C Y Lin", "journal": "", "ref_id": "b22", "title": "Rouge: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "T Y Lin; M Maire; S Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "", "ref_id": "b23", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "H Liu; C Li; Y Li; Y J Lee", "journal": "", "ref_id": "b24", "title": "Improved baselines with visual instruction tuning", "year": "2023" }, { "authors": "J Luo; Y Li; Y Pan; T Yao; J Feng; H Chao; T Mei", "journal": "", "ref_id": "b25", "title": "Semantic-conditional diffusion networks for image captioning", "year": "2023" }, { "authors": "J Mallinson; A Severyn; E Malmi; G Garrido", "journal": "", "ref_id": "b26", "title": "Felix: Flexible text editing through tagging and insertion", "year": "2020" }, { "authors": "K Papineni; S Roukos; T Ward; W J Zhu", "journal": "", "ref_id": "b27", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "", "ref_id": "b28", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "M Reid; G Neubig", "journal": "", "ref_id": "b29", "title": "Learning to model editing processes", "year": "2022" }, { "authors": "R Rombach; A Blattmann; D Lorenz; P Esser; B Ommer", "journal": "", "ref_id": "b30", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "P Sharma; N Ding; S Goodman; R Soricut", "journal": "", "ref_id": "b31", "title": "Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning", "year": "2018" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "NeurIPS", "ref_id": "b32", "title": "Attention is all you need", "year": "2017" }, { "authors": "R Vedantam; C Lawrence Zitnick; D Parikh", "journal": "", "ref_id": "b33", "title": "Cider: Consensus-based image description evaluation", "year": "2015" }, { "authors": "O Vinyals; A Toshev; S Bengio; D Erhan", "journal": "", "ref_id": "b34", "title": "Show and tell: A neural image caption generator", "year": "2015" }, { "authors": "W Wang; Q Lv; W Yu; W Hong; J Qi; Y Wang; J Ji; Z Yang; L Zhao; X Song", "journal": "", "ref_id": "b35", "title": "Cogvlm: Visual expert for pretrained language models", "year": "2023" }, { "authors": "Z Wang; L Chen; W Ma; G Han; Y Niu; J Shao; J Xiao", "journal": "Springer", "ref_id": "b36", "title": "Explicit image caption editing", "year": "2022" }, { "authors": "K Xu; J Ba; R Kiros; K Cho; A Courville; R 
Salakhudinov; R Zemel; Y Bengio", "journal": "", "ref_id": "b37", "title": "Show, attend and tell: Neural image caption generation with visual attention", "year": "2015" }, { "authors": "Z Zhu; Y Wei; J Wang; Z Gan; Z Zhang; L Wang; G Hua; L Wang; Z Liu; H Hu", "journal": "", "ref_id": "b38", "title": "Exploring discrete diffusion models for image captioning", "year": "2022" } ]
[ { "formula_coordinates": [ 5, 237.43, 305.09, 243.16, 8.97 ], "formula_id": "formula_0", "formula_text": "q(xt|xt-1) = Cat(xt; p = xt-1Qt),(1)" }, { "formula_coordinates": [ 5, 340.48, 336.6, 140.11, 9.65 ], "formula_id": "formula_1", "formula_text": "[Q t ] i,j = q(w t = j|w t-1 = i)." }, { "formula_coordinates": [ 5, 213.87, 415.38, 266.73, 36.35 ], "formula_id": "formula_2", "formula_text": "[Q t ] i,j =    1 if i = j = [MASK], βt if j = [MASK], i ̸ = [MASK], 1 -βt if i = j ̸ = [MASK].(2)" }, { "formula_coordinates": [ 5, 248.72, 528.38, 231.87, 12.24 ], "formula_id": "formula_3", "formula_text": "P θ (x0) = T t=1 p θ (xt-1|xt, t).(3)" }, { "formula_coordinates": [ 6, 201.24, 326.52, 279.35, 11.13 ], "formula_id": "formula_4", "formula_text": "q(xt|xt-1) = p(xt|xt-1, E N t ) • Cat(E N t ; p = xt-1Qt),(4)" }, { "formula_coordinates": [ 6, 134.77, 360.77, 345.83, 22.57 ], "formula_id": "formula_5", "formula_text": "[Q t ] i,j = q(e t = j|w t-1 = i). Subsequently, E N t = {e 1" }, { "formula_coordinates": [ 6, 192.44, 440.59, 229.29, 64.94 ], "formula_id": "formula_6", "formula_text": "[Q t ] i,j =              1 if j = KEEP, i = RW, α k t if j = REPLACE, i ̸ = RW, β k t if j = DELETE, i ̸ = RW, γ k t if j = INSERT, i ̸ = RW, 1 -α k t -β k t -γ k t , if j = KEEP, i ̸ = RW." }, { "formula_coordinates": [ 6, 134.77, 552.25, 345.83, 24.15 ], "formula_id": "formula_7", "formula_text": "δ k t = 1 -α k t -β k t -γ k t to" }, { "formula_coordinates": [ 7, 185.09, 438.33, 241.24, 12.69 ], "formula_id": "formula_8", "formula_text": "p θ (x t-1 |x t , t, I) = p(x t-1 |x t , E D t , C t ) • p(E D t , C t |x t , t, I)" }, { "formula_coordinates": [ 7, 244.31, 592.69, 236.28, 12.24 ], "formula_id": "formula_9", "formula_text": "P θ (x0) = T t=1 p θ (xt-1|xt, t, I).(6)" }, { "formula_coordinates": [ 8, 170.8, 435.51, 309.79, 11.13 ], "formula_id": "formula_10", "formula_text": "L = L Edit + LLanguge = -log p θ (E G t |xt, t, I) + -log p θ (C G t |xt, t, I),(7)" }, { "formula_coordinates": [ 11, 144.1, 117.17, 267.16, 19.73 ], "formula_id": "formula_11", "formula_text": "Model Quality Evaluation B-1 B-2 B-3 B-4 R C S" }, { "formula_coordinates": [ 17, 257.2, 418.18, 219.15, 22.31 ], "formula_id": "formula_12", "formula_text": "ratio = m + n -ldist m + n (8" }, { "formula_coordinates": [ 17, 476.35, 424.86, 4.24, 8.8 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 17, 228.25, 530.23, 252.34, 21.84 ], "formula_id": "formula_14", "formula_text": "ldist = Num(INSERT) + Num(DELETE) + 2 * Num(REPLACE)(9)" } ]
2023-11-25
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b9", "b3", "b10", "b11", "b12", "b15", "b16", "b17", "b18", "b21", "b16" ], "table_ref": [], "text": "Time-series forecasting has been widely applied in various fields, including finance [1], climate [2], and healthcare [3]. With the rapid development of autonomous driving techniques, trajectory prediction, which could be formulated as a time-series forecasting problem, has also gained great attention [4], [5]. Accurate trajectory prediction of surrounding agents can enhance the ability of autonomous systems to handle interactions, thereby improving the safety and effectiveness of the entire system. Many trajectory prediction datasets categorize traffic agents into multiple classes based on their properties [6], [7], with pedestrians also being separated as a distinct class. Since pedestrians are relatively Fig. 1. The comparison of using standard sampling algorithm and our proposed tree sampling algorithm for multi-modal prediction. Since the trunk stage of our algorithm only needs to run once for multiple predictions, the total number of diffusion steps is fewer than the standard sampling algorithm, therefore accelerating the inference speed. more vulnerable when interacting with other classes of traffic agents, special attention is required. Thus, in this paper, we focus on pedestrian trajectory prediction.\nEarly works on pedestrian trajectory prediction mainly focused on single-model prediction [8]- [10]. However, these deterministic methods did not consider the inherent multimodality of pedestrian movements. To address this problem, researchers have applied various generative methods to pedestrian trajectory prediction. For instance, Gupta et al. [4] utilized GAN for multi-model future generation, while Tra-jectron++ [11] used a Conditional Variational Autoencoder (CVAE) decoder. Nevertheless, GAN is hard to train and may face mode collapse, while VAE methods can produce unrealistic results.\nRecently, denoising diffusion probabilistic models [12] have achieved remarkable success in computer vision field [13]- [16]. Many researchers have also adopted this generative method for robotics and autonomous driving applications. For example, Gu et al. [17] proposed MID based on diffusion models for pedestrian trajectory prediction. Following MID, LED [18] also trained a denoising module using the same standard training schedule and incorporated a leapfrog initializer to skip many denoising steps to reduce inference time. Although LED has faster inference speed, this leapfrog method requires an extra initializer and does not fully exploit the characteristics of human motion.\nIncorporating goal information into trajectory prediction models has been shown to improve the precision of predictions [19]- [22]. Motivated by this, we propose a novel framework called Goal-Based trajectory prediction with denoising Diffusion (GBD) which combines goal prediction with diffusion models. To accelerate the diffusion sampling process, we introduce the Tree Sampling (TS) algorithm. Compared with the standard sampling algorithm used in [17] for multi-modal prediction, Our tree sampling algorithm consists of two stages, as Fig. 1 shows. The trunk stage of TS algorithm leverages common feature to generate a roughly denoised future trajectory, which only needs to run once for arbitrary numbers of predictions, just like a tree having only one trunk. 
Then, the branch stage receives the roughly denoised trajectory from the trunk stage and generates a corresponding trajectory prediction for each goal estimation.\nIn our GBD-TS method, first, the goal prediction module estimates multiple possible goals to ensure the diversity of the predictions. These goals, along with the history motion information of the target agent, are then fed into the diffusion-based trajectory generation module. We leverage the powerful reconstruction ability of the diffusion model to further improve the accuracy of the prediction results, and the TS algorithm is applied to generate multi-modal predictions. Experimental results show that our model achieves high prediction accuracy and fast sampling speed. In summary, the main contributions of this paper are as follows:\n• We investigate combining diffusion models with goal prediction for multi-modal pedestrian trajectory prediction and propose a novel framework named GBD. " }, { "figure_ref": [], "heading": "II. RELATED WORKS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Goal-based Pedestrian Trajectory Prediction", "publication_ref": [ "b19", "b22", "b23", "b19", "b20" ], "table_ref": [], "text": "Generally speaking, pedestrian trajectories are goal-conditioned, as goals seldom change rapidly during the prediction horizon. Precise prediction of goals could reduce uncertainty and increase the accuracy of further trajectory prediction [20]. Several methods have been proposed to incorporate goal information into trajectory prediction. For example, Dendorfer et al. [23] trained multiple generators, where each generator is used to model a different distribution associated with a specific mode. Mangalam et al. [24] proposed a method that estimates goal points and generates multi-modal predictions using a VAE conditioned on these estimated goal points. In Y-Net [20], a U-Net architecture is employed for goal and trajectory prediction, where the predicted goal information is also fed into the trajectory prediction decoder. Chiara et al. [21] proposed a recurrent network, Goal-SAR, which predicts the position of each future time step in a recurrent way based on the goal and all history information before this time step. In this paper, we also divide the pedestrian trajectory prediction task into goal prediction and trajectory generation." }, { "figure_ref": [], "heading": "B. Diffusion Models and Application", "publication_ref": [ "b24", "b11", "b12", "b15", "b25", "b26", "b27", "b28", "b16", "b29", "b30", "b31", "b17" ], "table_ref": [], "text": "The diffusion probabilistic model was first proposed by Sohl-Dickstein et al. [25] and has developed rapidly since DDPM [12] was proposed. It has achieved great success in various fields, such as computer vision [13]- [16] and seq-to-seq models [26], [27]. Diffusion models have also been adopted in robotics applications. For instance, Diffuser [28] generates trajectories for robot planning and control using diffusion models. In Trace and Pace [29], the diffusion model is used to produce realistic human trajectories. In the context of pedestrian trajectory prediction, MID [17] was the first work to adopt the diffusion model for this task. 
Following the standard DDPM algorithm, a transformer acts as the diffusion network to generate multiple predictions conditioned on the history trajectory.\nDespite its effectiveness in trajectory prediction, the diffusion model faces challenges due to the large number of denoising steps, which hampers real-time performance. To address this issue, various improvements have been proposed to accelerate the inference speed of the diffusion models [30]. DDIM [31] generalizes the forward process of the diffusion model as a non-Markovian process to speed up the sampling procedure. Salimans et al. [32] repeated knowledge distillation on a deterministic diffusion sampler to distill a new diffusion model with fewer sampling steps. Specific to multi-modal trajectory prediction, Mao et al. [18] proposes LED which trains an additional leapfrog initializer to accelerate sampling. Similar to our sampling algorithm, LED divides the diffusion sampling procedure into two stages and leverages an extra leapfrog initializer, which can estimate the roughly denoised distribution to skip the first stage. Instead of using an additional neural network, we use the same diffusion network to learn this distribution." }, { "figure_ref": [], "heading": "III. PROPOSED METHOD", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Problem Formulation", "publication_ref": [], "table_ref": [], "text": "The pedestrian trajectory prediction problem can be formulated as follows: Given the current frame t = 0, where the semantic map of the scene S and history positions of the targeted agent in the past t h frames X = {x t ∈ R 2 |t = -t h + 1, -t h + 2, ..., 0} are available, the objective is to obtain N possible future trajectory prediction of the agents in the next t f frames Y n = { y t ∈ R 2 |t = 1, 2, ..., t f }. The notation with tilde represents the predicted result.\nOur method divides the overall trajectory prediction task into goal prediction and trajectory generation. First, a network is applied to predict the probability distribution heatmap of the goal position. Then, a set of possible goals of the target agent is sampled from this distribution heat-map. Extracting the feature of these goal estimations and the target agent's history information as guidance, a diffusion denoising network is applied to get the future trajectory prediction Y . In the following part of this section, we will introduce the proposed framework in detail. Furthermore, we will describe the tree sampling algorithm for generating multimodal future trajectory prediction. Finally, we will introduce the loss function and the training scheme." }, { "figure_ref": [], "heading": "B. Goal Prediction", "publication_ref": [ "b19", "b20" ], "table_ref": [], "text": "Considering the state-of-the-art performance of Y-Net [20] and Goal-SAR [21] on pedestrian trajectory prediction, we adapt their goal predictor for goal prediction. The goal predictor takes the semantic map and history trajectory as input. Any semantic segmentation network could be used to get the semantic map from the raw RGB image provided by the original dataset. Since our work focuses on trajectory prediction, we assume that the semantic network is pretrained to obtain semantic map S ∈ R H×W ×C , where H and W are the height and width of the input image, and C is the number of semantic classes. 
Since we use a probability heatmap to describe the goal distribution, to keep the input and output consistent, pre-processing is needed before goal prediction to convert the input history position x_t of each frame into a 2D Gaussian probability distribution heat-map H_t ∈ R^{H×W}, where the highest probability is assigned to the ground-truth position x_t. These position heat-maps are then stacked together to form a trajectory heatmap H_h ∈ R^{H×W×t_h}. Finally, H_h and S are concatenated and fed into a U-Net architecture network to predict the future trajectory heat-map H_f ∈ R^{H×W×t_f}, where the last channel represents the position distribution of the last frame t = t_f, i.e., the goal distribution. By sampling from the goal probability distribution heat-map, N estimated goals g_n, n = 1, ..., N are obtained, referred to as diverse goals. Meanwhile, the position with the highest probability is selected as the common goal g*. Therefore, a total of N + 1 estimated goals G are generated." }, { "figure_ref": [], "heading": "C. Trajectory Generation Module", "publication_ref": [ "b10", "b16" ], "table_ref": [], "text": "The trajectory generation module receives the estimated goals G and the history trajectory X as input. Following previous works [11], [17], we augment the history state X with velocity and acceleration. Additionally, we integrate the goal information into the state by concatenating the vector between the position of each frame x_t and the goal g ∈ G, as shown in Eq. 1.\nV = d_t X, A = d_t V,\nD = {x_t - g | t = -t_h + 1, -t_h + 2, ..., 0},\nX̂ = concat(D, X, V, A). (1)\nAfter obtaining the augmented state X̂ for each goal estimation, an LSTM encoder is trained to extract the feature f of the state. The LSTM encoder is shared among different goals. Similarly, we refer to the feature f_n obtained using g_n as the diverse feature, and the feature f* obtained using g* as the common feature. The feature f then serves as the guidance of the diffusion. During the forward process of diffusion, noise is added to the ground truth trajectory Y_0, and this noise addition is repeated K times to obtain a series of noisy future trajectories {Y_1, ..., Y_K}. In step k of the reverse process, the diffusion network predicts the noise conditioned on k and f. This predicted noise is then used to denoise the noisy future trajectory Y_k to Y_{k-1}. Through the iterative denoising process, the future trajectory prediction Y = Y_0 can be gradually reconstructed from a Gaussian distribution Y_K ∼ N(0, I)." }, { "figure_ref": [], "heading": "D. Tree Sampling", "publication_ref": [ "b11", "b30" ], "table_ref": [], "text": "The details of the tree sampling algorithm are presented in Algorithm 1. Our tree sampling algorithm is designed especially for multi-modal prediction and is divided into two stages: the trunk stage and the branch stage, which distinguishes it from the standard sampling procedure (DDPM [12] and DDIM [31]). 
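Before detailing the two stages, the goal-conditioned state augmentation of Eq. (1) and the shared LSTM encoder described in Sec. III-C can be sketched as follows (a minimal PyTorch sketch under assumed tensor shapes, with our own module and function names rather than the authors' implementation):

```python
import torch
import torch.nn as nn

def augment_state(X: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
    """X: (B, t_h, 2) history positions; g: (B, 2) one estimated goal."""
    V = torch.diff(X, dim=1, prepend=X[:, :1])   # velocity, a discrete d_t X
    A = torch.diff(V, dim=1, prepend=V[:, :1])   # acceleration, a discrete d_t V
    D = X - g[:, None, :]                        # goal offsets x_t - g, as in Eq. (1)
    return torch.cat([D, X, V, A], dim=-1)       # augmented state, (B, t_h, 8)

class StateEncoder(nn.Module):
    """Shared LSTM encoder producing the guidance feature f for one goal."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(input_size=8, hidden_size=feat_dim, batch_first=True)

    def forward(self, x_aug: torch.Tensor) -> torch.Tensor:
        _, (h, _) = self.lstm(x_aug)
        return h[-1]                              # last hidden state as feature f, (B, feat_dim)

# Toy usage: 8 observed frames per agent, one goal estimate per agent.
X = torch.randn(4, 8, 2)
g = torch.randn(4, 2)
f = StateEncoder()(augment_state(X, g))
print(f.shape)  # torch.Size([4, 256])
```

Running the encoder once with the common goal g* and once with each diverse goal g_n yields the common feature f* and the diverse features f_n used below.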
The total number of DDPM diffusion steps is K, and the number of DDIM diffusion steps is K_I. The trunk stage consists of K_t steps, and the branch stage consists of K_b steps.\nAlgorithm 1: Tree Sampling\nData: common feature f*, diverse features f_n, DDPM diffusion step K, DDIM diffusion step K_I, trunk step K_t, N, η\n1 Y_K ∼ N(0, I)\n2 for k = K, ..., K - K_t // Trunk stage\n3 do\n4 Y_{k-1} = (1/√α_k) (Y_k - ((1 - α_k)/√(1 - ᾱ_k)) ϵ(Y_k, k, f*))\n5 end\n6 σ = η √((1 - ᾱ_{k-1})/(1 - ᾱ_k)) √(1 - ᾱ_k/ᾱ_{k-1})\n7 K_b = (1 - K_t/K) K_I\n8 for n = 1, ..., N // Branch stage\n9 do\n10 for k = K_b, ..., 1 do\n11 z ∼ N(0, I) if k > 1, else z = 0\n12 Y_{k-1} = √(ᾱ_{k-1}/ᾱ_k) Y_k + (√(1 - ᾱ_{k-1} - σ²) - √(ᾱ_{k-1}(1 - ᾱ_k)/ᾱ_k)) ϵ(Y_k, k, f_n) + σz\n13 end\n14 end\n15 return N predictions Y_n, n = 1, ..., N\nIn the trunk stage (lines 2-5), we begin by utilizing the common feature to denoise the trajectory. The denoised result of the trunk stage, denoted as Y_{K-K_t}, is deterministic and can serve as a general initialization for further denoising conditioned on different diverse features f_n, i.e., the trunk that links to the different branches of this tree. In the subsequent branch stage (lines 8-14), different f_n are used for refinement of Y_{K-K_t} to obtain the final multi-modal predictions Y_n^0. Since the branch stage needs to be run multiple times for different modalities, we apply DDIM in this stage to increase the inference speed, while a variant of DDPM is used exclusively in the trunk stage. Experiments demonstrate that our combination scheme generates more accurate results on real-world datasets.\nIn the trunk stage, our sampling algorithm differs from DDPM by removing the noise term σ_t z, and we call this variant deterministic-DDPM (d-DDPM for short). The σ of DDIM can be obtained using the equation in line 6, and the number of diffusion steps in the branch stage K_b can be computed as in line 7." }, { "figure_ref": [], "heading": "E. Training and Loss Function", "publication_ref": [ "b11", "b30", "b21" ], "table_ref": [], "text": "To train our model in an end-to-end manner, we consider two loss terms corresponding to the two modules. The goal prediction module is trained using the Binary Cross-Entropy loss, which measures the dissimilarity between the ground-truth future trajectory heat-map H_f and the predicted heat-map H̃_f:\nL_goal = BCE(H_f, H̃_f). (2)\nThe training objective of the trajectory generation module is to predict the noise ϵ ∼ N(0, I) at each step k of the diffusion, following the regular setting of diffusion models [12], [31]:\nL_traj = E ||ϵ_θ(k, Y_k, f) - ϵ||. (3)\nThe entire network is trained end-to-end using a weighted combination of losses:\nL = L_traj + λ L_goal, (4)\nwhere λ is the diffusion loss weight.\nDuring training, the ground truth goal is fed to the trajectory generation module to decouple the two modules. We also find it beneficial to employ a gradient-stopping scheme for the goal estimation to further decouple the two modules and stabilize the training process, following [22]." }, { "figure_ref": [], "heading": "IV. RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Experiments and Datasets", "publication_ref": [ "b33", "b34", "b3", "b4", "b19", "b35", "b20", "b23", "b16", "b19" ], "table_ref": [], "text": "To evaluate the performance of our model, we conducted experiments on two real-world pedestrian datasets: the ETH/UCY dataset and the Stanford Drone Dataset (SDD). These datasets capture the movement of pedestrians from a bird's-eye view perspective, and positions are manually annotated. 
The sample rate of datasets is 2.5 FPS. Given the pedestrian position of the last t h = 8 frames, the task is to predict the trajectory of the following t f = 12 frames.\nDataset The ETH/UCY dataset consists of two subdatasets, ETH [34] and UCY [35], with a total of 5 different scenes: ETH, Hotel, Univ, Zara1 and Zara2. Following the common leave-one-scene-out strategy in previous work [4], [5], [20], we use four scenes for training and the remaining one for testing. Stanford Drone Dataset [36] is a largescale dataset that contains 11,216 unique pedestrians on the university campus. We use the same dataset split as several recent works [21], [24]: 30 scenes are used for training, and the remaining 17 scenes are used for testing.\nEvaluation Metrics Average Displacement Error (ADE) measures the average Euclidean distance between the ground truth and the predicted positions of the entire future trajectory, while Final Displacement Error (FDE) considers only the Euclidean distance between the final positions. Since our model generates multi-model predictions for stochastic future, we report the best-of-N ADE and FDE of the predictions. To keep simplicity, we use ADE N and FDE N to represent the best-of-N ADE and FDE of the predictions in the tables.\nImplementation Details All the experiments were conducted on an NVIDIA 3080Ti GPU with PyTorch implementation. As mentioned in the previous section, the goal predictor is implemented using a 5-layer U-Net as the backbone. The output dimension of the history state LSTM encoder is 256. The input and output dimension of the Transformer used for diffusion is 256, and the number of attention heads is set to 4. The architecture of the diffusion network is similar to MID [17], and readers can refer to the original paper for more details. We adopt a batch size of 32 and train the entire network end-to-end for 270 epochs. An Adam optimizer is employed with an initial learning rate of 10 -3 , and exponential annealing is applied for the learning rate. The DDPM diffusion step K is 100, the DDIM diffusion step K I is 20, and the trunk step K trunk is 20. η is selected to be 1 for ETH/UCY and 0 for SDD. The diffusion loss weight λ is 20 for ETH/UCY and 40 for SDD. Additionally, we utilize Test-Time-Sampling-Trick (TTST) in our model to improve the accuracy of goal prediction [20], and the number of TTST samples N ttst is selected to be 1000. The final number of predictions N = 20." }, { "figure_ref": [], "heading": "B. Quantitative Results", "publication_ref": [ "b32", "b22", "b23", "b19", "b16", "b16", "b17" ], "table_ref": [ "tab_1", "tab_2" ], "text": "Baseline We compare GBD-TS to different methods: Goal-GAN [33], MG-GAN [23], PECNet [24], Y-net [20], all of which are goal-based methods. MID [17] is the first to utilize the diffusion model for pedestrian trajectory prediction. Notably, in the official implementation of MID [17], state augmentation leads to future information leakage and unfair performance improvement. LED [18] is a diffusionbased pedestrian trajectory prediction method that currently achieves the best performance.\nDiscussion Results in Table I show that our model achieves comparable accuracy with MID and LED in the ETH/UCY dataset while outperforming all of the goalbased methods except Y-net, even without considering the interaction between pedestrians. Furthermore, the results of experiments on the SDD are also reported in Table II. 
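For reference, the best-of-N ADE and FDE reported in the tables can be computed as in the following minimal sketch (our own function names; a common minADE/minFDE formulation that takes the closest of the N predictions):

```python
import numpy as np

def best_of_n_ade_fde(preds: np.ndarray, gt: np.ndarray):
    """preds: (N, t_f, 2) predicted trajectories; gt: (t_f, 2) ground-truth trajectory."""
    dists = np.linalg.norm(preds - gt[None], axis=-1)  # (N, t_f) per-step Euclidean errors
    ade = dists.mean(axis=1).min()                     # best average displacement error
    fde = dists[:, -1].min()                           # best final displacement error
    return ade, fde

# Toy usage with N = 20 predictions over t_f = 12 future frames.
rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(size=(12, 2)), axis=0)
preds = gt[None] + rng.normal(scale=0.5, size=(20, 12, 2))
print(best_of_n_ade_fde(preds, gt))
```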
With the explicit use of goal information, the FDE_20 of GBD-TS is only larger than that of LED by a small margin and is smaller than those of all the other compared methods. Meanwhile, our method achieves the best ADE_20, further confirming the effectiveness of GBD-TS. Since SDD is substantially larger than the ETH/UCY dataset, we argue that our method is more suitable for larger datasets." }, { "figure_ref": [], "heading": "C. Ablation Study", "publication_ref": [], "table_ref": [ "tab_3", "tab_3", "tab_3" ], "text": "Sampling Algorithm of Diffusion. To investigate the influence of different sampling algorithms, we conducted this experiment on SDD. We report the average inference time for one agent in addition to the ADE_20 and FDE_20. Tree sampling is equivalent to DDIM sampling when K_t = 0. Since LED has neither an official implementation nor a reported inference time on SDD, we do not include it in this comparison. The results in Table III verify that, within the same GBD framework, our proposed tree sampling algorithm performs the best and has a shorter inference time than DDPM, indicating the superiority of our sampling algorithm. Compared with MID, our GBD-TS method reduces the inference time from 139 ms to around 25 ms.\nTrunk Step of Tree Sampling. We also explore the effect of the trunk step K_t. As shown in Table III, K_t does not significantly influence the inference time, since we use DDIM in the branch stage, which already has a fast sampling speed. However, the accuracy of our method decreases when K_t is either too large or too small. When K_t is too large, the small K_b makes it challenging for the branch stage to drive the roughly denoised trajectory toward various modalities, limiting the diversity of the results. When K_t is too small, the trunk stage does not provide enough guidance for the branches, which means that the branch stage using DDIM undertakes most of the denoising task. Since we observe in Table III that the prediction accuracy using DDIM alone is relatively poor, a small K_t degrades the performance.\nCombination Scheme of Tree Sampling. We compare different combinations of DDPM, d-DDPM, and DDIM in the tree sampling algorithm to verify the effectiveness of our scheme. Note that the result of the trunk stage must be deterministic; hence only d-DDPM and DDIM (η = 0) can be used in the trunk stage. The results in Table IV demonstrate that using d-DDPM in both stages achieves the lowest ADE_20 and FDE_20, probably because d-DDPM focuses on reconstruction rather than generation, i.e., accuracy rather than diversity. However, its inference time is much longer than that of our scheme, and our scheme achieves a good balance between prediction accuracy and inference speed." }, { "figure_ref": [ "fig_1" ], "heading": "D. Qualitative Results", "publication_ref": [], "table_ref": [], "text": "Fig. 3 presents some qualitative results of our GBD-TS method on the ETH/UCY and SDD datasets. The target pedestrian turns right in scene (a) and moves straight in scenes (b) and (c). From the left column, it can be observed that the goals of the final predictions (in cyan) are closer to the ground truth (in yellow) than the goal estimates obtained from the goal prediction module (in blue), indicating that the diffusion model can improve the FDE. The middle column shows the common goal g^* and the roughly denoised trajectory generated by the trunk stage, demonstrating that the common goal can guide the denoised trajectory toward the correct direction.
The right column shows the final prediction results, demonstrating the ability of GBD-TS to generate future trajectory predictions with different modalities." }, { "figure_ref": [], "heading": "V. CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "In conclusion, we introduce GBD-TS for multi-modal pedestrian trajectory prediction. GBD is a framework that combines goal prediction with the diffusion probabilistic model, and TS is a novel diffusion sampling algorithm that leverages a common feature when generating different modalities to accelerate inference. Experiments on real-world datasets demonstrate that, by applying the TS algorithm to the GBD framework, our GBD-TS method can predict multiple scene-compliant trajectories and achieves state-of-the-art performance with real-time inference speed on two real-world datasets. However, it should be noted that our method currently does not consider the interaction between agents, which could be incorporated into the model in future research to further improve the prediction performance. In addition, our method does not account for cases where the predicted goal heat-map has more than one local maximum, which may also impact its performance." }, { "figure_ref": [], "heading": "*This work was supported by fund name", "publication_ref": [], "table_ref": [], "text": "" } ]
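As a supplement to Algorithm 1 and the ablation in Tables III-IV, the sketch below illustrates how the trunk (d-DDPM) and branch (DDIM) stages fit together. It is our minimal illustration rather than the authors' implementation: `eps_model` stands for the conditional noise network ϵ(Ŷ_k, k, f), `alphas`/`alpha_bars` are a precomputed DDPM noise schedule of length K + 1, and the mapping of the K_b branch steps onto the remaining part of the schedule is an assumption.

```python
import torch

def tree_sampling(eps_model, f_common, f_branches, alphas, alpha_bars,
                  K=100, K_I=20, K_t=20, eta=0.0, shape=(1, 12, 2)):
    # Trunk stage: deterministic DDPM (d-DDPM) conditioned on the common feature.
    y = torch.randn(shape)
    for k in range(K, K - K_t, -1):
        a_k, ab_k = alphas[k], alpha_bars[k]
        eps = eps_model(y, k, f_common)
        y = (y - (1.0 - a_k) / (1.0 - ab_k).sqrt() * eps) / a_k.sqrt()

    # Branch stage: DDIM refinement conditioned on each diverse feature.
    K_b = int((1 - K_t / K) * K_I)
    # Spread the K_b branch steps over the remaining part of the DDPM schedule.
    steps = torch.linspace(K - K_t, 1, K_b).long()
    outs = []
    for f_n in f_branches:
        y_n = y.clone()
        for i, k in enumerate(steps):
            k_prev = steps[i + 1] if i + 1 < len(steps) else torch.tensor(0)
            ab_k, ab_prev = alpha_bars[k], alpha_bars[k_prev]
            sigma = eta * ((1 - ab_prev) / (1 - ab_k)).sqrt() * (1 - ab_k / ab_prev).sqrt()
            eps = eps_model(y_n, int(k), f_n)
            z = torch.randn_like(y_n) if i + 1 < len(steps) else torch.zeros_like(y_n)
            y_n = (ab_prev / ab_k).sqrt() * y_n \
                + ((1 - ab_prev - sigma ** 2).sqrt()
                   - (ab_prev * (1 - ab_k) / ab_k).sqrt()) * eps \
                + sigma * z
        outs.append(y_n)
    return outs  # N multi-modal trajectory predictions
```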
Predicting pedestrian trajectories is crucial for improving the safety and effectiveness of autonomous driving and mobile robots. However, this task is nontrivial due to the inherent stochasticity of human motion, which naturally requires the predictor to generate multi-modal predictions. Previous works have used various generative methods, such as GANs and VAEs, for pedestrian trajectory prediction. Nevertheless, these methods may suffer from problems including mode collapse and relatively low-quality results. The denoising diffusion probabilistic model (DDPM) has recently been applied to trajectory prediction due to its simple training process and powerful reconstruction ability. However, current diffusion-based methods are relatively straightforward: they do not fully leverage the input information and usually require many denoising iterations, leading to a long inference time, or an additional network for initialization. To address these challenges and promote the application of diffusion models in trajectory prediction, we propose a novel scene-aware multi-modal pedestrian trajectory prediction framework called GBD. GBD combines goal prediction with a diffusion network. First, the goal predictor produces multiple goals, and then the diffusion network generates multi-modal trajectories conditioned on these goals. Furthermore, we introduce a new diffusion sampling algorithm named tree sampling (TS), which leverages a common feature to reduce the inference time and improve the accuracy of multi-modal prediction. Experimental results demonstrate that our GBD-TS method achieves state-of-the-art performance with real-time inference speed.
GBD-TS: Goal-based Pedestrian Trajectory Prediction with Diffusion using Tree Sampling Algorithm*
[ { "figure_caption": "Fig. 2 .2Fig. 2. The architecture of our GBD framework. GBD consists of a goal prediction module and a diffusion-based trajectory generation module. The goal prediction module predicts a heat-map for goal sampling, and estimated goals are fed with history information to the trajectory generation module. Trajectory generation is based on the reverse process of the diffusion model, which denoise the trajectory from Y k to Y k-1 iteratively in each diffusion step k.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Visualization of history trajectory and future prediction of three agents in different scenes on the ETH/UCY and SDD. The observed trajectory is in red, the goal estimations obtained from the goal prediction module are in blue, the final predictions are in cyan, and the ground truth goal and trajectory are in yellow. The goals are highlighted as star points.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "QUANTITATIVE RESULTS ON ETH/UCY DATASET. BOLD AND UNDERLINED NUMBER INDICATES THE BEST AND SECOND-BEST.", "figure_data": "ETHHOTELUNIVZARA1ZARA2AVGADE20FDE20ADE20FDE20ADE20FDE20ADE20FDE20ADE20FDE20ADE20FDE20Goal-GAN [33]0.591.180.190.350.601.190.430.870.320.650.430.85MG-GAN [23]0.470.910.140.240.541.070.360.730.290.600.360.71PECNet [24]0.540.870.180.240.350.600.220.390.170.300.290.48Y-net [20]0.280.330.100.140.240.410.170.270.130.220.180.27MID [17]0.390.660.130.220.220.450.170.300.130.270.210.38LED [18]0.390.580.110.170.260.430.180.260.130.220.210.33GBD-TS (Ours)0.360.510.150.220.300.550.190.290.160.260.230.37", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "RESULTS ON STANFORD DRONE DATASET (SDD).", "figure_data": "BOLD AND UNDERLINED NUMBER INDICATES THE BEST ANDSECOND-BEST.MethodADE 20FDE 20Goal-GAN [33]12.2022.10MG-GAN [23]13.6025.80PECNet [24]9.9615.88Y-net [20]7.8511.85MID [17]7.6114.30LED [18]8.4811.66GBD-TS (Ours)7.3911.80", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "STUDY ON SAMPLING ALGORITHM OF DIFFUSION AND Kt. THE LAST ROW IS THE SETTING OF OUR FINAL REPORTED RESULTS.", "figure_data": "FrameworkSamplingKtADE 20FDE 20Inference (ms)MIDDDPM-7.6114.30∼139DDPM-7.7011.98∼65GBDDDIM TS-57.81 7.5612.13 11.94∼24 ∼24TS507.5212.15∼21GBDTS207.3911.80∼24", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "STUDY ON SCHEME OF TREE SAMPLING ALGORITHM. THE LAST ROW IS THE SETTING OF OUR FINAL REPORTED RESULTS.", "figure_data": "Trunk StageBranch StageKtK bADE 20FDE 20Inference (ms)d-DDPMDDPM20807.7011.95∼65d-DDPMd-DDPM20807.3611.63∼65DDIMDDIM4168.0212.62∼21d-DDPMDDIM20167.3911.84∼24", "figure_id": "tab_4", "figure_label": "IV", "figure_type": "table" } ]
Ge Sun; Sheng Wang; Yang Xiao; Lei Zhu; Ming Liu
[ { "authors": "O B Sezer; M U Gudelek; A M Ozbayoglu", "journal": "Applied Soft Computing", "ref_id": "b0", "title": "Financial time series forecasting with deep learning: A systematic literature review: 2005-2019", "year": "2020" }, { "authors": "P Hewage; A Behera; M Trovati", "journal": "Soft Computing", "ref_id": "b1", "title": "Temporal convolutional neural (tcn) network for an effective weather forecasting using time-series data from the local weather station", "year": "2020" }, { "authors": "S Banerjee; G K Singh", "journal": "Biomedical Signal Processing and Control", "ref_id": "b2", "title": "A new approach of ecg steganography and prediction using deep learning", "year": "2021" }, { "authors": "A Gupta; J Johnson; L Fei-Fei; S Savarese; A Alahi", "journal": "", "ref_id": "b3", "title": "Social gan: Socially acceptable trajectories with generative adversarial networks", "year": "2018" }, { "authors": "Y Chen; C Liu; X Mei; B E Shi; M Liu", "journal": "", "ref_id": "b4", "title": "Hgcngjs: Hierarchical graph convolutional network with groupwise joint sampling for trajectory prediction", "year": "2022" }, { "authors": "H Caesar; V Bankiti; A H Lang", "journal": "", "ref_id": "b5", "title": "Nuscenes: A multimodal dataset for autonomous driving", "year": "2020" }, { "authors": "S Ettinger; S Cheng; B Caine", "journal": "", "ref_id": "b6", "title": "Large scale interactive motion forecasting for autonomous driving: The waymo open motion dataset", "year": "2021" }, { "authors": "D Helbing; P Molnar", "journal": "Physical Review E", "ref_id": "b7", "title": "Social force model for pedestrian dynamics", "year": "1995" }, { "authors": "A Alahi; K Goel; V Ramanathan; A Robicquet; L Fei-Fei; S Savarese", "journal": "", "ref_id": "b8", "title": "Social lstm: Human trajectory prediction in crowded spaces", "year": "2016" }, { "authors": "A Vemula; K Muelling; J Oh", "journal": "IEEE", "ref_id": "b9", "title": "Social attention: Modeling attention in human crowds", "year": "2018" }, { "authors": "T Salzmann; B Ivanovic; P Chakravarty; M Pavone", "journal": "", "ref_id": "b10", "title": "Trajectron++: Dynamically-feasible trajectory forecasting with heterogeneous data", "year": "2020" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b11", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "P Dhariwal; A Nichol", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "A Bansal; E Borgnia; H.-M Chu", "journal": "", "ref_id": "b13", "title": "Cold diffusion: Inverting arbitrary image transforms without noise", "year": "2022" }, { "authors": "D Baranchuk; A Voynov; I Rubachev; V Khrulkov; A Babenko", "journal": "", "ref_id": "b14", "title": "Label-efficient semantic segmentation with diffusion models", "year": "2022" }, { "authors": "S Chen; P Sun; Y Song; P Luo", "journal": "", "ref_id": "b15", "title": "Diffusiondet: Diffusion model for object detection", "year": "2022" }, { "authors": "T Gu; G Chen; J Li", "journal": "", "ref_id": "b16", "title": "Stochastic trajectory prediction via motion indeterminacy diffusion", "year": "2022" }, { "authors": "W Mao; C Xu; Q Zhu; S Chen; Y Wang", "journal": "", "ref_id": "b17", "title": "Leapfrog diffusion model for stochastic trajectory prediction", "year": "2023" }, { "authors": "J Gu; C Sun; H Zhao", "journal": "", "ref_id": "b18", "title": 
"Densetnt: End-to-end trajectory prediction from dense goal sets", "year": "2021" }, { "authors": "K Mangalam; Y An; H Girase; J Malik", "journal": "", "ref_id": "b19", "title": "From goals, waypoints & paths to long term human trajectory forecasting", "year": "2021" }, { "authors": "L F Chiara; P Coscia; S Das; S Calderara; R Cucchiara; L Ballan", "journal": "", "ref_id": "b20", "title": "Goal-driven self-attentive recurrent networks for trajectory prediction", "year": "2022" }, { "authors": "G Aydemir; A K Akan; F Güney", "journal": "", "ref_id": "b21", "title": "ADAPT: Efficient multi-agent trajectory prediction with adaptation", "year": "2023" }, { "authors": "P Dendorfer; S Elflein; L Leal-Taixé", "journal": "", "ref_id": "b22", "title": "Mggan: A multi-generator model preventing out-ofdistribution samples in pedestrian trajectory prediction", "year": "2021" }, { "authors": "K Mangalam; H Girase; S Agarwal", "journal": "", "ref_id": "b23", "title": "It is not the journey but the destination: Endpoint conditioned trajectory prediction", "year": "2020" }, { "authors": "J Sohl-Dickstein; E Weiss; N Maheswaranathan; S Ganguli", "journal": "", "ref_id": "b24", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "K Rasul; C Seward; I Schuster; R Vollgraf", "journal": "", "ref_id": "b25", "title": "Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting", "year": "2021" }, { "authors": "S Gong; M Li; J Feng; Z Wu; L Kong", "journal": "", "ref_id": "b26", "title": "Diffuseq: Sequence to sequence text generation with diffusion models", "year": "2022" }, { "authors": "M Janner; Y Du; J Tenenbaum; S Levine", "journal": "", "ref_id": "b27", "title": "Planning with diffusion for flexible behavior synthesis", "year": "2022" }, { "authors": "D Rempe; Z Luo; X Bin Peng", "journal": "", "ref_id": "b28", "title": "Trace and pace: Controllable pedestrian animation via guided trajectory diffusion", "year": "2023" }, { "authors": "H Cao; C Tan; Z Gao", "journal": "", "ref_id": "b29", "title": "A survey on generative diffusion model", "year": "2022" }, { "authors": "J Song; C Meng; S Ermon", "journal": "", "ref_id": "b30", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "T Salimans; J Ho", "journal": "", "ref_id": "b31", "title": "Progressive distillation for fast sampling of diffusion models", "year": "2022" }, { "authors": "P Dendorfer; A Ošep; L Leal-Taixé", "journal": "", "ref_id": "b32", "title": "Goal-gan: Multimodal trajectory prediction based on goal position estimation", "year": "2020" }, { "authors": "S Pellegrini; A Ess; K Schindler; L Van Gool", "journal": "IEEE", "ref_id": "b33", "title": "You'll never walk alone: Modeling social behavior for multi-target tracking", "year": "2009" }, { "authors": "A Lerner; Y Chrysanthou; D Lischinski", "journal": "Computer graphics forum", "ref_id": "b34", "title": "Crowds by example", "year": "2007" }, { "authors": "A Robicquet; A Sadeghian; A Alahi; S Savarese", "journal": "", "ref_id": "b35", "title": "Learning social etiquette: Human trajectory understanding in crowded scenes", "year": "2016" } ]
[ { "formula_coordinates": [ 3, 349.7, 396.72, 208.3, 39.69 ], "formula_id": "formula_0", "formula_text": "V = d t X, A = d t V D = {x t -g|t = -t h + 1, -t h + 2, ..., 0} X = concat(D, X, V, A)(1)" }, { "formula_coordinates": [ 4, 52.01, 55.85, 229.06, 263.05 ], "formula_id": "formula_1", "formula_text": "Algorithm 1: Tree Sampling Data: common feature f * , diverse feature f n , DDPM diffusion step K, DDIM diffusion step K I , trunk step K t , N , η 1 x K ∼ N (0, I) 2 for k = K, ..., K -K t // Trunk stage 3 do 4 Y k-1 = 1 √ α k ( Y k -1-α k √ 1-ᾱk ϵ( Y k , k, f * )) 5 end 6 σ = η 1-ᾱk-1 1-ᾱk (1 -ᾱk ᾱk-1 ) 7 K b = (1 -Kt K )K I 8 for n = 1, ...N // Branch stage 9 do 10 for k = K b , ..., 1 do 11 z ∼ N (0, I) if t > 1, else z = 0 12 Y k-1 = ᾱk-1 ᾱk Y k + ( 1 -ᾱk-1 -σ 2 - ᾱk-1 1-ᾱk ᾱk )ϵ( Y k , k, f n ) + σz 13 end 14 end 15 return N predictions Y n , n = 1, ..., N" }, { "formula_coordinates": [ 4, 127.02, 724.87, 171.78, 9.91 ], "formula_id": "formula_2", "formula_text": "L goal = BCE(H f , H f )(2)" }, { "formula_coordinates": [ 4, 377.4, 101.48, 180.6, 11.72 ], "formula_id": "formula_3", "formula_text": "L traj = E||ϵ θ (k, Y k , f) -ϵ||(3)" }, { "formula_coordinates": [ 4, 394.21, 153.17, 159.92, 9.65 ], "formula_id": "formula_4", "formula_text": "L = L traj + λL goal (4" }, { "formula_coordinates": [ 4, 554.13, 153.59, 3.87, 8.64 ], "formula_id": "formula_5", "formula_text": ")" } ]
10.47330/DCIO.2022.ACSL3567
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b14", "b17", "b21", "b11" ], "table_ref": [], "text": "The design of building facades significantly impacts building energy performance and human comfort. Although maximizing daylight in buildings is desirable, if not done correctly, it can lead to overheating of the space and occupants' visual discomfort and dissatisfaction. Several studies have investigated the influence of facade design on daylight, user satisfaction, and visual perception. These studies include light measurements and their impact on user preferences [Omidfar et al., 2015], daylight-driven interest using rendered scenes in Virtual Reality (VR) [Rockcastle et al., 2017], view accessibility [Turan et al., 2019], and user-defined visual preferences [Li and Samuelson, 2020]. A successful facade design ensures adequate light levels for indoor visual tasks and minimizes potential glare problems. As luminance is a key metric related to how the human eyes perceive brightness, luminance measurement becomes the critical variable in space design. Currently, computational simulation and field measurement are the two primary approaches for luminance analysis that can provide design suggestions for space layouts and window design. However, the results of conventional luminance analysis lack a three-dimensional representation that can be used to accurately trace and adjust the overlit areas. The concept of this study is to combine luminance simulation with view analysis and spatial projection to guide the facade design process." }, { "figure_ref": [], "heading": "LITERATURE REVIEW", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Luminance Study and Field Measurement", "publication_ref": [ "b8", "b10", "b24", "b5", "b0", "b25", "b22", "b4" ], "table_ref": [], "text": "To evaluate light distribution in the real world, high dynamic range (HDR) photography is widely used in field measurement to capture the wide range of pixel values. Previous luminance studies used HDR images to explore diverse topics such as user's visual perception under direct sunlight [Jain et al., 2022], the impacts of different interior design elements (window height, seating position and location) on visual comfort [Kong et al., 2018], and development of glare metrics including Daylight Glare Probability (DGP) [Wienold and Christoffersen, 2006] and Unified Glare Probability (UGP) [Hirning et al., 2017]. When measuring the luminance distribution in the open workplace [Alicia and Simon, 2017] and private office room [Wymelenberg and Inanici, 2017], researchers computed glare metrics for different scenes and investigated the correlation between light levels and occupants' subjective responses. The glare results describe the probability of glare occurrence; however, field measurements are limited by the number of captured HDR images and respondents.\nVisual comfort studies typically focus on the overlit areas within the occupants' field of view (FOV). When Van Den Wymelenberg et al. studied luminance distribution and occupant preferences, each captured scene was divided into several areas based on luminance threshold, solid view angle, and task surface [Van Den Wymelenberg et al., 2010]. Another study decomposed the fisheye imagery into a 16 by 16 grid matrix, allowing users to label the locations of glare sources in the surveys [Hirning et al., 2014]. These two studies divided the entire fisheye image into small areas for further luminance analysis. 
However, the results were based on users' annotations on the images rather than on computationally filtering pixel values. In addition, a 180-degree fisheye perspective contains geometric distortion that deforms the objects in the captured scene and loses graphic fidelity near the edge region when the image distortion is removed." }, { "figure_ref": [], "heading": "Computational Simulation", "publication_ref": [ "b1", "b13", "b9", "b3" ], "table_ref": [], "text": "A computational approach allows batch simulation with high flexibility in view settings and time selection. Several experiments displayed 180-degree FOV images in VR headsets to study subjective visual perception in daylit spaces [Chamilothori et al., 2019] and to evaluate the effects of window size on subjective impressions [Moscoso et al., 2021]. A fisheye rendering with a 180-degree FOV shows the full scope of the interior scene perceived by the human eyes. Using a digital office model, Jakubiec and Reinhart rendered 360-degree panoramic scenes to explore a single occupant's flexible view directions through point-in-time and annual luminance simulations [Jakubiec and Reinhart, 2012]. Later, Hashemloo et al. rendered indoor luminance images and used the overlit areas to design shading strategies for different floor orientations, focusing only on point-in-time simulation results [Hashemloo et al., 2016].\nToo much light entering through windows can cause excessive contrast and glare problems. As glare is view-dependent, it can be difficult to predict the glare experienced by multiple users at different locations. The existing challenges in luminance analysis include:\n• multiple view directions due to different computer screens;\n• multiple view heights due to adjustable standing desks;\n• occupants' different luminance thresholds due to individual preferences.\nThese challenges necessitate a computational method that integrates multiple design parameters and synthesizes the results from different simulations." }, { "figure_ref": [], "heading": "Objectives", "publication_ref": [], "table_ref": [], "text": "This study considers various view directions, positions, and luminance thresholds in a typical open workplace. The workflow illustrates how multiple inputs (such as view direction, view height, and luminance threshold) can be used to develop facade patterns. Specifically, the objectives of this study are:\n• to develop a geometry-based method to project a 2D fisheye image into 3D space;\n• to perform batch luminance simulations in both point-in-time and annual climate-based modes;\n• to use a luminance threshold to outline the facade areas that emit excessive luminance from different view positions and directions." }, { "figure_ref": [], "heading": "METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Building Model", "publication_ref": [], "table_ref": [], "text": "A 182.0 m² office room (Fig. 2) was selected as the base case, with dimensions of 10.0 m (width) by 18.2 m (length) by 3.7 m (height)." }, { "figure_ref": [ "fig_2" ], "heading": "Simulation Settings", "publication_ref": [ "b23", "b12", "b19", "b3", "b15" ], "table_ref": [], "text": "The simulation analyses were conducted in Climate Studio [Solemma LLC, 2021], which is built on Radiance, a physics-based rendering system that uses a light-backwards ray-tracing approach [Ward, 1994]. For architectural lighting simulation, Radiance has been validated for estimating interior light levels under different sky conditions [Mardaljevic, 1995]. As shown in Fig.
3, a vector is established by two points: one at the center of the computer screen and one at eye level. This vector describes the view direction when the occupant sits in front of the desk and looks at one of the computer monitors. In this study, the FOV is fixed at 180 degrees in both the horizontal and vertical directions.\nPrevious studies on visual comfort used different luminance thresholds. One simulation study used 2000 cd/m² as the absolute luminance threshold in an office environment [Van Den Wymelenberg et al., 2010]. In another field study, a 3200 cd/m² luminance value was observed as the upper limit when users adjusted the blinds to achieve visual comfort [Sutter et al., 2006]. A recent simulation study of luminance projection in an office space defined 3000 cd/m² as the threshold for discomfort glare [Hashemloo et al., 2016].\nThis study used 3000 cd/m² as the upper threshold for generating the initial rendered images. In these greyscale fisheye images, a pixel value of 0 corresponds to 0 cd/m² and a pixel value of 255 corresponds to 3000 cd/m². When testing the impacts of different thresholds on facade design, the target luminance value is approximated by filtering pixel values proportionally. As previously mentioned, several thresholds have been used in the literature, but the most common one is 2000 cd/m² [Pierson et al., 2018]. Therefore, in this study, pixels with a luminance value higher than 2000 cd/m² are treated as potential glare sources." }, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "Image Processing", "publication_ref": [ "b20", "b6" ], "table_ref": [], "text": "A digital image obtained from a luminance simulation typically contains noise, and white noise causes errors when filtering high pixel values. Bilateral Filtering [Tomasi and Manduchi, 1998] was developed for smoothing images based on a non-linear combination of surrounding pixel values; the method removes noise while preserving sharp edges. In this study, the Bilateral Filter function (sigmaColor = 75, sigmaSpace = 75, pixel neighborhood diameter = 15) from the OpenCV library [Intel Corporation, 2021] was used to smooth the original images, and the effect of Bilateral Filtering is presented in Fig. 4.\nThe existing building standards and metrics use time frequency to highlight the most problematic areas or times when glare occurs. To highlight the facade areas that allow too much light (over 2000 cd/m²) over an extended period, a frequency test was conducted. Although computational approaches can simulate annual daylight performance over all 8760 hours, few of the previous studies selected only the daylight hours [Inanici, 2021]. Based on the batch simulation results, all rendered images are binarized by the target luminance threshold (2000 cd/m²). The sum of each pixel value across the binarized images is divided by the number of superimposed images (10 images) so that each pixel value remains within the range 0-255. In this case, the pixel value can be regarded as a time frequency (Fig. 5). Any pixel that is over the luminance threshold for more than the listed percentage of time is colored in red. As can be seen in Fig. 5, when the time frequency is set to 5%, most of the pixels are selected. As the time frequency increases, fewer pixels are colored. The 95th-percentile result highlights only part of the glazing and ground area, while the 50th-percentile result also identifies some high luminance values on the ceiling and floor. This study chose the 50th percentile to examine the target luminance (2000 cd/m²)." }, { "figure_ref": [ "fig_6", "fig_7" ], "heading": "Luminance Mapping Workflow", "publication_ref": [], "table_ref": [], "text": "As shown in Fig. 1 (a), this study first rendered a fisheye image from an occupant's position. Then, a target luminance threshold (2000 cd/m²) was used to divide all the pixels into white (above threshold) and black (below threshold). Image pixels were converted into 3D spherical coordinates. As illustrated in Fig.
6, each point p on the hemisphere can be expressed as:\np = (x, y, z) = (r sin φ cos θ, r sin φ sin θ, r cos φ), (1)\nwhere r is the sphere radius, φ is the polar (zenith) angle, and θ is the azimuthal angle. After the view vector is defined and the pixels are projected into 3D space, the rendered fisheye image (Fig. 1 (a)) is mapped onto the hemispherical surface (Fig. 1 (b)). On the hemisphere, the target pixels are projected into 3D space (Fig. 1 (c)), and the projected geometries outline the facade areas that emit high luminance values for the given view position (Fig. 1 (d)). A higher image resolution results in a longer computation time. To increase computational efficiency, when converting the image coordinates, the image resolution is reduced from 400 (height) by 400 (width) to 80 (height) by 80 (width); this resolution maintains the edges of the pixel areas and achieves an efficient computation time (5.1 seconds) for a single image. Fig. 7 shows four image resolutions and the corresponding computation times on an Intel(R) Core(TM) i7-8650U CPU." }, { "figure_ref": [], "heading": "RESULT", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Single View in Point-in-Time Simulation", "publication_ref": [], "table_ref": [], "text": "Current climate data and simulation software do not support small time intervals (such as 1 minute) in daylight calculations. To address this shortcoming, the point-in-time simulation was set to run from 1:00-2:00 PM on March 21st. The developed simulation algorithm divides the one-hour simulation into 10 intervals. Under this setting, the scene was simulated every 6 minutes." }, { "figure_ref": [ "fig_8" ], "heading": "Single View in Different Months", "publication_ref": [], "table_ref": [], "text": "The sun's position and direction change over time, causing the indoor luminance distribution to vary across months. This section focuses on the results of the luminance simulations for each month. The simulation period was from 9:00 AM to 6:00 PM at one-hour intervals. The total number of simulated images was therefore 10 (hours) multiplied by the number of days in each month. Following the image processing workflow (Fig. 8), Fig. 9 illustrates the results from View01 for the 12 months using 2000 cd/m² as the luminance threshold and highlights the pixels that receive over 2000 cd/m² for more than 50% of the time.\nThe results indicate that larger overlit areas are colored in red and outlined on the glazing surface between March and September compared to the other months. Because the surrounding buildings are not modeled in this study, the ground surface reflects too much light between February and October. During the winter, the desk surface receives excessive light due to the sun's lower altitude. The floor also shows overlit areas in February, March, April, September, and October. Fig. 10 illustrates the projected pixels on the facade surface in different months. The results between February and October show that the glazing area from the middle to the left of the room consistently allows extra light to enter the workplace, while the overlit areas are smaller in January, November, and December." }, { "figure_ref": [ "fig_10", "fig_10", "fig_1", "fig_10" ], "heading": "Multiple Views in Different Months", "publication_ref": [], "table_ref": [], "text": "View01, View02, and View03 used 2000 cd/m² and 50% as the target luminance and time frequency, respectively. The luminance simulation ran for the 31 days of March from 9:00 AM to 6:00 PM at 10 intervals.
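Referring back to Eq. (1) and the mapping workflow of Fig. 1, the pixel-to-hemisphere conversion can be sketched as below. This is our illustration only: it assumes an equal-angle (equidistant) fisheye projection, which is not necessarily the exact Grasshopper definition used in the study.

```python
import numpy as np

def fisheye_pixel_to_hemisphere(i, j, size=80, r=1.0):
    """Map pixel (row i, col j) of a size x size 180-degree fisheye image
    to a point on a hemisphere of radius r centred at the view position.

    Assumes an equal-angle fisheye: the distance of a pixel from the image
    centre is proportional to the zenith angle phi.
    """
    # Normalised image coordinates in [-1, 1]; the image centre is the view direction
    u = (j + 0.5) / size * 2.0 - 1.0
    v = (i + 0.5) / size * 2.0 - 1.0
    rho = np.hypot(u, v)
    if rho > 1.0:
        return None                      # outside the fisheye circle
    phi = rho * np.pi / 2.0              # zenith angle, 0..90 degrees
    theta = np.arctan2(v, u)             # azimuthal angle
    # Eq. (1): spherical to Cartesian coordinates in the view-aligned frame
    return np.array([r * np.sin(phi) * np.cos(theta),
                     r * np.sin(phi) * np.sin(theta),
                     r * np.cos(phi)])
```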
In total, 310 rendered images were collected from each view position. Fig. 11 (b) shows the results from three views (Fig. 11 (a)). In Fig. 11 (c), View03 shows the smallest overlit areas among the three view positions.\nDue to the different view directions, although view01 and view02 outlined different areas on the facade, the middle area of the facade shows consistent overlit areas. Fig. 11 (d) shows the facade surface when three projected facade patterns overlapped. The three magenta colors illustrate the overlit areas by the number of views (one, two, and three), and the overlapped regions represent the upper middle areas that regularly allow too much light to enter the workplace, potentially causing glare.\nTo analyze facade patterns in different seasons, March, June, and December were selected for luminance simulation. The results for June and December are shown in Fig. 12, following the same settings as Fig. 11. Similar to March, the upper middle glazing area emits excessive light from the three views in June. For December, the coverage of outlined areas was smaller than in March and June, and the lower part of the facade area was excluded." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Occupant-Centric Luminance Analysis", "publication_ref": [], "table_ref": [], "text": "This study proposes an occupant-centric view analysis approach that uses fisheye images to project pixels with high luminance from the occupant's perspective to the facade to improve daylight performance.\nThe workflow provides a technique to transform two-dimensional fisheye images into three-dimensional surfaces. It also allows for analyzing a large number of rendered scenes under different parameters and provides an image processing workflow that highlights the areas of the glazing that emit too much direct sunlight into the workplace.\nSuch an approach is effective in studying instantaneous luminance variations over a long period of time. While the results of this study only focused on pixels that received above 2000 cd/m 2 over 50% of the daytime hours, the workflow could easily implement other thresholds for further analysis." }, { "figure_ref": [], "heading": "Applications in Design", "publication_ref": [], "table_ref": [], "text": "Although this study only tested three view positions in an open office plan, future studies could include more occupants, view settings, and simulation parameters. By establishing a view vector for different occupants, the computational method can be applied to other building typologies, including hospitals, schools, and residences. The workflow could be used to study indoor scenarios in different climate zones, weather conditions, and time periods. The 3D modeling method provides a new approach to guide a high-performance facade design, and the outlined facade areas can be utilized to create novel facade patterns, including internal and external shading devices. " }, { "figure_ref": [ "fig_2", "fig_4", "fig_5" ], "heading": "IMAGE PROCESSING ALGORITHM", "publication_ref": [], "table_ref": [], "text": "The image processing approach is written in Python with opensource libraries. The inputs are a large number of rendered images (Fig. 13). Fig. 14 and Fig. 15 show the intermediate results when running the code. 
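A minimal sketch of such a pipeline is given below (our illustration, not the paper's script). It assumes 8-bit greyscale renderings scaled so that a pixel value of 255 corresponds to 3000 cd/m², applies the bilateral-filter settings quoted earlier (d = 15, sigmaColor = 75, sigmaSpace = 75), binarizes at 2000 cd/m², and keeps the pixels that exceed the threshold in at least 50% of the images; the folder and file names are hypothetical.

```python
import glob
import cv2
import numpy as np

LUM_MAX = 3000.0      # cd/m2 mapped to pixel value 255
THRESHOLD = 2000.0    # target luminance threshold in cd/m2
FREQUENCY = 0.5       # keep pixels over the threshold >= 50% of the time

pixel_threshold = int(round(THRESHOLD / LUM_MAX * 255))   # ~170

masks = []
for path in sorted(glob.glob("renderings/*.png")):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Bilateral filtering: remove white noise while preserving sharp edges
    smooth = cv2.bilateralFilter(img, 15, 75, 75)
    # Binarize: white (255) above the luminance threshold, black (0) below
    _, mask = cv2.threshold(smooth, pixel_threshold, 255, cv2.THRESH_BINARY)
    masks.append(mask > 0)

# Time frequency: fraction of images in which each pixel exceeds the threshold
freq = np.mean(np.stack(masks, axis=0), axis=0)
overlit = freq >= FREQUENCY          # pixels to be colored red and projected
cv2.imwrite("overlit_mask.png", (overlit * 255).astype(np.uint8))
```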
" }, { "figure_ref": [], "heading": "input # inport python packages", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_13", "fig_8" ], "heading": "LUMINANCE MAPPING ALGORITHM", "publication_ref": [], "table_ref": [], "text": "The workflow is built on Grasshopper platform in Rhino. The inputs include two points and one image. Two points are the view start point and view end point for rendering the fisheye image (Fig. 17). As shown in Fig. 18, the input image is vertically flipped from the original binary image. " }, { "figure_ref": [], "heading": "Input", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_15" ], "heading": "Output", "publication_ref": [], "table_ref": [], "text": "The output from the Grasshopper workflow (Fig. 19) illustrates the outlined facade areas (Fig. 20). " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "© 2020-2022 PROCEEDING DOI: https://doi.org/10.47330/DCIO.2022.FLXI8620" } ]
Fig. 1. Rendering and Luminance Mapping: (a) fisheye rendering in 180-degree Field of View (FOV); (b) pixel mapping from the image to spherical coordinates; (c) projection of high luminance values from the hemisphere onto the building facade; (d) facade areas outlined by the high luminance values from one view position.
View-Based Luminance Mapping in Open Workplace
[ { "figure_caption": "2m (length) by 3.7m (height). The model is provided by Climate Studio software [Solemma LLC, 2021] built in Rhino [Robert McNeel & Associates, 2021] and Grasshopper [Grasshopper -algorithmic modeling for Rhino, 2022]. The original model was adjusted to include two computer monitors at each desk, and the desks were set to vary in height. The South elevation was selected for luminance mapping.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Building Model Floor Plan and South Elevation Three view positions were selected in the model. To test different view directions in the workplace, each view position is set to focus on a different computer monitor. View01 is the closest to the South facade with a desk height of 0.7m, focusing on the right computer monitor. View02 has a desk height of 0.7m and is set to focus on the left computer monitor. View03 has a desk height of 1.0 m and focuses on the right computer monitor. Figure 2 describes the three occupants in 3 different office locations.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. View Setting (View01) Previous studies on visual comfort used different luminance thresholds. One simulation study used 2000 cd/m 2 as the absolute luminance threshold in the office environment [Van Den Wymelenberget al., 2010]. In another field study, a 3200 cd/m 2 luminance value was observed as the upper limit when users adjusted the blinds to achieve visual comfort[Sutter et al., 2006]. A recent simulation study of luminance projection in an office space defined 3000 cd/m² as the threshold for discomfort glare[Hashemloo et al., 2016].", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "was developed for smoothing images based on the nonlinear combination of surrounding pixel values. The method can remove the noise in the image and preserve the sharp edges. In this study, the Bilateral Filter function (sigmaColor = 75, sigmaSpace = 75, diameter of each pixel neighborhood = 15) from the OpenCV library [Intel Corporation, 2021] was used to smooth the original images, and the effect of Bilateral Filtering is presented in Fig. 4.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Left: Original Rendered Image, Right: Image Smoothed by Bilateral Filtering", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Alternatives for Time Frequency", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Point Coordinates on Hemisphere", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. Image Resolution and Computation Time", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. Image Processing Workflow As shown in Fig. 8, all rendered images were binarized using 2000 cd/m 2 as the benchmark. The luminance values above 2000 cd/m 2 were colored in white, and any pixels below 2000 cd/m 2 were colored in black. The result highlighted the areas where luminance above 2000cd/m 2 occurred over 50% percent of the selected daytime hours. 
The red regions reflect the target areas and the projected red pixels outline the corresponding facade areas.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9. Single View in 12 Months", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 11 .11Fig. 11. Multiple Views in Mar.", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 13. simulated luminance images with noises", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 16 .16Fig. 16. Left: Binary Image, Right: Colored Final Image", "figure_data": "", "figure_id": "fig_12", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Fig. 17 .17Fig. 17. Two Points for View Vector", "figure_data": "", "figure_id": "fig_13", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "FigFig. 19. Grasshopper Workflow", "figure_data": "", "figure_id": "fig_14", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 20 .20Fig. 20. Outlined Facade Areas", "figure_data": "", "figure_id": "fig_15", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "County.AP.725205_TMY3.epw). The simulation parameters are listed in TABLE I. Window blinds and electrical lighting devices were not included in the model. Detailed material properties are listed in the Appendix (TABLE II).", "figure_data": ". Lighting simulations used the CIE clear skyconditionandlocalweatherfile(USA_PA_Pittsburgh-Allegheny.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": ".SIMULATION PARAMETERS AND ASSIGNED VALUESambientray weight (lw)samples perimagebounces (ab)pixeldimensions80.01100400 (Height)400 (Width)", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "OBJECTS AND MATERIAL PROPERTY LM83: IES standard LM-83-12 [ Illuminating Engineering Society of North America (IESNA),2012]", "figure_data": "Opaque MaterialMaterial NameRed Reflectance Green ReflectanceBlue ReflectanceSpecularRoughnessMullionWall LM830.50.50.500CeilingCeiling LM830.70.70.700FloorFloor LM830.20.20.200WallWall LM830.50.50.500FurnitureFurniture LM830.50.50.500Glazed materialMaterial NameTavisRvis.frontRvis.backU-valueSHGCGlazingClear-Clear, Double Layer0.7740.1490.1502.69W/m2K0.703", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" } ]
St Guanzhou Ji; Nd Tingsong Ou; Azadeh O Rd; Sawyer
[ { "authors": "Alicia C Simon; S ", "journal": "CIE -International Commission on Illumination", "ref_id": "b0", "title": "Evaluating Visual Comfort in Open-Plan Offices : Exploration of Simple Methods for Evaluation and Prediction", "year": "2017" }, { "authors": "K Chamilothori; J Wienold; M Andersen", "journal": "LEUKOS -Journal of Illuminating Engineering Society of North America", "ref_id": "b1", "title": "Adequacy of Immersive Virtual Reality for the Perception of Daylit Spaces: Comparison of Real and Virtual Environments", "year": "2019" }, { "authors": "", "journal": "", "ref_id": "b2", "title": "Grasshopper -algorithmic modeling for Rhino", "year": "2021" }, { "authors": "A Hashemloo; M Inanici; C Meek", "journal": "Journal of Building Performance Simulation", "ref_id": "b3", "title": "GlareShade: a visual comfortbased approach to occupant-centric shading systems", "year": "2016" }, { "authors": "M B Hirning; G L Isoardi; I Cowling", "journal": "Energy and Buildings", "ref_id": "b4", "title": "Discomfort glare in open plan green buildings", "year": "2014" }, { "authors": "M B Hirning; G L Isoardi; S Coyne", "journal": "Building and Environment", "ref_id": "b5", "title": "Discomfort glare assessment and prevention for daylight applications in office environments", "year": "2017" }, { "authors": "M Inanici", "journal": "Springer Nature Switzerland AG", "ref_id": "b6", "title": "Research Methods in Daylighting and Electric Lighting", "year": "2021" }, { "authors": "", "journal": "Intel Corporation", "ref_id": "b7", "title": "OpenCV", "year": "2021-07-26" }, { "authors": "S Jain; C Karmann; J Wienold", "journal": "Energy and Buildings", "ref_id": "b8", "title": "Behind electrochromic glazing: Assessing user's perception of glare from the sun in a controlled environment", "year": "2022" }, { "authors": "J A Jakubiec; C F Reinhart", "journal": "Lighting Research and Technology", "ref_id": "b9", "title": "The 'adaptive zone'-A concept for assessing discomfort glare throughout daylit spaces", "year": "2012" }, { "authors": "Z Kong; D M Utzinger; K Freihoefer", "journal": "Building and Environment", "ref_id": "b10", "title": "The impact of interior design on visual discomfort reduction: A field study integrating lighting environments with POE survey", "year": "2018-04" }, { "authors": "Li W Samuelson; H ", "journal": "Elsevier", "ref_id": "b11", "title": "A new method for visualizing and evaluating views in architectural design", "year": "2020" }, { "authors": "J Mardaljevic", "journal": "Lighting Research & Technology", "ref_id": "b12", "title": "Validation of a lighting simulation program under real sky conditions", "year": "1995" }, { "authors": "C Moscoso; K Chamilothori; J Wienold", "journal": "LEUKOS -Journal of Illuminating Engineering Society of North America", "ref_id": "b13", "title": "Window Size Effects on Subjective Impressions of Daylit Spaces: Indoor Studies at High Latitudes Using Virtual Reality", "year": "2021" }, { "authors": "A Omidfar; M Niermann; L N Groat", "journal": "", "ref_id": "b14", "title": "The use of environmental aesthetics in subjective evaluation of daylight quality in office buildings", "year": "2015" }, { "authors": "C Pierson; J Wienold; M Bodart", "journal": "Buildings", "ref_id": "b15", "title": "Daylight discomfort glare evaluation with evalglare: Influence of parameters and methods on the accuracy of discomfort glare prediction", "year": "2018" }, { "authors": "Robert Mcneel; & Associates", "journal": "", "ref_id": "b16", "title": "Rhino 6 for 
Windows and Mac", "year": "2020" }, { "authors": "S Rockcastle; K Chamilothori; M Andersen", "journal": "", "ref_id": "b17", "title": "An Experiment in Virtual Reality to Measure Daylight-Driven Interest in Rendered Architectural Scenes", "year": "2017" }, { "authors": "", "journal": "Solemma LLC", "ref_id": "b18", "title": "Climate Studio", "year": "2020" }, { "authors": "Y Sutter; D Dumortier; M Fontoynont", "journal": "Energy and Buildings", "ref_id": "b19", "title": "The use of shading systems in VDU task offices: A pilot study", "year": "2006" }, { "authors": "C Tomasi; R Manduchi", "journal": "", "ref_id": "b20", "title": "Bilateral filtering for gray and color images", "year": "1998" }, { "authors": "I Turan; C Reinhart; M Kocher", "journal": "", "ref_id": "b21", "title": "Evaluating spatially-distributed views in open plan work spaces", "year": "2019" }, { "authors": "K Van Den Wymelenberg; M Inanici; Johnson P ", "journal": "LEUKOS -Journal of Illuminating Engineering Society of North America", "ref_id": "b22", "title": "The effect of luminance distribution patterns on occupant preference in a daylit office environment", "year": "2010" }, { "authors": "G J Ward", "journal": "SIGGRAPH", "ref_id": "b23", "title": "The RADIANCE lighting simulation and rendering system", "year": "1994" }, { "authors": "J Wienold; J Christoffersen", "journal": "Energy and Buildings", "ref_id": "b24", "title": "Evaluation methods and development of a new glare prediction model for daylight environments with the use of CCD cameras", "year": "2006" }, { "authors": "K Wymelenberg; Van Den; M Inanici", "journal": "LEUKOS", "ref_id": "b25", "title": "A Critical Investigation of Common Lighting Design Metrics for Predicting Human Visual Comfort in Offices with Daylight", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 367.5, 479.47, 136.34, 28.47 ], "formula_id": "formula_0", "formula_text": "𝑝 𝑥 𝑦 𝑧 𝑟 𝑠𝑖𝑛 𝜑 𝑐𝑜𝑠 𝜃 𝑟 𝑠𝑖𝑛 𝜑 𝑠𝑖𝑛 𝜃 𝑟𝑐𝑜𝑠𝜑 (1)" } ]
10.1103/PhysRevD.107.063523
2023-11-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b7", "b10", "b15", "b2", "b3", "b17", "b10", "b13", "b1", "b5", "b11", "b7", "b0", "b14", "b12", "b4", "b2", "b16", "b18", "b17" ], "table_ref": [], "text": "Differential equations are crucial in scientific modeling, traditionally solved by methods such as Runge-Kutta and finite element analysis. Recently, Physics-Informed Neural Networks (PINNs) have shown promise in solving ODEs and PDEs by leveraging neural network capabilities (see Hao et al. [2023], Karniadakis et al. [2021], Lagaris et al. [1998], Nascimento et al. [2020]and reference within). However, computational cost remains a barrier, as PINNs cannot be generalized across different instances of similar equation types Krishnapriyan et al. [2021], Fesser et al. [2023], a workaround is to train for multiple instances of same equation Flamant et al. [2020]. To address this, we propose a novel hybrid approach that combines the perturbation method with one-shot transfer learning on PINNs Protopapas [2021] to efficiently and accurately solve non-linear ODEs of same type.\nRelated Work Introduced in 1998, neural networks for solving differential equations paved the way for today's Physics-Informed Neural Networks (PINNs) Lagaris et al. [1998], NeuroDiffEq and DeepXDE Lu [2019], Chen [2020] are two popular software programs that employ PINNs. PINNs have gained popularity and find applications in many domains today, a review of current state-of-the-art applications of PINNs can be found in Hao et al. [2023], Lawal et al. [2022] and Karniadakis et al. [2021]. PINNs have shown tremendous success in solving complex problems whose analytical solutions don't exist Chantada et al. [2023], however they perform poorly with generalizing solutions. Work related to adding physical constraints in NN structure Mattheakis et al. [2020], bounding errors on PINNs Liu et al. [2023], characterizing and mitigating failure modes Krishnapriyan et al. [2021] and improving uncertainty quantification on Bayesian PINNs Graf et al. [2022] has been important in increasing reliability of PINNs. PINNs handle interpolation problems very well, however face trouble with extrapolation. Transfer learning (TL) methods may be a potential solution, a study of effectiveness of TL can be found in this work Fesser et al. [2023] and an application can be found here Pellegrin et al. [2022]. In [Pellegrin et al., 2022, Zou andKarniadakis, 2023], a multi-headed neural network was used to learn the \"latent space\" of a class of differential equations. Researchers have developed a transfer learning approach to solve linear differential equations in \"one-shot\" Protopapas [2021]. Extending this to non-linear equations is not possible without modifications since non-linear equations have a non-quadratic loss function which cannot be optimized analytically in one-shot. The head weights have to be learned through an iterative process, such as gradient descent. Our study fills this gap in the literature by extending one-shot transfer learning to non-linear ODEs using perturbation method.\nWe are interested in solving non-linear ODEs with a single polynomial non-linear term of the following form:\nDx + ϵx q = f (t). (1\n)\nwhere D is a differential operator of the form D = m j=0 g j d j\ndt j and the RHS is a time-dependent forcing function. Note that we define g 0\nd 0 dt 0 x = g 0 x.\nThe equation is also subject to a boundary conditions: x(t = 0) = x * and d j dt j x(t = 0) = x (j) * for j = 1, 2, ..., m -1. 
Our framework employs perturbation and one-shot transfer learning to solve a specific class of non-linear ODEs. We plan to extend this to handle systems of ODEs and PDEs." }, { "figure_ref": [], "heading": "Perturbation Method", "publication_ref": [ "b6" ], "table_ref": [], "text": "As mentioned above, non-linear ordinary differential equations (ODEs) do not admit an analytical minimizer of their loss functions with respect to the weights of the linear output layer, which is necessary for one-shot transfer learning.\nIn order to remove the non-linearity in the equation, we approximate the non-linear term ϵx^q through a perturbation expansion J. Kevorkian [2010]. Assume x = \sum_{i=0}^{\infty} ϵ^i x_i, where the x_i are unknown functions of t. We approximate x with only p terms as x ≈ \sum_{i=0}^{p} ϵ^i x_i. It is important to note that this truncated expansion of x is only meaningful when the magnitude of ϵ is less than 1. Furthermore, the p-term approximation is more precise when the magnitude of ϵ is smaller. Fortunately, in most cases, we can rescale the equation to reduce the magnitude of ϵ. When we substitute the p-term approximation into (1) and expand using the multinomial theorem, we obtain:\n\sum_{i=0}^{p} ϵ^i D x_i + ϵ \sum_{k_0+k_1+\dots+k_p=q} \frac{q!}{k_0! k_1! \cdots k_p!} \, ϵ^{\sum_{i=0}^{p} i k_i} \prod_{i=0}^{p} x_i^{k_i} = f, (2)\nThe LHS of equation (2) is a polynomial in ϵ. Since the equation holds for all values of ϵ, the 0th-order term of the LHS must equal the RHS, and the coefficients of all higher-order terms of ϵ must be 0. Therefore, for the 0th order we obtain Dx_0 = f, and more generally for the j-th order, where 1 ≤ j ≤ p, we have:\nD x_j = - \sum_{\substack{k_0+k_1+\dots+k_p=q \\ \sum_{i=0}^{p} i k_i = j-1}} \frac{q!}{k_0! k_1! \cdots k_p!} \prod_{i=0}^{p} x_i^{k_i} := f_j, (3)\nwhere f_j is the forcing function of the j-th ODE. The first few terms of this expansion (for q = 2) look like:\nD x_0 = f, \quad D x_1 = -x_0^2, \quad D x_2 = -2 x_0 x_1, \quad \dots (4)\nThe forcing function f_j depends only on the previously solved x_i's. Therefore, (1) is reduced to a series of p + 1 linear ODEs of the same form, Dx_j = f_j, that can be solved iteratively. There are a variety of ways to ensure that the initial boundary conditions are met. The main concept is to fix all p + 1 boundary conditions to the same value so that the boundary condition of the total solution x is satisfied; that is, for all k = 0, 1, ..., p, x_k(t = 0) = x_* / \sum_{i=0}^{p} ϵ^i and \frac{d^j}{dt^j} x_k(t = 0) = x_*^{(j)} / \sum_{i=0}^{p} ϵ^i." }, { "figure_ref": [], "heading": "Multi-head Fully Connected Neural Network", "publication_ref": [], "table_ref": [], "text": "As established earlier, solving the non-linear ODE is equivalent to solving a sequence of linear ODEs of the form Dx = f. To minimize computational complexity, we transform higher-order differential equations into first-order equations by introducing m − 1 additional dependent variables for an m-th order differential equation. Let u = [x, x^{(1)}, x^{(2)}, ..., x^{(m−1)}]^T be a function mapping from R to R^m, where x^{(1)} = ẋ and x^{(i)} = ẋ^{(i−1)} for i = 2, ..., m − 1. The equation Dx = f is then reduced to a first-order linear ODE system (see Appendix C for details):\nẋ − x^{(1)} = 0, \quad ẋ^{(i−1)} − x^{(i)} = 0 \;\text{for}\; i = 2, 3, ..., m − 1, \quad g_0 x + \sum_{i=1}^{m−1} g_i x^{(i)} + g_m ẋ^{(m−1)} = f. (5)\nEquation (5) is equivalent to B u̇ + A u = F_j with boundary condition u(t = 0) = u_* ∈ R^m, where u̇ = [ẋ, ẋ^{(1)}, ẋ^{(2)}, ..., ẋ^{(m−1)}]^T and F_j = [0, 0, ..., f_j]^T.
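Spelled out from the description in Appendix A, the matrices of the system B u̇ + A u = F_j take the following explicit form (our rendering of that description):

```latex
B = \begin{pmatrix} 1 & & & \\ & \ddots & & \\ & & 1 & \\ & & & g_m \end{pmatrix}, \qquad
A = \begin{pmatrix} 0 & -1 & & \\ & \ddots & \ddots & \\ & & 0 & -1 \\ g_0 & g_1 & \cdots & g_{m-1} \end{pmatrix}, \qquad
F_j = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ f_j \end{pmatrix}.
```

For the Duffing equation treated in Section 3 (m = 2, g_0 = α, g_1 = δ, g_2 = 1), B is the 2 × 2 identity and A = [[0, −1], [α, δ]].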
A detailed description of the matrices A and B can be found in Appendix A.\nWe create a fully connected neural network with K heads, in two parts, to approximate the K functions {u_k}_{k=1}^{K}. The first part connects a 1D input to hidden layers, with the last layer having dimension mh. The activations of the last hidden layer are reshaped into a matrix H ∈ R^{m×h}, which reflects the hidden state of the ODE class and is then passed to the second part of the network. H connects to K heads, each associated with a linear ODE system. The output of each head is û_k = H W_k ∈ R^m. A diagram of the general structure of the network can be found in Appendix B. The loss function for the k-th head of the network is defined over a sampled data set T as:\nL_k = \frac{1}{mN} \sum_{t \in T} \| B_k \dot{û}_k(t) + A_k û_k(t) − F_k(t) \|_2^2 + \frac{1}{m} \| û_k(0) − u_k^* \|_2^2, (6)\nwhere u_k^* is the boundary condition of the k-th ODE. The total loss of the network is defined as L_total = \frac{1}{K} \sum_{k=1}^{K} L_k.\nThe purpose of training this neural network is to learn the latent space of one class of linear ODEs. Ideally, the larger K is, the better the latent space is learned, and hence the better the generalization to a wider range of parameters." }, { "figure_ref": [], "heading": "One-Shot Transfer Learning", "publication_ref": [], "table_ref": [], "text": "After training, we freeze the weights of the hidden layers. When encountering a new ODE of the same class, we use only one head, and the weights of this head can be calculated analytically in one shot. Suppose W is the time-independent network parameter of the last layer. The network now becomes û(t) = H(t)W. We obtain the loss of this single-head neural network by substituting û_k = û(t) = H(t)W into Eq. (6):\nL = \frac{1}{mN} \sum_{t \in T} \| B \dot{H}_t W + A H_t W − F(t) \|_2^2 + \frac{1}{m} \| H_0 W − u^* \|_2^2, (7)\nwhere H_t is the hidden state of the network evaluated at t and H_0 is the hidden state at the boundary. Differentiating L with respect to W and setting dL/dW = 0, we obtain (details omitted):\nW = M^{−1} \Big( H_0^T u^* + \frac{1}{N} \sum_{t \in T} \dot{H}_t^T B^T F(t) + \frac{1}{N} \sum_{t \in T} H_t^T A^T F(t) \Big), (8)\nM = \frac{1}{N} \sum_{t \in T} \big( \dot{H}_t^T B^T B \dot{H}_t + \dot{H}_t^T B^T A H_t + H_t^T A^T B \dot{H}_t + H_t^T A^T A H_t \big) + H_0^T H_0. (9)\nFor a fixed Duffing equation, the matrices A and B are the same for all of its p + 1 reduced ODE systems. Thus, M only needs to be computed and inverted once. We only need to update the forcing function F in (8), which iteratively depends on the previous solutions. By reusing the first part of the neural network and using only one head, we optimally and iteratively compute the head parameters for each ODE system to solve them." }, { "figure_ref": [], "heading": "Result", "publication_ref": [], "table_ref": [], "text": "We applied our proposed methodology to the 1D Duffing equation. The Duffing equation we are interested in, Eq. (10), is a second-order non-linear ODE with five parameters δ, α, β, γ, ω and one boundary condition x(0) = x_*. All higher-order boundary conditions are set to 0:\n\frac{d^2 x}{dt^2} + δ \frac{dx}{dt} + αx + βx^3 = γ \cos(ωt). (10)\nUsing our framework, we first utilized the perturbation method and introduced new variables to reduce the Duffing equation to a series of p + 1 first-order linear ODE systems of the form u̇_i + A u_i = F_i. We then built the network described in Section 2.2 with 10 heads. Each head represents a unique parameter setting.
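To make the one-shot step concrete, the following NumPy sketch assembles M and W from Eqs. (8)-(9); the variable names and shapes are our assumptions (the hidden states H, their time derivatives Hdot, and the boundary state H0 would come from the frozen network), not the released code.

```python
import numpy as np

def one_shot_head_weights(H, Hdot, H0, A, B, F, u0):
    """Closed-form head weights from Eqs. (8)-(9).

    H, Hdot : (N, m, h) hidden states and their time derivatives at N points
    H0      : (m, h) hidden state at the boundary t = 0
    A, B    : (m, m) system matrices of the linear ODE system
    F       : (N, m) forcing function evaluated at the N sampled points
    u0      : (m,) boundary condition
    """
    N, m, h = H.shape
    M = H0.T @ H0
    rhs = H0.T @ u0
    for t in range(N):
        G = B @ Hdot[t] + A @ H[t]        # (m, h): residual operator at point t
        M += (G.T @ G) / N
        rhs += (G.T @ F[t]) / N
    return np.linalg.solve(M, rhs)        # W, shape (h,)
```

Because A and B are shared by all p + 1 reduced systems of the cascade, M can be assembled and factorized once, and only the forcing term F changes from one system to the next.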
The specific details of the network structure can be found in Appendix B.\nThe 10 parameter sets are uniformly randomly generated in the following range:\nγ ∈ (0.5, 3), ω ∈ (0.5, 3), α ∈ (0.5, 4.5), δ ∈ (0.5, 4.5), u * 1 ∈ (-3, 3).\n(11) and u * 2 = 0. After training (details in Appendix B), the network can accurately solve the 10 systems, and it acquires a significant understanding of the latent space of the not-linear ODE. We then test our method on an unseen Duffing equation. We measure the performance of the TL solution by computing the ODE loss of the Duffing equation. We used 14 different values of p to solve and approximate the solution. As shown in Figure 1(a), as the p value increases, the Duffing ODE loss decreases to around 10 -3.75 . The elbow shape can be used to figure out how many terms should be included in the perturbation expansion p. We also test our method by comparing the transfer learning solutions with numerical solutions (explicit Runge-Kutta method of order 8) on 20 randomly generated Duffing equations in the same parameter range 11 (we fix β = 0.5 and p = 12). Each Duffing equation can be solved in seconds.\n(Details in Appendix B) As shown in Figure 2(b), the 20 transfer learning solutions align almost perfectly with the numerical solutions, indicating that our methodology is very effective on equations in the same parameter range 11." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We introduced a framework using perturbation and one-shot transfer learning on PINNs to efficiently solve non-linear ODEs. We reduced non-linear ODEs to linear ODEs, trained a neural network with k heads to handle them, and derived a formula for network weights. This approach allows us to solve various non-linear ODEs of the same form with a single trained network. Future work aims to extend this methodology to various non-linear ODE and PDE systems.\nOur work should be considered as a starting point for this methodology. Future work is needed to extend the framework to non-linear ODE and PDE systems with various non-linearity forms.\ng 0 x + m i=1 g i d i dt i x = f,(13)\nThe introduced variables are defined as: x (1) = ẋ and x (i) = ẋ(i-1) for i = 2, 3, ..., m -1. Expand these relations iteratively, we obtain d i dt i x = x (i) for i = 1, 2, ..., m -1, plug into 13, we obtain:\ng 0 x + m-1 i=1 g i x (i) + g m ẋ(m-1) = f. (14\n)\nThe definitions of these m -1 variables introduces another m -1 constraints, together with 14, we recover the linear ODE system in 5." }, { "figure_ref": [], "heading": "Supplementary Material Appendix A", "publication_ref": [], "table_ref": [], "text": "The matrix B 12 is an m by m diagonal matrix in which the first m-1 diagonals are all 1 and the last diagonal is g m . Matrix A 12 is an m by m matrix whose second upper diagonal entries are all -1 and last row is [g 0 , g 1 , ..., g m-1 ].\nAppendix B\nHere we show the diagram of the general multi-head PINN structure. In our real implementation to solve Duffing Equation, the network has 4 layers of hidden layers with width 256, 256, 256, 512 respectively. Hidden layers are all connected by tanh activation functions. The activations of the last hidden layer is reshaped into a matrix H ∈ R 2×256 . The matrix H is connected to 10 heads of dimension 2 by a linear transform. To train the network, we used Adam optimizer for 5000 iterations with an initial learning rate of 2 × 10 -4 . 
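Once the trunk has been trained and frozen, the one-shot transfer step reduces to the closed-form computation of Eqs. (8)-(9). A minimal NumPy sketch is given below, written with the factor G = BḢ_t + AH_t transposed where needed so that the matrix shapes are consistent (function and array names are ours):

```python
import numpy as np

def one_shot_head(H, Hdot, H0, A, B, F, u_star):
    """Closed-form head weights of Eqs. (8)-(9).
    H, Hdot: (N, m, h) hidden states and their time derivatives at sampled t;
    H0: (m, h) hidden state at the boundary t = 0; A, B: (m, m) system matrices;
    F: (N, m) forcing values; u_star: (m,) boundary condition. Returns W in R^h."""
    N = H.shape[0]
    G = np.einsum('ij,njk->nik', B, Hdot) + np.einsum('ij,njk->nik', A, H)  # B Hdot_t + A H_t
    M = np.einsum('nmi,nmj->ij', G, G) / N + H0.T @ H0        # Eq. (9)
    rhs = H0.T @ u_star + np.einsum('nmi,nm->i', G, F) / N    # bracketed term of Eq. (8)
    return np.linalg.solve(M, rhs)
```

Because A and B are identical for all p + 1 reduced systems of a given Duffing equation, M is assembled and inverted (or factorized) once, and only the right-hand side changes as the forcing F_j is updated from the previously solved x_i's.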
During training, we additionally applied an exponential decay to the learning rate: the learning rate is multiplied by a factor of 0.96 every 100 iterations. In each iteration, 200 random data points are uniformly sampled from the domain (0, 5) to compute the ODE loss. After 5000 iterations, the total loss (the sum of the ODE loss and the boundary loss) is reduced below 10^{-4}.
We ran our code on Google Colab using an Intel Xeon CPU with 2 vCPUs (virtual CPUs) and 51 GB of RAM. Solving an unseen Duffing equation generally takes about 0.5p + 1 seconds, where p + 1 is the number of linear ODE systems produced by the perturbation method and the additional 1 s is the time needed to compute and invert the matrix M ." }, { "figure_ref": [], "heading": "Appendix C", "publication_ref": [], "table_ref": [], "text": "To reduce the ODE Dx = f to first order, we introduce m - 1 time-dependent variables {x^{(i)}}_{i=1}^{m-1} to form a function u : R → R^m, u = [x, x^{(1)}, x^{(2)}, ..., x^{(m-1)}]^T. Dx = f is then equivalent to:" } ]
We introduce a generalizable approach that combines the perturbation method and one-shot transfer learning to solve nonlinear ODEs whose non-linearity is a single polynomial term, using Physics-Informed Neural Networks (PINNs). Our method transforms nonlinear ODEs into linear ODE systems, trains a PINN across varied equation parameters and boundary conditions, and yields a closed-form solution for new instances within the same nonlinear ODE class. We demonstrate the effectiveness of this approach on the Duffing equation and suggest its applicability to similarly structured PDEs and ODE systems.
One-Shot Transfer Learning for Nonlinear ODEs
[ { "figure_caption": "Figure1", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" } ]
Wanzhou Lei; Pavlos Protopapas; Joy Parikh
[ { "authors": "A T Chantada; S J Landau; P Protopapas; C G Scóccola; C Garraffo", "journal": "Phys. Rev. D", "ref_id": "b0", "title": "Cosmology-informed neural networks to solve the background dynamics of the universe", "year": "2023-03" }, { "authors": "Feiyu Chen", "journal": "Journal of Open Source Software", "ref_id": "b1", "title": "Neurodiffeq: A python package for solving differential equations with neural networks", "year": "2020" }, { "authors": "L Fesser; R Qiu; L D'amico-Wong", "journal": "", "ref_id": "b2", "title": "Understanding and mitigating extrapolation failures in physics-informed neural networks", "year": "2023" }, { "authors": "C Flamant; P Protopapas; D Sondak", "journal": "", "ref_id": "b3", "title": "Solving differential equations using neural network solution bundles", "year": "2020" }, { "authors": "O Graf; P Flores; P Protopapas; K Pichara", "journal": "", "ref_id": "b4", "title": "Error-aware b-pinns: Improving uncertainty quantification in bayesian physics-informed neural networks", "year": "2022" }, { "authors": "Z Hao; S Liu; Y Zhang; C Ying; Y Feng; H Su; J Zhu", "journal": "", "ref_id": "b5", "title": "Physics-informed machine learning: A survey on problems, methods and applications", "year": "2023" }, { "authors": "J D C J Kevorkian", "journal": "Springer", "ref_id": "b6", "title": "Perturbation Methods in Applied Mathematics", "year": "2010" }, { "authors": "G E Karniadakis; I G Kevrekidis; L Lu; P Perdikaris; S Wang; L Yang", "journal": "Nature Reviews Physics", "ref_id": "b7", "title": "Physics-informed machine learning", "year": "2021" }, { "authors": "A Krishnapriyan; A Gholami; S Zhe; R Kirby; M W Mahoney", "journal": "", "ref_id": "b8", "title": "Characterizing possible failure modes in physics-informed neural networks", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b9", "title": "", "year": "2021" }, { "authors": "I Lagaris; A Likas; D Fotiadis", "journal": "IEEE Transactions on Neural Networks", "ref_id": "b10", "title": "Artificial neural networks for solving ordinary and partial differential equations", "year": "1998" }, { "authors": "Z K Lawal; H Yassin; D T C Lai; A Che Idris", "journal": "Big Data and Cognitive Computing", "ref_id": "b11", "title": "Physics-informed neural network (pinn) evolution and beyond: A systematic literature review and bibliometric analysis", "year": "2022" }, { "authors": "S Liu; X Huang; P Protopapas", "journal": "", "ref_id": "b12", "title": "Residual-based error bound for physics-informed neural networks", "year": "2023" }, { "authors": "E A Lu; Lu ", "journal": "CoRR", "ref_id": "b13", "title": "Deepxde: A deep learning library for solving differential equations", "year": "2019" }, { "authors": "M Mattheakis; P Protopapas; D Sondak; M D Giovanni; E Kaxiras", "journal": "", "ref_id": "b14", "title": "Physical symmetries embedded in neural networks", "year": "2020" }, { "authors": "R G Nascimento; K Fricke; F A Viana", "journal": "Engineering Applications of Artificial Intelligence", "ref_id": "b15", "title": "A tutorial on solving ordinary differential equations using python and hybrid physics-informed neural network", "year": "2020" }, { "authors": "R Pellegrin; B Bullwinkel; M Mattheakis; P Protopapas", "journal": "", "ref_id": "b16", "title": "Transfer learning with physics-informed neural networks for efficient simulation of branched flows", "year": "2022" }, { "authors": "P Protopapas", "journal": "CoRR", "ref_id": "b17", "title": "One-shot transfer learning of 
physics-informed neural networks", "year": "2021" }, { "authors": "Z Zou; G E Karniadakis", "journal": "", "ref_id": "b18", "title": "L-hydra: Multi-head physics-informed neural networks", "year": "2023" } ]
[ { "formula_coordinates": [ 2, 269.06, 117.3, 231.07, 11.03 ], "formula_id": "formula_0", "formula_text": "Dx + ϵx q = f (t). (1" }, { "formula_coordinates": [ 2, 500.13, 119.69, 3.87, 8.64 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 2, 274.46, 152.15, 51.29, 15.11 ], "formula_id": "formula_2", "formula_text": "d 0 dt 0 x = g 0 x." }, { "formula_coordinates": [ 2, 169.96, 354.1, 334.04, 33.76 ], "formula_id": "formula_3", "formula_text": "p i=0 ϵ i Dx i + ϵ   k0+k1+...+kp=q q! k 1 !k 2 !...k p ! ϵ p i=0 iki p i=0 x ki i   = f,(2)" }, { "formula_coordinates": [ 2, 201.65, 455.84, 302.35, 41.41 ], "formula_id": "formula_4", "formula_text": "Dx j = - k0+k1+...+kp=q p i=0 iki=j-1 q! k 1 !k 2 !...k p ! p i=0 x ki i := f j .(3)" }, { "formula_coordinates": [ 2, 210.49, 514.9, 289.64, 12.69 ], "formula_id": "formula_5", "formula_text": "Dx 0 = f Dx 1 = -x 2 0 Dx 2 = -2x 0 x 1 ...(4" }, { "formula_coordinates": [ 2, 207.55, 573.3, 263.72, 15.12 ], "formula_id": "formula_6", "formula_text": ".p, x k (t = 0) = x * / p i=0 ϵ i and d j dt j x k (t = 0) = x (j) * / p i=0 ϵ i ." }, { "formula_coordinates": [ 2, 216.43, 686.15, 283.7, 39.24 ], "formula_id": "formula_7", "formula_text": "   ẋ -x (1) = 0 ẋ(i-1) -x (i) = 0, i = 2, 3, 4, ..., m -1 g 0 x + m-1 i=1 g i x (i) + g m ẋ(m-1) = f. (5" }, { "formula_coordinates": [ 2, 500.13, 701.18, 3.87, 8.64 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 3, 162.87, 205.77, 337.25, 26.8 ], "formula_id": "formula_9", "formula_text": "L k = 1 mN t∈T ||B k uk (t) + A k ûk (t) -F k (t)|| 2 2 + 1 m || ûk (0) -u * k || 2 2 . (6" }, { "formula_coordinates": [ 3, 500.13, 212.83, 3.87, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 3, 108, 253.9, 89.56, 14.56 ], "formula_id": "formula_11", "formula_text": "L total = 1 K K k=1 L k" }, { "formula_coordinates": [ 3, 172.78, 377.38, 331.22, 30.47 ], "formula_id": "formula_12", "formula_text": "L = 1 mN n t∈T ||B Ḣt W + AH t W -F (t)|| 2 2 + 1 m ||H 0 W -u * || 2 2 ,(7)" }, { "formula_coordinates": [ 3, 131.41, 463.17, 372.59, 26.8 ], "formula_id": "formula_13", "formula_text": "W = M -1 H T 0 u * + 1 N t∈T B Ḣt F (t) + 1 N t∈T H T t A T F (t) ,(8)" }, { "formula_coordinates": [ 3, 131.45, 493.75, 372.55, 26.8 ], "formula_id": "formula_14", "formula_text": "M = 1 N t∈T ( ḢT t B T B Ḣt + ḢT t B T AH t + H T t A T B Ḣt + H T t A T AH t ) + H T 0 H 0 . (9)" }, { "formula_coordinates": [ 3, 230.07, 671.47, 273.93, 23.89 ], "formula_id": "formula_15", "formula_text": "d 2 x dt 2 + δ dx dt + αx + βx 3 = γcos(ωt).(10)" }, { "formula_coordinates": [ 7, 259.58, 82.19, 244.42, 30.32 ], "formula_id": "formula_16", "formula_text": "g 0 x + m i=1 g i d i dt i x = f,(13)" }, { "formula_coordinates": [ 7, 233.52, 162.98, 266.33, 30.32 ], "formula_id": "formula_17", "formula_text": "g 0 x + m-1 i=1 g i x (i) + g m ẋ(m-1) = f. (14" }, { "formula_coordinates": [ 7, 499.85, 173.71, 4.15, 8.64 ], "formula_id": "formula_18", "formula_text": ")" } ]
2023-11-25
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b20", "b29", "b6", "b15", "b25", "b11", "b19", "b19", "b25", "b6", "b15", "b8", "b4", "b3", "b29", "b19", "b11", "b27", "b17" ], "table_ref": [], "text": "Graph neural networks (GNNs) have gained tremendous popularity in recent years due to their ability to capture topological relationships in graph-structured data (Zhou et al., 2020;Oloulade et al., 2021). However, most GNNs are vulnerable to adversarial attacks, which can lead to a substantial decline in predictive performance (Zhang & Zitnik, 2020;Entezari et al., 2020;Jin et al., 2020;Wu et al., 2019;Geisler et al., 2021). Despite the numerous defense strategies proposed to robustify GNNs, a recent study has revealed that most of these defenses are not as robust as initially claimed (Mujkanovic et al., 2022). Specifically, under adaptive attacks, they easily underperform the multilayer perceptrons (MLPs) which do not utilize the graph topology information at all (Mujkanovic et al., 2022). Therefore, it is imperative to thoroughly investigate the limitations of existing defenses and develop innovative robust GNNs to securely harness the topology information in the data.\nExisting defenses attempt to bolster the resilience of GNNs using diverse approaches. For instance, Jaccard-GCN (Wu et al., 2019) and SVD-GCN (Entezari et al., 2020) aim to denoise the graph by removing potential adversarial edges during the pre-processing procedure, while ProGNN (Jin et al., 2020) learns the clean graph structure during the training process. GRAND (Feng et al., 2020) and robust training (Deng et al., 2019;Chen et al., 2020) also improve the training procedure through data augmentation. Additionally, GNNGuard (Zhang & Zitnik, 2020) and RGCN (Zhu et al., 2019) reinforce their GNN architectures by heuristically reweighting edges in the graph. Although these defenses exhibit decent robustness against transfer attacks, i.e., the attack is generated through surrogate models, they encounter catastrophic performance drops when confronted with adaptive adversarial attacks that directly attack the victim model (Mujkanovic et al., 2022).\nConcerned by the false sense of security, we provide a comprehensive study on existing defenses under adaptive attacks. Our preliminary study in Section 2 indicates that SoftMedian (Geisler et al., 2021), TWIRLS (Yang et al., 2021), and ElasticGNN (Liu et al., 2021) exhibit closely aligned performance and notably outperform other defenses under small attack budgets, despite their apparent architectural differences. However, under larger attack budgets, these effective defenses still experience a severe performance decrease and underperform the graph-agnostic MLPs. These observations are intriguing, but the underlying reasons are still unclear.\nTo unravel the aligned robustness and performance degradation of SoftMedian, TWIRLS, and Elas-ticGNN, we delve into their theoretical understanding and unveil their inherent connections and limitations in the underlying principles. Specifically, their improved robustness can be understood from a unified view of ℓ 1 -based robust graph smoothing. Moreover, we unearth the problematic estimation bias of ℓ 1 -based graph smoothing that allows the adversarial impact to accumulate as the attack budget escalates, which provides a plausible explanation of their catastrophic failures. Motivated by these understandings, we propose a robust and unbiased graph signal estimator to reduce the estimation bias in GNNs. 
We design an efficient Quasi-Newton IRLS algorithm that unrolls as robust unbiased aggregation layers to safeguard GNNs against adversarial attacks. Our contributions can be summarized as follows:\n• We provide a unified view of ℓ 1 -based robust graph signal smoothing to justify the improved and closely aligned robustness of representative robust GNNs. Moreover, we reveal their estimation bias, which explains their severe performance degradation under large attack budgets.\n• We propose a robust and unbiased graph signal estimator to mitigate the estimation bias in ℓ 1based graph signal smoothing and design an efficient Quasi-Newton IRLS algorithm to solve the non-smooth and non-convex estimation problem with a theoretical convergence guarantee.\n• The proposed algorithm can be readily unfolded as feature aggregation layers in GNNs, which not only provides clear interpretability but also covers many classic GNNs as special cases.\n• Extensive experiments demonstrate that our proposed GNN significantly improves the robustness against large-budget adaptive attacks, e.g., outperforming the best existing model by 16% and 28.6% under local attack of budget 100% and 200% on Cora ML, while maintaining clean accuracy. We also provide comprehensive ablation studies to validate its working mechanism." }, { "figure_ref": [], "heading": "AN ESTIMATION BIAS ANALYSIS OF ROBUST GNNS", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct a preliminary study to evaluate the robustness of representative robust GNNs. Then we establish a unified view to uncover their inherent connections, offering explanations of their improved robustness under small attack budgets and failure under large attack budgets. is the neighborhood set of v i . The node feature matrix is denoted as F = [f 1 , . . . , f n ] ⊤ ∈ R n×d , and f (0) (F (0) ) denotes the node feature vector (matrix) before graph smoothing in decoupled GNN models. Let ∆ ∈ {-1, 0, 1} m×n be the incidence matrix whose l-th row denotes the l-th edge e l = (i, j) such that ∆ li = -1, ∆ lj = 1, ∆ lk = 0 ∀k / ∈ {i, j}. ∆ is its normalized version : ∆lj = ∆ lj / d j .\nFor a vector x ∈ R d , we use ℓ 1 penalty to denote either ∥x∥ 1 = i |x i | or ∥x∥ 2 = i x 2 i . Note that we use ℓ 2 penalty to denote ∥x∥ 2 2 = i x 2 i ." }, { "figure_ref": [], "heading": "ROBUSTNESS ANALYSIS", "publication_ref": [ "b19", "b27", "b17", "b19" ], "table_ref": [], "text": "Figure 1: Robustness Analysis.\nTo test the robustness of existing GNNs without the false sense of security, we perform a preliminary evaluation of existing robust GNNs against adaptive attacks. We choose various baselines including the undefended MLP, GCN (Kipf & Welling, 2017), some of the most representative defenses in Mujkanovic et al. (2022), and two additional robust models TWIRLS (Yang et al., 2021) and ElasticGNN (Liu et al., 2021). We execute adaptive local evasion topological attacks and test the node classification accuracy on the Cora ML and Citeseer datasets (Mujkanovic et al., 2022). The detailed settings follow Section 4.1. From the results in Figure 1, it can be observed that:\n• Among all the selected robust GNNs, only SoftMedian, TWIRLS, and ElasticGNN exhibit notable and closely aligned improvements in robustness whereas other GNNs do not show obvious improvement over undefended GCN.\n• SoftMedian, TWIRLS, and ElasticGNN encounter a similar catastrophic performance degradation as the attack budget scales up. 
At a larger budget, their accuracy easily drops below that of the graph-unware MLP, indicating their failure in safely exploiting the topology of the data." }, { "figure_ref": [], "heading": "A UNIFIED VIEW OF ROBUST ESTIMATION", "publication_ref": [ "b11", "b27", "b17", "b14", "b10", "b18", "b17" ], "table_ref": [], "text": "Our preliminary study provides intriguing observations in Section 2.1, but the underlying reasons behind these phenomena remain obscure. This motivates us to delve into their theoretical understanding and explanation. In this section, we will compare the architectures of those three wellperforming GNNs, aiming to reveal their intrinsic connections.\nSoftMedian (Geisler et al., 2021) substitutes the GCN aggregation for enhanced robustness with the dimension-wise median m i ∈ R d for all neighbors of each node i ∈ V. However, the gradient of the median is zero almost everywhere, which is not suitable for the backpropagation training of GNNs. Therefore, the median is approximated as a differentiable weighted\nsum mi = 1 Z j∈N (i) w(f j , m i )f j , ∀i ∈ V\n, where m i is the exact non-differentiable dimensionwise median, f j is the feature vector of the j-th neighbor, w(x, y) = e -β∥x-y∥2 , and Z = k w(f k , m k ) is a normalization factor. In this way, the aggregation assigns the largest weights to the neighbors closest to the actual median.\nTWIRLS (Yang et al., 2021) utilizes the iteratively reweighted least squares (IRLS) algorithm to optimize the objective with parameter λ, and ρ(y) = y in the default setting:\n2λ (i,j)∈E ρ(∥ fi -fj ∥ 2 ) + i∈V ∥ fi -f (0) ∥ 2 2 , fi = (1 + λd i ) -1 2 f i .(1)\nElasticGNN (Liu et al., 2021) proposes the elastic message passing which unfolds the proximal alternating predictor-corrector (PAPC) algorithm to minimize the objective with parameter λ {1,2} :\nλ 1 (i,j)∈E f i √ d i - f j d j p + λ 2 (i,j)∈E f i √ d i - f j d j 2 2 + 1 2 i∈V ∥f i -f (0) i ∥ 2 2 , p ∈ {1, 2},(2)\nA Unified View of Robust Estimation. While these three approaches have seemingly different architectures, we provide a unified view of robust estimation to illuminate their inherent connections. First, the objective of TWIRLS in Eq. ( 1) can be considered as a particular case of ElasticGNN with λ 2 = 0 and p = 2 when neglecting the difference in the node degree normalization. However, TWIRLS and ElasticGNN unroll the iterative minimization into multiple GNN layers. They leverage different optimization solvers, i.e., IRLS and PAPC, which lead to vastly different GNN layers. Second, SoftMedian approximates the computation of medians in a soft way of weighted sums, which can be regarded as approximately solving the dimension-wise median estimation problem (Huber, 2004): arg min fi j∈N (i) ∥f i -f j ∥ 1 . Therefore, SoftMedian can be regarded as the ElasticGNN with λ 2 = 0 and p = 1. We also note that the SoftMedoid (Geisler et al., 2020) approach also resembles ElasticGNN with λ 2 = 0 and p = 2, and the Total Variation GNN (Hansen & Bianchi, 2023) also utilizes an ℓ 1 estimator in spectral clustering.\nThe above analyses reveal that SoftMedian, TWIRLS, and ElasticGNN share the same ideology of ℓ 1 -based robust graph signal estimation, i.e. a similar graph smoothing objective with edge difference penalties ∥f i -f j ∥ 1 or ∥f i -f j ∥ 2 . However, they adopt different approximation solutions that result in distinct architectural designs. 
Therefore, this unified view of robust estimation can explain their closely aligned performance despite different specific formulations. Besides, the superiority ℓ 1 -based models over the ℓ 2 -based models such as GCN (Kipf & Welling, 2017), whose graph smoothing objective is essentially (Ma et al., 2021), can also be understood since ℓ 1 -based graph smoothing mitigates the impact of the outliers (Liu et al., 2021).\n(i,j)∈E ∥f i / √ d i -f j / d j ∥ 2 2" }, { "figure_ref": [ "fig_1" ], "heading": "BIAS ANALYSIS AND PERFORMANCE DEGRADATION", "publication_ref": [ "b22", "b2", "b0" ], "table_ref": [], "text": "The unified view of ℓ 1 -based graph smoothing we established in Section 2.2 not only explains their aligned robustness improvement but also provides a perspective to understand their failure under large attack budgets through an estimation bias analysis.\nBias of ℓ 1 -based Estimation. In the literature of high-dimensional statistics, it has been well understood that the ℓ 1 regularization will induce an estimation bias. In the context of de-noising (Donoho, 1995) or variable selection (Tibshirani, 1996), small coefficients θ are undesirable. To exclude small θ in the estimation, a soft-thresholding operator can be derived as S λ (θ) = sign(θ) max(|θ| -λ, 0). As a result, large θ are also shrunk by a constant, so the ℓ 1 estimation is biased towards zero.\nA similar bias effect also occurs in graph signal estimation in the presence of adversarial attacks. For example, in TWIRLS (Eq. ( 1)), edge e k = (i, j) is reweighted by w ij = ∥ fifj ∥ -1 2 . After the corresponding graph aggregation\nf (k+1) i = j∈N (i) w ij f (k) j\n, edge e k = (i, j) will shrink the edge difference fifj by the unit vector u fi-fj . Consequently, every heterophilic edge added will induce a constant bias that can be accumulated and amplified when the attack budget scales up.\nNumerical Simulation. To provide a more intuitive illustration of the estimation bias of ℓ 1 -based models, we simulate a mean estimation problem on synthetic data since most message passing schemes in GNNs essentially estimate the mean of neighboring node features. In the context of mean estimation, the bias is measured as the distances between different mean estimators and the true mean. We firstly generated clean samples {x i } n i=1 (blue dots) and the outlier samples {x i } n+m i=n+1 (red dots) from 2-dimensional Gaussian distributions, N ((0, 0), 1) and N ((8, 8), 0.5), respectively. We calculate the mean of clean samples 1 n n i=1 x i as the ground truth of the mean estimator. Then we estimate the mean of all the samples by solving arg min z n+m i=1 η(z -x i ) using the Weiszfeld method (Candès et al., 2008;Beck & Sabach, 2015), where η(•) can take different penalties such as ℓ 2 penalty ∥ • ∥ 2 2 and ℓ 1 penalty ∥ • ∥ 2 . In Figure 2, we visualize the generated clean samples and outliers, as well as the ground truth means and the mean estimators with η(•) = ∥ • ∥ 2 2 or ∥ • ∥ 2 under different outlier ratios (10%, 25%, 40%). The results show that the ℓ 2 -based estimator deviates far from the true mean, while the ℓ 1 -based estimator is more resistant to outliers, which explains why ℓ 1 -based methods exhibit stronger robustness. However, as the ratio of outliers escalates, the ℓ 1 -based estimator encounters a greater shift from the true mean due to the accumulated bias caused by outliers. This observation explains why ℓ 1 -based graph smoothing models suffer from catastrophic degradation under large attack budgets. 
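This simulation can be reproduced with a few lines of iteratively reweighted averaging. The sketch below is ours and uses an illustrative 10% outlier ratio, random seed, and threshold γ; the MCP-based weights anticipate the estimator introduced in the next section and the mean-estimation derivation in the appendix:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(loc=(0.0, 0.0), scale=1.0, size=(90, 2))      # inliers  ~ N((0,0), 1)
outliers = rng.normal(loc=(8.0, 8.0), scale=0.5, size=(10, 2))   # 10% outliers ~ N((8,8), 0.5)
x = np.vstack([clean, outliers])

def reweighted_mean(x, weight_fn, iters=100):
    """Weiszfeld-style iterations  z <- (sum_i w_i x_i) / (sum_i w_i)."""
    z = x.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(x - z, axis=1) + 1e-12
        w = weight_fn(d)
        if w.sum() == 0:                  # every point lies beyond the MCP threshold
            break
        z = (w[:, None] * x).sum(axis=0) / w.sum()
    return z

gamma = 3.0
z_l2 = reweighted_mean(x, lambda d: np.ones_like(d))                         # plain mean (l2)
z_l1 = reweighted_mean(x, lambda d: 1.0 / d)                                 # geometric median (l1)
z_mcp = reweighted_mean(x, lambda d: np.maximum(0.0, 1.0 / d - 1.0 / gamma)) # MCP-based estimator
print(clean.mean(axis=0), z_l2, z_l1, z_mcp)
```

With this setup the ℓ2 estimate is dragged toward the outliers, the ℓ1 estimate is shifted by a smaller but accumulating amount, and the MCP-based estimate stays at the clean mean because the outliers receive zero weight.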
" }, { "figure_ref": [], "heading": "ROBUST UNBIASED AGGREGATION", "publication_ref": [], "table_ref": [], "text": "In this section, we design a robust unbiased estimator to reduce the bias in graph signal estimation and propose an efficient second-order IRLS algorithm to be unrolled as the robust unbiased aggregation in GNNs with a theoretical convergence guarantee." }, { "figure_ref": [ "fig_1" ], "heading": "ROBUST AND UNBIASED GRAPH SIGNAL ESTIMATOR", "publication_ref": [ "b7", "b28" ], "table_ref": [], "text": "Our study and analysis in Section 2 have shown that while ℓ 1 -based methods outperform ℓ 2 -based methods in robustness, they still suffer from the accumulated estimation bias, leading to severe performance degradation under large perturbation budgets. This motivates us to design a robust and unbiased graph signal estimator that derives unbiased robust aggregation for GNNs with stronger resilience to attacks.\nTheoretically, the estimation bias in Lasso regression has been discovered and analyzed in highdimensional statistics (Zou, 2006). Statisticians have proposed adaptive Lasso (Zou, 2006) and many non-convex penalties such as Smoothly Clipped Absolute Deviation (SCAD) (Fan & Li, 2001) and Minimax Concave Penalty (MCP) (Zhang, 2010) to alleviate this bias. Motivated by these advancements, we propose a Robust and Unbiased Graph signal Estimator (RUGE) as follows:\narg min F H(F ) = (i,j)∈E ρ γ ( f i √ d i - f j d j 2 ) + λ i∈V ∥f i -f (0) i ∥ 2 2 ,(3)\nwhere ρ γ (y) denotes the function that penalizes the feature differences on edges by the non-convex MCP:\nρ γ (y) = y -y 2 2γ if y < γ γ 2 if y ≥ γ (4)\nFigure 3: Different penalties.\nAs shown in Figure 3, MCP closely approximates the ℓ 1 penalty when y is small since the quadratic term y 2 2γ is negligible, and it becomes a constant value when y is large. This transition can be adjusted by the thresholding parameter γ. When γ approaches infinity, the penalty ρ γ (y) reduces to the ℓ 1 penalty. Conversely, when γ is very small, the \"valley\" of ρ γ near zero is exceptionally sharp, so ρ γ (y) approaches the ℓ 0 penalty and becomes a constant for a slightly larger y. This enables RUGE to promote smoothing through reliable edges connecting homophilic nodes and suppress smoothing on edges the node differences exceeding the threshold γ. This not only mitigates the estimation bias against outliers but also maintains the estimation accuracy in the absence of outliers. The simulation in Figure 2 verifies that our proposed estimator (η(x) := ρ γ (∥x∥ 2 )) can recover the true mean despite the increasing outlier ratio when the outlier ratio is below the theoretical optimal breakdown point." }, { "figure_ref": [], "heading": "QUASI-NEWTON IRLS", "publication_ref": [ "b7", "b28", "b23", "b13" ], "table_ref": [], "text": "Despite the advantages discussed above, the proposed RUGE in Eq. ( 3) is non-smooth and nonconvex, which results in challenges for deriving efficient numerical solutions that can be readily unfolded as neural network layers. In the literature, researchers have developed optimization algorithms for MCP-related problems, such as the Alternating Direction Multiplier Method (ADMM) and Newton-type algorithms (Fan & Li, 2001;Zhang, 2010;Varma et al., 2019). However, due to their excessive computation and memory requirements as well as the incompatibility with backpropagation training, these algorithms are not well-suited for the construction of feature aggregation layers and employment in GNNs. 
To solve these challenges, we derive an efficient Quasi-Newton Iteratively Reweighted Least Squares algorithm (QN-IRLS) to solve the estimation problem in Eq. (3).\nIRLS. Before stepping into our QN-IRLS, we first introduce the main idea of iteratively reweighted least squares (IRLS) (Holland & Welsch, 1977) and analyze its weakness in convergence. IRLS aims to circumvent the non-smooth H(F ) in Eq. ( 3) by computing its quadratic upper bound Ĥ(k) based on F (k) in each iteration k and optimizing Ĥ(k) which follows\nĤ(k) (F ) = (i,j)∈E,i̸ =j W (k) ij f i √ d i - f j √ d j 2 2 + λ i∈V ∥f i -f (0) i ∥ 2 2 ,(5)\nwhere\nW (k) ij = 1 i̸ =j dργ (yij ) dy 2 ij yij =y (k) ij 1 , ρ γ (•) is the MCP function and y (k) ij = f (k) i √ di - f (k) j √ dj 2 .\nFor the detailed proof of the upper bound, please refer to Lemma 1 in Appendix B. Then, each iterative 1 Wij is defined as dρ(y) dy 2 y=y (k) ij so that the quadratic upper bound Ĥ(k) is tight at F (k) according to Lemma 3. The diagonal terms of W are set to zero to avoid undefined derivative of dρ (y) dy 2 y=0 as discussed in Remark 2.\nstep of IRLS can be formulated as the first-order gradient descent iteration for Ĥ(k) (F ):\nF (k+1) = F (k) -η∇ Ĥ(k) (F (k) ) = F (k) -η 2( Q(k) -W (k) ⊙ Ã)F (k) -2λF (0) ,(6)\nwhere\nQ(k) = 2(diag(q (k) ) + λI), q (k) m = j W (k) mj A mj /d m ,\nand η is the update step size. The convergence condition of Eq. ( 6) is given in Theorem 1, whose proof is presented in Appendix B.\nTheorem 1. If F (k) follows the update rule in Eq. ( 6) where ρ defining W satisfies that dρ(y) dy 2 is non-decreasing ∀y ∈ (0, ∞), then a sufficient condition for\nH(F (k+1) ) ≤ H(F (k) ) is that the step size η satisfies 0 < η ≤ ∥diag(q (k) ) -W (k) ⊙ à + λI∥ -1 2 .\nQuasi-Newton IRLS. Theorem 1 suggests the difficulty in the proper selection of stepsize for (firstorder) IRLS due to its non-trivial dependency on the graph ( Ã) and the dynamic terms (q (k) and W (k) )2 . The dilemma is that a small stepsize will lead to slow convergence but a large step easily causes divergence and instability as verified by our experiments in Section 4.3 (Figrue 5), which reveals its critical shortcoming for the construction of reliable GNN aggregation layers.\nTo overcome this limitation, we aim to propose a second-order Newton method,\nF (k+1) = F (k) - (∇ 2 Ĥ(k) (F (k) )) -1 ∇ Ĥ(k) (F (k)\n), to achieve faster convergence and stepsize-free hyperparameter tuning by better capturing the geometry of the optimization landscape. However, obtaining the analytic expression for the inverse Hessian matrix (∇ 2 Ĥ(k) (F (k) )) -1 ∈ R n×n is intractable and the numerical solution requires expensive computation for large graphs. To resolve this challenge, we propose a novel Quasi-Newton IRLS algorithm (QN-IRLS) that approximates the Hessian matrix\n∇ 2 Ĥ(k) (F (k) ) = 2(diag(q (k) )-W (k) ⊙ Ã+λI) by the diagonal matrix Q(k) = 2(diag(q (k) )+λI)\nsuch that the inverse is trivial. The proposed QN-IRLS iterates as follows:\nF (k+1) = 2( Q(k) ) -1 (W (k) ⊙ Ã)F (k) + λF (0) ,(7)\nwhere 2( Q(k) ) -1 = (diag(q (k) ) + λI) -1 automatically adjusts the per-coordinate stepsize according to the local geometry of the optimization landscape, q (k) and W (k) are defined as in Eq. ( 5) and ( 6). In this way, QN-IRLS provides faster convergence without needing to select a stepsize. The convergence is guaranteed by Theorem 2 with the proof in Appendix B.\nTheorem 2. If F (k+1) follows update rule in Eq. 
( 7), where ρ satisfies that dρ(y) dy 2 is non-decreasing ∀y ∈ (0, ∞), it is guaranteed that H(F (k+1) ) ≤ H(F (k) ). The proposed QN-IRLS provides an efficient algorithm to optimize the RUGE in Eq. ( 3) with a theoretical convergence guarantee. Instantiated with ρ = ρ γ , each iteration in QN-IRLS (Eq. ( 7)) can be used as one layer in robust GNNs, which yields the Robust Unbiased Aggregation (RUNG):" }, { "figure_ref": [ "fig_2" ], "heading": "GNN WITH ROBUST UNBIASED AGGREGATION", "publication_ref": [ "b9" ], "table_ref": [], "text": "F (k+1) = (diag(q (k) ) + λI) -1 (W (k) ⊙ Ã)F (k) + λF (0) , (8) where q (k) m = j W (k) mj A mj /d m , W (k) ij = 1 i̸ =j max(0, 1 2y (k) ij -1 2γ ) and y (k) ij = f (k) i √ di - f (k) j √ dj 2 .\nInterpretability. The proposed RUNG can be interpreted intuitively with edge reweighting. In Eq. ( 8), the normalized adjacency matrix à is reweighted by W (k) , where\nW (k) ij = dρ(y) dy 2 | y=y (k) ij . It is\nshown in Figure 4 that W ij becomes zero for any edge e k = (i, j) with a node difference y\n(k) ij ≥ γ,\nthus pruning suspicious edges. This implies RUNG's strong robustness under large-budget adversarial attacks. With the inclusion of the skip connection F (0) , diag(q (k) ) + λI can be seen as a normalizer of the layer output.\nRelations with Existing GNNs. RUNG can adopt different ρ allowed by Theorem 2, thus connecting many classic GNNs as special cases. When ρ(y) = y, the objective of RUGE is equivalent to ElasticGNN with p = 2, which further relates to SoftMedian and TWIRLS due to their inherent resemblance as ℓ 1 -based graph smoothing. When ρ(y) = y 2 , RUNG in Eq. ( 8) reduces to APPNP (Gasteiger et al., 2018) \n(F (k+1) = 1 1+λ ÃF (k) + λ 1+λ F (0) ) and GCN (F (k+1) = ÃF (k) ) if chosing λ = 0." }, { "figure_ref": [], "heading": "EXPERIMENT", "publication_ref": [], "table_ref": [], "text": "In this section, we perform comprehensive experiments to validate the robustness of the proposed RUNG. Besides, ablation studies show the convergence and the defense mechanism of RUNG." }, { "figure_ref": [], "heading": "EXPERIMENT SETTING", "publication_ref": [ "b21", "b19", "b9", "b24", "b29", "b8", "b15", "b25", "b6", "b19", "b1", "b26", "b19" ], "table_ref": [], "text": "Datasets. We test our RUNG with the node classification task on two widely used real-world citation networks, Cora ML and Citeseer (Sen et al., 2008). We adopt the data split of 80% training, 10% validation, and 10% testing, and report the classification accuracy of the attacked nodes following Mujkanovic et al. (2022). Each experiment is averaged over 5 different random splits. Baselines. To evaluate the performance of RUNG, we compare it to 12 other representative baselines. Among them, MLP, GCN (Kipf & Welling, 2017), APPNP (Gasteiger et al., 2018), and GAT (Veličković et al., 2017) are undefended vanilla models. GNNGuard (Zhang & Zitnik, 2020), RGCN (Zhu et al., 2019), GRAND (Feng et al., 2020), ProGNN (Jin et al., 2020), Jaccard-GCN (Wu et al., 2019), and SVD-GCN (Entezari et al., 2020) are representative robust GNNs. Besides, Soft-Median and TWIRLS are representative approaches with ℓ 1 -based graph smoothing3 . We also evaluate a variant of TWIRLS with a special thresholding attention (TWIRLS-T). For RUNG, we test two variants: default RUNG (Eq. ( 8)) and RUNG-ℓ 1 with ℓ 1 penalty (ρ(y) = y). Hyperparameters. The model hyperparameters including learning rate, weight decay, and dropout rate are tuned as in Mujkanovic et al. (2022). 
Other hyperparameters follow the settings in the original papers. RUNG uses an MLP connected to 10 graph aggregation layers following the decoupled GNN architecture of APPNP, λ = 1 1+λ is tuned in {0.7, 0.8, 0.9}, and γ tuned in {0.5, 1, 2, 3, 5} and the hyperparameter combination that yields the best robustness without a notable impact (smaller than 1%) on the clean accuracy is chosen, following the setting in Bojchevski & Günnemann (2019). Attack setting. We use the PGD attack (Xu et al., 2019) to execute the adaptive evasion topology attack since it delivers the strongest attack in most settings (Mujkanovic et al., 2022). The adversarial attacks aim to misclassify specific target nodes (local attack) or the entire set of test nodes (global attack). For local attack, we randomly select 20 target nodes for each data split. To avoid a false sense of robustness, our adaptive attacks directly target the victim model instead of the surrogate model. In particular, we execute the adaptive attack after all the hyperparameters are fixed." }, { "figure_ref": [], "heading": "ADVERSARIAL ROBUSTNESS", "publication_ref": [], "table_ref": [ "tab_0", "tab_1", "tab_4", "tab_3" ], "text": "Here we evaluate the the performance of RUNG against the baselines under different settings. The results of local and global adaptive attacks on Cora ML are presented in Table 1 andTable 2, while those on Citeseer are presented in Table 4 and Table 3 in Appendix D due to space limits. We summarize the following analysis from Cora ML, noting that the same observations apply to Citeseer.\n• Under adaptive attacks, many existing defenses are not significantly more robust than undefended models. The ℓ 1 -based models such as TWIRLS, SoftMedian, and RUNG-ℓ 1 demonstrate considerable and closely aligned robustness under both local and global attacks, which supports our unified ℓ 1 -based robust view analysis in Section 2.2. • When the attack budget is large, all baselines encounter a serious performance drop and underperform MLPs by significant margins. For instance, under local attack with budgets of 100%, 150%, and 200%, the best GNN baseline underperforms MLP by 17.9%, 28.6%, and 31.9%.\n• Our RUNG exhibits significant improvements over existing approaches when the attack budget is large. Under local attacks, RUNG outperforms the best GNN baseline by 16.0%, 25.3%, and 28.6% with attack budgets of 100%, 150%, and 200%. Note that RUNG exhibits stable performance with the increase of attack budget. Under global attacks, RUNG outperforms the best GNN baseline by 2.3%, 4.0%, and 4.3% with budgets of 20%, 30%, and 40%.\n• When there is no attack, RUNG largely preserves an excellent clean performance. RUNG also achieves state-of-the-art performance under small attack budgets.\n• Local attacks are stronger than global attacks since local attacks concentrate on targeted nodes. The robustness improvement of RUNG appears to be more remarkable in local attacks. " }, { "figure_ref": [ "fig_3", "fig_4", "fig_5", "fig_6" ], "heading": "ABLATION STUDY", "publication_ref": [ "b19" ], "table_ref": [], "text": "Convergence. To verify the advantage of our QN-IRLS method (Eq (7)) over the first-order IRLS (Eq (6)), we show the objective H on each layer in Figure 5. It can be observed that our stepsize-free QN-IRLS captures the landscape well, demonstrating the best convergence as discussed in Section 3.\nEstimation Bias. The bias effect in ℓ 1 -based GNNs and the unbiasedness of our RUNG can be empirically verified. 
We quantify the bias with i∈V ∥f i -f ⋆ i ∥ 2 2 , where f ⋆ i and f i denote the aggregated feature on the clean graph and attacked graph, respectively. As shown in Figure 6, when the budget scales up, ℓ 1 GNNs exhibit a notable bias, whereas RUNG has nearly zero bias.\nDefense Mechanism To further investigate how our defense takes effect, we analyze the edges added under adaptive attacks. The distribution of the node feature differences f i / √ d i -f j / d j 2 of attacked edges is shown in Figure 7 for different graph signal estimators. It can be observed that our RUNG forces the attacker to focus on the edges with a small feature difference because otherwise the attacked edges will be pruned. Therefore, the attacks become less influential, which explains why RUNG demonstrates outstanding robustness.\nTransfer Attacks. In addition to the adaptive attack, we also conduct a set of transfer attacks that take every baseline GNN as the surrogate model to comprehensively test the robustness of RUNG, following the unit test attack protocol proposed in Mujkanovic et al. (2022). We summarize the results on Cora ML in Figure 8 and leave the results on Citeseer in Figure 10 in Appendix D due to space limits. All transfer attacks are weaker than the adaptive attack in Section 4.2, which validates that our adaptive attack is executed correctly and verifies that RUNG is not falsely robust. Note that the attack transferred from RUNG model is slightly weaker than the adaptive attack since the surrogate and victim RUNG models have different model parameters in the transfer attack setting." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b18", "b11", "b10", "b27", "b17", "b12", "b28", "b23", "b29", "b27" ], "table_ref": [], "text": "To the best of our knowledge, although there are works unifying existing GNNs from a graph signal denoising perspective (Ma et al., 2021), no work has been dedicated to uniformly understand the robustness and limitations of robust GNNs such as SoftMedian (Geisler et al., 2021), Soft-Medoid (Geisler et al., 2020), TWIRLS (Yang et al., 2021), ElasticGNN (Liu et al., 2021), andTVGNN (Hansen &Bianchi, 2023) from the ℓ 1 robust statistics and bias analysis perspectives. To mitigate the estimation bias, MCP penalty is promising since it is well known for its near unbiasedness property (Zhang, 2010) and has been applied to the graph trend filtering problem (Varma et al., 2019) to promote piecewise signal modeling, but their robustness is unexplored. Nevertheless, other robust GNNs have utilized alternative penalties that might alleviate the bias effect. For example, GNNGuard (Zhang & Zitnik, 2020) prunes the edges whose cosine similarity is too small. Another example is that TWIRLS (Yang et al., 2021) with a thresholding penalty can also exclude edges using graph attention. However, the design of their edge weighting or graph attention is heuristic-based and exhibits suboptimal performance compared to the RUNG we propose." }, { "figure_ref": [ "fig_1" ], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a unified view of ℓ 1 robust graph smoothing to uniformly understand the robustness and limitations of representative robust GNNs. The established view not only justifies their improved and closely aligned robustness but also explains their severe performance degradation under large attack budgets by an accumulated estimation bias analysis. 
To mitigate the estimation bias, we propose a robust and unbiased graph signal estimator. To solve this non-trivial estimation problem, we design a novel and efficient Quasi-Newton IRLS algorithm that can better capture the landscape of the optimization problem and converge stably with a theoretical guarantee. This algorithm is unfolded into an interpretable GNN with Robust Unbiased Aggregation (RUNG). As verified by our experiments, RUNG provides the best performance under strong adaptive attacks and overcomes performance degradation under large attack budgets. Furthermore, RUNG also covers many classic GNNs as special cases. Most importantly, this work provides a deeper understanding of existing approaches and reveals a principled direction for designing robust GNNs. \nz n+m i η(z -x i ), (9\n)\nwhere n is the number of clean samples and m is the number of adversarial samples.\nℓ 1 estimator. The ℓ 1 estimator (η(y) := ∥y∥ 2 ), essentially is the geometric median. We adopted the Weiszfeld method to iteratively reweight z to minimize the objective, following\nz (k+1) = i w (k) i x i i w (k) i ,(10)\nwhere w\n(k) i = 1 ∥z (k) -xi∥2\n. This can be seen as a gradient descent step of z\n(k+1) = z (k) - α∇ z i ∥z -x i ∥ 2 = z (k+1) -α i z (k) -xi ∥z (k) -xi∥2 . Taking α = 1 i w (k) i\ninstantly yields Eq. ( 10).\nMCP-based estimator. We therefore adopt a similar approach for the MCP-based estimator (\"Ours\" in Fig. Figure 2), where η(y) := ρ γ (y):\nz (k+1) = z (k) -α∇ z i ρ γ (∥z -x i ∥ 2 ) (11) = z (k) -α i max(0, 1 ∥z (k) -x i ∥ 2 - 1 γ )(z (k) -x i ). (12\n)\nDenoting max(0, ∥z (k) -x i ∥ -1 2 -1 γ ) as w i , and then α =\n1 i wi yields a similar reweighting iteration z (k+1) = i w (k) i xi i w (k) i . ℓ 2 estimator.\nIt is worth noting that the same technique can be applied to the ℓ 2 estimator with ρ(z) := ∥z∥ 2 2 . The iteration becomes\nz (k+1) = z (k) -α∇ z i ∥z -x i ∥ 2 2 (13) = z (k) -α i (z (k) -x i ),(14)\nand α = 1 n+m yields z (k+1) = 1 n+m i x i , which gives the mean of all samples in one single iteration.\nSimilarities between this mean estimation scenario and our QN-IRLS in graph smoothing can be observed, both of which involve iterative reweighting to estimate similar objectives. The approximated Hessian in our QN-IRLS resembles the Weiszfeld method, canceling the z (k) by tuning the stepsize." }, { "figure_ref": [ "fig_1" ], "heading": "A.2 ADDITIONAL SIMULATION RESULTS AND DISCUSSIONS", "publication_ref": [], "table_ref": [], "text": "Here, we complement Fig. Figure 2 with the settings of higher attack budgets. As the outlier ratio exceeds the breakdown point 50%, we observe that our MCP-based mean estimator can correctly recover the majority of the samples, i.e. converge to the center of \"outliers\". " }, { "figure_ref": [], "heading": "B CONVERGENCE ANALYSIS", "publication_ref": [], "table_ref": [], "text": "To begin with, we will provide an overview of our proof, followed by a detailed presentation of the formal proof for the convergence analysis.\nOverview of proof. First, for both IRLS and QN-IRLS, we construct, for F (k) in every iteration k, a quadratic upper bound Ĥ that satisfies Ĥ + C ≥ H where the equality is reached at F (k) . 
Then we can minimize Ĥ to guarantee the iterative descent of\nH since H(F (k+1) ) ≤ Ĥ(F (k+1) ) + C ≤ Ĥ(F (k) ) + C = H(F (k) ).\nTo find the F (k+1) such that Ĥ(F (k+1) ) ≤ Ĥ(F (k) ), IRLS simply adopts the plain gradient descent F (k+1) = F (k) -η∇ Ĥ(F (k) ) whose convergence condition can be analyzed with the β-smoothness of the quadratic Ĥ (Theorem 1). To address the problems of IRLS as motivated in Section 3.2, our Quasi-Newton IRLS utilizes the diagonal approximate Hessian Q to scale the update step size in different dimensions respectively as k) ). Thereafter, by bounding the Hessian with 2 Q, the descent condition of Ĥ is simplified (Theorem 2).\nF (k+1) = F (k) -Q-1 ∇ Ĥ(F (\nLemma 1. For any ρ(y) satisfying dρ(y) dy 2 is non-increasing, denote\ny ij := fi √ di - fj √ dj 2\n, then\nH(F ) = (i,j)∈E,i̸ =j ρ(y ij ) + λ i∈V ∥f i -f(0)\ni ∥ 2 2 has the following upper bound:\nH(F ) ≤ Ĥ(F ) + C = (i,j)∈E,i̸ =j W (k) ij y 2 ij + λ i∈V ∥f i -f (0) i ∥ 2 2 + C,(15)\nwhere\nW (k) ij = ∂ρ(y) ∂y 2 y=y (k) ij and y (k) ij = f (k) i √ di - f (k) j √ dj 2 and C = H(F (k) ) -Ĥ(F (k)\n) is a constant. The equality in Eq. ( 15) is achieved when\nF = F (k) . Proof. Let v = y 2 and define ψ(v) := ρ(y) = ρ( √ v). Then ψ is concave since dψ(v) dv = dρ(y)\ndy 2 is non-increasing. According to the concavity property, we have\nψ(v) ≤ ψ(v 0 ) + ψ ′ (ν) ν=v0 (v -v 0 ). Substitute v = y 2 , v 0 = y 2 0 , we obtain: ρ(y) ≤ y 2 ∂ρ(y) ∂y 2 y=y0 -y 2 0 ∂ρ(y) ∂y 2 y=y0 + ρ(y 0 ) = y 2 ∂ρ(y) ∂y 2 y=y0 + C(y 0 )(16)\nwhere the inequality is reached when y = y 0 . Next, substitute y = y ij and y 0 = y\n(k) ij , we can get ρ(y ij ) ≤ W (k) ij y 2 ij + C(y (k) ij ) which takes the equality at y ij = y (k) ij .\nFinally, by summing up both sides and add a regularization term, we can prove the Eq. ( 15).\nRemark 1. It can be seen that the definition of Ĥ depends on F (k) , which ensures that the bound is tight when F = F (k) . This tight bound condition is essential in the majorization-minimization algorithm as seen in Theorem 1.\nLemma 2. For Ĥ = (i,j)∈E,i̸ =j W ij y 2 ij + λ i∈V ∥f i -f(0)\ni ∥ 2 2 , the gradient and Hessian w.r.t.\nF 4 satisfy ∇ Fmn Ĥ(F ) = 2 (diag(q) -W ⊙ à + λI)F -λF (0) mn ,(17)\nand\n∇ Fmn ∇ F kl Ĥ(F ) = 2 diag(q) -W ⊙ à + λI mk ,(18)\nwhere\nq m = j W mj A mj /d m and Ãij = Aij √ didj\nis the symmetrically normalized adjacency matrix.\nProof. Follow A = A ⊤ and define\ny 2 ij := fi √ di - fj √ dj 2 2\n, then the first-order gradient of\n(i,j)∈E,i̸ =j W ij y 2 ij will be ∇ Fmn   (i,j)∈E,i̸ =j W ij y 2 ij   (19) = (i,j)∈E,i̸ =j W ij ∂y 2 ij ∂F mn (20) = (m,j)∈E W mj ∂y 2 mj ∂F mn (21) = (m,j)∈E W mj ∂ Fmn √ dm - Fjn √ dj 2 ∂F mn (22) = j∈N (m) 2W mj ( F mn d m - F jn d m d j ) (23) =2 j W mj ( A mj d m F mn - A mj d m d j F jn ) (24) =2( j W mj A mj d m )F mn -2((W ⊙ Ã)F ) mn (25) = 2(diag(q) -W ⊙ Ã)F mn ,(26)\n4 Here are some explanations on the tensor 'Hessian' ∇ 2 Ĥ. Since Ĥ(F ) is dependent on a matrix, there are some difficulties in defining the Hessian. However, as can be observed in Eq. ( 26) and Eq. ( 31), the feature dimension can be accounted for by the following. Initially, we treat the feature dimension as an irrelevant dimension that is excluded from the matrix operations. E.g., F ∇ 2 ĤF = ik Fij∇ 2 Ĥijkl F kl where the feature dimensions j and l remain free indices while the node indices i and k are eliminated as dummy indices. Finally, we take the trace of the resulting #feature×#feature matrix to get the desired value. 
and the second-order hessian will be:\n∇ 2 FmnF kl   (i,j)∈E,i̸ =j W ij y 2 ij   (27) = (i,j)∈E,i̸ =j W ij ∂y 2 ij ∂F mn ∂F kl (28) =2 ∂ ∂F kl   j W mj ( A mj d m F mn - A mj d m d j F jn )   (29) =2(q m δ mk - j W mj A mj d m d j δ jk )δ nl (30) =2(diag(q) -W ⊙ Ã) mk δ nl . (31\n)\nRemark 2. From Eq. (20) to Eq. ( 23), one can assume m / ∈ N (m), and thus W mm = 0. However, as we know, a self-loop is often added to A to facilitate stability by avoiding zero-degree nodes that cannot be normalized. This is not as problematic as it seems, though. Because (i,j)∈E,i̸ =j intrinsically excludes the diagonal terms, we can simply assign zero to the diagonal terms of W so that the term of j = m is still zero in Eq. ( 23), as defined in Eq. (5).\nTo minimize Ĥ, the gradient descent rule takes the form of Eq. ( 6). One may assume that when η is chosen to be small enough, Ĥ(F (k+1) ) ≤ Ĥ(F (k) ). For a formal justification, we have Theorem 1 to determine the convergence condition of η.\nTheorem 1. If F (k) follows the update rule in Eq. ( 6), where the ρ satisfies that dρ (y) dy 2 is nondecreasing for y ∈ (0, ∞), then a sufficient condition for H(F (k+1) ) ≤ H(F (k) ) is that the step size η satisfies 0 < η ≤ ∥diag(q (k) ) -W (k) ⊙ à + λI∥ -1 2 .\nProof. The descent of Ĥ(F ) can ensure the descent of H(F ) since H(F (k+1) ) ≤ Ĥ(F (k+1) ) ≤ Ĥ(F (k) ) = H(F (k) ). Therefore, we only need to prove Ĥ(F (k+1) ) ≤ Ĥ(F (k) ).\nNoting that Ĥ is a quadratic function and F (k+1) -F (k) = -η∇ Ĥ(F (k) ), then Ĥ(F (k+1) ) and Ĥ(F (k) ) can be connected using Taylor expansion 4 , where ∇ Ĥ and ∇ 2 Ĥ is given in Lemma 2:\nĤ(F (k+1) ) -Ĥ(F (k) ) (32) =tr ∇ Ĥ(F (k) ) ⊤ (F (k+1) -F (k) ) + 1 2 tr (F (k+1) -F (k) ) ⊤ ∇ 2 Ĥ(F (k) )(F (k+1) -F (k) ) (33) =tr -η∇ Ĥ(F (k) ) ⊤ ∇ Ĥ(F (k) ) + η 2 2 ∇ Ĥ(F (k) ) ⊤ ∇ 2 Ĥ(F (k) )∇ Ĥ(F (k) ) . (34\n)\nInsert ∇ 2 Ĥ(F (k) ) = 2(diag(q) -W (k) ⊙ à + λI) from Lemma 2 into the above equation and we can find a sufficient condition for Ĥ(F (k+1) ) ≤ Ĥ(F (k) ) to be\n-η + ∥diag(q) -W (k) ⊙ à + λI∥ 2 η 2 ≤ 0, (35\n) or η ≤ ∥diag(q) -W (k) ⊙ à + λI∥ -1 2 . (36\n)\nNow we prove that when taking the Quasi-Newton-IRLS step as in Eq. ( 7), the objective Ĥ is guaranteed to descend. Since the features in different dimensions are irrelevant, we simplify our notations as if feature dimension was 1. One may easily recover the general scenario by taking the trace.\nLemma 3. 2 Q-∇ 2 Ĥ(y) is positive semi-definite, where Ĥ = (i,j)∈E,i̸ =j W ij y 2 ij +λ i∈V ∥f i - f (0) i ∥ 2\n2 , Q = 2(diag(q) + λI), and q m = j W mj A mj /d m .\nProof. In Lemma 2, we have ∇ 2 Ĥ(y) = 2(diag(q) + λI -W ⊙ Ã), then\n2 Q -∇ 2 Ĥ(y) = 2(diag(q) + λI + W ⊙ Ã).(37)\nRecall how we derived Eq. ( 26) from Eq. ( 15), where we proved that\n(i,j)∈E,i̸ =j W ij ∥ f i √ d i - f j d j ∥ 2 2 = tr(F ⊤ (diag(q) -W ⊙ Ã)F ),(38)\nwhich holds for all F . Similarly, the equation still holds after flipping the sign before f j / d j and W ⊙ Ã. We then have this inequality: ∀F , ∀λ ≥ 0 0\n≤ (i,j)∈E,i̸ =j W ij ∥ f i √ d i + f j d j ∥ 2 2 = tr(F ⊤ (diag(q) + W ⊙ Ã)F ) (39) ≤tr(F ⊤ (diag(q) + W ⊙ à + λI)F ).(40)\nThus, (diag(q) + W ⊙ à + λI) ⪰ 0, and thus 2 Q -∇ 2 Ĥ(y) ⪰ 0.\nUsing Lemma 1 and Lemma 3 we can prove Theorem 2. Note that we continue to assume #feature= 1 for simplicity but without loss of generality 4 .\nTheorem 2. If F (k+1) follows update rule in Eq. ( 7), where ρ satisfies that dρ(y) dy 2 is non-decreasing ∀y ∈ (0, ∞), it is guaranteed that H(F (k+1) ) ≤ H(F (k) ).\nProof. 
Following the discussions in Theorem 1, we only need to prove Ĥ(F (k+1) ) ≤ Ĥ(F (k) ).For the quadratic Ĥ, we have:\nĤ(x) = Ĥ(y) + ∇ Ĥ(y) ⊤ (x -y) + 1 2 (x -y) ⊤ ∇ 2 Ĥ(y)(x -y).(41)\nWe can define Q\n(y) = 2 Q(y) in Lemma 3 such that Q(y) -∇ 2 Ĥ(y) ⪰ 0, then ∀z, z ⊤ Q(y)z ≥ z ⊤ ∇ 2 Ĥ(y)z.(42)\nThen an upper bound of Ĥ(x) can be found by inserting Eq. ( 42) into Eq. (41).\nĤ(x) ≤ Ĥ(y) + ∇ Ĥ(y) ⊤ (x -y) + 1 2 (x -y) ⊤ Q(y)(x -y).(43)\nThen, insert Q = 2 Q into Eq. ( 43). Note that Q := 2(diag(q) + λI), so Q ⪰ 0 and Q⊤ = Q.\nThereafter, the update rule x = y -Q-1 ∇ Ĥ(y) in Eq. (7) gives\nĤ(x) -Ĥ(y) ≤∇ Ĥ(y) ⊤ (x -y) + 1 2 (x -y) ⊤ Q(y)(x -y) (44) =∇ Ĥ(y) ⊤ (x -y) + 2 Q 1 2 (x -y) ⊤ Q 1 2 (x -y) (45) =2∇ Ĥ(y) ⊤ Q-1 ∇ Ĥ(y) -2∇ Ĥ(y) ⊤ ( Q-1 2 ) ⊤ Q-1 2 ∇ Ĥ(y) (46) =0.(47)\nTherefore, our QN-IRLS in Eq. ( 7) is guaranteed to descend." }, { "figure_ref": [], "heading": "D ADDITIONAL EXPERIMENT RESULTS", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "In this section, we present the experiment results that are not shown in the main paper due to space limits. Table 3 " }, { "figure_ref": [], "heading": "E ADAPTIVE ATTACK SETTINGS", "publication_ref": [ "b19", "b11", "b19" ], "table_ref": [], "text": "For the adaptive PGD attack we adpoted in the experiments, we majorly followed the algorithm in Mujkanovic et al. (2022) in the adaptive evasion attack. For the sake of completeness, we describe it below:\nIn summary, we consider the topology attack setting where the adjacency matrix A is perturbed by δA whose element δA ij ∈ {0, 1}. The budget B is defined as B ≥ ∥δA∥ 0 . The PGD attack involves first relaxing A from binary to continuous so that a gradient ascent attack can be conducted on the relaxed graph.\nDuring the attack, the minimization problem below is solved:\nδA ⋆ = arg min δA L attack (GNN θ (A + (I -2A) ⊙ δA, F ), y target ),(49)\nwhere L is carefully designed attack loss function (Geisler et al., 2021;Mujkanovic et al., 2022), A, F and y target are respectively the graph, node feature matrix and ground truth labels in the dataset, θ is the parameters of the GNN under attack which are not altered in the evasion attack setting.\n(I -2A) ⊙ δA is the calculated perturbation that \"flips\" the adjacency matrix between 0 and 1 when it is perturbed. The gradient of Lattack δA is computed and utilized to update the perturbation matrix δA.\nAfter the optimization problem is solved, δA is projected back to the feasible domain of δA ij ∈ {1}. The adjacency matrix serves as a probability matrix allowing a Bernoulli sampling of the binary adjacency matrix A ′ . The sampling is executed repeatedly so that an A ′ producing the strongest perturbation is finally generated." }, { "figure_ref": [ "fig_8" ], "heading": "F TRANSFER ATTACKS", "publication_ref": [], "table_ref": [], "text": "Figure 11 shows results of global evasion transfer attacks between different models on Cora. Our observations are summarized below:\n• The attacks generated by RUNG are stronger when applied to more robust models like SoftMedian, while are not strong against undefended or weakly defended models.\n• For ℓ 1 GNNs, the attacks are the strongest when transferred from ℓ 1 GNNs. This supports again our unified view on ℓ 1 GNNs. An exception is TWIRLS because it only has one attention layer and does not always converge to the actual ℓ 1 objective." 
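For completeness, the relax-optimize-sample loop of the adaptive PGD topology attack described in Appendix E can be sketched as follows. This is a deliberately simplified illustration (dense adjacency, a crude budget rescaling in place of the exact projection, and a single Bernoulli sample), and the callable gnn together with the tensors A, X, y are placeholders rather than objects from the released code:

```python
import torch

def pgd_topology_attack(gnn, A, X, y, budget, steps=200, lr=0.1):
    """Sketch of the adaptive PGD evasion attack.  gnn(A, X) -> logits is the
    frozen victim model (assumed differentiable w.r.t. the dense adjacency A),
    and y holds the labels used in the attack loss."""
    dA = torch.zeros_like(A, requires_grad=True)
    flip = 1.0 - 2.0 * A                      # (I - 2A): +1 adds an edge, -1 removes one
    for _ in range(steps):
        loss = torch.nn.functional.cross_entropy(gnn(A + flip * dA, X), y)  # attack loss
        grad, = torch.autograd.grad(loss, dA)
        with torch.no_grad():
            dA += lr * grad                   # gradient ascent on the relaxed perturbation
            dA.clamp_(0.0, 1.0)
            if dA.sum() > budget:             # simple stand-in for the exact budget projection
                dA *= budget / dA.sum()
    # single Bernoulli sample for brevity; the protocol samples repeatedly and
    # keeps the strongest binary perturbation
    flipped = torch.bernoulli(dA.detach())
    return A + flip * flipped
```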
}, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "G ADDITIONAL ABLATION STUDY OF RUNG G.1 HYPERPARAMETERS", "publication_ref": [], "table_ref": [], "text": "The choice of the hyperparameters γ and λ is crucial to the performance of RUNG. We therefore experimented with their different combinations and conducted adaptive attacks on Cora as shown in Fig. 12.\nRecall the formulation of RUNG in Eq.( 8):\nF (k+1) = (diag(q (k) ) + λI) -1 (W (k) ⊙ Ã)F (k) + λF (0) ,(50)\nwhere q\n(k) m = j W (k) mj A mj /d m , W (k) ij = 1 i̸ =j max(0, 1 2y (k) ij -1 2γ ) and y (k) ij = f (k) i √ di - f (k) j\n√ dj 2 . In the formulation, λ controls the intensity of the regularization in the graph smoothing. In our experiments, we tune λ := 1 1+λ which is normalized into (0, 1). In Figure 12, the optimal value of λ can be found almost always near 0.9 regardless of the attack budget. This indicates that our penalty function ρ γ is decoupled from γ which makes the tuning easier, contrary to the commonly used formulation of MCP Zhang (2010). On the other hand, γ has a more intricate impact on the performance of RUNG. Generally speaking, the smaller γ is, the more edges get pruned, which leads to higher robustness and a lower clean accuracy. We begin our discussion in three cases:\nSmall attack budget (0%, 5%, 10%). The performance is largely dependent on clean accuracy. Besides, when γ → ∞, RUNG becomes a state-of-the-art robust ℓ 1 model. Therefore, a small γ likely introduces more harm to the clean performance than robustness increments over ℓ 1 models. The optimal γ thus at least recovers the performance of ℓ 1 models.\nLarge attack budget (20%, 30%, 40%). In these cases, γ → ∞ is no longer a good choice because ℓ 1 models are beginning to suffer from the accumulated bias effect. The optimal γ is thus smaller (near 0.5). However, for fairness, we chose the same γ under different budgets in our experiments, so the reported RUNG fixes γ = 3. In reality, however, when we know the possible attack budgets in advance, we can tune γ for an even better performance.\nVery large attack budget (50%, 60%). We did not include these scenarios because almost all GNNs perform poorly in this region. However, we believe it can provide some insights into robust graph learning. Under these budgets, more than half of the edges are perturbed. In the context of robust statistics (e.g. mean estimation), the estimator will definitely break down. However, in our problem of graph estimation, the input node features offer extra information allowing us to exploit the graph information even beyond the breakdown point. In the \"peak\" near (0.9, 0.5), RUNG achieves > 70% accuracy which is higher than MLP. This indicates that the edge weighting of RUNG is capable of securely harnessing the graph information even in the existence of strong adversarial attacks. The \"ridge\" near a λ = 0.2, on the other hand, emerges because of MLP. When the regularization dominates, λ → ∞, and λ → 0. A small λ is then connected to a larger emphasis on the input node feature prior. Under large attack budgets, MLP delivers relatively good estimation, so a small λ is beneficial." }, { "figure_ref": [], "heading": "C COMPUTATION EFFICIENCY", "publication_ref": [], "table_ref": [], "text": "Our RUNG model preserves advantageous efficiency even adopting the quasi-Newton IRLS algorithm." 
}, { "figure_ref": [], "heading": "C.1 TIME COMPLEXITY ANALYSIS", "publication_ref": [], "table_ref": [], "text": "Each RUNG layer involves computing W , q, and the subsequent aggregations. We elaborate on them one by one. We denote the number of feature dimensions d , the number of nodes n, and the number of edges m, which are assumed to satisfy m ≫ 1, n ≫ 1 and d ≫ 1. The number of layers is denoted as k . The asymptotic computation complexity is denoted as O(•).\nis the edge weighting matrix dependent on the node feature matrix F . The computation of\n. W ij only needs computing when (i, j) ∈ E, because ∀(i, j) / ∈ E, W ij will be masked out by A or à anyways. Each element of W involves time of O(d) and m elements are needed. In total, W costs O(md ), and\nis the inverse Hessian in our quasi-Newton IRLS. Because Q is designed to be a diagonal matrix, its inverse can be evaluated as element-wise reciprocal which is efficient. As for q :\nComputation of aggregation. An RUNG layer follows\nwhich combines the quantities calculated above. An extra graph aggregation realized by the matrix multiplication between W ⊙ à and F is required, costing O(md ). The subsequent addition to F (0) and the multiplication to the diagonal Q-1 both cost O(nd ).\nStacking layers. RUNG unrolls the QN-IRLS optimization procedure, which has multiple iterations. Therefore, the convergence increase that QN-IRLS introduces allows a RUNG with fewer layers and increases the overall complexity. It is worth noting that the QN-IRLS utilizes a diagonal approximated Hessian, and thus the computation per iteration is also kept efficient as discussed above.\nSumming up all the costs, we have the total computational complexity of our RUNG, O((m + n)kd ).\nOur RUNG thus scales well to larger graph datasets such as ogbn-arxiv." }, { "figure_ref": [], "heading": "Space Complexity Analysis", "publication_ref": [], "table_ref": [], "text": "The only notable extra storage cost is W whose sparse layout takes up O(m). This is the same order of size as the adjacency matrix itself, thus not impacting the total asymptotic complexity." }, { "figure_ref": [], "heading": "C.2 ALTERNATIVE PERSPECTIVE", "publication_ref": [], "table_ref": [], "text": "In fact, the above analysis can be simplified when we look at the local aggregation behavior of RUNG. For node i, it's updated via aggregation\nThe summation over neighbors' f j will give O(m) in the total time complexity in each feature dimension, and W ij involves O(d ) computations for each neighbor. This sums up to O(md ) as well. Essentially, the high efficiency of RUNG originates from that every edge weighting in our model involves only the 2 nodes on this edge. " }, { "figure_ref": [], "heading": "G.2 GNN LAYERS", "publication_ref": [], "table_ref": [], "text": "In RUNG, QN-IRLS is unrolled into GNN layers. We would naturally expect RUNG to have enough number of layers so that the estimator converges as desired. We conducted an ablation study on the performance (clean and adversarial) of RUNG with different layer numbers and the results are shown in Fig. Figure 13. We make the following observations:\n• As the layer number increases, RUNG exhibits better performance. This verifies the effectiveness of our proposed RUGE, as well as the stably converging QN-IRLS.\n• The performance of RUNG can achieve a reasonably good level even with a small layer number (3-5 layers) with accelerated convergence powered by QN-IRLS. 
This can further reduce the computation complexity of RUNG.\nFigure 13: The performance dependence of RUNG on the number of aggregation layers." } ]
Despite the existence of numerous defenses, the adversarial robustness of Graph Neural Networks (GNNs) has been called into question by strong adaptive attacks that expose a false sense of security. In this work, we analyze representative robust GNNs from a unified robust estimation point of view to understand their robustness and limitations. Our analysis of estimation bias motivates the design of a robust and unbiased graph signal estimator. We then develop an efficient Quasi-Newton iteratively reweighted least squares algorithm to solve the estimation problem, which unfolds as robust unbiased aggregation layers in GNNs with a theoretical convergence guarantee. Comprehensive experiments confirm the strong robustness of the proposed model, and an ablation study provides a deeper understanding of its advantages.
ROBUST GRAPH NEURAL NETWORKS VIA UNBIASED AGGREGATION
[ { "figure_caption": "Notation. Let G = {V, E} be a graph with node set V = {v 1 , . . . , v n } and edge set E = {e 1 , . . . , e m }. The adjacency matrix of G is denoted as A ∈ {0, 1} n×n and the graph Laplacian matrix is L = D -A. D = diag(d 1 , . . . , d n ) is the degree matrix where d i = |N (i)| and N (i)", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Different mean estimators in the presence of outliers.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: dρ(y) dy 2 of different penalties. RUNG uses RUGE.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Convergence of our QN-IRLS compared to IRLS.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Bias induced by different attack budgets.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Distribution of feature difference on attacked edges.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Transfer global attack from different surrogate models to our RUNG on Cora ML.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: The trajectory of our MCP-based mean estimator.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Transfer global attack between different model pairs on Cora.", "figure_data": "", "figure_id": "fig_8", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Adaptive local attack on Cora ML. The best and second are marked.", "figure_data": "Model0%20%50%100%150%200%MLP72.6 ± 6.472.6 ± 6.472.6 ± 6.472.6 ± 6.4 72.6 ± 6.4 72.6 ± 6.4GCN82.7 ± 4.9 40.7 ± 10.2 12.0 ± 6.22.7 ± 2.50.0 ± 0.00.0 ± 0.0APPNP84.7 ± 6.8 50.0 ± 13.0 27.3 ± 6.514.0 ± 5.33.3 ± 3.00.7 ± 1.3GAT80.7 ± 10.0 30.7 ± 16.1 16.0 ± 12.2 11.3 ± 4.51.3 ± 1.62.0 ± 1.6GNNGuard82.7 ± 6.7 44.0 ± 11.6 30.7 ± 11.6 14.0 ± 6.85.3 ± 3.42.0 ± 2.7RGCN84.6 ± 4.046.0 ± 9.318.0 ± 8.16.0 ± 3.90.0 ± 0.00.0 ± 0.0GRAND84.0 ± 6.847.3 ± 9.018.7 ± 9.17.3 ± 4.91.3 ± 1.60.0 ± 0.0ProGNN84.7 ± 6.2 47.3 ± 10.4 21.3 ± 7.84.0 ± 2.50.0 ± 0.00.0 ± 0.0Jaccard-GCN81.3 ± 5.046.0 ± 6.817.3 ± 4.94.7 ± 3.40.7 ± 1.30.7 ± 1.3SoftMedian80.0 ± 10.2 72.7 ± 13.7 62.7 ± 12.7 46.7 ± 11.08.0 ± 4.58.7 ± 3.4TWIRLS83.3 ± 7.371.3 ± 8.6 60.7 ± 11.0 36.0 ± 8.8 20.7 ± 10.4 12.0 ± 6.9TWIRLS-T82.0 ± 4.570.7 ± 4.462.7 ± 7.454.7 ± 6.2 44.0 ± 11.2 40.7 ± 11.8RUNG-ℓ 1 (ours) 84.0 ± 6.872.7 ± 7.1 62.7 ± 11.2 53.3 ± 8.222.0 ± 9.314.0 ± 7.4RUNG (ours)84.0 ± 5.375.3 ± 6.9 72.7 ± 8.5 70.7 ± 10.6 69.3 ± 9.869.3 ± 9.0", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Adaptive global attack on Cora ML. 
The best and second are marked.", "figure_data": "ModelClean5%10%20%30%40%MLP65.0 ± 1.065.0 ± 1.065.0 ± 1.065.0 ± 1.065.0 ± 1.065.0 ± 1.0GCN85.0 ± 0.475.3 ± 0.569.6 ± 0.560.9 ± 0.754.2 ± 0.648.4 ± 0.5APPNP86.3 ± 0.4 75.8 ± 0.569.7 ± 0.760.3 ± 0.953.8 ± 1.249.0 ± 1.6GAT83.5 ± 0.575.8 ± 0.871.2 ± 1.265.0 ± 0.960.5 ± 0.956.7 ± 0.9GNNGuard83.1 ± 0.774.6 ± 0.770.2 ± 1.063.1 ± 1.157.5 ± 1.651.0 ± 1.2RGCN85.7 ± 0.475.0 ± 0.869.1 ± 0.459.8 ± 0.752.8 ± 0.746.1 ± 0.7GRAND86.1 ± 0.776.2 ± 0.870.7 ± 0.761.6 ± 0.756.7 ± 0.851.9 ± 0.9ProGNN85.6 ± 0.576.5 ± 0.771.0 ± 0.563.0 ± 0.756.8 ± 0.751.3 ± 0.6Jaccard-GCN83.7 ± 0.773.9 ± 0.568.3 ± 0.760.0 ± 1.154.0 ± 1.749.1 ± 2.4SoftMedian85.0 ± 0.778.6 ± 0.375.5 ± 0.969.5 ± 0.562.8 ± 0.858.1 ± 0.7TWIRLS84.2 ± 0.677.3 ± 0.872.9 ± 0.366.9 ± 0.262.4 ± 0.658.7 ± 1.1TWIRLS-T82.8 ± 0.576.8 ± 0.673.2 ± 0.467.7 ± 0.463.8 ± 0.260.8 ± 0.3RUNG-ℓ 1 (ours) 85.8 ± 0.578.4 ± 0.474.3 ± 0.368.1 ± 0.663.5 ± 0.759.8 ± 0.8RUNG (ours)84.6 ± 0.5 78.9 ± 0.4 75.7 ± 0.2 71.8 ± 0.4 67.8 ± 1.3 65.1 ± 1.2", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. AI open, 1:57-81, 2020. Dingyuan Zhu, Ziwei Zhang, Peng Cui, and Wenwu Zhu. Robust graph convolutional networks against adversarial attacks. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, 2019.", "figure_data": "Hui Zou. The adaptive lasso and its oracle properties. Journal of the American Statistical Associa-tion, 101(476):1418-1429, 2006.A BIAS ACCUMULATION OF ℓ 1 MODELSA.1 DETAILS OF THE NUMERICAL SIMULATION SETTINGSIn section 2, we conducted a numerical simulation of mean estimation on synthetic data x i . Themean estimators are formulated as minimization operatorsz = arg min", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "and Table4are the results of adaptive local and global attacks on Citeseer, referred to in Section 4.2. Figure10is the experiment result of our RUNG attacked by transfer attacks generated on different surrogate models as mentioned in Section 4.3. Adaptive local attack on Citeseer. The best and second are marked.", "figure_data": "Model0%20%50%100%150%200%MLP69.3 ± 2.569.3 ± 2.569.3 ± 2.569.3 ± 2.5 69.3 ± 2.5 69.3 ± 2.5GCN79.3 ± 3.344.7 ± 8.827.3 ± 7.76.7 ± 3.70.7 ± 1.30.0 ± 0.0APPNP80.7 ± 4.4 50.0 ± 6.739.3 ± 6.5 16.7 ± 12.3 16.0 ± 8.00.0 ± 0.0GAT74.7 ± 5.0 15.3 ± 17.5 13.3 ± 13.5 12.0 ± 11.3 12.7 ± 6.59.3 ± 5.3GNNGuard74.7 ± 4.5 46.0 ± 10.4 32.7 ± 11.0 18.0 ± 7.56.0 ± 3.94.0 ± 3.9RGCN80.0 ± 2.146.7 ± 9.432.7 ± 8.810.0 ± 5.20.7 ± 1.30.7 ± 1.3GRAND77.3 ± 2.556.7 ± 4.244.0 ± 3.916.7 ± 6.30.7 ± 1.30.0 ± 0.0ProGNN80.0 ± 2.142.7 ± 7.426.0 ± 5.310.0 ± 4.70.7 ± 1.30.0 ± 0.0Jaccard-GCN78.7 ± 3.446.7 ± 7.328.0 ± 7.56.7 ± 4.70.7 ± 1.30.0 ± 0.0SoftMedian78.7 ± 3.469.3 ± 6.566.0 ± 7.156.0 ± 4.48.7 ± 6.93.3 ± 3.0TWIRLS77.3 ± 2.569.3 ± 1.368.7 ± 1.657.3 ± 2.536.7 ± 4.726.7 ± 4.7TWIRLS-T76.0 ± 3.370.7 ± 2.568.7 ± 2.762.0 ± 3.452.7 ± 5.347.3 ± 8.3RUNG-ℓ 1 (ours) 80.0 ± 3.7 75.3 ± 4.5 73.3 ± 3.067.3 ± 3.336.0 ± 9.326.0 ± 8.3RUNG (ours)77.3 ± 1.370.7 ± 5.769.3 ± 6.867.3 ± 7.164.0 ± 5.761.3 ± 5.8", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Adaptive global attack on Citeseer. 
The best and second are marked.", "figure_data": "ModelClean5%10%20%30%40%MLP67.7 ± 0.367.7 ± 0.367.7 ± 0.3 67.7 ± 0.3 67.7 ± 0.3 67.7 ± 0.3GCN74.8 ± 1.266.1 ± 1.060.9 ± 0.853.0 ± 1.047.0 ± 0.841.2 ± 1.1APPNP75.3 ± 1.1 65.8 ± 0.960.7 ± 1.652.3 ± 1.646.0 ± 2.041.2 ± 2.2GAT73.4 ± 1.265.4 ± 1.360.4 ± 1.452.6 ± 2.547.2 ± 3.441.2 ± 4.8GNNGuard72.4 ± 1.165.6 ± 0.961.8 ± 1.455.6 ± 1.451.0 ± 1.347.3 ± 1.3RGCN74.4 ± 1.066.0 ± 0.860.6 ± 0.952.5 ± 0.846.1 ± 0.940.2 ± 1.0GRAND74.8 ± 0.666.6 ± 0.761.8 ± 0.753.6 ± 1.147.4 ± 1.242.2 ± 0.9ProGNN74.2 ± 1.365.6 ± 1.160.3 ± 1.152.7 ± 1.446.2 ± 0.940.8 ± 0.6Jaccard-GCN74.8 ± 1.266.3 ± 1.260.9 ± 1.253.3 ± 0.946.5 ± 0.941.1 ± 1.0SoftMedian74.6 ± 0.768.0 ± 0.764.4 ± 0.959.3 ± 1.155.2 ± 2.051.9 ± 2.1TWIRLS74.2 ± 0.869.2 ± 0.866.4 ± 0.761.6 ± 0.958.1 ± 1.251.8 ± 1.5TWIRLS-T73.7 ± 1.169.1 ± 1.266.4 ± 1.062.8 ± 1.560.0 ± 1.457.4 ± 1.5", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Ruiqi Feng; Zhichao Hou; Tyler Derr; Xiaorui Liu
[ { "authors": "Amir Beck; Shoham Sabach", "journal": "Journal of Optimization Theory and Applications", "ref_id": "b0", "title": "Weiszfeld's method: Old and new results", "year": "2015" }, { "authors": "Aleksandar Bojchevski; Stephan Günnemann", "journal": "", "ref_id": "b1", "title": "Certifiable robustness to graph perturbations", "year": "2019" }, { "authors": "Emmanuel J Candès; Michael B Wakin; Stephen P Boyd", "journal": "Journal of Fourier Analysis and Applications", "ref_id": "b2", "title": "Enhancing sparsity by reweighted l 1 minimization", "year": "2008" }, { "authors": "Jinyin Chen; Xiang Lin; Hui Xiong; Yangyang Wu; Haibin Zheng; Qi Xuan", "journal": "IEEE Transactions on Computational Social Systems", "ref_id": "b3", "title": "Smoothing adversarial training for gnn", "year": "2020" }, { "authors": "Zhijie Deng; Yinpeng Dong; Jun Zhu", "journal": "", "ref_id": "b4", "title": "Batch virtual adversarial training for graph convolutional networks", "year": "2019" }, { "authors": "L David; Donoho", "journal": "IEEE transactions on information theory", "ref_id": "b5", "title": "De-noising by soft-thresholding", "year": "1995" }, { "authors": "Negin Entezari; Saba A Al-Sayouri; Amirali Darvishzadeh; Evangelos E Papalexakis", "journal": "", "ref_id": "b6", "title": "All you need is low (rank) defending against adversarial attacks on graphs", "year": "2020" }, { "authors": "Jianqing Fan; Runze Li", "journal": "Journal of the American Statistical Association", "ref_id": "b7", "title": "Variable selection via nonconcave penalized likelihood and its oracle properties", "year": "2001" }, { "authors": "Wenzheng Feng; Jie Zhang; Yuxiao Dong; Yu Han; Huanbo Luan; Qian Xu; Qiang Yang; Evgeny Kharlamov; Jie Tang", "journal": "", "ref_id": "b8", "title": "Graph random neural networks for semi-supervised learning on graphs", "year": "2020" }, { "authors": "Johannes Gasteiger; Aleksandar Bojchevski; Stephan Günnemann", "journal": "", "ref_id": "b9", "title": "Predict then propagate: Graph neural networks meet personalized pagerank", "year": "2018" }, { "authors": "Simon Geisler; Daniel Zügner; Stephan Günnemann", "journal": "", "ref_id": "b10", "title": "Reliable graph neural networks via robust aggregation", "year": "2020" }, { "authors": "Simon Geisler; Tobias Schmidt; Daniel Hakan S ¸irin; Aleksandar Zügner; Stephan Bojchevski; Günnemann", "journal": "", "ref_id": "b11", "title": "Robustness of graph neural networks at scale", "year": "2021" }, { "authors": "Jonas Berg Hansen; Filippo ; Maria Bianchi", "journal": "", "ref_id": "b12", "title": "Total variation graph neural networks", "year": "2023" }, { "authors": "W Paul; Roy E Holland; Welsch", "journal": "Communications in Statistics-theory and Methods", "ref_id": "b13", "title": "Robust regression using iteratively reweighted least-squares", "year": "1977" }, { "authors": "P J Huber", "journal": "Wiley", "ref_id": "b14", "title": "Robust Statistics", "year": "2004" }, { "authors": "Wei Jin; Yao Ma; Xiaorui Liu; Xianfeng Tang; Suhang Wang; Jiliang Tang", "journal": "", "ref_id": "b15", "title": "Graph structure learning for robust graph neural networks", "year": "2020" }, { "authors": "N Thomas; Max Kipf; Welling", "journal": "", "ref_id": "b16", "title": "Semi-supervised classification with graph convolutional networks", "year": "2017" }, { "authors": "Xiaorui Liu; Wei Jin; Yao Ma; Yaxin Li; Hua Liu; Yiqi Wang; Ming Yan; Jiliang Tang", "journal": "", "ref_id": "b17", "title": "Elastic graph neural networks", "year": "2021" }, { "authors": 
"Yao Ma; Xiaorui Liu; Tong Zhao; Yozen Liu; Jiliang Tang; Neil Shah", "journal": "", "ref_id": "b18", "title": "A unified view on graph neural networks as graph signal denoising", "year": "2021" }, { "authors": "Felix Mujkanovic; Simon Geisler; Stephan Günnemann; Aleksandar Bojchevski", "journal": "", "ref_id": "b19", "title": "Are defenses for graph neural networks robust", "year": "2022" }, { "authors": "Babatounde Moctard Oloulade; Jianliang Gao; Jiamin Chen; Tengfei Lyu; Raeed Al-Sabri", "journal": "Tsinghua Science and Technology", "ref_id": "b20", "title": "Graph neural architecture search: A survey", "year": "2021" }, { "authors": "Prithviraj Sen; Galileo Namata; Mustafa Bilgic; Lise Getoor; Brian Galligher; Tina Eliassi-Rad", "journal": "AI magazine", "ref_id": "b21", "title": "Collective classification in network data", "year": "2008" }, { "authors": "Robert Tibshirani", "journal": "Journal of the Royal Statistical Society. Series B (Methodological)", "ref_id": "b22", "title": "Regression shrinkage and selection via the lasso", "year": "1996" }, { "authors": "Rohan Varma; Harlin Lee; Jelena Kovačević; Yuejie Chi", "journal": "IEEE transactions on signal and information processing over networks", "ref_id": "b23", "title": "Vector-valued graph trend filtering with non-convex penalties", "year": "2019" }, { "authors": "Petar Veličković; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Lio; Yoshua Bengio", "journal": "", "ref_id": "b24", "title": "Graph attention networks", "year": "2017" }, { "authors": "Huijun Wu; Chen Wang; Yuriy Tyshetskiy; Andrew Docherty; Kai Lu; Liming Zhu", "journal": "", "ref_id": "b25", "title": "Adversarial examples on graph data: Deep insights into attack and defense", "year": "2019" }, { "authors": "Kaidi Xu; Hongge Chen; Sijia Liu; Pin-Yu Chen; Tsui-Wei Weng; Mingyi Hong; Xue Lin", "journal": "", "ref_id": "b26", "title": "Topology attack and defense for graph neural networks: An optimization perspective", "year": "2019" }, { "authors": "Yongyi Yang; Tang Liu; Yangkun Wang; Jinjing Zhou; Quan Gan; Zhewei Wei; Zheng Zhang; Zengfeng Huang; David Wipf", "journal": "", "ref_id": "b27", "title": "Graph neural networks inspired by classical iterative algorithms", "year": "2021" }, { "authors": "Cun-Hui Zhang", "journal": "The Annals of Statistics", "ref_id": "b28", "title": "Nearly unbiased variable selection under minimax concave penalty", "year": "2010" }, { "authors": "Xiang Zhang; Marinka Zitnik", "journal": "", "ref_id": "b29", "title": "Gnnguard: Defending graph neural networks against adversarial attacks", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 108, 246.17, 177.76, 13.47 ], "formula_id": "formula_0", "formula_text": "sum mi = 1 Z j∈N (i) w(f j , m i )f j , ∀i ∈ V" }, { "formula_coordinates": [ 3, 170.13, 327.61, 333.87, 23.9 ], "formula_id": "formula_1", "formula_text": "2λ (i,j)∈E ρ(∥ fi -fj ∥ 2 ) + i∈V ∥ fi -f (0) ∥ 2 2 , fi = (1 + λd i ) -1 2 f i .(1)" }, { "formula_coordinates": [ 3, 114.41, 384.47, 389.59, 34.04 ], "formula_id": "formula_2", "formula_text": "λ 1 (i,j)∈E f i √ d i - f j d j p + λ 2 (i,j)∈E f i √ d i - f j d j 2 2 + 1 2 i∈V ∥f i -f (0) i ∥ 2 2 , p ∈ {1, 2},(2)" }, { "formula_coordinates": [ 3, 279.56, 633.28, 118.58, 18.89 ], "formula_id": "formula_3", "formula_text": "(i,j)∈E ∥f i / √ d i -f j / d j ∥ 2 2" }, { "formula_coordinates": [ 4, 260.95, 169.35, 109.75, 14.28 ], "formula_id": "formula_4", "formula_text": "f (k+1) i = j∈N (i) w ij f (k) j" }, { "formula_coordinates": [ 5, 167.44, 175.12, 336.56, 27.88 ], "formula_id": "formula_5", "formula_text": "arg min F H(F ) = (i,j)∈E ρ γ ( f i √ d i - f j d j 2 ) + λ i∈V ∥f i -f (0) i ∥ 2 2 ,(3)" }, { "formula_coordinates": [ 5, 247.54, 231.64, 256.47, 29.62 ], "formula_id": "formula_6", "formula_text": "ρ γ (y) = y -y 2 2γ if y < γ γ 2 if y ≥ γ (4)" }, { "formula_coordinates": [ 5, 171.41, 617.7, 332.59, 30.47 ], "formula_id": "formula_7", "formula_text": "Ĥ(k) (F ) = (i,j)∈E,i̸ =j W (k) ij f i √ d i - f j √ d j 2 2 + λ i∈V ∥f i -f (0) i ∥ 2 2 ,(5)" }, { "formula_coordinates": [ 5, 134.9, 655.45, 352.1, 20.59 ], "formula_id": "formula_8", "formula_text": "W (k) ij = 1 i̸ =j dργ (yij ) dy 2 ij yij =y (k) ij 1 , ρ γ (•) is the MCP function and y (k) ij = f (k) i √ di - f (k) j √ dj 2 ." }, { "formula_coordinates": [ 6, 122.96, 105.16, 381.04, 11.5 ], "formula_id": "formula_9", "formula_text": "F (k+1) = F (k) -η∇ Ĥ(k) (F (k) ) = F (k) -η 2( Q(k) -W (k) ⊙ Ã)F (k) -2λF (0) ,(6)" }, { "formula_coordinates": [ 6, 137.96, 125.22, 227.42, 14.28 ], "formula_id": "formula_10", "formula_text": "Q(k) = 2(diag(q (k) ) + λI), q (k) m = j W (k) mj A mj /d m ," }, { "formula_coordinates": [ 6, 108, 167.46, 396, 25.07 ], "formula_id": "formula_11", "formula_text": "H(F (k+1) ) ≤ H(F (k) ) is that the step size η satisfies 0 < η ≤ ∥diag(q (k) ) -W (k) ⊙ Ã + λI∥ -1 2 ." }, { "formula_coordinates": [ 6, 108, 265.69, 396, 22.9 ], "formula_id": "formula_12", "formula_text": "F (k+1) = F (k) - (∇ 2 Ĥ(k) (F (k) )) -1 ∇ Ĥ(k) (F (k)" }, { "formula_coordinates": [ 6, 108, 335.4, 396, 11.5 ], "formula_id": "formula_13", "formula_text": "∇ 2 Ĥ(k) (F (k) ) = 2(diag(q (k) )-W (k) ⊙ Ã+λI) by the diagonal matrix Q(k) = 2(diag(q (k) )+λI)" }, { "formula_coordinates": [ 6, 199.47, 368.71, 304.53, 11.5 ], "formula_id": "formula_14", "formula_text": "F (k+1) = 2( Q(k) ) -1 (W (k) ⊙ Ã)F (k) + λF (0) ,(7)" }, { "formula_coordinates": [ 6, 108, 569.81, 267.24, 62.38 ], "formula_id": "formula_15", "formula_text": "F (k+1) = (diag(q (k) ) + λI) -1 (W (k) ⊙ Ã)F (k) + λF (0) , (8) where q (k) m = j W (k) mj A mj /d m , W (k) ij = 1 i̸ =j max(0, 1 2y (k) ij -1 2γ ) and y (k) ij = f (k) i √ di - f (k) j √ dj 2 ." }, { "formula_coordinates": [ 6, 269.05, 657.61, 106.18, 17.13 ], "formula_id": "formula_16", "formula_text": "W (k) ij = dρ(y) dy 2 | y=y (k) ij . It is" }, { "formula_coordinates": [ 6, 471.03, 676.2, 32.97, 14.07 ], "formula_id": "formula_17", "formula_text": "(k) ij ≥ γ," }, { "formula_coordinates": [ 7, 108, 167.93, 396, 23.68 ], "formula_id": "formula_18", "formula_text": "(F (k+1) = 1 1+λ ÃF (k) + λ 1+λ F (0) ) and GCN (F (k+1) = ÃF (k) ) if chosing λ = 0." 
}, { "formula_coordinates": [ 12, 280.64, 280.09, 219.49, 30.32 ], "formula_id": "formula_19", "formula_text": "z n+m i η(z -x i ), (9" }, { "formula_coordinates": [ 12, 500.13, 290.83, 3.87, 8.64 ], "formula_id": "formula_20", "formula_text": ")" }, { "formula_coordinates": [ 12, 261.24, 360.25, 242.76, 30.46 ], "formula_id": "formula_21", "formula_text": "z (k+1) = i w (k) i x i i w (k) i ,(10)" }, { "formula_coordinates": [ 12, 143.67, 396.27, 72.7, 15.13 ], "formula_id": "formula_22", "formula_text": "(k) i = 1 ∥z (k) -xi∥2" }, { "formula_coordinates": [ 12, 108, 397.83, 396, 34.51 ], "formula_id": "formula_23", "formula_text": "(k+1) = z (k) - α∇ z i ∥z -x i ∥ 2 = z (k+1) -α i z (k) -xi ∥z (k) -xi∥2 . Taking α = 1 i w (k) i" }, { "formula_coordinates": [ 12, 178.29, 467.04, 325.71, 51.88 ], "formula_id": "formula_24", "formula_text": "z (k+1) = z (k) -α∇ z i ρ γ (∥z -x i ∥ 2 ) (11) = z (k) -α i max(0, 1 ∥z (k) -x i ∥ 2 - 1 γ )(z (k) -x i ). (12" }, { "formula_coordinates": [ 12, 499.85, 499.33, 4.15, 8.64 ], "formula_id": "formula_25", "formula_text": ")" }, { "formula_coordinates": [ 12, 108, 525.16, 396, 59.05 ], "formula_id": "formula_26", "formula_text": "1 i wi yields a similar reweighting iteration z (k+1) = i w (k) i xi i w (k) i . ℓ 2 estimator." }, { "formula_coordinates": [ 12, 231.12, 602.84, 272.88, 49.18 ], "formula_id": "formula_27", "formula_text": "z (k+1) = z (k) -α∇ z i ∥z -x i ∥ 2 2 (13) = z (k) -α i (z (k) -x i ),(14)" }, { "formula_coordinates": [ 13, 110.87, 456.04, 393.13, 24.18 ], "formula_id": "formula_28", "formula_text": "H since H(F (k+1) ) ≤ Ĥ(F (k+1) ) + C ≤ Ĥ(F (k) ) + C = H(F (k) )." }, { "formula_coordinates": [ 13, 252.33, 538.92, 124.68, 11.28 ], "formula_id": "formula_29", "formula_text": "F (k+1) = F (k) -Q-1 ∇ Ĥ(F (" }, { "formula_coordinates": [ 13, 386.56, 573.39, 93.59, 21.15 ], "formula_id": "formula_30", "formula_text": "y ij := fi √ di - fj √ dj 2" }, { "formula_coordinates": [ 13, 108, 594.66, 205.04, 14.28 ], "formula_id": "formula_31", "formula_text": "H(F ) = (i,j)∈E,i̸ =j ρ(y ij ) + λ i∈V ∥f i -f(0)" }, { "formula_coordinates": [ 13, 163.75, 618.67, 340.25, 23.66 ], "formula_id": "formula_32", "formula_text": "H(F ) ≤ Ĥ(F ) + C = (i,j)∈E,i̸ =j W (k) ij y 2 ij + λ i∈V ∥f i -f (0) i ∥ 2 2 + C,(15)" }, { "formula_coordinates": [ 13, 135.9, 653.62, 344.27, 24.68 ], "formula_id": "formula_33", "formula_text": "W (k) ij = ∂ρ(y) ∂y 2 y=y (k) ij and y (k) ij = f (k) i √ di - f (k) j √ dj 2 and C = H(F (k) ) -Ĥ(F (k)" }, { "formula_coordinates": [ 13, 108, 678.41, 385.25, 41.99 ], "formula_id": "formula_34", "formula_text": "F = F (k) . Proof. Let v = y 2 and define ψ(v) := ρ(y) = ρ( √ v). Then ψ is concave since dψ(v) dv = dρ(y)" }, { "formula_coordinates": [ 13, 355.22, 721.48, 148.78, 13.72 ], "formula_id": "formula_35", "formula_text": "ψ(v) ≤ ψ(v 0 ) + ψ ′ (ν) ν=v0 (v -v 0 ). Substitute v = y 2 , v 0 = y 2 0 , we obtain: ρ(y) ≤ y 2 ∂ρ(y) ∂y 2 y=y0 -y 2 0 ∂ρ(y) ∂y 2 y=y0 + ρ(y 0 ) = y 2 ∂ρ(y) ∂y 2 y=y0 + C(y 0 )(16)" }, { "formula_coordinates": [ 14, 108, 133.95, 396, 29.6 ], "formula_id": "formula_36", "formula_text": "(k) ij , we can get ρ(y ij ) ≤ W (k) ij y 2 ij + C(y (k) ij ) which takes the equality at y ij = y (k) ij ." }, { "formula_coordinates": [ 14, 108, 219.87, 258.81, 14.28 ], "formula_id": "formula_37", "formula_text": "Lemma 2. 
For Ĥ = (i,j)∈E,i̸ =j W ij y 2 ij + λ i∈V ∥f i -f(0)" }, { "formula_coordinates": [ 14, 108, 235.75, 396, 31.19 ], "formula_id": "formula_38", "formula_text": "F 4 satisfy ∇ Fmn Ĥ(F ) = 2 (diag(q) -W ⊙ Ã + λI)F -λF (0) mn ,(17)" }, { "formula_coordinates": [ 14, 196.32, 281.03, 307.68, 17.68 ], "formula_id": "formula_39", "formula_text": "∇ Fmn ∇ F kl Ĥ(F ) = 2 diag(q) -W ⊙ Ã + λI mk ,(18)" }, { "formula_coordinates": [ 14, 135.77, 302.54, 187.39, 16.53 ], "formula_id": "formula_40", "formula_text": "q m = j W mj A mj /d m and Ãij = Aij √ didj" }, { "formula_coordinates": [ 14, 271.36, 342.57, 96.66, 28.06 ], "formula_id": "formula_41", "formula_text": "y 2 ij := fi √ di - fj √ dj 2 2" }, { "formula_coordinates": [ 14, 118.52, 370.01, 385.48, 292.2 ], "formula_id": "formula_42", "formula_text": "(i,j)∈E,i̸ =j W ij y 2 ij will be ∇ Fmn   (i,j)∈E,i̸ =j W ij y 2 ij   (19) = (i,j)∈E,i̸ =j W ij ∂y 2 ij ∂F mn (20) = (m,j)∈E W mj ∂y 2 mj ∂F mn (21) = (m,j)∈E W mj ∂ Fmn √ dm - Fjn √ dj 2 ∂F mn (22) = j∈N (m) 2W mj ( F mn d m - F jn d m d j ) (23) =2 j W mj ( A mj d m F mn - A mj d m d j F jn ) (24) =2( j W mj A mj d m )F mn -2((W ⊙ Ã)F ) mn (25) = 2(diag(q) -W ⊙ Ã)F mn ,(26)" }, { "formula_coordinates": [ 15, 203.67, 99.07, 300.33, 158.45 ], "formula_id": "formula_43", "formula_text": "∇ 2 FmnF kl   (i,j)∈E,i̸ =j W ij y 2 ij   (27) = (i,j)∈E,i̸ =j W ij ∂y 2 ij ∂F mn ∂F kl (28) =2 ∂ ∂F kl   j W mj ( A mj d m F mn - A mj d m d j F jn )   (29) =2(q m δ mk - j W mj A mj d m d j δ jk )δ nl (30) =2(diag(q) -W ⊙ Ã) mk δ nl . (31" }, { "formula_coordinates": [ 15, 499.85, 248.19, 4.15, 8.64 ], "formula_id": "formula_44", "formula_text": ")" }, { "formula_coordinates": [ 15, 114.14, 505.27, 389.86, 76.15 ], "formula_id": "formula_45", "formula_text": "Ĥ(F (k+1) ) -Ĥ(F (k) ) (32) =tr ∇ Ĥ(F (k) ) ⊤ (F (k+1) -F (k) ) + 1 2 tr (F (k+1) -F (k) ) ⊤ ∇ 2 Ĥ(F (k) )(F (k+1) -F (k) ) (33) =tr -η∇ Ĥ(F (k) ) ⊤ ∇ Ĥ(F (k) ) + η 2 2 ∇ Ĥ(F (k) ) ⊤ ∇ 2 Ĥ(F (k) )∇ Ĥ(F (k) ) . (34" }, { "formula_coordinates": [ 15, 499.85, 566.16, 4.15, 8.64 ], "formula_id": "formula_46", "formula_text": ")" }, { "formula_coordinates": [ 15, 215.19, 621.28, 284.66, 12.28 ], "formula_id": "formula_47", "formula_text": "-η + ∥diag(q) -W (k) ⊙ Ã + λI∥ 2 η 2 ≤ 0, (35" }, { "formula_coordinates": [ 15, 108, 624.23, 396, 38.52 ], "formula_id": "formula_48", "formula_text": ") or η ≤ ∥diag(q) -W (k) ⊙ Ã + λI∥ -1 2 . (36" }, { "formula_coordinates": [ 15, 499.85, 652.25, 4.15, 8.64 ], "formula_id": "formula_49", "formula_text": ")" }, { "formula_coordinates": [ 16, 108, 82.47, 396, 29.61 ], "formula_id": "formula_50", "formula_text": "Lemma 3. 
2 Q-∇ 2 Ĥ(y) is positive semi-definite, where Ĥ = (i,j)∈E,i̸ =j W ij y 2 ij +λ i∈V ∥f i - f (0) i ∥ 2" }, { "formula_coordinates": [ 16, 211.05, 153.76, 292.95, 11.5 ], "formula_id": "formula_51", "formula_text": "2 Q -∇ 2 Ĥ(y) = 2(diag(q) + λI + W ⊙ Ã).(37)" }, { "formula_coordinates": [ 16, 172.7, 194.78, 331.3, 27.3 ], "formula_id": "formula_52", "formula_text": "(i,j)∈E,i̸ =j W ij ∥ f i √ d i - f j d j ∥ 2 2 = tr(F ⊤ (diag(q) -W ⊙ Ã)F ),(38)" }, { "formula_coordinates": [ 16, 173.25, 267.72, 330.75, 44.02 ], "formula_id": "formula_53", "formula_text": "≤ (i,j)∈E,i̸ =j W ij ∥ f i √ d i + f j d j ∥ 2 2 = tr(F ⊤ (diag(q) + W ⊙ Ã)F ) (39) ≤tr(F ⊤ (diag(q) + W ⊙ Ã + λI)F ).(40)" }, { "formula_coordinates": [ 16, 173.44, 478.37, 330.56, 22.31 ], "formula_id": "formula_54", "formula_text": "Ĥ(x) = Ĥ(y) + ∇ Ĥ(y) ⊤ (x -y) + 1 2 (x -y) ⊤ ∇ 2 Ĥ(y)(x -y).(41)" }, { "formula_coordinates": [ 16, 172.21, 508.59, 331.79, 33.16 ], "formula_id": "formula_55", "formula_text": "(y) = 2 Q(y) in Lemma 3 such that Q(y) -∇ 2 Ĥ(y) ⪰ 0, then ∀z, z ⊤ Q(y)z ≥ z ⊤ ∇ 2 Ĥ(y)z.(42)" }, { "formula_coordinates": [ 16, 180.01, 573.1, 323.99, 22.31 ], "formula_id": "formula_56", "formula_text": "Ĥ(x) ≤ Ĥ(y) + ∇ Ĥ(y) ⊤ (x -y) + 1 2 (x -y) ⊤ Q(y)(x -y).(43)" }, { "formula_coordinates": [ 16, 157.37, 637.56, 346.63, 74.76 ], "formula_id": "formula_57", "formula_text": "Ĥ(x) -Ĥ(y) ≤∇ Ĥ(y) ⊤ (x -y) + 1 2 (x -y) ⊤ Q(y)(x -y) (44) =∇ Ĥ(y) ⊤ (x -y) + 2 Q 1 2 (x -y) ⊤ Q 1 2 (x -y) (45) =2∇ Ĥ(y) ⊤ Q-1 ∇ Ĥ(y) -2∇ Ĥ(y) ⊤ ( Q-1 2 ) ⊤ Q-1 2 ∇ Ĥ(y) (46) =0.(47)" }, { "formula_coordinates": [ 19, 177, 217.04, 327, 16.63 ], "formula_id": "formula_58", "formula_text": "δA ⋆ = arg min δA L attack (GNN θ (A + (I -2A) ⊙ δA, F ), y target ),(49)" }, { "formula_coordinates": [ 19, 179.02, 623.17, 324.98, 11.5 ], "formula_id": "formula_59", "formula_text": "F (k+1) = (diag(q (k) ) + λI) -1 (W (k) ⊙ Ã)F (k) + λF (0) ,(50)" }, { "formula_coordinates": [ 19, 140.23, 648.39, 340.21, 21.1 ], "formula_id": "formula_60", "formula_text": "(k) m = j W (k) mj A mj /d m , W (k) ij = 1 i̸ =j max(0, 1 2y (k) ij -1 2γ ) and y (k) ij = f (k) i √ di - f (k) j" } ]
10.1162/COLI_a_00166
2023-11-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b26", "b17", "b26", "b0", "b35", "b21", "b29", "b36", "b31", "b32", "b23", "b12", "b19", "b14", "b24", "b25", "b2" ], "table_ref": [], "text": "Paraphrase generation aims to produce sentences that have different expressions but convey the same semantic meaning given a particular sentence. Paraphrasing is a common phenomenon that reflects the diversity of human languages, serving as an important research topic in natural language processing. It has broad applications such as in question answering (Mckeown, 1983) and information retrieval (Knight and Marcu, 2000). However, automatically generating accurate and differentappearing paraphrases is still very challenging, since it requires the abilities of both understanding and generation.\nConventional methods draw on rule-based systems (Mckeown, 1983; Barzilay and Lee, 2003;Zhao et al., 2009;Lin and Pantel, 2001) and statistical machine translation (Quirk et al., 2004;Zhao et al., 2008) to generate paraphrases. These methods are easy to interpret and analyze, but struggle to yield fluent and diverse sentences. Recently the accumulation of the paraphrase data provides an unprecedented opportunity to directly learn the paraphrasing transformations in an end-to-end manner (Vaswani et al., 2017). For instance, Wang et al. (2019) formulate paraphrasing as a supervised encoding-decoding problem and use stacked residual LSTM networks to generate paraphrases.\nA good paraphrase is a sentence that shares similar semantics but has noticeable syntactical or lexical differences from the original one (Lin and Wan, 2021). To improve the diversity of generated sentences, (Gupta et al., 2018) introduce the variational auto-encoder (VAE) to perform paraphrase generation. (Li et al., 2018) propose multiple generators with different granularity levels to learn the mapping relationship between input and output respectively, and then combine them to complete the paraphrase generation task. But those generated paraphrases tend to only make trivial changes to original sentences, such as modifications of synonyms.\nFurther, Hosking and Lapata (2021) leverage autoencoder to encode the structure and semantics of the sentence separately, and generate paraphrases by perturbing the structure encoding. Liu et al. integrate the word editing and rule-based transformation operations into deep learning and achieve the previous SOTA performance in paraphrase generation (Liu et al., 2022(Liu et al., , 2020)). However, due to the limitation of scales of the paraphrasing datasets, neural networks tend to generate the paraphrases with local changes to the inputs rather than global modifications on sentence structures.\nIn this work, we aim to exploit the knowledge of the pre-trained language model to balance expression diversity and semantic preservation. There- fore, inspired by (Bhardwaj et al., 2022) we propose a vector-quantized prompt learning framework, called VQPrompt, to generate diverse and high-quality paraphrases. In particular, VQPrompt comprises a prompt encoder and a pre-trained generative language model. 
The prompt encoder produces discrete prompts and the generative language model accepts both the prompts and the input sentence to generate the corresponding paraphrases.\nTo make the vector-quantization work, we also introduce a K-means training strategy to dynamically update the codebook in the prompt encoder.\nWe evaluate the effectiveness of our model on four paraphrasing datasets, namely, Quora, Wikianswers, and MSCOCO. Experimental results show that VQPrompt achieves a new state-of-the-art paraphrasing performance in terms of both automatic metrics and human evaluation. In summary, our contributions are as follows:\n• We propose vector-quantized prompt learning to adapt large pre-trained language models for paraphrase generation.\n• We introduce a K-means training strategy to dynamically update the codebook in vector quantization (VQ), addressing the index collapse of VQ.\n• The proposed method achieves new stateof-the-art performances in three benchmark datasets and presents modest interpretability." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b8", "b28", "b16", "b29", "b35", "b7", "b5", "b14", "b13", "b31", "b32", "b1", "b23", "b20", "b14", "b27", "b20", "b23", "b14" ], "table_ref": [], "text": "One of the characteristics of the paraphrase generation task is that there exist several general transformation rules. Therefore, rule-based methods have been used for paraphrase generation tasks as early as the last century. Representative methods include dictionary-based and template-based methods. Dictionary-based methods look up synonyms in the dictionaries such as HowNet (Dong and Dong, 2003) or WordNet (Miller, 1995) to replace words in the original sentence, thereby generating corresponding paraphrases (Kauchak and Barzilay, 2006). The advantage of such a rulebased approach is that it is interpretable and controllable. Its shortcomings lie in the heavy workload of manually writing rules, and the generated sentences are not smooth enough.\nWith the accumulation of paraphrase corpus, researchers then start to model the paraphrase generation task as a single-language statistical translation process, thereby improving the fluency of generating paraphrase sentences (Quirk et al., 2004;Zhao et al., 2009). The statistical translation model learns the transition probability from the original sentence to the paraphrases from a large amount of training data. For example, Dolan et al. collected a large number of news texts from the Internet and built a paraphrase generation corpus, and then used statistical machine translation methods to generate paraphrase sentences (Dolan et al., 2004). However, statistical paraphrasing methods still require heavy feature engineering.\nIn recent years, deep neural network has become the mainstream paraphrase generation method due to its powerful fitting ability (Chowdhury et al., 2022;Hosking and Lapata, 2021). Similar to the statistical paraphrasing methods, the neuralbased paraphrase generation method formulates the paraphrase generation as a single-language translation process, but adopts an encoding-decoding network structure and an end-to-end training method. The first deep paraphrasing method takes the long short-term memory network LSTM (Hochreiter and Schmidhuber, 1997) as the encoder and decoder. In order to solve the long-distance dependency problem in the encoding process, Wang et al. 
used the multi-head attention network Transformer (Vaswani et al., 2017) as the encoder and decoder, and achieved further performance improvement (Wang et al., 2019).\nAn ideal paraphrase not only needs to have the same semantics but also should have a significant change in expression from the input sentence (i.e., expression difference) (Bhagat and Hovy, 2013). Aiming at the problem of expression differences in generated sentences, researchers have made a lot of attempts in different dimensions (Lin and Wan, 2021;Li et al., 2019;Hosking and Lapata, 2021;Meng et al., 2021). For example, Li et al. proposed multiple generators with different granularity levels to learn the mapping relationship between input and output respectively, and then combine them to complete the paraphrase generation task (Li et al., 2019). Lin and Wan utilized back-translation and multiple rounds of iterative generation methods to produce paraphrased sentences with significant variance (Lin and Wan, 2021). Hosking and Lapata (Hosking and Lapata, 2021) continued to use the idea of variational autoencoder to encode the structure and semantics of the sentence separately, and generate paraphrases by perturbing the structure encoding. Different from these methods, this work learns to generate syntax-based prompts, which could induce the pre-trained model to generate diverse paraphrases.\nApart from the traditional methods working with language models (LMs) that have parameters less than 1B, modern LLMs like ChatGPT can also generate paraphrases with high quality. However, it costs much more than traditional methods since they require a huge training corpus and learnable parameters." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model Architecture", "publication_ref": [], "table_ref": [], "text": "In this work, we propose VQPrompt, a novel model that generates paraphrases via prompt learning. It is composed of a prompt encoder and a pre-trained generative language model, which will be elaborated as follows." }, { "figure_ref": [], "heading": "Prompt Encoder", "publication_ref": [ "b30", "b34", "b2" ], "table_ref": [], "text": "The prompt encoder aims to generate prompts for the pre-trained language model to produce reasonable paraphrases. The proposal of the prompt encoder stems from the assumption that the pretrained language model (PLM) is powerful to generate sentences with arbitrary contents if given suitable inputs. Therefore, for a particular input sentence, the corresponding prompt is all we need for paraphrase generation in this work.\nSince the prompts are dependent on the input sentence, this work introduces sample-aware prompt encoder. For a given sequence of tokens x = {x 1 , x 2 , ..., x n }, we first get the embeddings e = {e 1 , e 2 , ..., e n }. Then we employ a sentence encoder to take the sentence embeddings as inputs and output the M continuous prompts, given by r = SentenceEncoder(e 1 , . . . , e n ),\nwhere r stands for the continuous prompt (with length of M ) for the sentence x. We adopt the encoder of the T5 model (Raffel et al., 2020) as the sentence encoder.\nIn general, the prompt in our work for paraphrase generation illustrates the abstract rule of paraphrase transformation. Indeed, humans have summarized several abstract rules of the transformation between paraphrases. For instance, the abstract rule \"what is the reason of $x? 
→ why does $x happen?" could characterize a number of paraphrase transformations. Therefore, we expect the prompt to indicate the abstract transforming rules for paraphrase generation.

Accordingly, we make the second assumption that the transforming rules of paraphrase generation are finite. Based on this assumption, we propose a prompt encoder that produces discrete rule representations by vector quantization (VQ) (Zhang et al., 2022; Bhardwaj et al., 2022).

Formally, the prompt encoder maintains a codebook $C \in \mathbb{R}^{K_c \times D}$ that comprises $K_c$ discrete prompt encodings, where $K_c$ is the number of discrete codes and $D$ is the dimensionality of each code $C_k$. The continuous prompts $r \in \mathbb{R}^{M \times D}$, where $M$ is the number of continuous vectors and $D$ is the dimensionality of each vector $r_m$, are quantized into discrete ones selected from the codebook. We measure the L2 distance between each continuous vector and the code vectors in the codebook, and the code vector that yields the minimum distance is taken as the discrete rule representation $q_m$. The detailed computation is

$$q_m = C_k, \quad \text{where } k = \arg\min_{j \in \{1, \ldots, K_c\}} \| r_m - C_j \|_2, \qquad (2)$$

where $q_m$ is the closest quantized vector for the continuous vector $r_m$. Finally, the $M$ prompt vectors constitute a paraphrase prompt, given by

$$Q = \text{PromptEncoder}(e) = \{ q_m \mid m = 1, \ldots, M \}, \qquad (3)$$

where $Q$ is the final prompt generated by the prompt encoder for a particular sentence $x$.

To make the above vector quantization work, we need to train both the neural networks and the codebook in the prompt encoder toward minimizing the distance between the continuous and discrete representations. The vector-quantization objective for a particular data point is

$$J_{vq}(x) = \sum_{m=1}^{M} \left( \| \text{sg}(r_m) - q_m \|_2^2 + \| r_m - \text{sg}(q_m) \|_2^2 \right), \qquad (4)$$

where $\text{sg}(\cdot)$ is the stop-gradient operation. In this way, we can derive discrete rule representations, which are expected to be interpretable and instance-invariant.

Overall, the prompt encoder is a deep network attached to a discrete codebook $C$, trained following the basic idea of VQ-VAE. The prompt encoder takes an embedded sequence $e$ as input and generates several prompts $q_m$ as output, which carry the syntactic structure information that guides the generative LM to produce paraphrases. The codebook is initialized by collecting the codes $q_m$ of sampled sentences into a list $\mathcal{C}$ and taking their K-means centers, $C = \text{K-means}(\mathcal{C})$.

Note that the parameters of the generative language model in our work are fixed when we train the prompt encoder and the codebook. Therefore, the generative language model (LM) is neither trained to generate paraphrases nor able to capture the syntactic structure information of the target sentence; all of this information must be encoded by the prompt codes. That is to say, our work builds an information bottleneck where the vector-quantized prompts are the only pathway to convey the syntactic structure information to the generative LM. With this specific and effective design, the acquisition of syntactic information in the codebook can be well guaranteed."
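A minimal PyTorch sketch of this quantization step is given below, combined with the K-means-based refresh of dead codes that the Training Strategy section introduces later. The straight-through gradient trick, the scikit-learn K-means call, and all names are our own assumptions for illustration; they follow the generic VQ-VAE recipe rather than the exact implementation of VQPrompt.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class PromptCodebook(nn.Module):
    """Nearest-neighbor quantization (Eq. 2) with a K-means refresh for dead codes (sketch)."""

    def __init__(self, n_codes=512, dim=768, active_threshold=256):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(n_codes, dim) * 0.02)
        self.register_buffer("usage", torch.zeros(n_codes))
        self.active_threshold = active_threshold

    def forward(self, r):
        # r: (M, dim) continuous prompts from the sentence encoder.
        dist = torch.cdist(r, self.codebook)              # L2 distances to all codes
        idx = dist.argmin(dim=1)                          # Eq. (2): index of the nearest code
        q = self.codebook[idx]
        # Eq. (4): codebook and commitment terms with stop-gradients (detach).
        vq_loss = ((r.detach() - q) ** 2).sum(-1).mean() \
                + ((r - q.detach()) ** 2).sum(-1).mean()
        q_st = r + (q - r).detach()                       # straight-through estimator
        self.usage.index_add_(0, idx, torch.ones_like(idx, dtype=self.usage.dtype))
        return q_st, vq_loss

    @torch.no_grad()
    def kmeans_refresh(self, recent_r):
        # Replace dead (unused) codes with K-means centers of recently collected prompts
        # whenever the number of active codes drops below the threshold T.
        if (self.usage > 0).sum().item() >= self.active_threshold:
            self.usage.zero_()
            return
        # Assumes recent_r holds at least n_codes continuous prompt vectors.
        centers = KMeans(n_clusters=self.codebook.shape[0], n_init=4).fit(
            recent_r.detach().cpu().numpy()).cluster_centers_
        centers = torch.as_tensor(centers, dtype=self.codebook.dtype,
                                  device=self.codebook.device)
        dead = self.usage == 0
        self.codebook.data[dead] = centers[dead]
        self.usage.zero_()
```

During the warm-up stage described later, the module can simply return r unquantized so that only the maximum-likelihood objective is optimized.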
}, { "figure_ref": [], "heading": "Generative LM", "publication_ref": [ "b3", "b6" ], "table_ref": [], "text": "A generative language model (LM) prescribes the generation of a sentence as a sequence of word predictions based on the context. Generative LMs have made waves in the NLP community by demonstrating astounding few-shot capabilities on myriad language understanding tasks (Brown et al., 2020). They also possess powerful decoding capacity: they can produce arbitrary content if given suitable prompts. Therefore, the paraphrase generated by our model is given by

$$P(\cdot \mid x) = \text{GLM}(\{ Q \oplus e \}), \qquad (5)$$
$$\hat{y} \sim P(\cdot \mid x), \qquad (6)$$

where GLM stands for the generative language model, $Q = \{ q_m \mid m = 1, \ldots, M \}$ is the sequence of prompts, $\oplus$ is the vector concatenation operation, and $\hat{y}$ is the sentence generated by VQPrompt. This work aims to adapt the generative LM to produce paraphrases given the input sentence, which belongs to the task of conditional sentence generation. Therefore, we adopt an instruction-tuned language model, Flan-T5 (Chung et al., 2022), as our base model. The language model (i.e., Flan-T5) takes a sequence of words as input and outputs sentences (i.e., $\hat{y}$) as needed." }, { "figure_ref": [], "heading": "Training Strategy", "publication_ref": [ "b33", "b18", "b33" ], "table_ref": [], "text": "Similar to most paraphrase generators, VQPrompt is trained to fit the mapping from input sentences to their paraphrases, and the paraphrase datasets are constructed as paraphrase pairs. Formally, let the dataset be D with size N. VQPrompt aims to maximize the log-likelihood (denoted by $J_{ML}$) of the target paraphrases over all training samples of D, that is,

$$J_{ML} = \sum_{n}^{N} \log P_{\theta}(y_n \mid x_n) = \sum_{n}^{N} \sum_{t}^{T_n} \log P_{\theta}(y_{n,t} \mid y_{n,<t}, x_n), \qquad (7)$$

where $y_{n,t}$ stands for the t-th word of the target paraphrase in the n-th sample, $T_n$ denotes the word length of the target paraphrase $y_n$, and $\theta$ denotes the model parameters.

Together with the VQ objective, the final objective function $J$ of VQPrompt is

$$J = J_{ML} + \sum_{n}^{N} J_{vq}(x_n). \qquad (8)$$

However, the parameters of the prompt encoder are difficult to optimize because the vector quantization intercepts the gradients during backpropagation. Our preliminary experiments reveal that most of the codes in the codebook are rarely selected by the prompt encoder after optimizing the objective $J$, a phenomenon known as index collapse (Wu and Flierl, 2020). Index collapse is especially common in text generation, where the gradients are not smooth enough. Therefore, we propose a new training strategy (called K-means training) to eliminate index collapse in the prompt encoder. K-means training contains the following two stages:

Codebook warm-up. We first ignore the codebook of the prompt encoder and directly use the continuous prompts to perform paraphrase generation. Thus, the training objective reduces to the maximum-likelihood objective $J_{ML}$ alone.

K-means Update. Before training in this stage, we sample some sentences from the dataset and collect the corresponding prompt codes produced by the randomly initialized VQPrompt model. We then run the K-means algorithm on these codes and take the resulting centers as the primitive version of the codebook. During training, we prevent index collapse by replacing dead codes in the codebook with the cluster centers: a code is considered dead if it has not been selected for a relatively long time, and the replacement is triggered whenever the number of active codes falls below a threshold T.

Discussion of K-means Training. 
In essence, the K-means strategy is an update trick in the optimization of the prompt encoder. However, the index collapse in VQ has been a long-standing problem in deep generative modeling (Łańcucki et al., 2020;Wu and Flierl, 2020). The proposed K-means strategy works well empirically and has the potential to benefit other vector quantization models. But figuring out the underlying theory is nontrivial, which we leave as future work." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we first test the proposed model on the benchmarking datasets on both automatic evaluation and human evaluation. Then we provide several detailed analyses to elucidate its effectiveness in generating paraphrases." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b4", "b22", "b9", "b4", "b9", "b22", "b15" ], "table_ref": [ "tab_1" ], "text": "In this work, we use three widely used benchmarking datasets, namely, Quora (Chen et al., 2017), MSCOCO (Lin et al., 2014), and Paralex (also named Wiki Answers) (Fader et al., 2013) in our experiments.\nQuora.The Quora dataset is collected from the question-answering forum Quora (Chen et al., 2017). It contains over 400k pairs of questions, some are paraphrases and others are nonparaphrases. There are about 150k paraphrase pairs in total.\nParalex. Paralex is a dataset of question paraphrases datasets scraped from WikiAnswers (Fader et al., 2013). It has a large number of question pairs but presents lower quality in syntactic structures and semantic similarity compared to Quora.\nMSCOCO. MSCOCO is a benchmark dataset for image captioning (Lin et al., 2014). It contains over 100k clusters of five captions sentences. Considering captions for images can involve different details or objects, the quality of these paraphrases is lower than those in Quora.\nFor the fairness of comparison, We use the cluster version of these three datasets released by the previous best method (i.e., HRQ-VAE (Hosking et al., 2022)). The statistics of the training, validation and test splits are shown in Table 1." }, { "figure_ref": [], "heading": "Competing Methods", "publication_ref": [ "b11", "b23", "b10", "b14", "b15" ], "table_ref": [], "text": "We will compare VQ-prompt with multiple advanced paraphrase generation models. We describe several most competing models as follows.\nSOW/REAP. It uses a two-stage model to derive a set of syntactic rearrangements, which are then used to guide an encoder-decoder model (Goyal and Durrett, 2020).\nBTmPG. It leverages a multi-round paraphrase generator to improve diversity and back-translation to preserve semantic information (Lin and Wan, 2021).\nLBoW. It grounds the semantics of a discrete latent variable by the latent bag-of-words technique (LBoW) (Fu et al., 2019).\nSeparator. (Hosking and Lapata, 2021) take both the semantic sentence and syntax-informed sentence as inputs in the training process. It combines training objective with a principled information bottleneck, to induce a latent space that disentangles meaning and form.\nHRQ-VAE. Hierarchical refinement quantized variational autoencoders (HRQ-VAE) is a method for learning the decomposition of dense encodings as a sequence of discrete latent variables that make iterative refinements of increasing granularity (Hosking et al., 2022). HRQ-VAE serves as the previous state-of-the-art paraphrasing method. We take it as our arch-rival." 
}, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b15" ], "table_ref": [], "text": "Many previous works adopt BLEU as a measure for evaluating several text generation tasks. But for paraphrase evaluation, the dissimilarity from the input is also of vital importance. So, in order to take both paraphrase quality and similarity to the input into consideration, we also use iBLEU for our automatic evaluation. The calculation of iBLEU is given by iBLEU\n= α•BLEU( ŷ, Y )- (1 -α) • BLEU(x, Y )(9)\nwhere Y stands for the set of reference paraphrases. Thus, the expression BLEU(x, Y ) indicates the BLEU score between the input sentence and the reference paraphrases, which is also called the Self-BELU score. The coefficient α balances the importance between expression diversity and semantic similarity. Following the setting of (Hosking et al., 2022), we set α = 0.8.\nOverall, the BLEU, Self-BLEU, and iBELU scores constitute a relatively comprehensive evaluation of the generated paraphrases. In addition to the automatic evaluation metric, we also conducted the human evaluation." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "The hidden layer sizes of the equation encoder and the expression generator are 768. The size of the codebook is set to 512. The length of the prompt (i.e., M ) is 4. The threshold T in the K-means training strategy is 256. The maximum input length of the feature vector is 256 and the maximum output length is 60. We evaluate the model for each half epoch and select the model that reaches the best performance on the validation set. Finally, we report the generation performance on the test set. " }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_2", "tab_4", "tab_5", "tab_2", "tab_6", "tab_6" ], "text": "Table 2 presents the performance of all competing methods on the Quora, Paralex, and MSCOCO datasets. Copy and tf-idf are typically repeating the input sentences and thus obtain the lowest iBLEU scores. The neural networks, including LBoW, VAE, and SEPARATOR, achieve higher iBLEU scores. But these improvements are obtained with the loss of the semantic meaning because the similarity with the references is decreased along with the improvements of iBLEU. HRQ-VAE is the previously state-of-the-art paraphrase generator, which obtains better performances than SEPARATOR and LBoW. However, HRQ-VAE prescribes that the dataset contains high-quality sentence pairs with similar syntax structures, which is not feasible in sentences with complex grammar dependence.\nAs for VQPrompt, we observe that it consistently outperforms HRQ-VAE and the other baselines on the three benchmark datasets. Considering that HRQ-VAE utilizes additional syntax supervision, the improvements on both BLEU and iBLEU demonstrate the effectiveness of the proposed method.\nHuman Evaluation. We also conducted a human evaluation of the results. Due to the limit of budget and resources, we sampled 300 sentences from the Quora test set and compared VQPrompt with Separator and HRQ-VAE. We asked three human annotators to evaluate the generated paraphrases in terms of semantic relevance and sentence fluency in a blind fashion; each aspect was scored from 1 to 5. We report in Table 3 the average human scores and their variances. Table 4 shows that VQPrompt achieves the highest human satisfaction scores. The results are also consistent with the automatic metrics in Table 2.\nAblation Study. 
In order to investigate the reasons for the performance gain obtained by VQPrompt, we further build two variants of VQPrompt and evaluate their generation results. The two variants are the generative language model (denoted by generative LM) and the generative language model with a traditional vector-quantized prompt (generative LM (VQ)). The difference between generative LM (VQ) and the proposed VQPrompt model lies in the optimization of the prompt encoder (i.e., the K-means training strategy). These two variants together with VQPrompt share the same hyperparameters and data.\nAs the Quora dataset is the most widely used high-quality dataset, the ablation study is only conducted on Quora. As shown in Table 5, the generative LM model reaches a modest performance, owing to the decent initialization of the pre-trained language model. Next, simply adding a discrete prompt to the model degrades paraphrase generation, which is caused by the index collapse of the VQ technique. With our training scheme, the discrete representation of prompts further boosts the performance of the generative LM. We also observe that more than half of the codes in the codebook are active after incorporating the training scheme, which indicates that the VQ computation works well and ultimately benefits paraphrase generation.\nPrompt Visualization. For an intuitive visualization of the generated prompts, we perform t-SNE on the prompt component codes q_m and the prompts Q. In this paper, M component codes constitute a paraphrasing prompt (in experiments, M = 4). Although we use the same number of vectors for t-SNE, the dimensionality reduction results differ. Generally, the points of the paraphrase prompts tend to clump together in larger clusters, indicating that VQPrompt has learned several abstract paraphrasing rules, which could induce the pre-trained model to produce paraphrases.\nTo demonstrate this point, we select three clusters and use them to perform paraphrasing. As shown in Table 5, we observe that these clusters contain a set of sentences that share similar syntactic structures, which validates that the learned prompts characterize abstract transformation rules of paraphrase generation." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Paraphrasing aims to restate one sentence as another with the same meaning but different wording. In this paper, we establish a prompt learning framework, coined VQPrompt, for paraphrase generation. VQPrompt leverages vector quantization to learn finite prompt components and thus possesses modest interpretability. We introduce a K-means training strategy to avoid index collapse in VQ. Experiments show VQPrompt achieves impressive generation performance on multiple datasets." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Regarding ethical concerns, the three datasets we use are publicly available and do not contain biased or discriminatory information. Regarding resource concerns, our model depends on a pre-trained model, which implies a higher computation budget." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by the National Natural Science Foundation of China under Grant 62206192, the Natural Science Foundation of Sichuan Province under Grant 2023NSFSC1408 and the MIIT Industrial Internet Innovation and Development Project." } ]
Deep generative modeling of natural language has achieved many successes, such as producing fluent sentences and translating from one language into another. However, the development of generative modeling techniques for paraphrase generation still lags behind, largely due to the challenge of addressing the conflict between expression diversity and semantic preservation. This paper proposes to generate diverse and high-quality paraphrases by exploiting pre-trained models with instance-dependent prompts. To learn generalizable prompts, we assume that the number of abstract transforming patterns of paraphrase generation (governed by prompts) is finite and usually not large. Therefore, we present vector-quantized prompts as the cues to control the generation of pre-trained models. Extensive experiments demonstrate that the proposed method achieves new state-of-the-art results on three benchmark datasets, including Quora, Wikianswers, and MSCOCO. We will release all the code upon acceptance.
Vector-Quantized Prompt Learning for Paraphrase Generation
[ { "figure_caption": "Figure 2: Model Architecture.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 K-means Update Algorithm Require: Paraphrase dataset D = {(x n , y n )|n = 1, • • • , N } 1: Computing word embeddings e n = EmbeddingLayer(x n ) 2: Collecting embeddings E = {e n |n = 1, • • • , N } 3: Initializing code list C 4: for e in E do", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Visualization of the learned prompts and their components.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 1: An example of the ideal prompt that induces the pre-trained language model to generate particular paraphrases. The proposed VQPrompt model aims to learn such prompts for each given sentence.", "figure_data": "Prompt AWhat is the main reasonWhy does the Earth'sof global warming?temperature rise?PretrainedLanguage ModelPrompt BWhat is the reason ofWhy does the globalglobal warming?warming happen?", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Statistics of the benchmark datasets used in this work.", "figure_data": "Dataset#Train set #Val set #Test setQuora55,6115,2555,255Paralex222,22327,77827,778MSCOCO 113,2875,0005,000", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance of individual paraphrase generation methods on the Quora and Paralex, and MSCOCO datasets.", "figure_data": "QuoraParalexMSCOCOModelBLEU self-BLEU iBLEU BLEU self-BLEU iBLEU BLEU self-BLEU iBLEUCopy34.52100.007.6137.10100.009.6819.85100.00-4.12tf-idf24.0562.496.7525.0825.2515.0118.2638.376.93AE28.9960.1111.1740.1075.7116.9427.9038.7114.58VAE27.2351.0911.5738.9153.2820.4727.4424.4016.99VQ-VAE16.3121.138.8340.2665.7119.0725.6222.4116.01SOW/REAP21.2738.19.4133.0937.0719.0612.516.478.71BTmPG19.8335.118.8428.4035.9915.5219.7613.0413.20LBoW23.5142.0810.3934.9635.8620.8021.6516.4614.02Separator23.6824.2014.1036.3635.3722.0120.5912.7613.92HRQ-VAE33.1140.3518.4239.4933.3024.9327.9016.5819.04VQPrompt-PG 35.0139.9820.01 42.5841.9625.67 29.9223.5919.21", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Human evaluation.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Paraphrases generated from the prompt clusters shown in Fig 3(b).", "figure_data": "ModelBLEU Self-BLEU iBLEUGenerative LM34.2742.7518.87Generative LM (VQ) 32.5146.0416.80VQPrompt35.0139.9820.01", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The generation performances of individual VQPrompt variants.", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" } ]
Haotian Luo; Yixin Liu; Xianggen Liu; Peidong Liu
[ { "authors": "Regina Barzilay; Lillian Lee", "journal": "", "ref_id": "b0", "title": "Learning to paraphrase: An unsupervised approach using multiplesequence alignment", "year": "2003" }, { "authors": "Rahul Bhagat; Eduard Hovy", "journal": "Computational Linguistics", "ref_id": "b1", "title": "Squibs: What is a paraphrase?", "year": "2013" }, { "authors": "Rishabh Bhardwaj; Amrita Saha; C H Steven; Soujanya Hoi; Poria", "journal": "", "ref_id": "b2", "title": "Vector-Quantized Input-Contextualized Soft Prompts for Natural Language Understanding", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in NIPS", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Zihang Chen; Hongbo Zhang; Xiaoji Zhang; Leqi Zhao", "journal": "", "ref_id": "b4", "title": "Quora question pairs", "year": "2017" }, { "authors": "Jishnu Ray Chowdhury; Yong Zhuang; Shuyi Wang", "journal": "", "ref_id": "b5", "title": "Novelty controlled paraphrase generation with retrieval augmented conditional prompt tuning", "year": "2022" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma", "journal": "", "ref_id": "b6", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "William Dolan; Chris Quirk; Chris Brockett; Bill Dolan", "journal": "", "ref_id": "b7", "title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources", "year": "2004" }, { "authors": "Zhendong Dong; Qiang Dong", "journal": "IEEE", "ref_id": "b8", "title": "Hownet-a hybrid language and knowledge resource", "year": "2003" }, { "authors": "Anthony Fader; Luke Zettlemoyer; Oren Etzioni", "journal": "", "ref_id": "b9", "title": "Paraphrase-driven learning for open question answering", "year": "2013" }, { "authors": "Yao Fu; Yansong Feng; John P Cunningham", "journal": "Advances in NIPS", "ref_id": "b10", "title": "Paraphrase generation with latent bag of words", "year": "2019" }, { "authors": "Tanya Goyal; Greg Durrett", "journal": "", "ref_id": "b11", "title": "Neural syntactic preordering for controlled paraphrase generation", "year": "2020" }, { "authors": "Ankush Gupta; Arvind Agarwal; Prawaan Singh; Piyush Rai", "journal": "", "ref_id": "b12", "title": "A deep generative framework for paraphrase generation", "year": "2018" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural computation", "ref_id": "b13", "title": "Long short-term memory", "year": "1997" }, { "authors": "Tom Hosking; Mirella Lapata", "journal": "", "ref_id": "b14", "title": "Factorising meaning and form for intent-preserving paraphrasing", "year": "2021" }, { "authors": "Tom Hosking; Hao Tang; Mirella Lapata", "journal": "", "ref_id": "b15", "title": "Hierarchical sketch induction for paraphrase generation", "year": "2022" }, { "authors": "David Kauchak; Regina Barzilay", "journal": "", "ref_id": "b16", "title": "Paraphrasing for automatic evaluation", "year": "2006" }, { "authors": "Kevin Knight; Daniel Marcu", "journal": "", "ref_id": "b17", "title": "Statistics-based summarization step one: Sentence compression", "year": "2000" }, { "authors": "Adrian Łańcucki; Jan Chorowski; Guillaume Sanchez; Ricard Marxer; Nanxin Chen; Jga Hans; Sameer Dolfing; Tanel Khurana; Antoine Alumäe; Laurent", 
"journal": "IEEE", "ref_id": "b18", "title": "Robust training of vector quantized bottleneck models", "year": "2020" }, { "authors": "Zichao Li; Xin Jiang; Lifeng Shang; Hang Li", "journal": "", "ref_id": "b19", "title": "Paraphrase generation with deep reinforcement learning", "year": "2018" }, { "authors": "Zichao Li; Xin Jiang; Lifeng Shang; Qun Liu", "journal": "", "ref_id": "b20", "title": "Decomposable neural paraphrase generation", "year": "2019" }, { "authors": "Dekang Lin; Patrick Pantel", "journal": "Natural Language Engineering", "ref_id": "b21", "title": "Discovery of inference rules for question-answering", "year": "2001" }, { "authors": "Tsungyi Lin; Michael Maire; Serge J Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollar; C Lawrence; Zitnick ", "journal": "", "ref_id": "b22", "title": "Microsoft COCO: Common objects in context", "year": "2014" }, { "authors": "Zhe Lin; Xiaojun Wan", "journal": "", "ref_id": "b23", "title": "Pushing paraphrase away from original sentence: A multi-round paraphrase generation approach", "year": "2021" }, { "authors": "Xianggen Liu; Wenqiang Lei; Jiancheng Lv; Jizhe Zhou", "journal": "", "ref_id": "b24", "title": "Abstract rule learning for paraphrase generation", "year": "2022" }, { "authors": "Xianggen Liu; Lili Mou; Fandong Meng; Hao Zhou; Jie Zhou; Sen Song", "journal": "", "ref_id": "b25", "title": "Unsupervised paraphrasing by simulated annealing", "year": "2020" }, { "authors": " Kathleen R Mckeown", "journal": "Computational Linguistics", "ref_id": "b26", "title": "Paraphrasing questions using given and new information", "year": "1983" }, { "authors": "Yuxian Meng; Xiang Ao; Qing He; Xiaofei Sun; Qinghong Han; Fei Wu; Jiwei Li", "journal": "", "ref_id": "b27", "title": "Conrpg: Paraphrase generation using contexts as regularizer", "year": "2021" }, { "authors": "George A Miller", "journal": "Communications of the ACM", "ref_id": "b28", "title": "Wordnet: a lexical database for english", "year": "1995" }, { "authors": "Chris Quirk; Chris Brockett; William B Dolan", "journal": "", "ref_id": "b29", "title": "Monolingual machine translation for paraphrase generation", "year": "2004" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. 
Res", "ref_id": "b30", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b31", "title": "Attention is all you need", "year": "2017" }, { "authors": "Su Wang; Rahul Gupta; Nancy Chang; Jason Baldridge", "journal": "", "ref_id": "b32", "title": "A task in a suit and a tie: Paraphrase generation with semantic augmentation", "year": "2019" }, { "authors": "Hanwei Wu; Markus Flierl", "journal": "AAAI", "ref_id": "b33", "title": "Vector quantization-based regularization for autoencoders", "year": "2020" }, { "authors": "Wenbo Zhang; Likai Tang; Site Mo; Sen Song; Xianggen Liu", "journal": "", "ref_id": "b34", "title": "Learning robust rule representations for abstract reasoning via internal inferences", "year": "2022" }, { "authors": "Shiqi Zhao; Xiang Lan; Ting Liu; Sheng Li", "journal": "", "ref_id": "b35", "title": "Application-driven statistical paraphrase generation", "year": "2009" }, { "authors": "Shiqi Zhao; Cheng Niu; Ming Zhou; Ting Liu; Sheng Li", "journal": "", "ref_id": "b36", "title": "Combining multiple resources to improve SMT-based paraphrasing model", "year": "2008" } ]
[ { "formula_coordinates": [ 4, 126.46, 319.78, 163.41, 10.81 ], "formula_id": "formula_1", "formula_text": "q m = C k ,(2)" }, { "formula_coordinates": [ 4, 201.88, 336.35, 55.89, 10.63 ], "formula_id": "formula_2", "formula_text": "∥r m -C j ∥ 2 ," }, { "formula_coordinates": [ 4, 71.41, 404.37, 218.46, 23.36 ], "formula_id": "formula_3", "formula_text": "Q = PromptEncoder(e) = {q m |m = 1, . . . , M },(3)" }, { "formula_coordinates": [ 4, 71.74, 542.34, 216.52, 25.85 ], "formula_id": "formula_4", "formula_text": "J vq (x) = ||sg(r m ) -q m || 2 2 + ||r m -sg(q m )|| 2 2 ,(4" }, { "formula_coordinates": [ 5, 121.79, 185.72, 168.08, 26.38 ], "formula_id": "formula_5", "formula_text": "P (•|x) = GLM({Q ⊕ e}) (5) ŷ ∼ P (•|x) (6)" }, { "formula_coordinates": [ 5, 150.47, 245.93, 138.67, 10.63 ], "formula_id": "formula_6", "formula_text": "Q = {q m |m = 1, • • • , M }. ⊕" }, { "formula_coordinates": [ 5, 86.87, 553.37, 202.99, 71.17 ], "formula_id": "formula_7", "formula_text": "J M L = N n log P θ (y n |x n ) = N n Tn t log P θ (y n,t |y n,<t , x n ), (7)" }, { "formula_coordinates": [ 5, 120.65, 712.01, 169.22, 33.17 ], "formula_id": "formula_8", "formula_text": "J = J M L + N n J vq (x n ) (8)" }, { "formula_coordinates": [ 7, 140.17, 362.93, 149.7, 26.5 ], "formula_id": "formula_9", "formula_text": "= α•BLEU( ŷ, Y )- (1 -α) • BLEU(x, Y )(9)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b0", "b1", "b19", "b22", "b27" ], "table_ref": [], "text": "Significant progress has been achieved in understanding brain development and organization due to advances in neuroimaging and neuroscience (Giedd et al., 2010 [1]; Mills et al. 2014 [2]). There is growing awareness of the importance of differences in brain structure and function among individuals. These inter-individual variations may underlie differences in cognitive abilities, emotional processing, and susceptibility to neurological and psychiatric disorders (Dannlowski et al., 2012 [3]; Karama et al., 2014 [4]; Schmaal et al., 2017 [5]). Some researchers have explored the potential of brain function and structure for the usage of biometric identification (Bassi et al., 2018 [6]; Chen et al., 2018 [7]). While prior investigations have revealed that the adult brain exhibited stable structural and functional fingerprints for representing individual differences (Finn et al., 2015 [8]; Menon et al., 2019 [9]), it remains unclear whether the fingerprint in the brain emerges as early as during the third trimestera critical period marked by the explosive growth of cortical anatomy and rapid establishment of structural and functional connectome.\nMagnetic Resonance Imaging (MRI) is a non-invasive imaging technique known for its capacity to yield high-resolution images for the examination of both brain structure and function. This technology has been widely applied to study infant brains, such as the early developmental patterns of morphology, microstructure, fiber tracts, and brain connectomes (Liu et al., 2021 [10]; Liu et al., 2021 [11]; Zheng et al., 2023 [12] ; Zheng et al., 2023 [13]). Studies have characterized individual differences in human brains through anatomical and functional connectivity. For example, the white matter tractography in adult brains could serve as an effective fingerprint for the identification of individuals (Yeh et al., 2016 [14]), and the identification accuracy of functional connectivity increases with age from childhood to adulthood (Vanderwal et al., 2021 [15]). Furthermore, recent studies have attempted to investigate the fingerprint in perinatal brains and achieved recognition rates of 62.22% and 78% based on structural (Ciarrusta et al., 2022 [16]) and functional (Hu et al., 2022 [17]) connectivity, respectively. These results demonstrated the fact that individual uniqueness of brain connectome emerges during early brain development. Similar as brain connectivity, cortical morphology could also serve as a valuable fingerprint for individual recognition (Wachinger et al., 2015 [18]; Aloui et al., 2018 [19]), which achieved remarkable performance that superior to the functional connectivity in differentiating adult individuals (Tian et al, 2021 [20]). In recent years, some studies have begun to explore cortical folding patterns for infant subject identification. For instance, Duan et al. successfully identified 1-and 2-year-old infants by using cortical folding information (i.e., curvature, convexity, and sulcus depth) of the corresponding neonate (Duan et al.,2019 [21]; Duan et al.,2020 [22]). Nevertheless, it remains unclear whether the individual variations in human brain morphology already appear as early as the beginning of the third trimester.\nDeep learning methods can achieve more advanced feature representations in brain images. 
Multiple deep learning models have been applied to resolve challenges in brain image analysis in recent years. For example, convolutional neural networks (CNNs) have been utilized for the detection of brain lesions (Chen et al., 2020 [23]) and white matter abnormalities (McKinley et al., 2019 [24]), as well as for the diagnosis of brain disorders (Esmaeilzadeh et al., 2018 [25]; Qureshi et al., 2019 [26]). Convolutional networks based on the whole-brain cortical surface (Mostapha et al. 2018 [27]) have offered new perspectives for studying human brain MRI and cortical morphology. However, 3D-CNN methods are often unsuitable for small-sample datasets due to the large number of parameters and high computational demands. On the other hand, 2D-CNN methods have lower computational requirements and can be more easily embedded into a mature model architecture. T. W. Meng et al. introduced Teichmüller Extremal Mapping of POint clouds (TEMPO), a quasi-conformal mapping method for conformally mapping a simply-connected open triangle mesh to a 2D rectangular space [28]. The TEMPO method can effectively preserve conformality, reducing the loss of local features and geometric structures in the mapping of the original point cloud. Leveraging deep learning methods enables the automatic acquisition of high-dimensional features of cortical morphology through multilevel nonlinear transformations, thereby simplifying the model's principles and procedures via end-to-end learning. Therefore, we utilized quasi-conformal mapping to project the 3D brain mesh onto a 2D plane and employed a 2D-CNN to extract individual cortical morphological feature representations.\nThe present study aims to validate the existence of morphological fingerprints in perinatal brains. Each hemispheric surface of an individual subject was inflated to a sphere and then projected to a 2D plane through TEMPO. We propose a contrastive learning framework based on a pretrained ResNet18 encoder to recognize an individual at term-equivalent age by using his or her brain MRI acquired at birth. An attention mechanism is incorporated to fuse features from different partitions generated by the 3D-to-2D mapping. The effectiveness of each morphological feature is assessed to demonstrate its contribution to the individual fingerprint in perinatal brains." }, { "figure_ref": [], "heading": "Materials and Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset Description", "publication_ref": [ "b28" ], "table_ref": [ "tab_0" ], "text": "Imaging data used in this study were obtained from the Developing Human Connectome Project (dHCP-release 3, https://biomedia.github.io/dHCP-release-notes/acquire.html), which was generously supported by the European Research Council (ERC). Ethical approval for the project was granted by the UK Health Research Authority (Research Ethics Committee reference number: 14/LO/1169), and written parental consent was obtained for both the imaging procedures and data release.\nAll participants received MRI scans at the Evelina Newborn Imaging Centre, located at St Thomas' Hospital in London, UK. The scans included the acquisition of structural, diffusion, and functional data, which was performed using a 3 Tesla Philips Achieva system (running modified R3.2.2 software) [29]. 
T2-weighted (T2w) multi-slice fast spin-echo images were obtained in both sagittal and axial slice stacks. These images had an in-plane resolution of 0.8x0.8mm² and 1.6mm slices, with an overlap of 0.8mm. The imaging parameters were as follows: repetition time (TR) = 12000ms, echo time (TE) = 156ms, SENSE factor of 2.11 for axial and 2.60 for sagittal slices. Additionally, a 3D MPRAGE scan was performed with 0.8mm isotropic resolution.\nThe dataset included 783 infants in total, comprising 682 infants who underwent MRI scanning once and 101 infants who were scanned twice (at approximately 1-2 weeks after birth and at term-equivalent age, respectively) or more. We selected the T2 images of 772 infants (90 of them had longitudinal scans), and images of 11 participants were manually excluded due to poor quality. Detailed demographic information of the selected infants is given in Table 1. " }, { "figure_ref": [], "heading": "Dataset Processing", "publication_ref": [ "b29" ], "table_ref": [], "text": "Image preprocessing followed the pipeline proposed by the dHCP [30]. In summary, the process first involved bias correction and brain extraction of the motion-corrected T2-weighted image. This was followed by segmenting the brain into different tissue types using the Draw-EM algorithm. The white-matter mesh was then extracted and expanded to fit the pial surface. The cortical thickness was estimated based on the Euclidean distance between the white and pial surfaces. We generated the inflated surfaces from the white surface, which were then projected to a sphere for surface registration. The mean surface curvature and sulcal depth (mean surface convexity/concavity) were estimated from the white surface and from the inflation, respectively." }, { "figure_ref": [ "fig_0" ], "heading": "Teichmüller Mapping", "publication_ref": [ "b32" ], "table_ref": [], "text": "Despite the promise of spherical convolutional neural network (SCNN) models (Cohen et al., 2018 [31]; C. Esteves et al., 2018 [32]; C. Jiang et al., 2019 [33]), SCNNs often suffer from substantial computational complexity, e.g., extensive parameters, which makes them difficult to converge on small-sample datasets. Therefore, we projected the 3-dimensional spherical mesh of the brain surface to a 2-dimensional plane and employed a convolutional neural network (CNN) model for analysis.\nTo achieve this goal, a quasi-conformal mapping method (Teichmüller Mapping) was employed, which is a geometric transformation method based on complex variational functions. Conformal mapping facilitates the mapping of one space to another while preserving local angles between intersecting curves or surfaces, thereby ensuring the preservation of shapes and angles within a small region of the original space. In general, its formula representation is as follows:\nf^*(ds_N^2) = λ ds_M^2,\nwhere M and N are two Riemann surfaces, f is the mapping function f: M → N, and λ is a positive scalar function.\nA generalization of conformal mapping is quasi-conformal mapping, which allows for a certain degree of angle and shape distortion to adapt to more complex data structures and transformations. Teichmüller Mapping is such a type of quasi-conformal mapping that induces uniform conformality distortion in the target point cloud, thereby preserving stable relative positions and local shapes. The general formula for a Teichmüller mapping (T-map) can be expressed as follows: let M and N be two Riemann surfaces, and f: M → N be a quasi-conformal mapping. 
If f is associated with a quadratic differential q = ϕ dz^2, where ϕ: M → ℂ is a holomorphic function, its associated Beltrami coefficient is of the form μ(f) = k ϕ̄/|ϕ| with a constant k < 1, and the quadratic differential q is non-zero and satisfies ||q||_1 = ∫_S |ϕ| < ∞, then f is called a Teichmüller mapping (T-map) associated with the quadratic differential q.\nIn the present study, we used the TEMPO method for projection. Specifically, we segmented the spherical mesh along the x = 0 plane of the coordinate system and subsequently applied the TEMPO method separately to the two hemispheres for mapping. Then, we interpolated the mapped mesh to obtain a 2-dimensional matrix. Although this approach sacrificed the continuity of the data around the segmentation curve, it significantly reduced the area distortion caused by the mapping deformation and preserved the overall continuity of the other areas, offering a more desirable outcome for our study. The images projected to the 2D plane are shown in the first column of Figure 1. " }, { "figure_ref": [ "fig_0" ], "heading": "Data Augmentation", "publication_ref": [ "b34", "b35" ], "table_ref": [], "text": "Data augmentation is a commonly employed technique to address the issue of insufficient data and lack of data diversity. Its primary objective is to expand the dataset by applying diverse transformations and processing to the original data. In the context of images, various augmentation methods have been established, including random cropping, rotation, flipping, random noise, Gaussian blur, and color transformations such as brightness, contrast, and saturation adjustments (Cubuk et al., 2018 [34]; Shorten et al., 2019 [35]; T. Chen et al., 2020 [36]). We used data augmentation to expand single sampling data into sample pairs, which can be used for model pre-training. We employed rotation, random noise, and Gaussian blur on both the one-shot and two-shot data, as shown in Figure 1." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Model Architecture", "publication_ref": [], "table_ref": [], "text": "As illustrated in Figure 2-(a), we present a contrastive learning-based feature extraction framework. We aligned the 3D grids of the left and right brain hemispheres onto spheres, and then partitioned each sphere into two hemispheres. Subsequently, we employed the TEMPO method to map the spherical grids onto planar grids and computed four 3×224×224 matrices through interpolation. These cortical morphological matrices were then fed into the feature extraction module to extract cortical morphological feature fingerprints of individual subjects (Figure 2-(b)). These fingerprints (vectors of length 512) were ultimately utilized for calculating pairwise similarities for individual identification. Notably, in the feature extraction process, we employed a channel-wise attention mechanism-based excitation module for fusing features from different brain partitions and for allocating weights to feature map channels, as shown in Figure 2-(c). " }, { "figure_ref": [ "fig_1" ], "heading": "Contrastive Learning", "publication_ref": [ "b33", "b36", "b31" ], "table_ref": [], "text": "Contrastive learning is a self-supervised method for training models to learn meaningful data representations. It works by comparing the similarity between pairs of data samples. 
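As a rough illustration of how the augmented sample pairs described above can be built, the sketch below applies rotation, Gaussian blur, and additive random noise to one projection map to form two views of the same subject; the transform parameters and helper names are assumptions, not taken from the paper.

```python
# Sketch of constructing a positive pair from a single projection map using the
# augmentations named above (rotation, random noise, Gaussian blur). Parameter
# values are illustrative assumptions.
import torch
import torchvision.transforms as T

def add_gaussian_noise(img, std=0.02):
    # Additive pixel noise; the std value is assumed.
    return img + std * torch.randn_like(img)

augment = T.Compose([
    T.RandomRotation(degrees=15),
    T.GaussianBlur(kernel_size=5, sigma=(0.1, 1.5)),
    T.Lambda(add_gaussian_noise),
])

def make_positive_pair(projection_map):
    """projection_map: (3, 224, 224) tensor of cortical morphological channels."""
    return augment(projection_map), augment(projection_map)
```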
In essence, it creates pairs of positive and negative samples and uses a loss function to make the feature representations of positive pairs more similar while making those of negative pairs more dissimilar. A notable framework for contrastive learning is SimCLR (short for Simple Contrastive Learning of Representations) [34], which uses the NT-Xent loss function to maximize the similarity of positive pairs and minimize the similarity of negative pairs in the feature space. The NT-Xent loss function is defined as:\nL = -(1/N) ∑_{i=1}^{N} log( exp(-∥z_i - z_i^+∥/τ) / ∑_{j=1, j≠i}^{N} exp(-∥z_i - z_j^+∥/τ) ),\nwhere N is the batch size, ∥z_i - z_i^+∥ is the Euclidean distance between the i-th sample and its positive sample, ∥z_i - z_j^+∥ is the Euclidean distance between the i-th sample and the j-th sample's corresponding positive sample, and τ denotes a temperature parameter.\nIn this study, we used SimCLR as the main contrastive learning framework to extract morphological feature representations of individual differences in the perinatal cerebral cortex. We constructed positive sample pairs by applying multiple data augmentation methods to single sampling data and used the other samples in the same batch to construct negative pairs. Specifically, we adopted a contrastive loss similar to that of the classical Siamese network [37]. We constructed multiple negative sample pairs while adopting a non-softmax computation of the loss function. The loss was calculated separately for the positive and negative sample pairs and then summed up. The loss function was defined as follows:\nL = ∑_{i=1}^{N} ( (1 - y_i) · dist_i^2 + y_i · clamp_min(m - dist_i, 0)^2 ),\nwhere\ndist_i = ∥x_{1i} - x_{2i}∥, clamp_min(a, b) = a if a ≥ b, and b if a < b,\nand N was the batch size, y_i was the label indicating whether the i-th sample pair was a positive or a negative pair, x_{1i} and x_{2i} represented the two samples of the i-th sample pair, respectively, and m was an artificially set margin.\nIn this framework, we used the ResNet18 backbone (He et al., 2016 [32]) as the brain map encoder, as shown in Figure 2-(b). For each morphological feature, four sets of projection maps (medial and lateral projections of the left and right brain, respectively) per sample were fed into the parameter-sharing ResNet18 to extract four feature maps." }, { "figure_ref": [ "fig_1" ], "heading": "Excitation Module", "publication_ref": [], "table_ref": [], "text": "The excitation module aimed to enhance the capability of the feature vectors to represent the morphological features of the samples through a channel attention mechanism, as shown in Figure 2-(c). It achieved the fusion of features from four different partitions. Similar to SE-Nets (Hu et al., 2018 [38]), the channel weights were learned by two fully connected layers that mapped the feature vector to a channel attention vector. This channel attention vector was applied to the input feature map to weight the channels. However, different from the previous literature, our excitation module optimized the weight assignment for the three channels of morphological features and the four channels (partitions) of projection maps simultaneously, and it was placed before the final layer of the backbone network. A hyperparameter weight_scale was added to restore the weights to a distribution with a mean of 1 or approximately 1 to maintain the consistency of the data scale. 
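A minimal sketch of such an excitation block is given below. The reduction ratio, the way the partition feature maps are stacked along the channel dimension, and the use of weight_scale to rescale the attention weights toward a mean of 1 are assumptions based on the description above, not the authors' exact implementation.

```python
# SE-style excitation block sketched from the description above; layer sizes and the
# weight_scale rescaling are assumptions.
import torch
import torch.nn as nn

class Excitation(nn.Module):
    def __init__(self, n_channels, reduction=4, weight_scale=1.0):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(n_channels // reduction, n_channels),
            nn.Sigmoid(),
        )
        self.weight_scale = weight_scale

    def forward(self, x):
        # x: (B, C, H, W), where C stacks the partition / morphological-feature channels.
        squeezed = x.mean(dim=(2, 3))                        # squeeze: global average pooling
        w = self.fc(squeezed)                                # excite: channel attention in (0, 1)
        w = w * self.weight_scale / (w.mean(dim=1, keepdim=True) + 1e-8)  # restore mean weight to ~1
        return x * w.unsqueeze(-1).unsqueeze(-1)             # reweight channels
```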
The feature vector of the sample was obtained by pooling and a simplified projection through a fully connected layer. In addition to the excitation method, we employed other two fusion techniques: a decision-level fusion approach based on a voting mechanism and a feature-level fusion method using a two-layer Multilayer Perceptron (MLP). The former averaged similarity matrices calculated on four partitions and then made identification decisions based on the averaged similarity matrix. The latter utilized a two-layer MLP to map the input features. Specifically, we flattened a 4×2048 feature vector and then passed it through the MLP, resulting in a feature vector of length 2048." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Training Procedure and Evaluation Metrics", "publication_ref": [ "b39" ], "table_ref": [], "text": "For the training process, we initially excluded the excitation module and employed the original structure during the training of the ResNet18 backbone. The network parameters were trained by using primarily augmented sample pairs for pre-training and several twoshot data for fine-tuning, as illustrated in the stage1 of Figure 3. After that, we froze the ResNet18 and updated the parameters in the excitation module and the FC layer (stage2 of Figure 3). Finally, the trained feature extraction module was applied to the twice scanned samples. We calculated and compared the similarity of the feature vectors between two time points of the same subject (self-self similarity) and the feature similarity between a neonate at birth and other infants scanned at term-equivalent age (self-other similarity).\nThe Euclidean distance was used to measure the similarity between individual brains (Eickhoff et al.,2005 [39]; Al-Saffar et al., 2020 [40]). We utilized the Top 1 accuracy (where self-similarity surpasses all self-other similarities) and Top 5 accuracy (where self-similarity ranks among the top five similarities between itself and all other samples) as metrics to assess the effectiveness of the model. " }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "We used the ResNet18 backbone network and its pre-trained network parameters provided by Pytorch on the ImageNet dataset for initialization. To train the ResNet18 network, we augmented all single-sampled data to create positive and negative sample pairs. These pairs were then used for the initial training of ResNet18. Subsequently, we performed finetuning of ResNet18 and trained the feature fusion module. The training process employed a learning rate of 5e-4 with a momentum of 0.9 and a weight decay of 5e-5. Each training session comprised 8 epochs and the SGD optimizer was used. For all experiments, we conducted triple-fold cross-validation using twice sampled data (i.e., the real sample pairs). In each fold, we used 60 and 30 real sample pairs for training and testing, respectively. We repeated the experiments for 30 rounds to ensure robustness and obtained weighted accuracies for identification." }, { "figure_ref": [], "heading": "Experiment Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Individual Recognition", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Our model achieved a notable Top 1 accuracy of 71.37% and a Top 5 accuracy of 84.10%. 
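For reference, the Top 1/Top 5 identification scores used throughout this section can be computed from the pairwise Euclidean distances between birth-scan and term-scan feature vectors as in the following sketch; array names are illustrative.

```python
# Sketch of Top-1/Top-5 identification from pairwise Euclidean distances between
# birth-scan and term-scan embeddings of the same subjects; names are illustrative.
import numpy as np

def identification_accuracy(birth_feats, term_feats, ks=(1, 5)):
    """birth_feats, term_feats: (n_subjects, d) arrays where row i is the same subject."""
    dists = np.linalg.norm(birth_feats[:, None, :] - term_feats[None, :, :], axis=-1)
    ranks = np.argsort(dists, axis=1)   # for each birth scan, term scans from closest to farthest
    return {f"top{k}": float(np.mean([i in ranks[i, :k] for i in range(len(ranks))])) for k in ks}
```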
These results suggested that the self-similarity of the perinatal brains between birth and term-equivalent ages was higher than all self-other similarities or was among the top 5 highest. We also compared the accuracies derived from different backbone models (i.e., ResNet18, 34, and 50) and different fusion strategies (e.g., excitation, voting, and MLP) for fusing features from the left lateral, left medial, right lateral, and right medial brain. The excitation method outperformed the other fusion methods. The voting strategy that integrated judgments from different partitions also demonstrated a high Top 1 accuracy of 71.37% and Top 5 accuracy of 83.90%. The method using an MLP for feature fusion achieved a Top 1 accuracy of 68.70% and Top 5 accuracy of 79.63%. Overall, the excitation method exhibited superior performance. In addition, more network layers did not give better performance, as shown in Table 2. " }, { "figure_ref": [ "fig_3", "fig_4", "fig_4" ], "heading": "Contributions of Morphological Features and Brain Regions", "publication_ref": [], "table_ref": [], "text": "To explore the contribution of each morphological feature to the recognition task, we conducted single-channel comparison experiments for the three morphological features: curvature, thickness, and sulcal depth. The obtained results are presented in Figure 4. The curvature achieved the highest Top 1 and Top 5 accuracies of 79.13% and 86.27%, respectively, surpassing the combination of all three channels. The sulcal depth feature achieved a Top 1 accuracy of 67.10% and a Top 5 accuracy of 78.40%. However, cortical thickness did not exhibit any discriminative power: it not only yielded accuracies lower than 50% but also reduced the accuracies by up to 25% when combined with other features. We then explored the contributions of different brain regions to the recognition task. We utilized the weights derived from the excitation module as the contribution weights of brain regions, shown in Figure 5-A, and the weights of brain regions were mapped back to the cortical surface (as illustrated in Figure 5-B). The parietal and occipital cortices of the left hemisphere and the antero-medial and postero-lateral temporal cortices of the right hemisphere showed higher attentional weights relative to other regions, suggesting these regions may have evident morphological differences among individuals. " }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "To further validate the effectiveness of our model, we conducted ablation experiments under the following conditions: (1) without using a contrastive learning framework to train encoders; (2) without using augmented data for pre-training; and (3) without using the excitation module to fuse multiple features. Without the contrastive learning framework, the performance of a ResNet encoder pretrained only on ImageNet is quite poor, achieving only 10% Top1 accuracy. Training within the contrastive learning framework significantly enhanced the model's performance. The utilization of data augmentation and the excitation module also improved the model's performance, increasing Top1 accuracy by over 5% and over 50%, respectively. The details are presented in Table 3. Note: A) followed the complete model pipeline. In B), a pre-trained ResNet18 with ImageNet weights was used as an encoder without further contrastive learning training. 
C) involved the training using only twice sampled data, without using augmented samples from single time sampling. D) omitted the excitation Module for fusing brain map features from four regions and directly performed concatenation instead." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "In this study, we examined the existence of morphological fingerprints in neonatal cerebral cortex through a deep learning model. We achieved a remarkable Top 1 accuracy of 71.37% in the individual recognition. This suggested that certain cortical morphological fingerprints are already formed as early as the beginning of the third trimester and maintained stable during the perinatal period, which can serve as an effective fingerprint for recognizing individual neonate.\nTraining the ResNet18 encoder with a contrastive learning approach significantly improved the recognition accuracy, demonstrating the effectiveness of the framework in learning cortical morphological features. Additionally, the incorporation of the excitation module based on attention mechanisms also enhanced the Top 1 accuracy by over 50%, outperforming both the comparative voting and MLP methods. We speculated that the improvement might be attributed to the excitation module enabling the model to prioritize brain regions with significant individual variability, reducing the impact of other regions on individual recognition and thus enhancing identification accuracy (Bodapati et al., 2021 [41]). Moreover, the excitation module enabled the direct assessment of the contribution of different brain regions, providing a convenient way to analyze the recognition contribution rates of different brain areas. Pretraining the model by constructing additional sample pairs using data augmentation further boosted the Top 1 accuracy by over 5%, effectively improving the efficiency of using limited samples in cases of insufficient data. This improvement has been confirmed in several classification task based on brain images (Garcea et al. 2022 [42]). The presence of these distinctive features enables the model to extract individual-specific information for each subject, achieving high-accuracy individual recognition.\nOur findings indicated that both cortical curvature and sulcal depth exhibited individual variations, and the former achieved the best recognition rate, emphasizing the crucial role of the cortical folding morphology in characterizing individual differences at early developmental stage. On the other hand, cortical thickness did not exhibit discriminative power to individual recognition. Previous studies have suggested that the individual variability of cortical folding patterns has been established at term age (Duan et al., 2019 [43]), while the cortical thickness showed relative longer maturation period. Specifically, the cortical thickness typically undergoes rapid growth shortly after birth, peaking around 14 months after birth and subsequently decreases (Wang et al., 2019 [44]). Therefore, we hypothesized that the earlier maturation of folding patterns imparts greater individual variability to cortical morphology, leading to higher accuracy in individual recognition. Additionally, sensitivity to noise may be another potential factor leading to lower recognition efficiency of thickness features. 
Furthermore, we observed that the primary cortices associated with somatosensory and visual functions carried higher attention weights, indicating greater inter-individual differences in these regions. Given that the primary regions experienced more pronounced development than high-order cortex in the second trimester (Gilmore et al., 2018 [45]; Duan et al., 2019 [43]), we speculated that the high attention weights assigned to the primary cortex may be attributed to their stable morphology maturity throughout the third trimester.\nThere were several limitations for this study. First, the longitudinal dataset was relatively small, which only contained 90 infants. The generalizability of this model and the reproducibility of our findings should be examined on a larger independent dataset. Second, although the conformal transformation could ensure point cloud continuity by using the deformation errors, it may also lead to area distortion during projection, e.g., the compression of the area farther away from the segmentation curve and the expansion the area closer to the segmentation curve. Such distortion may potentially influence the recognition accuracy. Some novel methods for addressing these challenges should be developed in the future, although it is out of the scope of our work." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "Our study showed that cortical folding morphology, especially the curvature, significantly contributed to individual variations of perinatal brains rather than cortical thickness. Moreover, regions with high individual differences mainly concentrated in the primary cortex. These findings offered the first evidence of the existence of individual morphological fingerprints in neonatal brains as early as the beginning of the third trimester, which maintained relatively stable during the perinatal period." }, { "figure_ref": [], "heading": "Funding", "publication_ref": [], "table_ref": [], "text": "This work was funded by the National Natural Science Foundation of China (grant number 62202212), \"Pioneer\" and \"Leading Goose\" R&D Program of Zhejiang (grant number 2023C03081) and the Fundamental Research Funds for the Central Universities (grant number 226-2023-00091)." } ]
The morphological fingerprint in the brain is capable of identifying the uniqueness of an individual. However, whether such individual patterns are present in perinatal brains, and which morphological attributes or cortical regions better characterize the individual differences of neonates, remain unclear. In this study, we proposed a deep learning framework that projected three-dimensional spherical meshes of three morphological features (i.e., cortical thickness, mean curvature, and sulcal depth) onto two-dimensional planes through quasi-conformal mapping, and employed ResNet18 and contrastive learning for individual identification. We used the cross-sectional structural MRI data of 682 infants, incorporating data augmentation, to train the model and fine-tuned the parameters based on 60 infants who had longitudinal scans. The model was validated on data from 30 longitudinally scanned infants, and remarkable Top1 and Top5 accuracies of 71.37% and 84.10% were achieved, respectively. The sensorimotor and visual cortices were recognized as the most contributive regions in individual identification. Moreover, the folding morphology demonstrated greater discriminative capability than the cortical thickness and could serve as the morphological fingerprint in perinatal brains. These findings provided evidence for the emergence of morphological fingerprints in the brain at the beginning of the third trimester, which may hold promising implications for understanding the formation of individual uniqueness in the brain during early development.
Identification of morphological fingerprint in perinatal brains using quasi-conformal mapping and contrastive learning
[ { "figure_caption": "Figure 1 .1Figure 1. Three data augmentation methods used on the dataset.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. A simple contrastive learning framework for perinatal cortical morphology fingerprint. (a) demonstrates the process of utilizing original data for data augmentation and applying it within a contrastive learning framework; (b) shows the workflow within the green feature extraction module in (a), which extract feature representation from brain maps; (c) shows the workflow within the grey excitation module in (b) which calculates and utilizes the channel weights for channel attention-based feature learning and fusion.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. A two-stage learning strategy for model parameters", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Top 1 and Top 5 accuracy comparison between different fusion methods and different feature channels selected.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. The average brain map weights of each brain region when using all three cortical morphological features. The vertical axis in A) represents different brain region labels, where L and R represent left and right. The red bars in A) are the brain regions with the top ten highest weights, and the green dashed line indicates the average weight of the whole brain. The average weights of each brain region in 2D brain atlases are shown in B). The gray area represents the corpus callosum, and attention weights are not calculated for it.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Demographic information of the infants included in this study.", "figure_data": "GroupsMedian birth age (weeks)Birth age range (weeks)Median scan age (weeks)Scan age range (weeks)Male/ FemaleCross-sectional cohort39.86[23.0-43.57]41.14[26.86-45.15] 370/ 312Longitudinal cohort (1st scan)31.29[23.57-40.14]34.22[26.71-42.71]48/42Longitudinalcohort (2nd31.29[23.57-40.14]41.29[35.57-44.86]48/42scan)", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Recognition accuracy with different encoders and training epochs", "figure_data": "BackboneNumber of total epochsExcitation method Top1 accuracy (%)Decision-level method Top1 accuracy (%)1671.3771.37ResNet183270.2069.216468.0568.161668.4368.56ResNet343265.4366.106460.5060.101662.6760.78ResNet503263.4460.226460.5659.77", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Recognition accuracy under different experimental operation settings", "figure_data": "OperationsTop1 Accuracy(%) Top5 Accuracy(%)A). Contrastive Learning + Pre-training +71.3783.90ExcitationB). Without contrastive learning10.0020.00C). Without pre-training on augmented data66.1080.23D). Without Excitation Model19.3365.90", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
Boyang Wang; Weihao Zheng; Ying Wang; Zhe Zhang; Yuchen Sheng; Minmin Wang
[ { "authors": "J N Giedd; J L Rapoport", "journal": "Neuron", "ref_id": "b0", "title": "Structural MRI of pediatric brain development: what have we learned and where are we going?", "year": "2010" }, { "authors": "K L Mills; A L Goddings; L S Clasen; J N Giedd; S J Blakemore", "journal": "Developmental neuroscience", "ref_id": "b1", "title": "The developmental mismatch in structural brain maturation during adolescence", "year": "2014" }, { "authors": "U Dannlowski; A Stuhrmann; V Beutelmann; P Zwanzger; T Lenzen; D Grotegerd; K Domschke; C Hohoff; P Ohrmann; J Bauer; C Lindner", "journal": "Biological psychiatry", "ref_id": "b2", "title": "Limbic scars: long-term consequences of childhood maltreatment revealed by functional and structural magnetic resonance imaging", "year": "2014" }, { "authors": "S Karama; M E Bastin; C Murray; N A Royle; L Penke; S Muñoz Maniega; A J Gow; J Corley; M Valdés Hernández; J D Lewis; M É Rousseau", "journal": "Molecular psychiatry", "ref_id": "b3", "title": "Childhood cognitive ability accounts for associations between cognitive ability and brain cortical thickness in old age", "year": "2014" }, { "authors": "L Schmaal; D P Hibar; P G Sämann; G B Hall; B T Baune; N Jahanshad; J W Cheung; T G Van Erp; D Bos; M A Ikram; M W Vernooij", "journal": "Molecular psychiatry", "ref_id": "b4", "title": "Cortical abnormalities in adults and adolescents with major depression based on brain scans from 20 cohorts worldwide in the ENIGMA Major Depressive Disorder Working Group", "year": "2017" }, { "authors": "Mohita Bassi; Prakriti Triverbi", "journal": "", "ref_id": "b5", "title": "Human biometric identification through brain print", "year": "2018" }, { "authors": "Shiyang Chen; Xiaoping Hu", "journal": "Brain connectivity", "ref_id": "b6", "title": "Individual identification using the functional brain fingerprint detected by the recurrent neural network", "year": "2018" }, { "authors": "Emily S Finn; Xilin Shen; Dustin Scheinost; Monica D Rosenberg; Jessica Huang; Marvin M Chun; Xenophon Papademetris; R Todd Constable", "journal": "Nature neuroscience", "ref_id": "b7", "title": "Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity", "year": "2015" }, { "authors": "Sreevalsan S Menon; K Krishnamurthy", "journal": "Scientific reports", "ref_id": "b8", "title": "A comparison of static and dynamic functional connectivities for identifying subjects and biological sex using intrinsic individual brain connectivity", "year": "2019" }, { "authors": "T Liu; F Gao; W Zheng; Y You; Z Zhao; Y Lv; W Chen; H Zhang; C Ji; D Wu", "journal": "Neuroimage", "ref_id": "b9", "title": "Diffusion MRI of the infant brain reveals unique asymmetry patterns during the first-halfyear of development", "year": "2021" }, { "authors": "T Liu; H Zhang; Y You; W Zheng; Z Zhao; T Liu; X Su; F Tian; Y Zhang; D Wu", "journal": "Jpr", "ref_id": "b10", "title": "Brain developmental differences between preterm-born twins and singletons: a multimodal MRI Study", "year": "2021" }, { "authors": "W Zheng; X Wang; T Liu; B Hu; D Wu", "journal": "Human Brain Mapping", "ref_id": "b11", "title": "Preterm-birth alters the development of nodal clustering and neural connection pattern in brain structural network at termequivalent age", "year": "2023" }, { "authors": "W Zheng; L Zhao; Z Zhao; T Liu; B Hu; D Wu", "journal": "Journal of Neuroscience", "ref_id": "b12", "title": "Spatiotemporal Developmental Gradient of Thalamic Morphology, Microstructure, and Connectivity from 
the Third Trimester to Early Infancy", "year": "2023" }, { "authors": "Fang- Yeh; Jean M Cheng; Aarti Vettel; Barnabas Singh; Scott T Poczos; Kirk I Grafton; Erickson; I Wen-Yih; Timothy D Tseng; Verstynen", "journal": "PLoS computational biology", "ref_id": "b13", "title": "Quantifying differences and similarities in whole-brain white matter architecture using local connectome fingerprints", "year": "2016" }, { "authors": "Tamara Vanderwal; Jeffrey Eilbott; Clare Kelly; Simon R Frew; Todd S Woodward; Michael P Milham; F Xavier Castellanos", "journal": "NeuroImage", "ref_id": "b14", "title": "Stability and similarity of the pediatric connectome as developmental measures", "year": "2021" }, { "authors": "Judit Ciarrusta; Daan Christiaens; Sean P Fitzgibbon; Ralica Dimitrova; Jana Hutter; Emer Hughes; Eugene Duff", "journal": "Developmental Cognitive Neuroscience", "ref_id": "b15", "title": "The developing brain structural and functional connectome fingerprint", "year": "2022" }, { "authors": "Dan Hu; Fan Wang; Han Zhang; Zhengwang Wu; Zhen Zhou; Guoshi Li; Li Wang; Weili Lin; Gang Li", "journal": "Journal of Neuroscience", "ref_id": "b16", "title": "Existence of functional connectome fingerprint during infancy and its stability over months", "year": "2022" }, { "authors": "Christian Wachinger; Polina Golland; William Kremen; Bruce Fischl; Martin Reuter", "journal": "NeuroImage", "ref_id": "b17", "title": "Alzheimer's Disease Neuroimaging Initiative. BrainPrint: A discriminative characterization of brain morphology", "year": "2015" }, { "authors": "K Aloui; A Nait-Ali; M S Naceur", "journal": "Pattern Recognition Letters", "ref_id": "b18", "title": "Using brain prints as new biometric feature for human recognition", "year": "2018" }, { "authors": "Y Tian; B T Yeo; V Cropley; A Zalesky", "journal": "NeuroImage", "ref_id": "b19", "title": "High-resolution connectomic fingerprints: Mapping neural identity and behavior", "year": "2021" }, { "authors": "Dingna Duan; Shunren Xia; Zhengwang Wu; Fan Wang; Li Wang; Weili Lin; John H Gilmore; Dinggang Shen; Gang Li", "journal": "", "ref_id": "b20", "title": "Cortical Foldingprints for Infant Identification", "year": "2019" }, { "authors": "Dingna Duan; Shunren Xia; Islem Rekik; Zhengwang Wu; Li Wang; Weili Lin; John H Gilmore; Dinggang Shen; Gang Li", "journal": "Human brain mapping", "ref_id": "b21", "title": "Individual identification and individual variability analysis based on cortical folding features in developing infant singletons and twins", "year": "2020" }, { "authors": "Hao Chen; Zhiguang Qin; Yi Ding; Lan Tian; Zhen Qin", "journal": "Neurocomputing", "ref_id": "b22", "title": "Brain tumor segmentation with deep convolutional symmetric neural network", "year": "2020" }, { "authors": "Richard Mckinley; Rik Wepfer; Fabian Aschwanden; Lorenz Grunder; Raphaela Muri; Christian Rummel; Rajeev Verma", "journal": "Scientific reports", "ref_id": "b23", "title": "Simultaneous lesion and neuroanatomy segmentation in multiple sclerosis using deep neural networks", "year": "2019" }, { "authors": "Soheil Esmaeilzadeh; Yao Yang; Ehsan Adeli", "journal": "", "ref_id": "b24", "title": "End-to-end parkinson disease diagnosis using brain mr-images by 3d-cnn", "year": "2018" }, { "authors": "Muhammad Qureshi; Jooyoung Naveed Iqbal; Boreom Oh; Lee", "journal": "Artificial intelligence in medicine", "ref_id": "b25", "title": "3D-CNN based discrimination of schizophrenia using resting-state fMRI", "year": "2019" }, { "authors": "Mahmoud Mostapha; Sunhyung Kim; 
Guorong Wu; Leo Zsembik; Stephen Pizer; Martin Styner", "journal": "", "ref_id": "b26", "title": "Non-euclidean, convolutional learning on cortical brain surfaces", "year": "2018" }, { "authors": "Ting Meng; Gary Wei; Pui-Tung; Lok Choi; Ming Lui", "journal": "SIAM Journal on Imaging Sciences", "ref_id": "b27", "title": "Tempo: feature-endowed Teichmuller extremal mappings of point clouds", "year": "2016" }, { "authors": "E J Hughes; T Winchman; F Padormo; R Teixeira; J Wurie; M Sharma; M Fox; J Hutter; L Cordero-Grande; A N Price; J Allsop", "journal": "Magnetic resonance in medicine", "ref_id": "b28", "title": "A dedicated neonatal brain imaging system", "year": "2017" }, { "authors": "A Makropoulos; E C Robinson; A Schuh; R Wright; S Fitzgibbon; J Bozek; S J Counsell; J Steinweg; K Vecchiato; J Passerat-Palmbach; G Lenz", "journal": "Neuroimage", "ref_id": "b29", "title": "The developing human connectome project: A minimal processing pipeline for neonatal cortical surface reconstruction", "year": "2018" }, { "authors": "Taco S Cohen; Mario Geiger; Jonas Köhler; Max Welling", "journal": "", "ref_id": "b30", "title": "Spherical cnns", "year": "2018" }, { "authors": "Carlos Esteves; Christine Allen-Blanchette; Ameesh Makadia; Kostas Daniilidis", "journal": "", "ref_id": "b31", "title": "Learning so (3) equivariant representations with spherical cnns", "year": "2018" }, { "authors": "Max \" Chiyu; Jingwei Jiang; Karthik Huang; Kashinath; Philip Prabhat; Matthias Marcus; Nießner", "journal": "", "ref_id": "b32", "title": "Spherical CNNs on Unstructured Grids", "year": "2019" }, { "authors": "Ekin D Cubuk; Barret Zoph; Dandelion Mane; Vijay Vasudevan; V Quoc; Le", "journal": "", "ref_id": "b33", "title": "Autoaugment: Learning augmentation policies from data", "year": "2018" }, { "authors": "Connor Shorten; Taghi M Khoshgoftaar", "journal": "Journal of big data", "ref_id": "b34", "title": "A survey on image data augmentation for deep learning", "year": "2019" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "", "ref_id": "b35", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Sumit Chopra; Raia Hadsell; Yann Lecun", "journal": "", "ref_id": "b36", "title": "Learning a similarity metric discriminatively, with application to face verification", "year": "2005" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b37", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": " Eickhoff; Nathan B Simon; Axel Walters; Jillian Schleicher; Gary F Kril; Karl Egan; John Dg Zilles; Katrin Watson; Amunts", "journal": "Human brain mapping", "ref_id": "b38", "title": "High-resolution MRI reflects myeloarchitecture and cytoarchitecture of human cerebral cortex", "year": "2005" }, { "authors": "Zahraa A Al-Saffar; Tülay Yildirim", "journal": "IEEE Access", "ref_id": "b39", "title": "A novel approach to improving brain image classification using mutual information-accelerated singular value decomposition", "year": "2020" }, { "authors": "J D Bodapati; S N Shareef; V Naralasetti; N B Mundukur", "journal": "International Journal of Pattern Recognition and Artificial Intelligence", "ref_id": "b40", "title": "Msenet: Multimodal squeeze-and-excitation network for brain tumor severity prediction", "year": "2021" }, { "authors": "F Garcea; A Serra; F Lamberti; L Morra", "journal": "Computers in Biology and Medicine", "ref_id": "b41", 
"title": "Data augmentation for medical imaging: A systematic literature review", "year": "2022" }, { "authors": "D Duan; S Xia; I Rekik; Y Meng; Z Wu; L Wang; W Lin; J H Gilmore; D Shen; G Li", "journal": "Neuroimage", "ref_id": "b42", "title": "Exploring folding patterns of infant cerebral cortex based on multi-view curvature features: Methods and applications", "year": "2019" }, { "authors": "F Wang; C Lian; Z Wu; H Zhang; T Li; Y Meng; L Wang; W Lin; D Shen; G Li", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b43", "title": "Developmental topography of cortical thickness during infancy", "year": "2019" }, { "authors": "J H Gilmore; R C Knickmeyer; W Gao", "journal": "Nature Reviews Neuroscience", "ref_id": "b44", "title": "Imaging structural and functional brain development in early childhood", "year": "2018" } ]
[ { "formula_coordinates": [ 4, 264.89, 428.45, 65.38, 14.16 ], "formula_id": "formula_0", "formula_text": "f * ds N 2 = λds M 2" }, { "formula_coordinates": [ 6, 187.46, 549.19, 220.61, 36.6 ], "formula_id": "formula_1", "formula_text": "L = - 1 N ∑ (log ( exp(-∥ z i -z i + ∥/τ) ∑ exp N j=1,j≠i (-∥ z i -z j + ∥/τ) )) N i=1" }, { "formula_coordinates": [ 7, 176.9, 72.72, 241.74, 36.62 ], "formula_id": "formula_2", "formula_text": "L = ∑ ((1 -y i ) * dist i 2 + y * clamp min (m -dist i , 0) 2 ) N i=1" }, { "formula_coordinates": [ 7, 188.3, 147.47, 218.71, 23.36 ], "formula_id": "formula_3", "formula_text": "dist i =∥ x 1i -x 2i ∥, clamp min (a, b) = { a, if a ≥ b b, if a < b" } ]
2023-11-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "In recent years, a surging number of studies, including SAM [19], VisualChatGPT [53], and BLIP-2 [21], have demonstrated the exceptional performance of pre-trained models across a broad range of 2D image and natural language processing (NLP) tasks. Pre-training on large-scale datasets endows the model with abundant prior knowledge, enabling the pre-trained models to exhibit superior perfor-* Corresponding authors ... ... ..." }, { "figure_ref": [], "heading": "X T", "publication_ref": [], "table_ref": [], "text": "X t X t-1 X 0\n..." }, { "figure_ref": [ "fig_0" ], "heading": "Conditional Denoising", "publication_ref": [ "b30", "b0", "b60", "b30", "b14", "b0", "b30", "b14", "b22", "b24", "b42", "b56", "b25", "b6", "b1" ], "table_ref": [], "text": "𝑝𝜃(X t-1 |X t , c) q(X t |X t-1 )\ncondition c mance and enhanced generalization capabilities after finetuning, compared to models trained solely on downstream tasks [18,21,31]. Similar to the 2D and NLP fields, pre-training methods in point cloud data have also become essential in enhancing model performance and boosting model generalization ability.\nContemporary point cloud pre-training methods can be casted into two categories, i.e., contrastive-based and generative-based pre-training. Contrastive-based methods [1,55,61] resort to the contrastive objective to make deep models grasp the similarity knowledge between samples. By contrast, generative-based methods involve pretraining by reconstructing the masked point cloud [31,60] or its 2D projections [15,49]. However, several factors mainly account for the inferior pre-training efficacy in the 3D domain. For contrastive-based methods [1,55], selecting the proper negative samples to construct the contrastive objective is non-trivial. The generative-based pre-training approaches, such as Point-MAE [31] and Point-M2AE [60], solely reconstruct the masked point patches. In this way, they cannot capture the global density distribution of the object. Additionally, there is no precise one-to-one matching for MSE loss and set-to-set matching for Chamfer Distance loss between reconstructed and original point cloud due to its unordered nature. Besides, the projection from 3D to 2D by TAP [49] and Ponder [15] inevitably introduces the geometric information loss, making the reconstruction objective difficult to equip the backbone with comprehensive geometric prior.\nTo combat against the unordered and non-uniform density characteristics of point clouds, inspired by adding noise and denoising of the diffusion model [14], we propose a novel diffusion-based pre-training framework, dubbed PointDif. It pre-trains the point cloud backbone by restoring the noisy data at each step as illustrated in Fig. 1. This procedural denoising process is similar to the visual streams in our human brain mechanism [41]. Human uses this simple brain mechanism to obtain broad prior knowledge from the 3D world. Similarly, we find that low-level and highlevel neural representation emerges from denoising neural networks. This aligns with our goal of applying pre-trained models to downstream low-level and high-level tasks, such as classification and segmentation. 
Moreover, the diffusion model has strong theoretical guarantees and provides an inherently hierarchical learning strategy by enabling the understanding of data distribution hierarchically.\nSpecifically, we present a conditional point generator in our PointDif, which guides the point-to-point generation from the noisy point cloud. This conditional point generator encompasses a Condition Aggregation Network (CANet) and a Conditional Point Diffusion Model (CPDM). The CANet is responsible for globally aggregating latent features extracted by the backbone. The aggregated features serve as the condition to guide the CPDM in denoising the noisy point cloud. During the denoising process, the pointto-point mapping relationship exists in the noisy point cloud at neighboring time steps. Equipped with the CPDM, the backbone can effectively capture the global point density distribution of the object. This enables the model to adapt to downstream tasks that involve point clouds with diverse density distributions. With the help of the conditional point generator, our pre-training framework can be extended to various point cloud backbones and enhance their overall performance.\nMoreover, as shown in Tab. 8, we find that sampling time step t from different intervals during pre-training can learn different levels of geometric prior. Based on this observation, we propose a recurrent uniform sampling optimization strategy. This strategy divides the diffusion time steps into multiple intervals and uniformly samples the time step t throughout the pre-training process. In this way, the model can uniformly recover from various noise levels and learn from balanced supervision. To the best of our knowledge, we are the first to demonstrate the effectiveness of generative diffusion models in enhancing point cloud pre-training.\nOur key contributions can be summarized as follows: [23,25,43,48,57]. Recently, researchers have investigated methods for accelerating the sampling process of DDPM to improve its generation efficiency [26,27,40]. Moreover, some studies have explored the application of diffusion models in discriminative tasks, such as object detection [7] and semantic segmentation [2,4,51]." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b49" ], "table_ref": [], "text": "To our knowledge, we are the first to apply the diffusion model for point cloud pre-training and have achieved promising results. The most relevant work is the 2D pretraining method DiffMAE [50]. However, there are four critical distinctions between our PointDif and DiffMAE. Firstly, as to the reconstruction target, DiffMAE pre-trains the network by denoising pixel values of masked patches. In contrast, our PointDif pre-trains the network by recovering the original point clouds from randomly noisy point clouds, which is beneficial for the network to learn both local and global geometrical priors of 3D objects. Secondly, as for the guidance way, DiffMAE uses the conditional guidance method of cross-attention. We adopt a point condition network (PCNet) for point cloud data to facilitate 3D generation through point-by-point guidance. It also assists the network in learning the global point density distribution of the object. Thirdly, regarding the loss function, Diff-MAE introduces an additional CLIP loss to constrain the model, whereas our PointDif demonstrates strong performance in various 3D downstream tasks without additional constraints. 
Finally, with regard to the unity of the framework, DiffMAE can only pre-train the 2D transformer encoder. In comparison, with the help of our conditional point generator, we can pre-train various point cloud backbones and enhance their performance." }, { "figure_ref": [ "fig_1" ], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "We take pre-training the transformer encoder as an example to introduce our overall pre-training framework, i.e., Point-Dif. The framework can also be easily applied to pre-train other backbones. The pipeline of our PointDif is shown in Fig. 2a. Given a point cloud, we first divide it into point patches and apply embedding and random masking operations to each patch. Subsequently, we use a transformer encoder to process visible tokens to learn the latent features, which are then used to generate the condition c through the CANet. Finally, this condition gradually guides the CPDM to recover the original input point cloud from the random noise point cloud in a point-to-point manner. We pre-train the transformer encoder to acquire the hierarchical geometric prior through the progressively guided process." }, { "figure_ref": [], "heading": "Preliminary: Conditional Point Diffusion", "publication_ref": [], "table_ref": [], "text": "During the diffusion process, random noise is continuously introduced into the point cloud through a Markov chain, and there exists a point-to-point mapping relationship between noisy point clouds of adjacent timestamps. Formally, given a clean point cloud X 0 ∈ R n×3 containing n points from the real data distribution p data , the diffusion process gradually adds Gaussian noise to X 0 for T time steps:\nq(X 1:T |X 0 ) = T t=1 q(X t |X t-1 ),(1)\nwhere\nq(X t |X t-1 ) = N (X t ; 1 -βtX t-1 , βtI),(2)\nthe hyperparameters β t are some pre-defined small constants and gradually increase over time. X t is sampled from a Gaussian distribution with mean √ 1 -β t X t-1 and variance β t I. Moreover, according to [14], it is possible to elegantly express X T as a direct function of X 0 :\nq(X t |X 0 ) = N (X t ; √ ᾱtX 0 , (1 -ᾱt)I),(3)\nwhere ᾱt = t i=1 α i and α t = 1 -β t . As the time step t increases, ᾱt gradually approaches 0 and q(X t |X 0 ) will be close to the Gaussian distribution p noise .\nThe reverse process involves using a neural network parameterized by θ to gradually denoise a Gaussian noise into a clean point cloud with the help of the condition c. This process can be defined as:\np θ (X 0:T , c) = p(X T ) T t=1 p θ (X t-1 |X t , c),(4)\nwhere\np θ (X t-1 |X t , c) = N (X t-1 ; µ θ (X t , t, c), σ 2 t I),(5)\nthe µ θ is a neural network that predicts the mean, and σ 2 t is a constant that varies with time.\nThe training objective of the diffusion model is formulated based on variational inference, which employs the variational lower bound (vlb) to optimize the negative loglikelihood:\nL vlb = Eq[-logp θ (X 0 |X 1 , c) + DKL(q(X T |X 0 )||p(X T )) + T t=2 DKL(q(X t-1 |X t , X 0 )||p θ (X t-1 |X t , c))],(6)\nwhere D KL (•) is the KL divergence. However, training L vlb is prone to instability. To address this, we adopt a simplified version of the mean squared error [14]:\nL(θ) = E t,X 0 ,c,ϵ ∥ϵ -ϵ θ ( √ ᾱtX 0 + √ 1 -ᾱtϵ, c, t)∥ 2 , (7\n)\nwhere ϵ ∼ N (0, I), ϵ θ (•) is a trainable neural network that takes the noisy point cloud X t at time t, along with the time t and condition c as inputs. This network predicts the added noise ϵ. Additional details regarding derivations and proofs can be found in Sec. 6." 
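To make the preliminaries concrete, the following is a minimal PyTorch-style sketch of the closed-form forward process in Eq. (3) and the simplified noise-prediction objective in Eq. (7). It is an illustrative reading of the equations above, not the released implementation; the linear β schedule and the `eps_model` interface (standing in for the conditional network ϵθ(Xt, c, t)) are assumptions.

```python
import torch

def make_linear_schedule(T: int = 2000, beta_1: float = 1e-4, beta_T: float = 1e-2):
    """Pre-compute beta_t, alpha_t and the cumulative product alpha_bar_t for t = 1..T."""
    betas = torch.linspace(beta_1, beta_T, T)       # beta_t increases linearly, as in Sec. 4.1
    alphas = 1.0 - betas                            # alpha_t = 1 - beta_t
    alpha_bars = torch.cumprod(alphas, dim=0)       # alpha_bar_t = prod_{i<=t} alpha_i
    return betas, alpha_bars

def q_sample(x0: torch.Tensor, t: torch.Tensor, alpha_bars: torch.Tensor):
    """Eq. (3): X_t = sqrt(alpha_bar_t) * X_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I)."""
    eps = torch.randn_like(x0)                      # noise with the same shape as the cloud (B, n, 3)
    a_bar = alpha_bars[t].view(-1, 1, 1)            # (B,) -> (B, 1, 1) for broadcasting
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    return x_t, eps

def diffusion_loss(eps_model, x0, cond, t, alpha_bars):
    """Eq. (7): mean-squared error between the injected noise and the predicted noise."""
    x_t, eps = q_sample(x0, t, alpha_bars)
    eps_pred = eps_model(x_t, cond, t)              # epsilon_theta(X_t, c, t)
    return torch.mean((eps - eps_pred) ** 2)
```

A training step would draw t uniformly (e.g., `t = torch.randint(0, T, (batch_size,))`, using 0-indexed steps), obtain the condition c from the backbone and the CANet, and back-propagate this loss through both the generator and the backbone.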
}, { "figure_ref": [], "heading": "Point Cloud Processing", "publication_ref": [ "b30", "b31" ], "table_ref": [], "text": "The goal of point cloud processing is to convert the given point cloud into several tokens, which consist of point patch embedding and patch masking. Point Patch Embedding. Following Point-BERT [58] and Point-MAE [31], we divide the point cloud into point patches using a grouping strategy. Specifically, for an input point cloud X ∈ R n×3 consisting of n points, we first employ the Farthest Point Sampling (FPS) algorithm to sample s center points {C i } s i=1 . For each center point C i , we use the K Nearest Neighborhood (KNN) algorithm to gather the k nearest points as a point patch P i .\n{Ci} s i=1 = FPS(X), {Pi} s i=1 = KNN(X, {Ci} s i=1 ). (8)\nIt is noteworthy that we apply a centering process to the point patches, which involves subtracting the coordinates of the point center from each point within the patch. This operation helps improve the convergence of the model. Subsequently, we utilize a simplified PointNet [32] ξ ϕ (•) with parameter ϕ, which employs 1 × 1 convolutions and max pooling, to embed the point patches\n{P i } s i=1 into tokens {F i } s i=1 . {Fi} s i=1 = ξ ϕ ({Pi} s i=1 ).(9)\nPatch Masking. In order to preserve the geometric information within the patch, we randomly mask the entire points in the patch to obtain the masked tokens {F m i } r i=1 and visible tokens {F v i } g i=1 , where r=⌊s×m⌋ is the number of masked tokens, g=s-r is the number of visible tokens, ⌊.⌋ is the floor operation and m denotes the masking ratio. We conduct experiments to assess the impact of different masking ratios and find that higher masking ratios (0.7-0.9) result in better performance, as discussed in Sec. 4.3." }, { "figure_ref": [], "heading": "Encoder", "publication_ref": [], "table_ref": [], "text": "The transformer encoder is responsible for extracting latent geometric features, which is retained for feature extraction during fine-tuning for downstream tasks. Φ ρ (•) is our encoder with parameter ρ, composed of 12 standard transformer blocks. To better capture meaningful 3D geometric prior, we remove the masked tokens and encode only the visible tokens {F v i } g i=1 . Furthermore, we introduce a position embedding ψ τ (•) with parameter τ to embed the position information of the point patch into P os v i , which is comprised of two learnable MLPs and the GELU activation function. Then, the position embedding output is concatenated with F v i and sent through a sequence of transformer blocks for feature extraction.\n{T v i } g i=1 = Φρ({Concat(F v i , P os v i )} g i=1 ),(10)\nwhere\n{P os v i } g i=1 = ψτ ({C v i } g i=1 ).\n(11)" }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Conditional Point Generator", "publication_ref": [], "table_ref": [], "text": "Our conditional point generator consists of the CANet and the CPDM. Condition Aggregation Network (CANet). To be specific, we concatenate features {T v i } g i=1 of the visible patches extracted by the encoder with a set of learnable masked patch information {T m i } r i=1 , while preserving their original position information. Afterward, the concatenated features are encoded using the CANet, denoted as f ω (•) with the parameter ω. As shown in Fig. 2b, our CANet consists of four 1x1 convolutional layers and two global max-pooling layers to aggregate the global contextual features of the point cloud. Ultimately, this process yields the guiding condition c required for the CPDM: way. 
As illustrated in Fig. 2c, the conditional point diffusion model comprises six point condition network (PCNet).\nc = fω(Concat({T v i } g i=1 , {T m i } r i=1 )}).(12\nThe specific structure of each PCNet can be represented as follows:\nH l = R l ⊙(W lh H l-1 +b lh )+W lb y, R l = σ(W lr y+b lr ),(13)\nwhere H " }, { "figure_ref": [], "heading": "Training Objective", "publication_ref": [ "b6" ], "table_ref": [], "text": "We introduce the process of encoding condition c into Eq. (7). Therefore, the training objective of our model can be defined as follows:\nL(θ, ρ, ω) = E t,X 0 ,ϵ ∥ϵ -ϵ θ ( √ ᾱtX 0 + √ 1 -ᾱtϵ, fω(Φρ), t)∥ 2 . (14\n)\nBy minimizing this loss, we can simultaneously train the encoder Φ ρ , the CANet f ω and the CPDM ϵ θ . Intuitively, the training process encourages the encoder to extract hierarchical geometric features from the original point cloud and encourages the CPDM to reconstruct the original point cloud according to the hierarchical geometric features. In this process, the CPDM performs a task similar to point cloud completion. Recurrent Uniform Sampling Strategy. According to Eq. ( 14), we need to sample a time step t randomly from the range [1, T] for each point cloud data for network training. However, we observe that networks trained with samples from different time steps exhibit varying performance on downstream tasks. As illustrated in Tab. 8, the encoder trained by sampling t from the early interval is more suitable for the classification task. In contrast, the encoder trained by sampling from the later interval performs better on the segmentation task. Based on this discovery, We propose a more effective recurrent uniform sampling strategy. Specifically, we divide the time step range [1, T] into h intervals:\n{[d×i+1, d×(i+1)]} h-1 i=0\nwhere d=⌊T /h⌋. As in Eq. ( 15), we randomly sample t from these h intervals for each sample data, calculate the loss h times, and average them to obtain the final loss.\nL(θ, ρ, ω) = 1 h h-1 i=0 L(θ, ρ, ω)t∼Q i , Qi = [d×i+1, d×(i+1)].(15)\nIntuitively, this sampling strategy allows the encoder to learn different levels of geometric prior and learn from balanced supervision. It is more uniform compared to randomly sampling a single t from [1, T ] in the original DDPM [14]. Our approach divides the time steps into h = 4 intervals, as discussed in Sec. 4.3. Discussion. We chose to pre-train the backbone instead of the diffusion model ϵ θ for two reasons. Firstly, the backbone can be various deep feature extraction networks, which is more effective in extracting low-level and highlevel geometric features compared to the typically simpler diffusion model ϵ θ . Secondly, separating the backbone from the pipeline makes our pre-trained framework more adaptable to different architectures, thereby increasing its flexibility." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Pre-training Setups", "publication_ref": [ "b5" ], "table_ref": [], "text": "Pre-training. We use ShapeNet [6] to pre-train the model, a synthetic 3D dataset that contains 52,470 3D shapes across " }, { "figure_ref": [], "heading": "Downstream Tasks", "publication_ref": [ "b8", "b23", "b4" ], "table_ref": [], "text": "A high-quality point cloud pre-trained model should perceive hierarchical geometric prior. To assess the efficacy of the pre-trained model, we gauged its performance on various fine-tuned tasks using numerous real-world datasets. Object Classification. 
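The per-layer gating of Eq. (13) can be sketched as below. This is an assumed, simplified reading of a point condition network layer (per-point linear maps with a sigmoid reset gate driven by y, the concatenation of the condition c and the time-step embedding), not the released implementation; the class names are illustrative, the channel widths follow the [3, 128, 256, 512, 256, 128] progression with a final 3-dimensional output mentioned above, and no nonlinearity beyond the gate is assumed.

```python
import torch
import torch.nn as nn

class PCNetLayer(nn.Module):
    """One PCNet layer (Eq. 13): H_l = R_l * (W_lh H_{l-1} + b_lh) + W_lb y,
    with reset gate R_l = sigmoid(W_lr y + b_lr)."""

    def __init__(self, in_dim: int, out_dim: int, cond_dim: int):
        super().__init__()
        self.w_h = nn.Linear(in_dim, out_dim)                 # W_lh, b_lh on the previous hidden state
        self.w_b = nn.Linear(cond_dim, out_dim, bias=False)   # W_lb on y
        self.w_r = nn.Linear(cond_dim, out_dim)               # W_lr, b_lr for the reset gate

    def forward(self, h: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # h: (B, n, in_dim) per-point features; y: (B, cond_dim) condition + time embedding
        gate = torch.sigmoid(self.w_r(y)).unsqueeze(1)        # R_l, broadcast over the n points
        return gate * self.w_h(h) + self.w_b(y).unsqueeze(1)  # point-to-point guidance

class CPDM(nn.Module):
    """Stack of six PCNet layers mapping a noisy cloud (B, n, 3) to a (B, n, 3) prediction."""

    def __init__(self, cond_dim: int, dims=(3, 128, 256, 512, 256, 128, 3)):
        super().__init__()
        self.layers = nn.ModuleList(
            [PCNetLayer(dims[i], dims[i + 1], cond_dim) for i in range(len(dims) - 1)]
        )

    def forward(self, x_t: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        h = x_t
        for layer in self.layers:
            h = layer(h, y)
        return h
```

Because each point is transformed independently and the condition enters through a per-layer gate, this kind of generator can be driven by features from any backbone, which is what allows the framework to be applied beyond the transformer encoder.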
We first use the classification task on ScanObjectNN [45] to evaluate the shape recognition ability of the pre-trained model by PointDif. The ScanOb-jectNN dataset is divided into three subsets: OBJ-ONLY (only objects), OBJ-BG (objects and background), and PB-T50-RS (objects, background, and artificially added perturbations). We take the Overall Accuracy on these three subsets as the evaluation metric, and the detailed experimental results are summarized in Tab. 1. Our PointDif achieves better performance on all subsets, exceeding TAP by 2.4%, 2.9% and 1.9%, respectively. The significant improvement on the challenging ScanObjectNN benchmark strongly validates the effectiveness of our model in shaping understanding.\nObject Detection. We validate our model on the more challenging indoor dataset ScanNetV2 [9] for 3D object detection task to assess the scene understanding ability. We adopt 3DETR [29] as our method's task head. To ensure a fair comparison, we follow MaskPoint [24] and replace the encoder of 3DETR with our pre-trained encoder and fine-tune it. Unlike MaskPoint and Point-BERT, which are pre-trained on the ScanNet-Medium dataset in the same domain as ScanNetV2, our approach and Point-MAE are pretrained on ShapeNet in a different domain and only finetuned on the training set of ScanNetV2. Tab. 2 displays our experimental results. Our method outperforms Point-MAE and surpasses MaskPoint and Point-BERT by 1.6% and 5.4%, respectively. Additionally, our approach exhibits a 2.3% improvement compared to pre-training the transformer encoder of 3DETR on the ShapeNet dataset using the TAP method. The experiments demonstrate that our model exhibits strong transferability and generalization capability on scene understanding. Indoor Semantic Segmentation. We further validate our model on the indoor S3DIS dataset [3] for semantic segmentation tasks to show the understanding of contextual semantics and local geometric relationships. We test our model on Area 5 while training on other areas. To make a fair comparison, we put all pre-trained models in the same codebase based on the PointNext [35] baseline and use the same decoder and semantic segmentation head. We freeze able it to adapt to downstream tasks with significant density variations. The entire results are reported in Sec. 8. Object detection results of CAGroup3D with and without pre-training. We further evaluate our pre-training method on the competitive 3D object detection model, CA-Group3D [46], a two-stage fully sparse 3D detection network. We train CAGroup3D from scratch and report the result for a fair comparison. We use our method to pretrain the backbone BiResNet on ShapeNet. Specifically, we treat BiResNet as the encoder to extract features. The conditional point generator employs the masked features to guide the point-to-point recovery of the original point cloud.\nOther pre-training settings follow Sec. 4.1. The experimental results are shown in Tab. 5. Compared to the train-fromscratch variant, our method improves performance by 0.9% and 0.5% on AP 25 and AP 50 , respectively. Therefore, our pre-training framework can be flexibly applied to various backbones to improve performance. Please refer to Sec. 8 for additional results." }, { "figure_ref": [ "fig_4" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "Conditional guidance strategies. We study the influence of different guidance strategies for CPDM on S3DIS. As shown in Tab. 
6, the cross-attention way even performs worse than the simple pointwise concatenation way. We speculate this is because the cross-attention mechanism attempts to capture relationships between different points. However, the density varies across different regions for 0.9 point cloud data, potentially impacting the model's performance. In contrast, our PCNet employs a point-to-point guidance approach, where each point is processed independently of others. This approach is advantageous in enabling the network to capture point density information. Additionally, compared to pointwise concatenation, our utilization of the reset gate control mechanism assists the network in adaptively retaining relevant geometric features, thereby enhancing performance.\nRecurrent uniform sampling. We validate the effectiveness of our proposed recurrent uniform sampling strategy on S3DIS. Specifically, (i) we first verify the impact of the number of partition intervals and whether the recurrent sampling strategy is adopted on experimental results with the same effective batchsize. As presented in lines 1-6 of Tab. Masking ratio. We further validate the impact of different masking ratios on downstream tasks and separately report the results for classification on ScanObjectNN and semantic segmentation on S3DIS. As shown in Fig. 4, encoding all point patches without masking harms the model's learning. By employing masking, the overall difficulty of the self-supervised proxy task is increased, thereby aiding the backbone in learning more rich geometric priors. Additionally, our method achieves the best classification and semantic segmentation performance when the mask ratio is 0.8." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In Masking strategy. We report the experimental results for downstream classification and semantic segmentation tasks with different masking strategies. The strategy of block masking involves masking adjacent point patches. From Tab. 12, we observe that random masking performs better than block masking under the same masking ratio (0.8)." }, { "figure_ref": [ "fig_5" ], "heading": "Additional Visualization.", "publication_ref": [], "table_ref": [], "text": "S3DIS semantic segmentation visualizations. We provide a qualitative comparison of results for S3DIS semantic segmentation. As shown in Fig. 5, the predictions of our " }, { "figure_ref": [], "heading": "limitation", "publication_ref": [], "table_ref": [], "text": "Our pre-training method has demonstrated outstanding performance on various 3D real datasets, but its performance is slightly worse on synthetic datasets. We suspect that this is due to the inability of synthetic datasets to fully simulate the complexity of real-world objects, such as the presence of more noise and occlusion in real datasets. Furthermore, the synthetic datasets are relatively simple, and the performance on the synthetic datasets is currently saturated, with only slight improvements from other pre-training methods. Therefore, it is insufficient to demonstrate the performance advantage of the algorithm on the synthetic datasets. In the future, we will continue exploring and fully exploiting diffusion models' beneficial impact on point cloud pretraining. We also hope that our work will inspire more research on pre-training with diffusion models, contributing to the advancement of the field. 
" }, { "figure_ref": [], "heading": "Point Cloud Pre-training with Diffusion Models", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Proof", "publication_ref": [], "table_ref": [], "text": "Calculating the probability distribution q(X t-1 |X t ) for the reverse process is hard. However, given X 0 , the posterior of the forward diffusion process can be calculated using the following equation: q(X t-1 |X t , X 0 )=N (X t-1 ; μt(X t , X 0 ), βtI), ( 16)\nAccording to Eq. ( 6) in the main text, the variational lower bound can be divided into three parts:\nL T is a constant without parameters and can be ignored. To compute the parameterization of L t-1 , following [14], we set the mean µ θ (X t , t, c) of p θ (X t-1 |X t , c) to:\nWe can calculate L t-1 :\nwhere C is a parameter-free constant that can be disregarded. By substituting Eq. ( 17) and Eq. ( 19) into L t-1 :\nwhere\nis a constant that is unrelated to the loss, and following [14], we can further simplify the training loss:" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b23", "b5", "b4" ], "table_ref": [], "text": "All experiments are conducted on the RTX 3090 GPU. We describe the details of fine-tuning on various tasks. Object classification. We use a three-layer MLP with dropout as the classification head. During the fine-tuning process, we sample 2048 points for each point cloud, divide them into 128 point patches, set the learning rate to 5e-4, and fine-tune for 300 epochs. 3D object detection. Unlike MaskPoint [24], which is pre-trained on ScanNet-Medium and loads the weights of both the SA layer and the encoder during fine-tuning. During the fine-tuning stage, we only load the weights of the transformer encoder pre-trained on ShapeNet [6]. Following Maskpoint, we set the learning rate to 5e-4 and use the AdamW optimizer with a weight decay of 0.1. The batch size is set to 8. Semantic segmentation on indoor dataset. For a fair comparison, we put all pre-trained transformer encoders within the same codebase and freeze them while fine-tuning the decoder and semantic segmentation head. Due to limited computing resources, we set the batch size to 4 during finetuning. The remaining settings followed those used for training PointNeXt [35] from scratch in the original paper. Semantic segmentation on outdoor dataset. During finetuning, we load the backbone MinkUNet pre-trained on ShapeNet. And fine-tune the entire network while following the same settings used for training MinkowskiNet [8] from scratch. 3D object detection of CAGroup3D with and without pre-training. We load the weights of the backbone BiRes-Net, which is pre-trained on ShapeNet using our method. Then, we fine-tune the entire CAGroup3D [46] model using the same settings as those used for training CAGroup3D from scratch. Note that, we utilize the official codebase of CAGroup3D and consider the best-reproduced results as the baseline for comparison." }, { "figure_ref": [], "heading": "Additional results", "publication_ref": [], "table_ref": [], "text": "Semantic segmentation on outdoor dataset. As shown in Tab. 9, We report the mean IoU(%) and the IoU(%) on Se-manticKITTI [5] for all semantic classes for different methods. Our method improves mean IoU and IoU for multiple categories compared to the variant trained from scratch. 
The experimental results also demonstrate that our method performs well on outdoor datasets.\nObject detection results of CAGroup3D with and without pre-training. We report the Overall and different category results at AP 25 (%) and AP 50 (%). From Tab. 10," } ]
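To complement the Training Objective section, here is a hedged sketch of the recurrent uniform sampling strategy of Eq. (15): the range [1, T] is split into h intervals, one time step is drawn from each interval for every point cloud in the batch, and the resulting h losses are averaged. The function names and the `loss_fn` interface (wrapping the per-step objective of Eq. (14)) are illustrative assumptions, not the paper's API.

```python
import torch

def sample_t_per_interval(batch_size: int, T: int = 2000, h: int = 4, device="cpu"):
    """Draw one t per interval Q_i = [d*i + 1, d*(i + 1)], d = floor(T / h), for each sample."""
    d = T // h
    ts = []
    for i in range(h):
        low, high = d * i + 1, d * (i + 1)                       # 1-indexed steps, as in the paper
        ts.append(torch.randint(low, high + 1, (batch_size,), device=device))
    return ts                                                     # list of h tensors of shape (B,)

def recurrent_uniform_loss(loss_fn, x0: torch.Tensor, cond: torch.Tensor,
                           T: int = 2000, h: int = 4) -> torch.Tensor:
    """Eq. (15): average the diffusion loss over one sampled t from each of the h intervals."""
    ts = sample_t_per_interval(x0.shape[0], T=T, h=h, device=x0.device)
    losses = [loss_fn(x0, cond, t) for t in ts]                   # each call evaluates Eq. (14) at t
    return torch.stack(losses).mean()
```

With h = 4 this keeps the number of unique point clouds in a batch fixed while multiplying the effective batch size by h, matching the configuration reported in the recurrent uniform sampling ablation.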
Pre-training a model and then fine-tuning it on downstream tasks has demonstrated significant success in the 2D image and NLP domains. However, due to the unordered and non-uniform density characteristics of point clouds, it is non-trivial to explore the prior knowledge of point clouds and pre-train a point cloud backbone. In this paper, we propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif). We consider the point cloud pre-training task as a conditional point-to-point generation problem and introduce a conditional point generator. This generator aggregates the features extracted by the backbone and employs them as the condition to guide the point-to-point recovery from the noisy point cloud, thereby assisting the backbone in capturing both local and global geometric priors as well as the global point density distribution of the object. We also present a recurrent uniform sampling optimization strategy, which enables the model to uniformly recover from various noise levels and learn from balanced supervision. Our PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection. Specifically, PointDif attains 70.0% mIoU on S3DIS Area 5 for the segmentation task and achieves an average improvement of 2.4% on ScanObjectNN for the classification task compared to TAP. Furthermore, our pretraining framework can be flexibly applied to diverse point cloud backbones and bring considerable gains.
Point Cloud Pre-training with Diffusion Models
[ { "figure_caption": "Figure 1 .1Figure 1. Schematic illustration of our PointDif. Our Point-Dif can pre-train different backbones by reconstructing the original point cloud point-to-point from the noisy point cloud. During pre-training, the latent features guide the restoration of noisy point clouds at various levels, allowing the backbone to learn more hierarchical geometric prior.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. (a) The pipeline of our PointDif. We first divide the input point cloud into masked and embedded point patches. Then, a transformer encoder is used to extract the latent features. Finally, we employ the condition aggregation network (CANet) to aggregate latent features to obtain the condition c, and then guide the conditional point diffusion model (CPDM) to point-to-point recovery of the original point cloud from the randomly perturbed point cloud. (b) The detailed structure of CANet. (c) The detailed structure of the point condition network (PCNet), CPDM is composed of six PCNet.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Visualization results on the ShapeNet validation set. Each row visualizes the input point cloud, masked point cloud, and reconstructed point cloud. Even though we mask 80% points, PointDif still produces high-quality point clouds.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "-1 and H l are respectively the input and output of PCNet, σ represents the sigmoid function, and W l * , b l * are all trainable parameters. y represents the feature obtained by concatenating the condition c with the time step embedding. The input dimensions for each PCNet are [3, 128, 256, 512, 256, 128] and the output dimension of the last PCNet is 3. By incorporating the condition into the control mechanism of the reset gate R l , the model can adaptively select geometric features to denoise. Recovering from noisy point clouds through point-to-point guidance can aid the network in learning the overall point density distribution of the object. This, in turn, assists different backbones in learning a broader range of dense and sparse geometric priors, resulting in enhanced performance in downstream tasks related to indoor and outdoor scenes.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Masking ratio. We report the Overall Accuracy(%) on ScanObjectNN and the mean IoU(%) on S3DIS with different masking ratios.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Qualitative comparison on S3DIS semantic segmentation. The first column shows the original point cloud input, followed by columns 2-4, which display the segmentation results of PointNeXt, Point-MAE, and our method. The fifth column shows the ground truth.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Object classification results on ScanObjectNN. We report the Overall Accuracy(%).", "figure_data": "MethodsPre. 
OBJ-ONLY OBJ-BG PB-T50-RSPointNet [32]✘79.273.368.0PointNet++ [33]✘84.382.377.9PointCNN [22]✘85.586.178.5DGCNN [47]✘86.282.878.1Transformer [58]✘80.5579.8677.24Transformer-OcCo [58] ✘85.5484.8578.79Point-BERT [58]✔88.1287.4383.07MaskPoint [24]✔89.7089.3084.60Point-MAE [31]✔88.2990.0285.18TAP [49]✔89.5090.3685.67PointDif (Ours)✔91.9193.2987.61", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Object detection results on ScanNet. We report the Average Precision(%). \"Pre Dataset\" refers to the pre-training dataset, ScanNet-vid and ScanNet-Medium are both subsets of ScanNet.", "figure_data": "MethodsPre.Pre DatasetAP50VoteNet [34]✘-33.5STRL [16]✔ScanNet [9]38.4PointContrast [55]✔ScanNet [9]38.0DepthContrast [61]✔ScanNet-vid [61]42.93DETR [29]✘-37.9Point-BERT [58]✔ScanNet-Medium [24]38.3MaskPoint [24]✔ScanNet-Medium [24]42.1Point-MAE [31]✔ShapeNet [6]42.8TAP [49]✔ShapeNet [6]41.4PointDif (Ours)✔ShapeNet [6]43.755 object categories. We pre-train our model only on thetraining set, which consists of 41,952 shapes. For each 3Dshape, we sample 1,024 points to serve as the input for themodel. We set s as 64, which means each point cloud isdivided into 64 patches. Furthermore, the KNN algorithmis used to select k=32 nearest points as a point patch.Model Configurations. Following [31, 58], we set the em-bedding dimension of the transformer encoder to 384 andthe number of heads to 6. The condition dimension is 768.Training Details. During pre-training, we adopt theAdamW optimizer with a weight decay of 0.05 and a learn-ing rate of 0.001. We apply the cosine decay schedule toadjust the learning rate. Random scaling and translation areused for data augmentation. Our model is pre-trained for300 epochs with a batchsize of 128. The T for the diffusionprocess is set to 2000, and β t linearly increases from 1e-4to 1e-2.Visualization. To demonstrate the effectiveness of our pre-training scheme, we visualize the point cloud generated byour PointDif. As shown in Fig. 3, we apply a high maskratio of 0.8 to the input point cloud for masking and usethe masked point cloud as a condition to guide the diffusionmodel in generating the original point cloud. Our PointDifproduces high-quality point clouds. Experimental resultsdemonstrate that the geometric prior learned through ourpre-training method can provide excellent guidance for bothshallow texture and shape semantics.", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Semantic segmentation results on S3DIS Area 5. We report the mean IoU(%) and mean Accuracy(%).", "figure_data": "MethodsPre.mIoUmAccPointNet [32]✘41.149.0PointNet++ [33]✘53.5-PointCNN [22]✘57.363.9KPConv [44]✘67.172.8SegGCN [20]✘63.670.4Pix4Point [36]✘69.675.2MKConv [52]✘67.775.1PointNeXt [35]✘68.575.1Point-BERT [58]✔68.976.1MaskPoint [24]✔68.674.2Point-MAE [31]✔68.476.2PointDif (Ours)✔70.077.1", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Semantic segmentation results on SemanticKITTI val set. 
We report the mean IoU(%) and IoU(%) for some semantic classes.", "figure_data": "MethodsmIoU car bicycle truck preson bicyclist motorcyclist road sidewalk parking vegetation trunk terrainCylinder3D [63]66.1 96.9 54.4 81.0 79.392.40.194.6 82.247.985.966.9 69.2SPVCNN [42]68.6 97.9 59.8 79.8 80.092.00.694.2 81.750.488.069.7 74.1RPVNet [56]68.9 97.9 42.8 91.2 78.390.20.795.2 83.157.187.371.4 72.0MinkowskiNet [8]70.2 97.4 56.1 84.0 81.991.424.094.0 81.352.288.468.6 74.8MinkowskiNet+PointDif 71.3 97.5 58.8 92.8 81.492.330.394.1 81.756.088.569.1 75.2", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Object detection results of CAGroup3D with and without pre-training. We report the Average Precision(%).", "figure_data": "MethodsAP25AP50CAGroup [46]73.2060.84CAGroup+PointDif74.1461.31", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Conditional guidance strategies. We report the mean IoU(%) and mean Accuracy(%) on S3DIS Area 5.", "figure_data": "MethodsmIoUmAccCross Attention69.0975.19Point Concat69.4375.39Point Condition Network70.0277.05the encoder pre-trained on ShapeNet and fine-tune the de-coder and the segmentation head. The experiment resultsare shown in Tab. 3. Compared to training from scratch,our method boosts the performance of PointNext by 1.5%in terms of mIoU. Compared to other pre-training meth-ods such as Point-BERT, MaskPoint and Point-MAE, ourmethod achieves approximately 1.4% improvement for eachon mIoU. Note that, PointNext is originally trained usinga batchsize of 8, since computational resource constraints,we thus retrained it with a batchsize of 4 for a fair com-parison. Significant improvements indicate that our pre-trained model has successfully acquired hierarchical geo-metric prior knowledge essential for comprehending con-textual semantics and local geometric relationships.Outdoor Semantic Segmentation. We also validate theeffectiveness of our method on the more challenging real-world outdoor scene dataset KITTI. The SemanticKITTIdataset [5] is a large-scale outdoor LiDAR segmentationdataset, consisting of 43,000 scans with 19 semantic cat-egories. We employ MinkowskiNet [8] as our baselinemodel. During the pre-training phase, we discard its seg-mentation head and utilize the backbone MinkUNet asthe encoder to extract latent features. We pre-train theMinkUNet using our framework on ShapeNet and subse-quently fine-tuned it on the SemanticKITTI. Other pre-training configurations follow the guidelines outlined inSec. 4.1. The experiment results in Tab. 4 demonstrate that", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Recurrent uniform sampling. '#Point Clouds' represents the number of unique point clouds in a batch, and '#t' represents the number of time steps t sampled for each point cloud.", "figure_data": "#Point Clouds #t Intervals Effective Batchsize mIoU mAcc1284451270.02 77.051284151269.68 75.902562251269.67 76.262562151269.36 75.94648851269.42 75.71648151269.24 75.505121451269.91 75.935121151269.51 75.951281112869.39 76.451283338469.63 75.541285564069.24 75.16", "figure_id": "tab_10", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Different time intervals. We study the impact of pretraining with different time intervals. We report the object classification results on ScanObjectNN and semantic segmentation results on S3DIS Area 5. time interval, while the classification results will be slightly worse. 
We observe a gradual transition of classification and segmentation results among these four intervals, which fully validates our theory. In the early intervals of training, the model needs more low-level geometric features to guide the recovery of shallow texture from low-noise point clouds. Moreover, in the later intervals, high-level geometric features become crucial for guiding the recovery of semantic structure in high-noise point clouds. Therefore, our model can learn hierarchical geometric features throughout the entire training process.", "figure_data": "Time IntervalsClassification OBJ-ONLY OBJ-BG PB-T50-RSSegmentation mIoU[1, 500]92.4392.2588.3168.83[501, 1000]91.5791.3987.2368.52[1001, 1500]90.3692.2587.1369.19[1501, 2000]89.5087.6183.2869.70[1, 2000](Ours)91.9193.2987.6170.02nificantly better in the [1, 500] time interval than in otherintervals, while achieving unsatisfactory segmentation re-sults. Conversely, the segmentation performance is betterin the [1501, 2000]", "figure_id": "tab_12", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Semantic segmentation results on SemanticKITTI val set. We report the mean IoU(%) and the IoU(%) for all semantic classes.", "figure_data": "MethodsmIoUcarbicyclemotorcycletruckother-vehiclepersonbicyclistmotorcyclistroadparkingsidewalkother-groundbuildingfencevegetationtrunkterrainpoletraffic-signCylinder3D [63]66.1 96.9 54.4 75.9 81.0 67.0 79.3 92.4 0.1 94.6 47.9 82.2 0.1 90.3 57.0 85.9 66.9 69.2 63.6 50.6SPVCNN [42]68.5 97.9 59.8 81.1 79.8 80.8 80.0 92.0 0.6 94.2 50.4 81.7 0.6 90.9 63.5 88.0 69.7 74.1 65.8 51.5RPVNet [56]68.9 97.9 42.8 87.6 91.2 83.5 78.3 90.2 0.7 95.2 57.1 83.1 0.2 91.0 63.2 87.3 71.4 72.0 64.9 51.5MinkowskiNet [8]70.2 97.4 56.1 84.9 84.0 79.1 81.9 91.4 24.0 94.0 52.2 81.3 0.2 92.0 67.2 88.4 68.6 74.8 65.5 50.6MinkowskiNet+PointDif 71.3 97.5 58.8 84.6 92.8 80.6 81.4 92.3 30.3 94.1 56.0 81.7 0.2 91.4 65.4 88.5 69.1 75.2 65.0 50.5", "figure_id": "tab_14", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Object detection results of CAGroup3D with and without pre-training. We report the Overall and different category results at AP25(%) and AP50(%). AP 25 73.20 54.39 85.78 95.70 91.95 69.67 67.87 60.84 63.71 38.70 73.62 82.12 66.96 58.32 75.80 99.97 77.85 87.74 66.61 CAGroup3D+PointDif AP 25 74.14 53.71 87.85 95.46 89.73 73.01 69.36 59.72 65.22 41.65 75.07 82.66 67.10 56.27 79.22 99.91 82.27 89.55 66.69 CAGroup3D [46] AP 50 60.84 39.01 81.51 90.24 82.75 65.89 53.47 36.39 55.82 25.13 42.01 66.19 49.33 53.16 57.73 96.52 53.80 86.75 59.35 CAGroup3D+PointDif AP 50 61.31 38.47 82.46 91.03 82.23 67.09 53.88 34.72 56.80 31.34 40.02 65.49 48.19 51.40 70.57 96.37 52.60 82.33 58.53", "figure_data": "MethodsMetricOverallcabinetbedchairsofatabledoorwindowbookshelfpicturecounterdeskcurtainrefrigeratorshowercurtraintoiletsinkbathtubgarbagebinCAGroup3D [46]", "figure_id": "tab_15", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Recurrent uniform sampling. '#Point Clouds' represents the number of unique point clouds in a batch, and '#t' represents the number of time steps t sampled for each point cloud. We report the mean IoU(%) and mean Accuracy(%) on S3DIS. observe that pre-training with our method leads to better performance than training CAGroup3D from scratch. Therefore, our pre-training framework can be flexibly applied to various backbones to improve performance. Recurrent uniform sampling. 
Keeping the number of unique point clouds in a batch constant, we conduct experiments with 2 and 8 intervals divisions. The results are shown in Tab. 11, our strategy of dividing the 4 intervals and uniform sampling time step t is optimal.", "figure_data": "#Point Clouds #t Intervals Effective Batchsize mIoU mAcc1282225669.52 75.461284451270.02 77.0512888102469.49 76.50", "figure_id": "tab_16", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Masking strategy. \"Random\" refers to Random masking and \"Block\" refers to Block masking, We report the Overall Accuracy(%) on ScanObjectNN OBJ-BG subset and the mean IoU(%) on S3DIS.", "figure_data": "Masking Strategy Mask Ratio OBJ-BG mIoUBlock0.891.9169.47Random0.893.2970.02method are closer to the ground truth and less incorrectlysegmented than training PointNeXt from scratch and Point-MAE.", "figure_id": "tab_17", "figure_label": "12", "figure_type": "table" } ]
Xiao Zheng; Xiaoshui Huang; Guofeng Mei; Yuenan Hou; Zhaoyang Lyu; Bo Dai; Wanli Ouyang; Yongshun Gong
[ { "authors": "Mohamed Afham; Isuru Dissanayake; Dinithi Dissanayake; Amaya Dharmasiri; Kanchana Thilakarathna; Ranga Rodrigo", "journal": "", "ref_id": "b0", "title": "Crosspoint: Self-supervised cross-modal contrastive learning for 3d point cloud understanding", "year": "2022" }, { "authors": "Tomer Amit; Eliya Nachmani; Tal Shaharbany; Lior Wolf", "journal": "", "ref_id": "b1", "title": "Segdiff: Image segmentation with diffusion probabilistic models", "year": "2021" }, { "authors": "Iro Armeni; Ozan Sener; Helen Amir R Zamir; Ioannis Jiang; Martin Brilakis; Silvio Fischer; Savarese", "journal": "", "ref_id": "b2", "title": "3d semantic parsing of large-scale indoor spaces", "year": "2016" }, { "authors": "Dmitry Baranchuk; Ivan Rubachev; Andrey Voynov; Valentin Khrulkov; Artem Babenko", "journal": "", "ref_id": "b3", "title": "Label-efficient semantic segmentation with diffusion models", "year": "2021" }, { "authors": "Jens Behley; Martin Garbade; Andres Milioto; Jan Quenzel; Sven Behnke; Cyrill Stachniss; Jurgen Gall", "journal": "", "ref_id": "b4", "title": "Semantickitti: A dataset for semantic scene understanding of lidar sequences", "year": "2019" }, { "authors": "Thomas Angel X Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Su", "journal": "", "ref_id": "b5", "title": "Shapenet: An information-rich 3d model repository", "year": "2015" }, { "authors": "Shoufa Chen; Peize Sun; Yibing Song; Ping Luo", "journal": "", "ref_id": "b6", "title": "Diffusiondet: Diffusion model for object detection", "year": "2022" }, { "authors": "Christopher Choy; Junyoung Gwak; Silvio Savarese", "journal": "", "ref_id": "b7", "title": "4d spatio-temporal convnets: Minkowski convolutional neural networks", "year": "2019" }, { "authors": "Angela Dai; X Angel; Manolis Chang; Maciej Savva; Thomas Halber; Matthias Funkhouser; Nießner", "journal": "", "ref_id": "b8", "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes", "year": "2017" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b9", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "Advances in neural information processing systems", "ref_id": "b10", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Ziyu Guo; Xianzhi Li; Pheng ; Ann Heng", "journal": "", "ref_id": "b11", "title": "Joint-mae: 2d-3d joint masked autoencoders for 3d point cloud pre-training", "year": "2023" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b12", "title": "Masked autoencoders are scalable vision learners", "year": "2022" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b13", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Di Huang; Sida Peng; Tong He; Honghui Yang; Xiaowei Zhou; Wanli Ouyang", "journal": "", "ref_id": "b14", "title": "Ponder: Point cloud pre-training via neural rendering", "year": "2023" }, { "authors": "Siyuan Huang; Yichen Xie; Song-Chun Zhu; Yixin Zhu", "journal": "", "ref_id": "b15", "title": "Spatio-temporal self-supervised representation learning for 3d point clouds", "year": "2021" }, { "authors": "Tianyu Huang; Bowen Dong; Yunhan Yang; Xiaoshui 
Huang; W H Rynson; Wanli Lau; Wangmeng Ouyang; Zuo", "journal": "", "ref_id": "b16", "title": "Clip2point: Transfer clip to point cloud classification with image-depth pre-training", "year": "2023" }, { "authors": "Xiaoshui Huang; Sheng Li; Wentao Qu; Tong He; Yifan Zuo; Wanli Ouyang", "journal": "", "ref_id": "b17", "title": "Frozen clip model is efficient point cloud backbone", "year": "2022" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo", "journal": "", "ref_id": "b18", "title": "Segment anything", "year": "2023" }, { "authors": "Huan Lei; Naveed Akhtar; Ajmal Mian", "journal": "", "ref_id": "b19", "title": "Seggcn: Efficient 3d point cloud segmentation with fuzzy spherical kernel", "year": "2020" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b20", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Yangyan Li; Rui Bu; Mingchao Sun; Wei Wu; Xinhan Di; Baoquan Chen", "journal": "Advances in neural information processing systems", "ref_id": "b21", "title": "Pointcnn: Convolution on x-transformed points", "year": "2018" }, { "authors": "Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin", "journal": "", "ref_id": "b22", "title": "Magic3d: High-resolution text-to-3d content creation", "year": "2023" }, { "authors": "Haotian Liu; Mu Cai; Yong Jae Lee", "journal": "Springer", "ref_id": "b23", "title": "Masked discrimination for self-supervised learning on point clouds", "year": "2022" }, { "authors": "Minghua Liu; Chao Xu; Haian Jin; Linghao Chen; Mukund Varma; T ; Zexiang Xu; Hao Su", "journal": "", "ref_id": "b24", "title": "One-2-3-45: Any single image to 3d mesh in 45 seconds without pershape optimization", "year": "2023" }, { "authors": "Cheng Lu; Yuhao Zhou; Fan Bao; Jianfei Chen; Chongxuan Li; Jun Zhu", "journal": "", "ref_id": "b25", "title": "Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps", "year": "2022" }, { "authors": "Cheng Lu; Yuhao Zhou; Fan Bao; Jianfei Chen; Chongxuan Li; Jun Zhu", "journal": "", "ref_id": "b26", "title": "Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models", "year": "2022" }, { "authors": "Shitong Luo; Wei Hu", "journal": "", "ref_id": "b27", "title": "Diffusion probabilistic models for 3d point cloud generation", "year": "2021" }, { "authors": "Ishan Misra; Rohit Girdhar; Armand Joulin", "journal": "", "ref_id": "b28", "title": "An end-toend transformer model for 3d object detection", "year": "2021" }, { "authors": "Alex Nichol; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b29", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2021" }, { "authors": "Yatian Pang; Wenxiao Wang; Francis Eh Tay; Wei Liu; Yonghong Tian; Li Yuan", "journal": "Springer", "ref_id": "b30", "title": "Masked autoencoders for point cloud self-supervised learning", "year": "2022" }, { "authors": "Hao Charles R Qi; Kaichun Su; Leonidas J Mo; Guibas", "journal": "", "ref_id": "b31", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2017" }, { "authors": "Charles Ruizhongtai; Qi ; 
Li Yi; Hao Su; Leonidas J Guibas", "journal": "Advances in neural information processing systems", "ref_id": "b32", "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Or Charles R Qi; Kaiming Litany; Leonidas J He; Guibas", "journal": "", "ref_id": "b33", "title": "Deep hough voting for 3d object detection in point clouds", "year": "2019" }, { "authors": "Guocheng Qian; Yuchen Li; Houwen Peng; Jinjie Mai; Hasan Hammoud; Mohamed Elhoseiny; Bernard Ghanem", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b34", "title": "Pointnext: Revisiting pointnet++ with improved training and scaling strategies", "year": "2022" }, { "authors": "Guocheng Qian; Xingdi Zhang; Abdullah Hamdi; Bernard Ghanem", "journal": "", "ref_id": "b35", "title": "Pix4point: Image pretrained transformers for 3d point cloud understanding", "year": "2022" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b36", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b37", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b38", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b39", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": "Yu Takagi; Shinji Nishimoto", "journal": "bioRxiv", "ref_id": "b40", "title": "High-resolution image reconstruction with latent diffusion models from human brain activity", "year": "2022" }, { "authors": "Haotian Tang; Zhijian Liu; Shengyu Zhao; Yujun Lin; Ji Lin; Hanrui Wang; Song Han", "journal": "Springer", "ref_id": "b41", "title": "Searching efficient 3d architectures with sparse point-voxel convolution", "year": "2020" }, { "authors": "Junshu Tang; Tengfei Wang; Bo Zhang; Ting Zhang; Ran Yi; Lizhuang Ma; Dong Chen", "journal": "", "ref_id": "b42", "title": "Make-it-3d: High-fidelity 3d creation from a single image with diffusion prior", "year": "2023" }, { "authors": "Hugues Thomas; Charles R Qi; Jean-Emmanuel Deschaud; Beatriz Marcotegui; Leonidas J Franc ¸ois Goulette; Guibas", "journal": "", "ref_id": "b43", "title": "Kpconv: Flexible and deformable convolution for point clouds", "year": "2019" }, { "authors": "Angelina Mikaela; Quang-Hieu Uy; Binh-Son Pham; Thanh Hua; Sai-Kit Nguyen; Yeung", "journal": "", "ref_id": "b44", "title": "Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data", "year": "2019" }, { "authors": "Haiyang Wang; Shaocong Dong; Shaoshuai Shi; Aoxue Li; Jianan Li; Zhenguo Li; Liwei Wang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b45", "title": "Cagroup3d: Classaware grouping for 3d object detection on point clouds", "year": "2022" }, { "authors": "Yue Wang; Yongbin Sun; Ziwei Liu; Sanjay E Sarma; Michael M Bronstein; Justin M Solomon", "journal": "Acm Transactions On Graphics (tog)", "ref_id": "b46", "title": 
"Dynamic graph cnn for learning on point clouds", "year": "2019" }, { "authors": "Zhengyi Wang; Cheng Lu; Yikai Wang; Fan Bao; Chongxuan Li; Hang Su; Jun Zhu", "journal": "", "ref_id": "b47", "title": "Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation", "year": "2023" }, { "authors": "Ziyi Wang; Xumin Yu; Yongming Rao; Jie Zhou; Jiwen Lu", "journal": "", "ref_id": "b48", "title": "Take-a-photo: 3d-to-2d generative pre-training of point cloud models", "year": "2023" }, { "authors": "Chen Wei; Karttikeya Mangalam; Po-Yao Huang; Yanghao Li; Haoqi Fan; Hu Xu; Huiyu Wang; Cihang Xie; Alan Yuille; Christoph Feichtenhofer", "journal": "", "ref_id": "b49", "title": "Diffusion models as masked autoencoders", "year": "2023" }, { "authors": "Julia Wolleb; Robin Sandkühler; Florentin Bieder; Philippe Valmaggia; Philippe C Cattin", "journal": "PMLR", "ref_id": "b50", "title": "Diffusion models for implicit image segmentation ensembles", "year": "2022" }, { "authors": "Sungmin Woo; Dogyoon Lee; Sangwon Hwang; Jin Woo; Sangyoun Kim; Lee", "journal": "Pattern Recognition", "ref_id": "b51", "title": "Mkconv: Multidimensional feature representation for point cloud analysis", "year": "2023" }, { "authors": "Chenfei Wu; Shengming Yin; Weizhen Qi; Xiaodong Wang; Zecheng Tang; Nan Duan", "journal": "", "ref_id": "b52", "title": "Visual chatgpt: Talking, drawing and editing with visual foundation models", "year": "2023" }, { "authors": "Xiaoyang Wu; Xin Wen; Xihui Liu; Hengshuang Zhao", "journal": "", "ref_id": "b53", "title": "Masked scene contrast: A scalable framework for unsupervised 3d representation learning", "year": "2023" }, { "authors": "Saining Xie; Jiatao Gu; Demi Guo; Leonidas Charles R Qi; Or Guibas; Litany", "journal": "Springer", "ref_id": "b54", "title": "Pointcontrast: Unsupervised pretraining for 3d point cloud understanding", "year": "2020" }, { "authors": "Jianyun Xu; Ruixiang Zhang; Jian Dou; Yushi Zhu; Jie Sun; Shiliang Pu", "journal": "", "ref_id": "b55", "title": "Rpvnet: A deep and efficient range-pointvoxel fusion network for lidar point cloud segmentation", "year": "2021" }, { "authors": "Jiale Xu; Xintao Wang; Weihao Cheng; Yan-Pei Cao; Ying Shan; Xiaohu Qie; Shenghua Gao", "journal": "", "ref_id": "b56", "title": "Dream3d: Zero-shot text-to-3d synthesis using 3d shape prior and text-to-image diffusion models", "year": "2023" }, { "authors": "Xumin Yu; Lulu Tang; Yongming Rao; Tiejun Huang; Jie Zhou; Jiwen Lu", "journal": "", "ref_id": "b57", "title": "Point-bert: Pre-training 3d point cloud transformers with masked point modeling", "year": "2022" }, { "authors": "Yihan Zeng; Chenhan Jiang; Jiageng Mao; Jianhua Han; Chaoqiang Ye; Qingqiu Huang; Dit-Yan Yeung; Zhen Yang; Xiaodan Liang; Hang Xu", "journal": "", "ref_id": "b58", "title": "Clip2: Contrastive languageimage-point pretraining from real-world point cloud data", "year": "2023" }, { "authors": "Renrui Zhang; Ziyu Guo; Peng Gao; Rongyao Fang; Bin Zhao; Dong Wang; Yu Qiao; Hongsheng Li", "journal": "", "ref_id": "b59", "title": "Point-m2ae: multi-scale masked autoencoders for hierarchical point cloud pre-training", "year": "2022" }, { "authors": "Zaiwei Zhang; Rohit Girdhar; Armand Joulin; Ishan Misra", "journal": "", "ref_id": "b60", "title": "Self-supervised pretraining of 3d features on any point-cloud", "year": "2021" }, { "authors": "Shihao Zhao; Dongdong Chen; Yen-Chun Chen; Jianmin Bao; Shaozhe Hao; Lu Yuan; Kwan-Yee K Wong", "journal": "", "ref_id": "b61", "title": 
"Uni-controlnet: All-in-one control to text-to-image diffusion models", "year": "2023" }, { "authors": "Xinge Zhu; Hui Zhou; Tai Wang; Fangzhou Hong; Yuexin Ma; Wei Li; Hongsheng Li; Dahua Lin", "journal": "", "ref_id": "b62", "title": "Cylindrical and asymmetrical 3d convolution networks for lidar segmentation", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 366.03, 609.66, 179.08, 26.81 ], "formula_id": "formula_0", "formula_text": "q(X 1:T |X 0 ) = T t=1 q(X t |X t-1 ),(1)" }, { "formula_coordinates": [ 3, 356.25, 636.99, 188.86, 10.33 ], "formula_id": "formula_1", "formula_text": "q(X t |X t-1 ) = N (X t ; 1 -βtX t-1 , βtI),(2)" }, { "formula_coordinates": [ 4, 90.51, 79.77, 195.86, 15.09 ], "formula_id": "formula_2", "formula_text": "q(X t |X 0 ) = N (X t ; √ ᾱtX 0 , (1 -ᾱt)I),(3)" }, { "formula_coordinates": [ 4, 88.56, 188.79, 197.8, 26.81 ], "formula_id": "formula_3", "formula_text": "p θ (X 0:T , c) = p(X T ) T t=1 p θ (X t-1 |X t , c),(4)" }, { "formula_coordinates": [ 4, 86.02, 215.39, 200.35, 11.13 ], "formula_id": "formula_4", "formula_text": "p θ (X t-1 |X t , c) = N (X t-1 ; µ θ (X t , t, c), σ 2 t I),(5)" }, { "formula_coordinates": [ 4, 58.88, 303.37, 227.48, 53.91 ], "formula_id": "formula_5", "formula_text": "L vlb = Eq[-logp θ (X 0 |X 1 , c) + DKL(q(X T |X 0 )||p(X T )) + T t=2 DKL(q(X t-1 |X t , X 0 )||p θ (X t-1 |X t , c))],(6)" }, { "formula_coordinates": [ 4, 56.64, 390.71, 226.24, 16.48 ], "formula_id": "formula_6", "formula_text": "L(θ) = E t,X 0 ,c,ϵ ∥ϵ -ϵ θ ( √ ᾱtX 0 + √ 1 -ᾱtϵ, c, t)∥ 2 , (7" }, { "formula_coordinates": [ 4, 282.88, 398.35, 3.48, 7.77 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 4, 61.76, 635.77, 224.6, 11.13 ], "formula_id": "formula_8", "formula_text": "{Ci} s i=1 = FPS(X), {Pi} s i=1 = KNN(X, {Ci} s i=1 ). (8)" }, { "formula_coordinates": [ 4, 308.86, 85.54, 236.25, 34.65 ], "formula_id": "formula_9", "formula_text": "{P i } s i=1 into tokens {F i } s i=1 . {Fi} s i=1 = ξ ϕ ({Pi} s i=1 ).(9)" }, { "formula_coordinates": [ 4, 349, 427.89, 196.11, 11.88 ], "formula_id": "formula_10", "formula_text": "{T v i } g i=1 = Φρ({Concat(F v i , P os v i )} g i=1 ),(10)" }, { "formula_coordinates": [ 4, 385.08, 449.21, 107.97, 11.88 ], "formula_id": "formula_11", "formula_text": "{P os v i } g i=1 = ψτ ({C v i } g i=1 )." }, { "formula_coordinates": [ 4, 354.41, 651.33, 186.97, 11.88 ], "formula_id": "formula_12", "formula_text": "c = fω(Concat({T v i } g i=1 , {T m i } r i=1 )}).(12" }, { "formula_coordinates": [ 5, 54.72, 308.92, 231.64, 8.35 ], "formula_id": "formula_13", "formula_text": "H l = R l ⊙(W lh H l-1 +b lh )+W lb y, R l = σ(W lr y+b lr ),(13)" }, { "formula_coordinates": [ 5, 50.11, 569.09, 236.25, 26.38 ], "formula_id": "formula_14", "formula_text": "L(θ, ρ, ω) = E t,X 0 ,ϵ ∥ϵ -ϵ θ ( √ ᾱtX 0 + √ 1 -ᾱtϵ, fω(Φρ), t)∥ 2 . (14" }, { "formula_coordinates": [ 5, 282.63, 587.69, 3.73, 7.77 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 5, 348.71, 371.96, 100.73, 14.11 ], "formula_id": "formula_16", "formula_text": "{[d×i+1, d×(i+1)]} h-1 i=0" }, { "formula_coordinates": [ 5, 308.86, 424.98, 237.39, 35.75 ], "formula_id": "formula_17", "formula_text": "L(θ, ρ, ω) = 1 h h-1 i=0 L(θ, ρ, ω)t∼Q i , Qi = [d×i+1, d×(i+1)].(15)" } ]
10.3389/fdgth.2023.1161098
2023-11-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b29", "b24", "b38" ], "table_ref": [], "text": "Large language models (LLMs) have revolutionized how the world views NLP (Wei et al., 2022b;Kojima et al., 2022). Their astonishing performance on many tasks has led to an exponential increase in real-world applications of LLM-based technology. However, LLMs have a tendency to generate plausible but erroneous information, commonly referred to as hallucinations (Ji et al., 2023). This phenomenon proves to be particularly detrimental within high-risk domains, underscoring the importance of accurate and safe model outputs (Nori et al., 2023).\nIn addition, with upcoming regulations, such as the EU AI Act (European Commission, 2021)," }, { "figure_ref": [], "heading": "Health of Citizens", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Public Services", "publication_ref": [ "b34", "b44", "b51", "b42", "b60", "b22", "b12", "b59", "b18", "b10" ], "table_ref": [], "text": "Administration of Justice is not yet finalized, it is expected that LLMs will fall into the high-risk category in specific domains, such as medical and legal. 1 the necessity of properly analyzing and evaluating LLMs is further addressed. EU AI Act is expected to become the first law worldwide that regulates the deployment of AI in the European Union, therefore, set a precedent for the rest of the world. According to the current draft, AI systems in high-risk domains, e.g. systems that have an impact on human life, will be subject to strict obligations, such as extensive testing and risk mitigation, prior to the system deployment (see Figure 1).\nIn the era of LLMs, instruction-tuning (Mishra et al., 2022;Wei et al., 2022a) has been proposed to efficiently solve various tasks like question answering (QA), summarization, and code generation (Scialom et al., 2022;Wang et al., 2023). However, these models, trained on heterogeneous internet data, lack domain-specific knowledge crucial for accurate and reliable responses in high-risk domains, including up-to-date regulations, industry practices, and domain nuances (Sallam, 2023). Furthermore, the quality of the training data is seldom 1 Figure is based on https://digital-strategy.ec. europa.eu/en/policies/regulatory-framework-ai.\nquantified (Zhou et al., 2023). Consequently, they exhibit limitations in terms of domain expertise and adherence to safety and regulatory compliance.\nIn the study conducted by Hupkes et al. (2022), a comprehensive perspective was introduced, advocating for the consideration of multiple facets in assessing generalization across diverse data distributions and scenarios. Building on the imperative of benchmarking generalization in the field of NLP and underscoring the importance of fairness in practical applications, our research delves into a specific yet pivotal dimension -how well can LLMs generalize effectively in high-risk domains?\nOur investigation is centered around two essential dimensions of generalizability: (a) the capability of LLMs to generalize to new high-risk domains (i.e., general vs. high-risk domains) and new tasks (i.e., with and without instruction-tuning); and (b) the assessment of evaluation metrics' capability to generalize and accurately measure the performance of LLMs in high-risk domain tasks. 
Our study entails a robust empirical assessment of the performance of both out-of-the-box LLMs and those fine-tuned through specific instructions tailored for high-risk contexts. To gauge their efficacy, the evaluation involves two prominent high-risk domains (medical, legal) and encompasses a diverse set of tasks, including QA and summarization.\nWe evaluate model outputs with regards to two key aspects, as depicted in Figure 2: (1) factuality -are LLMs outputs factually correct for high-risk domains? (2) safety -do LLMs successfully avoid producing harmful outputs? These aspects are essential for ensuring that LLMs generate reliable and trustworthy information while avoiding outputs that could be detrimental. To evaluate this, we employ existing metrics for factuality (Fabbri et al., 2022;Zhong et al., 2022) and safety (Hanu and Unitary team, 2020;Dinan et al., 2022) concerns. Additionally, we conduct a qualitative analysis to evaluate if the metrics are capable of accurately assessing LLMs on tasks in high-risk domains. Finally, we discuss the challenges that must be overcome before LLMs are deemed suitable for applications in high-risk domains and with this contribute to the broader conversation on generalization in high-risk domains.\nContributions. Our contributions are summarized as follows: (i) We robustly evaluate the outputs of out-of-the-box and instruction-tuned LLMs in two high-risk domains on 6 datasets across QA (iv) we advocate for the need of human-centric NLP systems that are capable of giving the final control to human users in order to build trustworthy applications in high-risk domains." }, { "figure_ref": [], "heading": "Domain-adaptive Instruction-tuning", "publication_ref": [ "b40", "b6", "b50", "b48", "b34", "b0", "b21", "b15", "b51", "b56", "b8", "b20" ], "table_ref": [], "text": "The emergence of GPT (Radford et al., 2018) has led to a multitude of generative LLMs. One line of improving LLM performance has been proposed to increase the number of model parameters (Chowdhery et al., 2022). Researchers and practitioners have embarked on a quest to explore diverse data sources and training objectives to enhance the capabilities of LLMs while reducing the model size and computational burden. Another focus is leaning toward training smaller foundation models (e.g., GPT-J (Wang and Komatsuzaki, 2021), LLaMA (Touvron et al., 2023), MPT (MosaicML NLP, 2023)).\nThe adoption of smaller foundation models enables researchers and practitioners to conduct more efficient investigations into novel methods, explore new domain-specific applications, and establish streamlined deployment efficiency. Crucially, the emphasis on smaller models is in accordance with the utilization of the instruction-tuning (Mishra et al., 2022) method, enabling efficient customization and adjustment of LLMs for particular domains or tasks (Anand et al., 2023;Hu et al., 2023).\nIn our experiments, we rely on a series of smaller size LLMs for efficiency and cost concerns, and effectively incorporate domain knowledge for high-risk domains via instruction-tuning. By leveraging explicit instructions during the training process, instruction-tuning has proved to enhance the model's ability for generalization (Wei et al., 2022a) and domain adaptability (Gupta et al., 2022;Wang et al., 2023). 
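A minimal sketch of how such domain-adaptive instruction-tuning can be run on a smaller foundation model is given below, anticipating the QLoRA setup described in the next paragraph. It assumes the Hugging Face transformers, peft, and bitsandbytes libraries, uses GPT-J only as an illustrative base (the study instruction-tunes GPT4ALL-J and GPT4ALL-MPT), and stands in a tiny placeholder dataset; the epoch count, effective batch size of 64, learning rate of 1e-5, and 1024-token maximum length mirror the settings reported later under Training and Optimization, while everything else is an assumption and not the authors' code.

```python
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "EleutherAI/gpt-j-6b"  # illustrative base; the study instruction-tunes GPT4ALL-J / GPT4ALL-MPT
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# Load the base model with 4-bit weights so that only small LoRA adapters are trained (QLoRA-style).
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # module names depend on the base architecture; these fit GPT-J-style blocks
))

# Placeholder for the ~13K legal/medical instruction examples rendered with the Appendix A templates;
# sequences are truncated to the 1024-token maximum length reported in the paper.
tokenized_instructions = Dataset.from_dict({
    "input_ids": [tokenizer("### Instruction: ...", truncation=True, max_length=1024)["input_ids"]],
})

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qlora-highrisk",
        num_train_epochs=5,             # 5 epochs, as reported under Training and Optimization
        per_device_train_batch_size=4,  # 4 x 16 accumulation = effective batch of 64 (the split is an assumption)
        gradient_accumulation_steps=16,
        learning_rate=1e-5,
        logging_steps=10,
    ),
    train_dataset=tokenized_instructions,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```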
The domain-adaptive instruction-tuning approach explores the capability of how smaller models can effectively adapt to high-risk domains (Yunxiang et al., 2023).\nTo efficiently incorporate domain knowledge, we employ QLoRA (Dettmers et al., 2023), a method based on LoRA (Hu et al., 2021), which compresses models using 4-bit quantization while maintaining performance parity. This reduces memory usage and enables efficient domain-adaptive instruction-tuning." }, { "figure_ref": [], "heading": "Experimental Setup Instruction-tuning", "publication_ref": [], "table_ref": [], "text": "Data. To implement instruction-tuning, we collect in-domain datasets for legal and medical domains. To create the instructions for domain-adaptive instruction-tuning, we consider 4 datasets each for both legal and medical domains. An overview of the collected datasets is shown in Table 1. According to recent work about the instruction tuning dataset size, it typically ranges from 10K to 100K instances. The dataset sizes are subject to variations based on domain-specific applications, the nature of evaluation tasks, and the practical feasibility of the curated datasets. In this context, it is noteworthy that our approach does not rely on machine-generated instructions to mitigate plausibility concerns. Instead, we emphasize the use of human-annotated data, a decision that aligns with our commitment to maintaining the reliability of the instruction datasets. To ensure the efficacy of domain-adaptive instruction-tuning approach, we follow the steps from (Wei et al., 2022a), and construct templates for each of the datasets to form the final instructions. We also explicitly control the number of instructions for both domains (13K), to have a fair comparison among approaches. Due to the scarcity of resources in the legal domain for instructions, the medical domain data is downsampled accordingly to match the number of instances in the legal domain. We ensure that the selected number of instances for each dataset is well-aligned with the tasks and sources." }, { "figure_ref": [], "heading": "Domain Dataset", "publication_ref": [], "table_ref": [], "text": "Size License †" }, { "figure_ref": [], "heading": "Legal", "publication_ref": [ "b30", "b58", "b32", "b32" ], "table_ref": [], "text": "BillSum (Kornilova and Eidelman, 2019) 88 CC0-1.0 CaseHold (Zheng et al., 2021) 2,458 CC-BY-SA LegalAdviceReddit (Li et al., 2022) 9,984 CC-BY-SA LawStackExchange (Li et al., 2022) 513 CC-BY-SA" }, { "figure_ref": [], "heading": "Medical", "publication_ref": [ "b26", "b49", "b25", "b56" ], "table_ref": [], "text": "PubMedQA (Jin et al., 2019) 513 MIT RCTSum (Wallace et al., 2020) 151 Apache-2.0 MedQA (Jin et al., 2021) 2,458 MIT HealthCareMagic (Yunxiang et al., 2023) 10,000 Apache-2.0 Table 1: Overview of the datasets utilized for instructiontuning for high-risk domains (legal, medical). The size of the in-domain data and the commercial applicability based on the license are reported. †License: Creative Commons Zero (cc0), Creative Commons Attribution Share-Alike (CC-BY-SA)." 
}, { "figure_ref": [], "heading": "Domain Dataset", "publication_ref": [], "table_ref": [], "text": "Task Size License" }, { "figure_ref": [], "heading": "Legal", "publication_ref": [ "b30", "b58", "b32" ], "table_ref": [], "text": "BillSum (Kornilova and Eidelman, 2019) SUM 100 cc0-1.0 CaseHold (Zheng et al., 2021) QA 1000 Apache-2.0 LawStackExchange (Li et al., 2022) QA 989 CC-BY-SA" }, { "figure_ref": [ "fig_0" ], "heading": "Medical", "publication_ref": [ "b26", "b49", "b56", "b58", "b32", "b56", "b26", "b30", "b49", "b12", "b59", "b27", "b10", "b61", "b22" ], "table_ref": [], "text": "PubMedQA (Jin et al., 2019) QA 250 MIT RCTSum (Wallace et al., 2020) SUM 100 Apache-2.0 iCliniq (Yunxiang et al., 2023) QA 1000 Apache-2.0\nTable 2: Overview of the evaluation datasets for highrisk domains (legal, medical). For each domain, we report the task type, dataset size, and license. All the selected task datasets are applicable for commercial usage.\nEvaluation Tasks. We focus on two high-risk domains (legal and medical), aligned with EU AI Act domain categorization (see Figure 1), and evaluate 6 datasets across QA and summarization (SUM) tasks. The tasks include multiplechoice QA (Zheng et al., 2021), free-form QA (Li et al., 2022;Yunxiang et al., 2023), reasoning QA (Jin et al., 2019), and long document summarization (Kornilova and Eidelman, 2019;Wallace et al., 2020). Table 2 displays an overview of the high-risk domain task datasets. We provide example excerpts and templates designed for each task in Appendix A.\nEvaluation Metrics. In high-risk domains, where the implications of incorrect or harmful information are amplified, it becomes imperative to assess language models from the lens of their potential impact on users and society. The selection of factuality and safety as evaluation metrics is rooted in the following considerations: (1) Factuality is considered as the ability of LLMs to provide factual and precise responses. Factual inaccuracies could lead to misguided decisions or actions, and they can undermine the trustworthiness of generated content. By evaluating factual-ity, we seek to ensure that the responses of LLMs align with accurate information, which is of utmost importance in high-risk applications. Two metrics are considered and have been shown to align with human judgments: QAFactEval (Fabbri et al., 2022), which measures fine-grained overlap of the generated text against the ground truth, and UniEval (Zhong et al., 2022), which computes over several dimensions, namely coherence, consistency, fluency, and relevance. (2) Safety is defined as the degree of insensibility and responsibility in the generated content that is safe, unbiased, and reliable. High-risk domains often involve sensitive topics, legal regulations, and ethical considerations, thus ensuring safety in the generated contents mitigates the potential of unintended consequences, such as perpetuating harmful stereotypes or generating discriminatory content (Kaddour et al., 2023).\nEvaluating safety involves assessing the model's propensity to avoid generating content that could be offensive, harmful, or inappropriate. We consider Detoxify (Hanu and Unitary team, 2020) and Safe-tyKit (Dinan et al., 2022), which measure a model's tendencies to agree to offensive content or give the user false impressions of its capabilities as well as other safety concerns. 
Although our primary focus is on ensuring factuality and safety, it is essential to underscore the significance of other critical factors, such as robustness (Zhu et al., 2023) Hupkes et al. (2022). The taxonomy encompasses five distinct (nominal) axes along the variations of generalization research. The dimensions include the primary motivation for the research (motivation), the specific type of generalization challenges addressed (generalization type), the point at which these shifts occur (shift locus), the nature of data shifts under consideration (shift type), and the origin of the data shifts (shift source). The coverage of generalizability in this study is marked (✓)." }, { "figure_ref": [], "heading": "Model", "publication_ref": [ "b8", "b0", "b0" ], "table_ref": [ "tab_2" ], "text": "BaseModel # Params Budget Size License\nGPT4ALL-J GPT-J ∼3.6M 5 hrs 6 B Apache-2.0 GPT4ALL-MPT MPT ∼4.2M 5.5 hrs 7 B Apache-2.0 GPT-3.5-turbo - - - > 100 B Commercial\nTable 4: Overview of the computational information for the domain-adaptive instruction-tuning, while comparing with GPT-3.5-turbo (OpenAI, 2022). The number of parameters (# Params) indicate the trainable parameters utilizing QLoRA (Dettmers et al., 2023) approach, and the budget is represented in GPU hours.\ntuned on domain instructions and the ones without; and (4) shift source (naturally shift): we only consider human-annotated data to mitigate plausibility concerns (see §3). We summarize the generalizability of our proposed methods in Table 3.\nPre-trained Large Language Models. Table 4 shows the model size, the license, and the computational information among the selected LLMs compared to the enormous GPT-3.5-turbo (i.e., Chat-GPT (OpenAI, 2022)). GPT4ALL-* (Anand et al., 2023) is a set of robust LLMs instruction-tuned on a massive collection of instructions including codes, and dialogs. This means that it has been fine-tuned specifically to excel in a variety of tasks.\nThe fact that the base model demonstrates proficiency in these general-purpose language tasks provides a strong foundation for the instruction-tuned version to perform well in various scenarios. Besides, GPT4ALL-* comes with an open-sourced commercial license, providing the freedom to de- Training and Optimization. All the experiments are performed on a single Nvidia Tesla V100 GPU with 32GB VRAM and run on a GPU cluster. During the training process, we train for 5 epochs in batches of 64 instances. The learning rate is set to 1e-5 and the maximum sequence length is set to 1024. These settings are applied to both selected general-purpose instruction-tuned models (GPT4ALL-J, GPT4ALL-MPT) (Anand et al., 2023). For evaluation, we set the maximum sequence length to 1024 for all compared models, and evaluate on two high-risk domains (legal, medical) with six tasks, including QA and summarization (see Table 2)." }, { "figure_ref": [], "heading": "Evaluation Results", "publication_ref": [], "table_ref": [ "tab_3", "tab_4" ], "text": "Factuality. Results for the factuality metrics can be found in Table 5. Overall, only some models on some datasets achieve a factuality score of over 90%. This reveals that LLMs in their current stage\nare not yet suitable for high-risk domains usage.\nComparing the models, results of the instructiontuned model are better than those of the baselines, indicating that domain-adaptive instruction-tuning can lead to improvements in results generated for high-risk domains. However, factuality scores vary greatly across tasks in the same domain. 
For instance, GPT4ALL-J (tuned) in legal domain obtains the highest QAFactEval score for CaseHold, but scores the lowest for LawStackExchange (LSE) task. This shows that instruction-tuning is an interesting direction but more work is required to raise factuality reliably.\nUpon further analysis of randomly picked generated texts, we also find that some answers are in fact repetitions of the question or part of it. For example, GPT4ALL-J answers \"(Yes, No, Maybe)\" to a prompt, this instance obtains a score of 0.5 from QAFactEval and 0.946 from UniEval. These results put into question whether these metrics accurately reflect the factuality of the generated text. Thus, there is an indication that the metrics themselves are not yet suitable to correctly assess LLMs in high-risk domains.\nSafety. Results for the safety metrics can be found in Table 6. Overall we observe that both metrics return an exceedingly high score for all models (i.e., the score is higher than 0.94 across the board). To verify if the metrics indeed report such high scores reliably, we run a small manual analysis by randomly selecting 10 generated outputs from GPT4ALL-MPT (tuned) on legal (LSE) and GPT4ALL-MPT on medical (iCliniq) dataset. Even though we only analyzed 10 outputs, we already found several issues. For the medical domain, 8 out of 10 answers are problematic. While only a small sub-sample, it still indicates a worrisome difference from the reported high safety score of 0.95. For example, the model contains answers such as \"Based on the pictures you have provided\", despite the model not having the capability to process images. In another example, the model suggests to treat a dog bite by cleaning the wound, whereas the gold answer would have been to get an injection.\nThe legal domain fares better, here we found 3 out of 10 answers problematic. In one example, the model output includes \"it may not be necessary to obtain explicit consent from users\" about the website cookies usage policy, but doesn't provide the necessary scenarios of the claims.\nOverall, the metrics can give us a good first indication and might allow us to compare models. However, the qualitative analysis results highlight that more research needs to be conducted on how we can define reliable and domain-adjusted safety metrics before we can automatically assess the safety of LLMs in high-risk domains." }, { "figure_ref": [], "heading": "Implications", "publication_ref": [ "b1", "b47" ], "table_ref": [], "text": "The need for factual and secure outputs of LLMs is crucial for their deployment in high-risk domains. This necessity arises from both the societal impact of their usage and the imperative to meet forthcoming AI regulations. Based on the outcomes of our empirical investigation, it is evident that LLMs are not yet ready for deployment in high-risk domains (Au Yeung et al., 2023;Tan et al., 2023). In light of this, we address three key implications that can guide us towards a more suitable course of action: (1) Models enhancement: a pressing need to improve the LLMs themselves is crucial to ensure they generate accurate and reliable responses;\n(2) Metrics refinement: metrics are required to be refined to assess LLMs properly in specific domain scenarios; and (3) Human-centric systems: development of LLMs should be prioritized to empower human users to manage and direct LLMs interac-tions, especially in high-risk domain use cases." }, { "figure_ref": [], "heading": "Models Enhancement. 
A major vulnerability of", "publication_ref": [ "b9", "b11", "b16", "b2", "b23", "b27", "b35", "b13", "b56", "b45", "b45" ], "table_ref": [], "text": "LLMs lies in their tendency to generate coherent but erroneous statements that seem plausible at face value, often referred to as fluent hallucinations (Deutsch et al., 2022). We posit that as long as this issue persists, the deployment of LLMs in high-risk scenarios, particularly in the context of the upcoming EU AI Act, remains difficult. Therefore, it becomes paramount to devise more effective methods for assessing and verifying the factual correctness of generated text outputs. One potential avenue for improvement is to explore pre-training methods that yield more factually accurate outputs (Dong et al., 2022), involving the further development of advanced instruction-based fine-tuning methods and enhancing the safety of generated contents. Furthermore, the integration of retrieval-augmented models (Guu et al., 2020;Borgeaud et al., 2022) offers a viable solution to enhance the factual integrity of outputs. These models facilitate a semantic comparison between LLM-generated text and retrieved source materials, reinforcing the credibility of the generated content.\nMetrics Refinement. The evaluation of factuality necessitates a multi-faceted approach (Jain et al., 2023), encompassing considerations of contextual understanding, source credibility, cross-referencing with reliable information, and critical analysis. Correspondingly, the creation of dependable test sets that faithfully represent real-world use cases is essential (Kaddour et al., 2023). These test sets must exhibit exceptional quality in terms of factuality, underscoring the vital need for collaboration with domain experts. Particularly in high-risk domains and highly specialized subjects, lay individuals may lack the expertise required to provide accurate annotations. Hence, the involvement of domain experts becomes indispensable to ensure the appropriateness and accuracy of assessments. Integrating these additional elements into the evaluation process is anticipated to achieve a more robust and nuanced appraisal of the factuality of a given statement or piece of information.\nRegarding safety metrics, existing evaluation metrics are proficient at identifying toxic speech, but often fall short when it comes to detecting potentially harmful medical advice or fictional legal guidance. To improve the safety of LLMs, it is necessary to collaboratively establish, in consul-tation with stakeholders and domain experts, the specific safety checks necessary for particular highrisk domains. In light of this, we stipulate that the following two directions should be investigated simultaneously within the research community. First, the development of more reliable automatic metrics that carefully document (i) their underlying mechanisms (i.e., how they work), (ii) the implications of their scores, and (iii) their appropriate and intended use cases (similar to model cards (Mitchell et al., 2019) and dataset sheets (Gebru et al., 2021), but adapted for metrics). Secondly, we need to develop safety mechanisms aimed at mitigating the risk of jailbreaking models (Li et al., 2023). By addressing the above measures, LLMs can be guided toward enhanced safety and reliability, thereby ensuring their suitability for deployment in high-risk domains.\nHuman-centric Systems. 
In addition to emphasizing the necessity of improvements in both models and evaluation metrics to enable the utilization of LLMs in high-risk another vital inquiry emerges: considering the near impossibility of achieving absolute quality assurance, what actions can we take to ensure responsible usage?\nOne possible direction is the development of human-centric systems. This direction aligns with the insights proposed by Shneiderman (2020), emphasizing that the choice between low and high automation when integrating LLMs into high-risk domains is not binary. Rather, it entails a twodimensional approach where high automation coexists with a high degree of human control (for a graphical representation, see Figure 3). Without LLMs, humans maintain full control over text generation in all (high-risk) domains. On the opposite end of the spectrum, we encounter scenarios where LLMs generate text that humans blindly trust, potentially introducing safety and factual accuracy risks that cannot be entirely eliminated at present.\nTo mitigate this inherent risk, we propose to adopt the framework proposed by Shneiderman (2020), enabling both high automation and human control. For LLMs, we envision a two-step approach: (1) Human interpretability -we ensure that the text generated by an LLM is supported by human-understandable evidence. This can be achieved, as discussed earlier, through a retrievalbased system that provides the source text used by the LLM. ( 2 enabling human users to the content. Users can either approve the content directly, make modifications if necessary, or submit update requests to the LLM. The resulting human-centric system allows for responsible usage even when the output may not be flawless. To realize this vision, we advocate that researchers look beyond the scope of generalizability: if we cannot guarantee perfect generalizability, what additional aspects should we explore and provide in order to build LLMs that are suitable in high-risk domains? In pursuit of this goal, researchers should actively engage in interdisciplinary collaboration and involve domain-specific stakeholders, such as medical professionals in the medical domain, at the earliest stages of research. This collaboration is especially vital in the evolving post-LLM era, where NLP applications have moved much closer to practical use than ever before." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b33", "b54", "b43", "b28", "b55", "b14", "b7", "b46", "b37", "b4", "b27", "b19", "b41", "b38" ], "table_ref": [], "text": "LLMs in High-risk Domains. Recent work has demonstrated the efficacy of leveraging LLMs in high-risk domains, and has been achieved either by training the model using a substantial volume of domain-specific data (Luo et al., 2022;Wu et al., 2023), or by employing instruction-tuning techniques to harness the benefits of fine-tuning LLMs with relatively smaller sets of in-domain instruc-tions from diverse tasks (Sanh et al., 2022;Karn et al., 2023).\nDomain-adaptive instruction-tuning approach has proven effective in high-risk domains, such as finance (Xie et al., 2023), medicine (Guo et al., 2023), and legal (Cui et al., 2023). Singhal et al. (2023) proposed Med-PaLM2 model and evaluated on several medical domain benchmarks, but it has been demonstrated that even with extreme LLMs, the model remains inferior to the expertise of clinicians. 
Similar findings are also suggested in legal domain (Nay et al., 2023), where LLMs have yet to attain the proficiency levels of experienced tax lawyers. Clients rely on lawyers to obtain contextual advice, ethical counsel, and nuanced judgment, which is not a capability that current LLMs can consistently offer. These findings highlight the crucial need for the development of robust evaluation frameworks and advanced methods to create reliable and beneficial LLMs, suitable for tackling more challenging applications in high-risk domains.\nAssessing LLMs. The evaluation of LLMs traditionally centers on tackling two core aspects: (i) the selection of datasets for evaluation and (ii) the formulation of an evaluation methodology. The former focuses on identifying appropriate benchmarks for assessment, while the latter involves establishing evaluation metrics for both automated and human-centered evaluations (Chang et al., 2023). Nonetheless, within the high-risk domain context, the complexities and potential repercussions of LLM utilization underscore the necessity for a more comprehensive and critical evaluation process. Specific challenges arise when assessing LLMs within particular domains (Kaddour et al., 2023). For instance, domains like law demand continuous updates in information to remain relevant (Henderson et al., 2022). In the healthcare field, the safety-sensitive nature of decisions significantly limits current use cases (i.e., the possibility of hallucinations could be detrimental to human health) (Reddy, 2023).\nTo mitigate risks in high-risk domains, enhancing the model's factual grounding and level of certainty is essential (Nori et al., 2023). Recent research has emphasized a shift toward humancentered evaluation (Chen et al., 2023). Although recent efforts claim that performance improvements stem from encoded high-risk domain knowledge, rendering them applicable in practical real-world scenarios, certain unexplored directions in evaluation persist. These include (i) a clear definition of evaluation metrics in specific domain usage, and (ii) comprehensive investigations involving domain experts to assess the factual accuracy of model outputs and address safety concerns. These gaps highlight the necessity for deeper investigation and are opportunities for upcoming studies to contribute to the advancement of evaluating LLMs in high-risk domains." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b61", "b57" ], "table_ref": [], "text": "As LLMs have taken the world by storm, the benchmarking generalization concern in NLP gains significance. Our investigation delved into how well current LLMs perform in high-risk domain tasks of QA and summarization in legal and medical domains. The results exposed a significant gap of the suitability of LLMs for high-risk domains tasks, indicating that employing LLMs in their present state is not yet practical. Our study highlighted the urgent need for substantial improvements in both LLMs themselves and the evaluation metrics used to gauge their factuality and safety in high-risk contexts. 
Additionally, we advocated the necessity of expanding our perspective beyond the scope of the LLM itself and considering the environment in which such systems are deployed -a thoughtful, human-centric design allows us to keep the human user in control and is imperative to enable the reliable and trustworthy usage of LLMs in high-risk domains.\nOverall, our findings and discussions accentuate the importance of a close collaboration with stakeholders and therefore collaboratively address open critical concerns. This collaborative approach will allow to build a stronger foundation of a humancentric approach to benchmark generalization in NLP for high-risk domains. measure factuality and safety. This initial exploration serves as a foundation to gain deeper insights into the capabilities of current LLMs in tackling high-risk domain-specific NLP tasks and identifying existing limitations that require attention and resolution.\nThe current setup has a series of shortcomings that should be reduced in future work, namely: (1) the collected datasets currently only focus on English; (2) the instruction templates are designed manually and might lead to variable outcomes; (3) other instruction-tuned models trained on generalpurpose instructions might offer different capabilities, depending on the specific context of domains and tasks; (4) other metrics should be explored and considered, such as robustness (Zhu et al., 2023) and explainability (Zhao et al., 2023); and (5) users should be aware that the metrics used are automatic and therefore themselves might also make mistakes and misrepresent model performance (i.e., the metrics require separate benchmarking themselves). We do not claim in any way that the presented testing strategy would fulfill the EU AI Act requirements (this is due to points 1-3 as well as the fact that the Act is not yet finalized).\nDespite the limitations of our contributions, the significance of this topic warrants attention. We hope that our work will serve as a catalyst to raise awareness and steer the community toward the development of secure, reliable, and rigorously evaluated LLMs, particularly in high-risk domains. Concretely, we should explore (1) how we can make LLMs more reliable, for example by improving factuality via a retrieval step, and (2) ensure that quality metrics themselves are good enough to be used to accurately measure LLM abilities, particularly for high-risk domains." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "Our work investigates the performance of LLMs for high-risk domains with regard to factuality and safety. We ran our empirical evaluation using existing datasets, metrics, and LLMs for the domains of legal and medical. At this stage, we did not involve any other stakeholders. We acknowledge that this is an important next step, for example, to seek advice from medical or legal experts, in order to investigate the performance of LLMs for particular domains. As our empirical tests find, the work is far from done on this topic and we ask readers to carefully consider the listed limitations above." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank Enrico Giakas for the infrastructure support, and Kiril Gashteovski for the fruitful discussions. Besides, we would like to thank Sotaro Takeshita, Tommaso Green, and the anonymous reviewers for their valuable feedback." 
}, { "figure_ref": [], "heading": "A Examples for Evaluation Tasks", "publication_ref": [ "b49", "b56" ], "table_ref": [], "text": "We manually compose the instruction-style templates, designed for each task for evaluation. The template contains an instruction describing the task, followed by an input as a document or a question. Table 7 shows an example for each evaluation task. RESULT: In this study, we have determined that the QT interval changes significantly depending on the use of oxybutynin.\nThe QT changes increased cardiac arrhythmia in children.\nRCTSum (Wallace et al., 2020) ### Instruction: Summarize the document based on the given title and abstract. ### Input: Title: Efficacy of prophylactic antibiotics for the prevention of endomyometritis after forceps delivery. Abstract: The purpose of this prospective randomized controlled clinical trial was to determine whether prophylactic antibiotics reduce the incidence of endomyometritis after forceps delivery. Of the 393 patients studied, 192 received 2 gm of intravenous cefotetan after forceps delivery, and 201 patients received no antibiotics. There were seven cases of endomyometritis in the group given no antibiotic and none in the cefotetan group, a statistically significant difference (P less than .01). We conclude that prophylactic antibiotics are effective in reducing the incidence of endomyometritis after forceps delivery. We believe this is the first published study demonstrating this benefit. iCliniq (Yunxiang et al., 2023) ### Instruction:\nPlease give an answer to the question: ### Input: Hello doctor, when should I take probiotics? " } ]
High-risk domains pose unique challenges that require language models to provide accurate and safe responses. Despite the great success of large language models (LLMs), such as ChatGPT and its variants, their performance in high-risk domains remains unclear. Our study delves into an in-depth analysis of the performance of instruction-tuned LLMs, focusing on factual accuracy and safety adherence. To comprehensively assess the capabilities of LLMs, we conduct experiments on six NLP datasets including question answering and summarization tasks within two high-risk domains: legal and medical. Further qualitative analysis highlights the existing limitations inherent in current LLMs when evaluated in high-risk domains. This underscores the essential nature of not only improving LLM capabilities but also prioritizing the refinement of domain-specific metrics, and embracing a more human-centric approach to enhance safety and factual reliability. Our findings advance the field toward the concerns of properly evaluating LLMs in high-risk domains, aiming to steer the adaptability of LLMs in fulfilling societal obligations and aligning with forthcoming regulations, such as the EU AI Act.
Walking a Tightrope -Evaluating Large Language Models in High-Risk Domains
[ { "figure_caption": "Figure 1 :1Figure 1: The EU AI Act categorizes AI applications based on their associated risk levels. Although the Act is not yet finalized, it is expected that LLMs will fall into the high-risk category in specific domains, such as medical and legal. 1", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Usage is bannedSubject to strictobligationsTransparencyobligationsFree use", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Overview of the evaluation card, summarizing the generalization taxonomy proposed by", "figure_data": ", that are alsovital for evaluating LLMs. While acknowledgingthe broader spectrum of evaluation dimensions thatwarrant attention in comprehensive assessments ofLLMs, our emphasis on factuality and safety isprioritized by the pressing and tangible concernsrelated to misinformation and potential harm inhigh-risk domains. Overall evaluation is alignedwith AuditNLG 2 library.Evaluation Card. Inspired by the generalizationtaxonomy introduced by Hupkes et al. (2022) tocharacterize and gain insights into the field of gen-eralization research in NLP, it comprises the follow-ing key dimensions for evaluation: (1) motivation(practical): we assess the generalization capabili-ties of models with the objective to be deployed forreal-world high-risk domain tasks; (2) generaliza-tion type (cross-domain, cross-task): we investigatehow effectively models generalize across different", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Evaluation results on factuality, considering two evaluation metrics: QAFactEval(Fabbri et al., 2022) and UniEval(Zhong et al., 2022), on two high-risk domains: legal and medical. The best model varies, with instruction-tuned models generally demonstrating better performance. Overall results may initially appear favorable, but a closer examination reveals a set of underlying issues. 
For instance, one of the issues identified is that the response \"Yes, No, Maybe\" achieves a high score, primarily because it includes a partial correct answer.", "figure_data": "LegalMedicalQAFactEvalUniEvalQAFactEvalUniEvalBillSum CaseHold LSE BillSum CaseHold LSE RCTSum PubMedQA iCliniq RCTSum PubMedQA iCliniqGPT4ALL-J0.3690.7360.472 0.8720.9210.5520.8260.5120.4240.9350.7460.583GPT4ALL-MPT0.5390.5700.492 0.7970.9060.5530.8030.8450.5680.9200.7520.568GPT4ALL-J (tuned)0.4870.7500.403 0.8700.9230.5520.8240.6560.4620.9050.7480.588GPT4ALL-MPT (tuned) 0.5810.5950.542 0.7930.9090.5550.9360.6790.5990.9130.7560.570GPT-3.5-turbo0.5470.6370.465 0.8840.9650.5830.7560.6250.5460.8260.7590.587LegalMedicalSafetyKitDetoxifySafetyKitDetoxifyBillSum CaseHold LSE BillSum CaseHold LSE RCTSum PubMedQA iCliniq RCTSum PubMedQA iCliniqGPT4ALL-J0.9950.9980.996 0.9990.9990.9990.9800.9840.9510.9990.9960.980GPT4ALL-MPT1.0000.9990.996 0.9960.9990.9990.9800.9720.9730.9990.9980.973GPT4ALL-J (tuned)0.9950.9980.996 0.9990.9990.9990.9800.9860.9510.9990.9960.980GPT4ALL-MPT (tuned) 1.0000.9990.996 0.9960.9990.9990.9800.9720.9430.9990.9980.973GPT-3.5-turbo1.0001.0000.998 0.9990.9980.9990.9900.9880.9570.9990.9990.976", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Human verification -we build systems around the LLM, e.g. user-friendly interfaces,", "figure_data": "Human-AI CollaborationhighHuman writes text without AI assistance1. Human interpretability: AI generates text & provideshuman understandableHuman Controlevidence 2. Human verification: human approves, modifies or asks for updatelowAI generates text, human blindly trustslowAutomationhighFigure 3: Following the two dimensional human-centered AI framework proposed by Shneiderman(2020): to make LLMs (i.e., AI systems) safe to usein high-risk domains, we should ensure that humansretain the appropriate control over the resulting devel-oped LLMs. Only if we combine high automation withhigh human control, can we enable a safe human-AIcollaboration.", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" } ]
Chia-Chien Hung; Wiem Ben Rim; Lindsay Frost; Lars Bruckner; Carolin Lawrence
[ { "authors": "Yuvanesh Anand; Zach Nussbaum; Brandon Duderstadt; Benjamin Schmidt; Andriy Mulyar", "journal": "", "ref_id": "b0", "title": "Gpt4all: Training an assistant-style chatbot with large scale data distillation from gpt-3.5-turbo", "year": "2023-06-03" }, { "authors": "Joshua Au Yeung; Zeljko Kraljevic; Akish Luintel; Alfred Balston; Esther Idowu; Richard J Dobson; James T Teo", "journal": "Frontiers in Digital Health", "ref_id": "b1", "title": "Ai chatbots not yet ready for clinical use", "year": "2023" }, { "authors": "Sebastian Borgeaud; Arthur Mensch; Jordan Hoffmann; Trevor Cai; Eliza Rutherford; Katie Millican; George Bm Van Den Driessche; Jean-Baptiste Lespiau; Bogdan Damoc; Aidan Clark; Diego De; Las Casas; Aurelia Guy; Jacob Menick; Roman Ring; Tom Hennigan; Saffron Huang; Loren Maggiore; Chris Jones; Albin Cassirer; Andy Brock; Michela Paganini; Geoffrey Irving; Oriol Vinyals; Simon Osindero; Karen Simonyan; Jack Rae; Erich Elsen; Laurent Sifre", "journal": "", "ref_id": "b2", "title": "Improving language models by retrieving from trillions of tokens", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b3", "title": "", "year": "" }, { "authors": "Yupeng Chang; Xu Wang; Jindong Wang; Yuan Wu; Kaijie Zhu; Hao Chen; Linyi Yang; Xiaoyuan Yi; Cunxiang Wang; Yidong Wang", "journal": "", "ref_id": "b4", "title": "A survey on evaluation of large language models", "year": "2023" }, { "authors": "Jeff Xiang'anthony' Chen; Ruofei Burke; Matthew K Du; Jennifer Hong; Philippe Jacobs; Dingzeyu Laban; Nanyun Li; Karl Dd Peng; Chien-Sheng Willis; Wu", "journal": "", "ref_id": "b5", "title": "Next steps for humancentered generative ai: A technical perspective", "year": "2023" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh; Kensen Shi; Sasha Tsvyashchenko; Joshua Maynez; Abhishek Rao; Parker Barnes; Yi Tay; Noam Shazeer; Emily Vinodkumar Prabhakaran; Nan Reif; Ben Du; Reiner Hutchinson; James Pope; Jacob Bradbury; Michael Austin; Guy Isard; Pengcheng Gur-Ari; Toju Yin; Anselm Duke; Sanjay Levskaya; Sunipa Ghemawat; Henryk Dev; Xavier Michalewski; Vedant Garcia; Kevin Misra; Liam Robinson; Denny Fedus; Daphne Zhou; David Ippolito; Hyeontaek Luan; Barret Lim; Alexander Zoph; Ryan Spiridonov; David Sepassi; Shivani Dohan; Mark Agrawal; Andrew M Omernick; Thanumalayan Dai; Marie Sankaranarayana Pillai; Aitor Pellat; Erica Lewkowycz; Rewon Moreira; Oleksandr Child; Katherine Polozov; Zongwei Lee; Xuezhi Zhou; Brennan Wang; Mark Saeta; Orhan Diaz; Michele Firat; Jason Catasta; Kathy Wei; Douglas Meier-Hellstern; Jeff Eck; Slav Dean; Noah Petrov; Fiedel", "journal": "", "ref_id": "b6", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Jiaxi Cui; Zongjian Li; Yang Yan; Bohua Chen; Li Yuan", "journal": "", "ref_id": "b7", "title": "Chatlaw: Open-source legal large language model with integrated external knowledge bases", "year": "2023" }, { "authors": "Tim Dettmers; Artidoro Pagnoni; Ari Holtzman; Luke Zettlemoyer", "journal": "", "ref_id": "b8", "title": "Qlora: Efficient finetuning of quantized llms", "year": "2023" }, { "authors": "Daniel Deutsch; Rotem Dror; Dan Roth", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "On the limitations of reference-free evaluations of generated text", "year": "2022" }, { "authors": "Emily Dinan; A Gavin Abercrombie; Shannon 
[ { "formula_coordinates": [ 4, 309.25, 352.39, 212.04, 27.89 ], "formula_id": "formula_0", "formula_text": "GPT4ALL-J GPT-J ∼3.6M 5 hrs 6 B Apache-2.0 GPT4ALL-MPT MPT ∼4.2M 5.5 hrs 7 B Apache-2.0 GPT-3.5-turbo - - - > 100 B Commercial" } ]
2024-02-12
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b15", "b16", "b12", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b25", "b26", "b27", "b28", "b29", "b30", "b31", "b32" ], "table_ref": [], "text": "As an effective solution to mitigate information overload by delivering personalized content to users from a large volume of data, recommender systems have been widely deployed in numerous web services, such as e-commerce [1], [2], social media [3], and online news [4]. Conventionally, a service provider designs a recommender and trains it on a central server using collected user raw data (e.g., user profiles and user-item interactions) [5]. However, this training † Equal contribution. * Corresponding author.\napproach poses significant risks of data leakage and privacy concerns [6]. Given the recent release of privacy protection regulations in various countries and regions, like CCPA [7] in the USA, PIPL [8] in China, and GDPR [9] in the EU, it has become increasingly challenging for online platforms to train a recommender using the traditional centralized training paradigm without violating these regulations. Federated learning is a privacy-preserving learning scheme, in which clients can collaboratively learn a model without sharing their private data. Therefore, recent studies attempt to utilize federated learning to train recommendation models, a.k.a, Federated Recommender Systems (FedRecs) [10]. [11] is the first FedRec framework that combines federated learning with a collaborative filtering model. After that, many extended versions have been developed in a short time due to FedRec's privacy-preserving advantages [12]- [16].\nDespite the variety of proposed FedRecs, most of them follow a common learning protocol, where a central server coordinates clients to optimize a shared objective by transmitting and aggregating the parameters/gradients of a global recommender system [17], as shown in the left part of Fig. 1. To be convenient for presentation, we name these traditional FedRecs using the above learning protocol as parameter transmission-based FedRecs. While these FedRecs offer a degree of protection for users' raw data, we contend that they are unsuitable for many real-world scenarios due to the following two main drawbacks.\nThe first limitation is that, these parameter transmissionbased FedRecs require the central server to expose a recommendation model for knowledge-sharing purposes. Unfortunately, in practical applications, especially within the commercial realm, the information pertaining to the recommendation models, including the design of model architecture and the value of model parameters, represents the core intellectual property of service providers. Given the substantial expense involved in developing these models, few service providers are inclined to voluntarily disclose their models in the training process since competitors can easily plagiarize and redistribute these valuable models by pretending to the normal users in federated recommender systems. 
Unfortunately, all these parameter transmission-based FedRecs overlook the protection needs of the service providers' model privacy and even sacrifice the platform privacy to implement user privacy protection, dampening platforms' willingness to deploy these FedRecs.\nAlthough some FedRec works [13], [18] employ techniques like differential privacy to safeguard the recommendation model's public parameters, their original intentions are still for protecting user data from certain inference attacks, which cannot satisfy service providers' model privacy protection needs, as the model's architecture and optimization methods are still exposed to all participants. In the realm of federated learning, some works explore to use the digital watermarking to protect intellectual property [19]. Nevertheless, implementing digital watermarking in FedRecs presents significant challenges, primarily due to the considerably higher number of clients compared to traditional federated learning settings [20]. Embedding such a large number of signatures can substantially impact model performance [21]. Furthermore, digital watermarking can only track model copying behavior but does not possess the capability to prevent it, so it is not a primary choice in real-world scenarios. As a result, ensuring model privacy in the context of parameter transmission-based FedRecs remains a formidable challenge, as their learning protocol inherently leaks model information.\nAnother shortcoming of current parameter transmissionbased FedRecs is their huge communication expenses. Specifically, the public parameters of a recommendation model are frequently transmitted between clients and the central server to achieve collaborative learning. These model parameters typically consist of high-dimensional matrices, leading to costly communication overhead. While some research efforts have put forth communication-efficient FedRecs [22], their communication costs remain correlated with the size of the transmitted model. With the increasing model size, the communication burden could potentially become a bottleneck for parameter transmission-based FedRecs in practical applications.\nGenerally, all the above-listed drawbacks of current FedRecs are due to using model parameters to transfer knowledge. In light of this, a federated recommender system that does not need to disperse model parameters during collaborative learning, a.k.a., parameter transmission-free FedRec, is timely in demand. As shown in the right part of Fig. 1, in parameter transmission-free FedRecs, the central server's recommendation model is decoupled with clients' local models as they communicate via certain carriers unrelated to model parameters. Therefore, the service provider can deploy an elaborately designed recommender model on the server side while assigning some straightforward and publicly available recommendation models on the client side, i.e., the model in the central server is hidden from clients since the server's and clients' models are heterogeneous. Furthermore, if the carrier is more lightweight than the model parameters, the communication will be more efficient than traditional parameter transmission-based FedRecs.\nIt is worth noting that, although some federated learning studies [23] have delved into the investigation of model heterogeneity, they cannot be directly applied to build our parameter transmission-free FedRecs due to differing objectives. 
Specifically, most of these studies primarily focus on achieving model diversity among clients to address resource imbalance issues. These works either leverage a public proxy dataset to manage consensus [24]- [26], which, however, cannot be obtained in FedRecs as data samples in recommendation systems belong to specific users and sensitive. Or they still require transmitting model parameters between clients and the central server. In contrast, the primary aim of our parameter transmission-free FedRec is to establish model heterogeneity between clients and the central server to protect the service provider's model intellectual property. As a result, implementing a parameter transmission-free FedRec is non-trivial.\nIn this paper, we propose the first parameter transmissionfree federated recommendation framework, named PTF-FedRec. In PTF-FedRec, the central server and clients maintain distinct recommendation models. As shown in knowledge distillation [27], the model's knowledge can be transferred via its prediction scores. Therefore, in PTF-FedRec, the central server and clients communicate using their corresponding prediction scores. More precisely, in each round, clients upload prediction scores for a subset of items. To protect the client's private data, perturbations are introduced to clients' predictions. The central server trains its model based on these uploaded predictions, as they collectively represent a form of collaborative information derived from different clients. Subsequently, the central server provides broad collaborative information to clients by generating prediction scores for a set of high-confidence and hard negative items. These steps are iteratively executed until model con-vergence is achieved. To validate the effectiveness of PTF-FedRec, we conduct extensive experiments on three widely used recommendation datasets (MovieLens-100K [28], Steam-200K [29], and Gowalla [30]) using three recommendation models (NeuMF [31], NGCF [32], and LightGCN [33]). The experimental results show that PTF-FedRec achieves better performance than parameter transmission-based FedRec baselines meanwhile obtains closer performance to the centralized training paradigm. Further, the average experimental communication costs of PTF-FedRec for each client is 150 to 2000 times lower than commonly used FedRec baselines.\nThe main contributions of this paper are as follows:\n• To the best of our knowledge, we are the first to consider protecting the service provider's model privacy, i.e., the intellectual property of the model, in the context of federated recommender systems. • We propose a parameter transmission-free federated recommendation framework, PTF-FedRec, which achieves federated collaborative learning via sharing prediction scores over of a subset of items. Compared to the current FedRec protocol, PTF-FedRec can balance the protection of both clients' data privacy and the service provider's model privacy, meanwhile, the communication expense of PTF-FedRec is also lightweight.\n• Extensive experiments conducted on three public datasets with three recommendation models demonstrate the effectiveness, efficiency, and generalization of our methods. The remainder of this paper is organized as follows. Section II provides the preliminaries related to our research, including the problem definition of federated recommender systems, the general learning protocol of current federated recommender systems, and the privacy protection demands in FedRecs. 
Then, in Section III, we present the technical details of our proposed federated recommendation framework. The experimental results with comprehensive analysis are exhibited in Section IV, followed by the related works in Section V. Finally, Section VI gives a brief conclusion of this paper." }, { "figure_ref": [], "heading": "II. PRELIMINARIES", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "In this paper, bold lowercase (e.g., a) represents vectors, bold uppercase (e.g., A) indicates matrices, and the squiggle uppercase (e.g., A) denotes sets or functions. The important notations are listed in Table I." }, { "figure_ref": [], "heading": "A. Problem Definition of Federated Recommender System", "publication_ref": [], "table_ref": [], "text": "Let U = {u i } |U | i=1 and V = {v j } |V| j=1\nrepresent the sets of clients (users) 1 and items, respectively. |U| and |V| are numbers of clients and items. In FedRec, each client u i manages its private dataset D i , which consists of the user's interaction records (u i , v j , r ij ). r ij = 1 indicates that u i has interacted with item v j , while r ij = 0 means v j is currently a negative item. The goal of FedRec is to train a global recommender model that can predict each user's preference 1 In this paper, the terms of \"client\" and \"user\" are equivalent, since each client is solely responsible for one user. score for their non-interacted items, and then, select the top-K items with the highest prediction scores as recommendations." }, { "figure_ref": [], "heading": "B. Traditional Parameter Transmission-based Federated Recommender Systems", "publication_ref": [ "b10", "b12", "b17", "b33", "b34", "b35", "b36" ], "table_ref": [], "text": "Almost all existing FedRecs train a recommender model following the parameter transmission-based learning protocol [11]- [13], [18], [34], [35]. In this protocol, a central server is required to open-source a recommendation model to all participants. The open-source model is then divided into public parameters and private parameters. The private parameters (typically user embeddings) are stored and maintained by corresponding clients, while the public parameters are transmitted between clients and the central server to collaboratively optimize the recommendation objectives with several global rounds. Specifically, in round t, the central server first disperses public parameters to clients. The clients combine received public parameters with their private parameters to form local recommender models. Subsequently, the clients train their local recommender model to optimize certain recommendation loss functions (e.g., BPRLoss [36]) with a few local epochs. After local training, the clients send the updated public parameters (or the gradients of the public parameters) back to the central server. Finally, the central server aggregates all received parameters using certain strategies (e.g., FedAvg [37]). These above steps between the central server and clients are iteratively executed until model convergence." }, { "figure_ref": [], "heading": "C. Privacy Protection in Federated Recommender Systems", "publication_ref": [ "b16" ], "table_ref": [], "text": "In previous FedRec works [17], privacy protection mainly refers to protecting users' private data. However, in practice, the recommendation models are the core intellectual entities and the service providers have involved multiple assets, including human expertise and computation resources, to develop these models. 
Taking this into account, we argue that the privacy related to the central recommendation model is also critical. Therefore, a privacy-preserving federated recommender system should satisfy both users' and service provider's privacy protection requirements.\nUnfortunately, most existing parameter transmission-based FedRecs sacrifice the service providers' model privacy to protect users' data privacy, since their learning protocol requires the service provider to disclose its model architecture and optimization algorithm and transmit model parameters to all participants, impeding the practical usage. Consequently, there exists a pressing need for a novel federated recommendation framework that can protect user privacy while simultaneously safeguarding the intellectual property (i.e., model privacy) of service providers." }, { "figure_ref": [ "fig_2" ], "heading": "III. METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "The limitations of existing FedRecs discussed in Section I, (1) overlook the model privacy protection and (2) have heavy communication costs, stemming from the utilization of model parameters for knowledge sharing. Therefore, we propose a novel federated recommendation framework, PTF-FedRec, which does not rely on model parameters to achieve collaborative learning. In this section, we first briefly introduce the basic recommendation models used in our framework, and then, we present the technical details of PTF-FedRec. The overview of PTF-FedRec is illustrated in Fig. 2. Algorithm 1 presents PTF-FedRec's pseudo code." }, { "figure_ref": [], "heading": "A. Base Recommendation Models", "publication_ref": [ "b30", "b31", "b32", "b37", "b38", "b30", "b31", "b32", "b39" ], "table_ref": [], "text": "Generally, a practical federated recommendation framework should be model-agnostic and compatible with most recommender systems. In this paper, to show the generalization of our proposed framework, we choose three popular recommendation models (NeuMF [31], NGCF [32], and LightGCN [33]) as our base models. These three models comprehensively cover two main kinds of recommender systems: NeuMF for matrix factorization-based recommendation [38], while NGCF and LightGCN for graph-based recommendation [39]. The following is a brief introduction to these three models.\nNeural Matrix Factorization (NeuMF). NeuMF [31] is a classic matrix factorization based recommender system. It leverages Multi-Layer Perceptron (MLP) and the concatenation of user and item feature vectors to predict the ratings:\nrij = σ(h ⊤ MLP([u i , v j ]))(1)\nwhere h is trainable parameters and [•] is concatenation operation.\nNeural Graph Collaborative Filtering (NGCF) and LightGCN. NGCF [32] and LightGCN [33] are both graphbased recommender systems. For them, users and items are treated as distinct nodes, and a bipartite graph is constructed according to the user-item interactions. Then, user and item embeddings are computed by propagating their neighbor nodes' feature vectors, which can be generally formulated as follows:\nu l i = propagate l (v l-1 j ; j ∈ N ui ) v l j = propagate l (u l-1 i ; j ∈ N vj )(2)\nN ui and N vj are sets of neighbors for node u i and v j , respectively. propagate l (•) represents the propagation operation. l represents the propagation layers. In NGCF, the propagation largely follows the standard GCN [40], while LightGCN simplifies the NGCF's propagation by only keeping the neighborhood aggregation for training efficiency. 
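For concreteness, the sketch below gives a minimal PyTorch rendering of the scorer in Eq. (1) and a LightGCN-style propagation step in the spirit of Eq. (2). It is an illustrative reimplementation rather than the exact code of [31] or [33]: the layer widths follow our later experimental settings, the full NeuMF of [31] additionally contains a GMF branch, and the symmetric degree normalization with layer averaging is the common LightGCN formulation assumed here.

```python
import torch
import torch.nn as nn

class NeuMFScorer(nn.Module):
    """Eq. (1): r_hat_ij = sigmoid(h^T MLP([u_i, v_j])) (MLP branch only)."""
    def __init__(self, num_users, num_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
        )
        self.h = nn.Linear(16, 1, bias=False)  # the trainable vector h

    def forward(self, users, items):
        x = torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)
        return torch.sigmoid(self.h(self.mlp(x))).squeeze(-1)

def lightgcn_propagate(user_emb, item_emb, interactions, num_layers=3):
    """Eq. (2), LightGCN form: pure neighborhood aggregation with symmetric
    normalization; the layer outputs are averaged into the final embeddings.
    `interactions` is a LongTensor of (user, item) index pairs, shape (E, 2)."""
    num_users, num_items = user_emb.size(0), item_emb.size(0)
    u_idx, i_idx = interactions[:, 0], interactions[:, 1]
    u_deg = torch.bincount(u_idx, minlength=num_users).clamp(min=1).float()
    i_deg = torch.bincount(i_idx, minlength=num_items).clamp(min=1).float()
    norm = (u_deg[u_idx] * i_deg[i_idx]).rsqrt().unsqueeze(-1)  # 1/sqrt(|N_u||N_v|)
    u_layers, i_layers = [user_emb], [item_emb]
    for _ in range(num_layers):
        u_prev, i_prev = u_layers[-1], i_layers[-1]
        u_next = torch.zeros_like(u_prev).index_add_(0, u_idx, norm * i_prev[i_idx])
        i_next = torch.zeros_like(i_prev).index_add_(0, i_idx, norm * u_prev[u_idx])
        u_layers.append(u_next)
        i_layers.append(i_next)
    # Callers score a pair (u_i, v_j) with the dot product of the returned embeddings.
    return torch.stack(u_layers).mean(dim=0), torch.stack(i_layers).mean(dim=0)
```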
After L layers of propagation, the final user and item embeddings are used for predicting the ratings.\nIn PTF-FedRec, the clients and the central server possess different recommendation models. To simulate the setting that server model is elaborately designed while client models are simple and straightforward, we assume that the service provider assigns the simplest recommender model (i.e., NeuMF) to all clients. And on the central server side, the model is not limited to the simple NeuMF, and it can employ more powerful recommendation models like NGCF and Light-GCN. In Section IV-I, we provide the performance results and analysis for all possible model combinations between client models and the server models." }, { "figure_ref": [], "heading": "B. PTF-FedRec", "publication_ref": [ "b26", "b40", "b23", "b25", "b41", "b42", "b19", "b43" ], "table_ref": [], "text": "As mentioned in Section II-C, a privacy-preserving federated recommender system should safeguard both users' data privacy and service providers' model privacy. Traditional FedRecs protect users' data privacy by hiding them on users' devices locally, however, they expose the service provider's recommendation model to clients. As a result, these FedRecs essentially trade service provider's model privacy for users' data privacy, which discourages service providers from using them. To address this challenge, we introduce a novel federated recommendation framework called PTF-FedRec. Unlike its predecessors, PTF-FedRec ensures the concealment of both the service provider's model and users' private data on their respective devices. Hence, the service provider's complex model is securely stored and maintained on a central server, while users' raw data remains on their individual devices. This approach guarantees the preservation of both model privacy and data privacy. To collaboratively optimize the model on the central server, PTF-FedRec employs an innovative learning protocol based on predictions.\n1) Prediction-based Learning Protocol: Inspired by the knowledge transfer algorithms [27], [41], in federated learning, some works [24]- [26] have investigated training models by sharing predictions rather than transferring model weights. Unfortunately, these works cannot be directly used in FedRecs to protect model privacy for the following reasons: (1) Most of these methodologies rely on the creation of a public unlabeled dataset. In FedRecs, unlike the data samples (x i , y i ) widely used in the traditional federated learning tasks, the samples (u i , v j , r ij ) in the recommendation task are intricately linked to specific users and sensitive. Therefore, it is infeasible to construct a public shared recommendation sample set.\n(2) The original aim of these methods is to foster model heterogeneity among clients rather than to protect the service provider's model privacy. In most of these studies, the central server is responsible solely for consolidating a consensus on \nM t+1 i = argmin M t i L c (F c (M t i )|D i ∪ D i ) L c = - (ui,vj ,rij )∈Di∪ Di r ij log rij + (1 -r ij ) log(1 -rij )(3\nDt i = {(u i , v j , rij )} vj ∈ Vt i rij = F c (M t+1 i |(u i , v j ))(4)\nThen, Dt i is uploaded to the central server to support the server's model training. The construction of Dt i will directly influence the privacy protection of the client's data and the training performance of the central server model. In Section III-B2, we will present a privacy-preserving method for constructing Dt i . 
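To make the client-side steps in Eqs. (3)-(4) concrete, a minimal sketch is given below: the local model is updated with a binary cross-entropy loss over the union of the private data and the server-dispersed predictions, and the upload set is then built by scoring a chosen item subset. The (user, item, label) tensor layout, the optimizer defaults, and the externally supplied item subset are assumptions; how that subset is chosen in a privacy-preserving way is exactly the topic of Section III-B2.

```python
import torch
import torch.nn.functional as F

def client_local_update(model, private_data, server_data, epochs=5, lr=1e-3):
    """Eq. (3): minimize BCE over D_i united with the server-dispersed set.
    Both datasets are (N, 3) tensors of (user, item, label/score) rows, so the
    server-provided soft scores are used directly as soft targets."""
    data = torch.cat([private_data, server_data]) if len(server_data) else private_data
    users, items, labels = data[:, 0].long(), data[:, 1].long(), data[:, 2].float()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.binary_cross_entropy(model(users, items), labels)
        loss.backward()
        opt.step()
    return model

def build_upload_set(model, user_id, upload_items):
    """Eq. (4): score the selected item subset and package (u_i, v_j, r_hat_ij)
    triples for the central server."""
    with torch.no_grad():
        items = torch.as_tensor(upload_items, dtype=torch.long)
        users = torch.full_like(items, user_id)
        scores = model(users, items)
    return [(user_id, int(v), float(s)) for v, s in zip(items.tolist(), scores.tolist())]
```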
Algorithm 1 Lines 14-17 summarize the client's local training and client prediction uploading steps.\nModel Training on Server. The central server receives all prediction datasets { Dt i } ui∈U t and optimizes the following objective function:\nM t+1 s = argmin M t s ui∈U t L s (F s (M t s )| Dt i ) L s = - (ui,vj ,rij )∈ Dt i rij log r ij + (1 -rij ) log(1 -r ij )(5)\nwhere M t s is the central server's model parameters in round t, and F s is the recommendation algorithm.\nServer Prediction Disperse. Since the server's model is trained on massive clients' uploaded predictions, it will achieve a more powerful recommender model. Therefore, after the updating of the server model, the central server will disperse the learned knowledge back to clients to promote their local model training. Specifically, for each client u i , the central server selects a set of items V i and predicts the user's preference scores for these items using M t+1 s . The predicted scores are transmitted to corresponding clients as dataset D i :\nD i = {(u i , v j , r ij )} vj ∈ Vi r ij = F s (M t+1 s |(u i , v j ))(6)\nThe effectiveness of server knowledge sharing depends on the selection of V i . In Section III-B3, we will further introduce a confidence-based hard knowledge dispersing method. Algorithm 1 Lines 9-12 describe the server training and knowledge disperse steps. The above is the basic learning protocol of PTF-FedRec. The models on clients and the central server are collaboratively trained. Specifically, the central server learns distributed knowledge via the prediction datasets Dt i uploaded by clients, meanwhile, clients augment local dataset based on D i generated by the central server. In the following two subsections, we present the methods of constructing Dt i and D i .\n2) Privacy-preserving Dt i Construction: Since the server's model is learned based on each client's uploaded dataset, the quality of Dt i will directly influence the performance of the server model. To ensure the prediction quality, we restrict that the uploaded items in Dt i should be the trained items in client u i , i.e., Vt i ⊆ V t i , since the prediction scores from non-trained items cannot provide any useful collaborative information. Note that the trained item pool V t i consists of both positive and negative items, and the ratio of them is consistent with the predefined negative sampling ratio.\nOne naive way of developing Dt i is to upload predictions of the whole trained items, i.e., let Vt i = V t i . However, such a method will suffer privacy issues. Specifically, assume the central server is curious but honest, that is, the central server is curious about clients' sensitive data (e.g., the positive items in our work) but it would not break the default learning protocol. Then, if the client uploads the predictions of the whole trained items V t i , the central server may be able to infer the client's interaction set by simply treating the items with top γ |V t i | prediction scores as the positive items. γ is the client's negative sampling ratio, which is often default set by the central server according to the best practice. As the client's model parameters are optimized by E.q. 3, the prediction scores of positive items have a large chance of being higher than negative items in the trained item set. Therefore, the client's sensitive data will be leaked by such \"Top Guess Attack\".\nIn traditional FedRecs, Local Differential Privacy (LDP) [42] is widely used to protect user privacy. 
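As a quick check of how such noise interacts with the Top Guess Attack described above, consider the toy experiment below; the score ranges, item counts, and noise scales are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scores from a trained local model: positives tend to score higher than
# negatives, and positives make up 20% of the trained items (1:4 sampling).
pos_scores = rng.uniform(0.70, 0.95, size=20)
neg_scores = rng.uniform(0.05, 0.40, size=80)
scores = np.concatenate([pos_scores, neg_scores])
labels = np.concatenate([np.ones(20), np.zeros(80)])

for noise_scale in (0.05, 0.1, 0.5):
    noisy = scores + rng.laplace(0.0, noise_scale, size=scores.shape)
    # Top Guess Attack: treat the top 20% of uploaded scores as positives.
    guessed = np.argsort(noisy)[::-1][: int(0.2 * len(noisy))]
    recovered = int(labels[guessed].sum())
    print(f"Laplace scale {noise_scale}: attack recovers {recovered}/20 positives")
# Small noise leaves the score ordering (and hence the attack) nearly intact;
# only noise large enough to hurt the predictions' utility hides the positives.
```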
However, LDP may be ineffective in this case as adding Laplace noise is still hard to change or conceal the order information of positive and negative items. As a result, the central server still can infer users' interacted items via Top Guess Attack after applying LDP.\nTo safeguard users' data privacy, we design a privacypreserving Dt i construction method, which includes two key steps: sampling and swapping.\nSampling. Inspired by the noise-free differential privacy [43], we only upload a subset of trained items. We protect each client's data privacy by concealing the positive and negative item ratio of the uploading dataset. Specifically, in round t, the client u i randomly initializes two values β t i and γ t i . β t i is used to control the proportion of positive items that client u i will upload, while γ t i controls the positive and negative item ratio for uploading. For example, if β t i is 0.1 and γ t i is 2, the client u i will randomly select 10% positive items and 2 times size of negative items to form Vt i . Intuitively, since the ratio of positive items in the uploaded item sets is randomly changed, the central server cannot choose an appropriate γ to execute the Top Guess Attack to obtain good inference results.\nVt i ← sample(V t i |β t i , γ t i )(7)\nAfter sample Vt i , the client uses E.q. 4 to construct the prediction set Dt i . Swapping. Besides, to further protect user privacy, we propose a swap mechanism to perturb the client's uploaded predictions. To be specific, the client randomly selects a proportion λ of positive items with high prediction scores. Subsequently, it exchanges these positive items' prediction scores with negative items.\nDt i ← swap( Dt i |λ)(8)\n3) Confidence-based Hard D i Construction: A client model with better performance can improve the server model's training, as the latter is trained using the prior one's predictions. Therefore, in PTF-FedRec, at the end of each round, the central server will transfer the knowledge learned from the collective prediction data to each client via constructing and sharing the dataset D i .\nGenerally, a high-quality D i should have the following characteristics. First, the knowledge conveyed via D i is \"reliable\". Secondly, the transferred message is necessary for user u i . Based on these two requirements, we design a confidencebased hard sample construction method for PTF-FedRec.\nConfidence-based Selection. To ensure the reliability of transferred knowledge, the central server sends items' predictions with high confidence to clients. Intuitively, if an item embedding has been frequently updated, this item's embedding takes a large chance to be well-trained and the prediction calculated based on it will be closer to the truth. Thus, in PTF-FedRec, we leverage the count of the server model's item embedding updates as the measure of the prediction's confidence to filter items. Specifically, we first select items that have high update frequency and are not in client u i 's uploaded dataset as the confidence-based selection item set\nV conf i .\nHard Selection. 
To ensure the necessity of transferred knowledge, the central server selects items with higher prediction scores for a client, as many works have demonstrated the positive impacts of hard negative samples [20], [44].\nAs a result, the central server selects items for a client u i formally as follows:\nV conf i ← argmax vi / ∈ Vt i ∧| V conf i |=µ * α f requency(V) V hard i ← argmax vj / ∈ Vt i ∧| V hard i |=(1-µ)α F s (M t+1 s |V) V i = V conf i ∪ V hard i (9\n)\nwhere α is the size of D i and µ controls the portion of confidence-based and hard selection. These selected items V i are then used to construct D i according to E.q. 6." }, { "figure_ref": [], "heading": "C. Discussion", "publication_ref": [ "b48", "b42", "b42", "b21" ], "table_ref": [], "text": "In this part, we discuss our proposed federated recommendation framework, PTF-FedRec, from two aspects: privacypreserving and communication efficiency.\n1) Privacy Preserving Discussion: According to Section II-C, a privacy-preserving FedRec should provide both model privacy and user data privacy protection, therefore, we discuss these two types of privacy in PTF-FedRec here.\nServer Model Privacy Preserving. Unlike previous parameter transmission-based FedRecs that expose the model User Data Privacy Preserving. In PTF-FedRec, following traditional FedRecs, clients' raw data are always stored in their local devices and cannot be accessed by other participants in the whole process, ensuring the security of users' original data. However, similar to the traditional FedRecs the central server can infer the user's private data via uploaded public parameters [49], PTF-FedRec may leak the user's private information via the uploaded predictions. To improve privacy, PTF-FedRec utilizes a noise-free differential privacy [43] (i.e., sampling) with a swapping mechanism to protect the user's raw data. According to [43], sampling method satisifies (ϵ, δ)differential privacy. Based on the post-processing property of differential privacy, applying swapping on the sampled data also satisfies (ϵ, δ)-differential privacy. Therefore, PTF-FedRec can provide reliable protection for user data.\n2) Communication Efficiency Discussion: For traditional parameter transmission-based FedRecs, the communication costs for each client in every round exhibit a positive correlation with the size of the model's public parameters. These public parameters encompass item embeddings, V, and other parameters denoted as Θ. The communication cost of these conventional FedRecs can be represented as ζ × size(V + Θ), with ζ symbolizing the efficiency factor. As these public pa-rameters V and Θ generally constitute high-dimensional matrices, the communication costs for these FedRecs tend to be exorbitant. While numerous communication-efficient FedRecs have been proposed [22], their effects are only to curtail ζ, thus their communication expenses remain contingent upon the magnitude of the model parameters. As the model increases in complexity, these expenses ultimately become unmanageable. In contrast, for PTF-FedRec, the communication overhead for each client in every round can be characterized by size( Dt i ). Considering the data sparsity inherent to each client and the fact that each data sample essentially comprises three real numbers (u i , v i , r ij ), the cost will be much lower than traditional FedRecs." }, { "figure_ref": [], "heading": "IV. 
EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct experiments to answer the following research questions (RQs):\n• RQ1. How effective is our PTF-FedRec compared to centralized and conventional federated counterparts in recommendation performance? • RQ2. How efficient is our PTF-FedRec compared to conventional federated counterparts in communication costs? • RQ3. How effective is the privacy-preserving Dt i construction in PTF-FedRec? • RQ4. How effective is the confidence-based and hard sampling method for prediction dataset construction D i in PTF-FedRec?" }, { "figure_ref": [], "heading": "A. Datasets", "publication_ref": [ "b27", "b28", "b29", "b10", "b30", "b48" ], "table_ref": [ "tab_3" ], "text": "We employ three real-world datasets (MovieLens-100K [28], Steam-200K [29], and Gowalla [30]) from various domains (movie recommendation, video game recommendation, and location recommendation) to evaluate the performance of PTF-FedRec. The statistics of datasets are shown in Table II. MovieLens-100K includes 100, 000 records between 943 users and 1, 682 movies. Steam-200K contains 3, 753 users and 5, 134 video games with 114, 713 interactions. Gowalla is the check-in dataset obtained from Gowalla and we use a 20-core setting where 8, 392 users share 391, 238 check-in records on 10, 068 locations. Following previous works [11], [31], [49], we transform all positive ratings to r ij = 1, and negative items are sampled from non-interacted items with 1 : 4 ratio during the training process. All three datasets are randomly split into training and test sets with the ratio of 8 : 2 and the validation data are randomly sampled from the client's local training set. " }, { "figure_ref": [], "heading": "B. Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "We adopt two widely used evaluation metrics Recall at rank 20 (Recall@20) and Normalized Discounted Cumulative Gain at rank 20 (NDCG@20) to measure the recommendation performance. We calculate the metrics scores for all items that have not interacted with users. For the privacy-preserving evaluation, we use F1 scores to measure the inference performance of \"Top Guess Attack\"." }, { "figure_ref": [], "heading": "C. Baselines", "publication_ref": [ "b30", "b31", "b32", "b10", "b12", "b33" ], "table_ref": [], "text": "We compare PTF-FedRec with six baselines including both centralized and federated recommendation methods. Centralized Recommendation Baselines. We utilize NeuMF [31], NGCF [32], and LightGCN [33] as centralized recommendation baselines. Note that we also use these models in our PTF-FedRec. Thus this comparison will directly show the performance gap between centralized training and our federated training. The introduction of these three baselines can be referred to Section III-A. Federated Recommendation Baselines. We select three widely used federated recommendation frameworks as our baselines.\n• FCF [11]. It is the first work that extends the collaborative filtering model to federated learning. • FedMF [13]. It is another privacy-preserving FedRec based on secure matrix factorization. Specifically, it utilizes homomorphic encryption techniques to protect userlevel privacy on a distributed matrix factorization.\n• MetaMF [34]. It learns a meta-network on the central server and uses the meta-network to generate private personalized item embeddings for each user." }, { "figure_ref": [], "heading": "D. 
Hyper-parameter Settings", "publication_ref": [ "b49" ], "table_ref": [], "text": "For all recommendation models, the dimensions of user and item embeddings are set to 32. For NeuMF, three feedforward layers with dimensions 64, 32, and 16 are used to process the concatenated user and item embeddings. For both NGCF and LightGCN, the graph convolution weights' dimension is the same as the embeddings' size. Besides, three GCN and LightGCN propagation layers are adopted in NGCF and LightGCN, respectively. α is set to 30. For each client, β t i is randomly sampled between 0.1 to 1.0 and γ t i is randomly sampled from 1 to 4. λ is set to 0.1 and µ is 0.5. We utilize Adam [50] with 0.001 learning rate as the optimizer. The maximum global rounds are 20. At each round, all clients participate in the training process. The local training epochs for clients and the central server are 5 and 2, respectively. For the server model, the training batch size is set to 1024, while for the client model, the batch size is 64. The baselines of FedRecs are reproduced based on their papers." }, { "figure_ref": [], "heading": "E. Effectiveness of PTF-FedRec (RQ1)", "publication_ref": [ "b1" ], "table_ref": [ "tab_3", "tab_3" ], "text": "We validate the effectiveness of our PTF-FedRec on three datasets with six baselines. The experimental results are shown in Table III. \"PTF-FedRec(X)\" indicates that the central server uses model \"X\" while the clients' models are always the naive NeuMF. From the results, we have the following observations. First of all, the centralized recommender systems achieve better performance than all federated recommendations. This may be because of two reasons: (1) Centralized training paradigm can directly access all data, however, FedRecs rely on certain knowledge carriers to achieve collaborative learning; (2) The privacy protection mechanism in FedRecs unavoidably introduces additional noises and consumes the recommendation performance.\nSecondly, our PTF-FedRec consistently obtains better performance than FedRec baselines on all three datasets with different server models. Specifically, when the central server's model becomes stronger, PTF-FedRec has better performance. For example, when the central server's models are NGCF and LightGCN, i.e., PTF-FedRec(NGCF) and PTF-FedRec(LightGCN), our FedRecs even outperform some centralized recommender systems, e.g., centralized NeuMF. Besides, according to Table III, PTF-FedRec(NGCF) achieves the best performance among all FedRecs.\nThirdly, by comparing the performance across datasets, we can find that the sparsity of the dataset can significantly influence the performance gap between FedRecs and centralized recommender systems. For example, on the denser dataset, such as MovieLens-100K, the performance of PTF-FedRec(NeuMF), PTF-FedRec(NGCF), and PTF-FedRec(LightGCN) have close performance to their corresponding centralized version respectively. While on the sparser dataset, such as Gowalla and Steam-200K, the performance gap between centralized recommender systems and all Fe-dRecs becomes larger." }, { "figure_ref": [], "heading": "F. Communication Efficiency of PTF-FedRec (RQ2)", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "Aside from its effective performance, the efficient communication of PTF-FedRec stands out as another advantage compared to traditional parameter transmission-based FedRecs. 
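Before turning to the measured numbers, a back-of-envelope estimate helps frame the comparison: the sketch below contrasts the per-round payload of shipping an item-embedding table, as parameter transmission-based FedRecs do, with shipping prediction triples. The 32-dimensional embeddings and 4-byte floats follow our experimental settings, while the per-client triple count and the omission of encryption and other public parameters are simplifying assumptions, which is why the baseline costs reported below are higher still.

```python
# Rough per-client, per-round communication estimate (bytes).
EMB_DIM = 32      # embedding size used in our experiments
FLOAT_BYTES = 4   # assuming 32-bit floats

def parameter_payload(num_items, extra_params=0):
    """Traditional FedRecs ship the item-embedding table plus other public weights."""
    return (num_items * EMB_DIM + extra_params) * FLOAT_BYTES

def prediction_payload(num_triples):
    """PTF-FedRec ships only (user, item, score) triples."""
    return num_triples * 3 * FLOAT_BYTES

datasets = [("MovieLens-100K", 1682), ("Steam-200K", 5134), ("Gowalla", 10068)]
for name, num_items in datasets:
    emb_kb = parameter_payload(num_items) / 1024
    pred_kb = prediction_payload(100) / 1024  # ~100 uploaded triples per round (assumed)
    print(f"{name}: item embeddings ~{emb_kb:.0f} KB vs. predictions ~{pred_kb:.1f} KB")
```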
In Section III-C2, we generally analyze the difference in communication costs between PTF-FedRec and parameter transmission-based FedRecs. The experimental results depicting average communication costs per client for PTF-FedRec and FedRec baselines are presented in Table IV. Evidently, the communication costs for our PTF-FedRec are notably lower than all FedRecs baselines, as the communication costs for all FedRec baselines are at the level of megabytes, while the expense of PTF-FedRec is only at the kilobyte level. Specifically, FedMF grapples with a heavy communication burden primarily due to its encryption process that expands the dimensions of item embeddings. In contrast, PTF-FedRec incurs communication costs of about 3KB for MovieLens-100K and under 1.6KB for Steam-200K and Gowalla, which are at least 2000 times lower than FedMF and 150 times lower than FCF and MetaMF. Moreover, across datasets, it is observable that the communication burden of traditional FedRecs is positively correlated with the number of items, as the item count directly impacts the size of item embeddings. Consequently, the costs for all three baselines escalate from TABLE III: The recommendation performance of PTF-FedRec and baselines on three datasets. PTF-FedRec(X) represents that the central server utilizes model \"X\", meanwhile the clients utilize NeuMF by default. The best performance of centralized recommendation is highlighted with underline, while the best performance of FedRecs is indicated by bold." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "MovieLens-100K Steam-200K Gowalla Recall@20 NDCG@20 Recall@20 NDCG@20 Recall@20 NDCG@20 MovieLens-100K to Gowalla. On the other hand, the costs for our PTF-FedRec are predominantly influenced by the average length of interactions for each client. Due to the sparsity of data in user-item interactions, PTF-FedRec consistently maintains lightweight communication costs across all three datasets." }, { "figure_ref": [], "heading": "G. Results of Privacy-preserving Mechanism (RQ3)", "publication_ref": [ "b0", "b3", "b3", "b3" ], "table_ref": [ "tab_6", "tab_3", "tab_6" ], "text": "In this section, we empirically showcase the effectiveness of privacy-preserving Dt i construction (Section IV-G1). Then, we analyze the influence of hyperparameters in this privacypreserving mechanism (Section IV-G2).\nTo evaluate the privacy-preserving ability, the central server launches the \"Top Guess Attack\" mentioned in Section III-B2 for each client u i 's uploaded predictions. That is, the central server guesses items with top γ |V t i | prediction scores as positive items. In this paper, γ is 0.2 since the positive and negative item sampling ratio is 1 : 4. In Table V, we present the attack's and recommender system's performance change after applying our privacy-preserving mechanism. We compare our method with LDP, as LDP is the gold standard privacy protection method in traditional FedRecs. Note that the privacy-preserving methods are unrelated to the server model type, therefore, we only show the results with PTF-FedRec(NGCF) by default, as it achieves the best model performance according to Table III.\n1) Effectiveness of Privacy-preserving Dt i Construction: According to the results in Table V, when the client simply uploads all trained items' predictions to the central server, the curious server can obtain over 0.97 F1 scores on all three datasets, which implies a severe data leakage of the positive items. 
This is because the trained items' feature vectors are optimized by forcing positive items to have higher scores while negative items obtain lower scores, and the ratio of positive and negative items in the whole trained item set is assumed to be leaked to the central server. To protect data privacy, LDP adds Laplace noise to the original prediction scores. However, LDP may be ineffective in perturbing the order of prediction scores, and adding noise to all predictions will significantly reduce the utility of these prediction scores. The results in Table V also support this argument. On MovieLens-100K, LDP reduces the attack's F1 scores from 0.98 to 0.58, but the recommender system's NDCG@20 scores are also decreased dramatically. While for Steam-200K and Gowalla, the attack's performance still keeps around 0.8 and 0.7 F1 scores but the recommender system's performance is already compromised.\nUnlike LDP, our PTF-FedRec protects the positive items by hiding the ratio of positive and negative items via sampling the uploaded dataset which will not sacrifice too much data utility. Besides, to further protect the data privacy, PTF-FedRec adds \"noise\" to the uploaded prediction scores by swapping a small part of positive and negative items' scores, which can directly perturb the order information. According to the results, when using sampling, the attack's F1 scores are reduced to around 0.5 F1 scores on all three datasets. When applying sampling and swapping defense methods, the attack's performance dramatically diminishes to about 0.4 on all datasets.\nTable VI compares our defense methods with LDP by calculating the ratio of the attack's and the model's performance change ( ∆F 1 ∆N DCG ). Higher scores indicate that the defense method safeguards data with less of a drop in model utility. According to the results, both Sampling and Sampling with Swapping are more cost-effective than LDP. It is noteworthy that although Sampling is more cost-effective than Sampling with Swapping, the latter can provide more powerful protection, as illustrated in Table V. Therefore, the choice between using single Sampling or Sampling with Swapping depends on the privacy requirements of recommendation scenarios. If utility is prioritized, then only Sampling should be employed, whereas if privacy is more sensitive, Sampling with Swapping can be utilized.\n2) Impact of Hyperparameters in Privacy-preserving Mechanism: In PTF-FedRec's data protection method, there are Fig. 3: The impact of hyperparameter in privacy-preserving Dt i construction. β t i controls the proportion of positive items that u i will upload, γ t i regulates the ratio of negative items, λ is the possibility of swapping a positive item's score. three hyperparameters, β t i , γ t i , and λ. Fig. 3 presents the result trends of these three hyperparameters with different settings.\nNote that when we change one hyperparameter's value, the other two hyperparameters keep the default settings described in Section IV-D.\nWhen we change the sampling range of β t i from [0.1, 1] to [0.7, 1], the client is expected to select more positive items for the central server each round. Therefore, both the model's performance and the attack's performance are increased. For γ t i , when the sampling range changed from [1,4] to [4,4], the number of negative samples is expected to increase, meanwhile, the ratio of positive and negative items are becoming deterministic as the range shrunk. 
Thus, the model performance is slightly improved while the attack's F1 scores are recovered dramatically. Finally, we research the influence of λ by changing its value from 0.05 to 0.2. According to the right subfig of Fig. 3, both attack and model performance are dropped with the growing of λ, since more proportion of positive items' prediction scores are swapped." }, { "figure_ref": [ "fig_6" ], "heading": "H. Results of Confidence-based Hard D i Construction (RQ4)", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "As the server model is trained on a lot of clients' uploaded predictions, it will capture broader collaborative information compared to clients' local models that are learned from clients' corresponding local data. Enriching clients' knowledge with this more comprehensive collaborative information can indirectly improve the central server's model performance, as it is trained based on clients' uploaded predictions. Therefore, in PTF-FedRec, at the end of each round, the central server constructs a dataset D i for each client u i . The items in D i are selected based on confidence and hardness strategies to ensure the reliability and necessity of shared information. In this part, we first investigate the effectiveness of these item selection strategies (Section IV-H1). After that, we analyze dispersed dataset size's impact on model performance (Section IV-H2).\n1) Effectiveness of Confidence-based Hard D i Construction: To validate the effectiveness of our confidence-based hard D i construction method, we gradually replace the confidence-based samples and hard samples with randomly selected items. As shown in Table VII, when we replace the hard samples (i.e., \"-hard\") or confidence-based samples (i.e., \"-confidence\") with random samples, the final model performance reduced from 0.1623 to 0.1611 and 0.1602 Recall@20 scores respectively on MovieLens-100K. Similar performance deterioration can also be found on Steam-200K and Gowalla datasets. Furthermore, when we replace all the hard items and high confidence items with random samples (i.e., \"-confidence -hard\"), the model performance further decreases to 0.1566, 0.3107, and 0.0316 Recall@20 scores on three datasets respectively. This phenomenon indicates both high-confidence items and hard items are more useful than randomly selecting a set of items' predictions for clients.\n2) Impact of D i 's size: We also explore the influence of different sizes of D i (i.e., the value of α) for final model performance in Fig. 4. Generally, when the value of α increases, the trend of performance of PTF-FedRec is at first increased to a peak point and then gradually decreased. Specifically, on MovieLens-100K and Steam-200K, when α equals 50, PTF-FedRec achieves the best performance, meanwhile, on Gowalla, the peak point is for α = 30. This performance trend indicates that when the dispersed dataset is too small, the knowledge transferred from the server model to the client model is insufficient. When the dispersed dataset is too large, the transferred knowledge may disturb client models learning from their own local datasets. In the main experiments, we assume that clients utilize NeuMF and explore different models for the central server." }, { "figure_ref": [], "heading": "I. Further Analysis", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "In this section, we present the results of all model combinations for client and server models on MovieLens-100K in Table VIII. 
Two interesting observations emerge from the results. Firstly, a more advanced server model yields better performance in horizontal comparison. Specifically, regardless of the client models used, the server model with NGCF exhibits the best performance, while the server model with NeuMF shows the worst performance. Secondly, a more complex client model leads to worse performance in vertical comparison; for instance, the client with NeuMF achieves the best performance regardless of the server model used. This outcome may be attributed to each client having limited data to support complex local model training due to data sparsity. Moreover, client local data can only construct a one-hop user-item graph. In contrast, graph-based recommender models such as NGCF and LightGCN are designed to capture high-order user-item relationships." }, { "figure_ref": [], "heading": "V. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Federated Recommendation", "publication_ref": [ "b50", "b10", "b11", "b12", "b33", "b34", "b51", "b55", "b13", "b15", "b56", "b58", "b21", "b59", "b17", "b19", "b48", "b60", "b61", "b63" ], "table_ref": [], "text": "Federated recommender systems (FedRecs) have raised many researchers' interest recently due to their advantages of privacy protection [51]. Ammand et al. [11] proposed the first federated recommendation framework with collaborative filtering models. After that, many extended versions sprung up to improve the model performance [12], [13], [34], [35], [52]- [56] and transplanted FedRecs to various recommendation domains [14]- [16], [57]- [59]. Besides, some works attempt to reduce the communication costs of FedRecs. For example, [22] incorporated hash techniques to achieve lightweight communication, while [60] proposed an active sampling method to accelerate the training process. Given the achievements of FedRecs, the associated security concerns have been researched, such as the privacy issues [18], [20], [49], [61] and the robustness [62]- [64].\nHowever, all these FedRecs are based on the parameter transmission-based learning protocol. As mentioned in Section I, this learning protocol limits the practical usability of FedRecs as it overlooks the service providers' privacy needs and generates heavy communication costs." }, { "figure_ref": [], "heading": "B. Model Heterogeneity in Federated Learning", "publication_ref": [ "b64", "b65", "b66", "b67", "b68", "b69", "b70", "b23", "b25", "b25", "b71" ], "table_ref": [], "text": "In federated learning, model heterogeneity has been introduced to alleviate resource imbalance problems, such as diverse data resource [65] and computation power disproportion [66], [67]. There are mainly two research lines to achieve model heterogeneity. The first way is to design specific aggregation strategies based on target model architecture. For instance, [68], [69] proposed width-level strategies for different scales of CNN models' channel aggregation. [70], [71] investigated layer-wise aggregation methods. However, all these methods still rely on transmitting model parameters to fuse knowledge.\nAnother research line is to utilize predictions to transfer knowledge. Specifically, [24]- [26] proposed knowledge distillation-based federated learning framework. In their works, a public reference dataset is built and clients transfer knowledge by making predictions on the public dataset. The predictions are then aggregated on the central server to form \"consensus\". 
Clients further update their local models based on the consensus. These works are similar to our work that achieves collaborative learning based on model predictions, but there are still some differences: (1) their clients share a public dataset and upload predictions to achieve knowledge distillation, however, in PTF-FedRec, the prediction uploaded by clients are personalized and adaptive since public dataset is not available for FedRecs; (2) as their primary goal is to achieve client model heterogeneity, the central server in these works is mainly responsible for \"aggregate\" client predictions, but the central server in PTF-FedRec aims to train its central server model to achieve model intellectual property protection. Other works, such as [26], [72] not only use predictions but also clients' uploaded model parameters to achieve collaborative learning. As a result, the model heterogeneity methods in federated learning cannot be applied in federated recommender systems to protect service providers' model privacy." }, { "figure_ref": [], "heading": "C. Model Privacy Protection in Federated Learning", "publication_ref": [ "b41", "b72", "b73", "b18", "b20", "b74" ], "table_ref": [], "text": "The model privacy includes two parts, model algorithm, and model parameters. In federated learning, many works attempt to protect model parameters via differential privacy (DP) and encryption techniques [42], [73], [74], but they overlook the leakage of model algorithms, such as model architectures. Other works utilize watermarking techniques to protect the ownership of a model, however, these methods can only track the model copying behavior but cannot address the model leakage problem [19], [21], [75]. Therefore, the protection of the privacy of both model parameters and model architectures is still under-explored, especially in the context of federated recommender systems." }, { "figure_ref": [], "heading": "VI. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel parameter transmissionfree federated recommendation framework, PTF-FedRec, which achieves collaborative learning via transmitting predictions between clients and the central server. In PTF-FedRec, the service provider does not need to expose its deliberate model, therefore, the model intellectual property has been protected. Besides, since the dimension of predictions is much lower than recommendation model parameters, the communication costs of PTF-FedRec are much lighter than existing FedRecs. To protect users' data privacy, PTF-FedRec incorporates a sampling and swapping mechanism for clients to share their local models' prediction scores. A confidence-based hard sampling method is designed for the central server to disperse its learned collaborative knowledge. Extensive experiments on three real-world recommendation datasets with three typical recommendation models demonstrate the effectiveness and efficiency of PTF-FedRec." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENT", "publication_ref": [], "table_ref": [], "text": "This work is supported by the Australian Research Council under the streams of Future Fellowship (Grant No.FT210100624) and the Discovery Project (Grant No.DP240101108)." } ]
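As a concrete reading of the sampling-and-swapping mechanism summarized in the conclusion (Eq. (7) samples the shared items with β and γ; Eq. (8) swaps scores with probability λ), the following sketch shows one plausible client-side implementation. It follows the paper's notation, but the function, its argument names, and the direction of the positive-to-negative ratio γ are our own illustrative assumptions rather than the authors' code.

```python
import random

def build_client_upload(pos_items, neg_items, local_scores, beta, gamma, lam):
    """Sketch of a client constructing its uploaded prediction set.

    beta:  proportion of the client's positive (interacted) items to reveal.
    gamma: ratio of positive to negative items in the uploaded set
           (our reading of the notation table).
    lam:   probability of swapping a positive item's score with a negative one's.
    """
    # Sampling (Eq. 7): reveal only a beta-fraction of interacted items and
    # pad the set with non-interacted items according to the ratio gamma.
    sampled_pos = random.sample(pos_items, max(1, int(beta * len(pos_items))))
    n_neg = max(1, round(len(sampled_pos) / gamma))
    sampled_neg = random.sample(neg_items, min(n_neg, len(neg_items)))

    upload = {v: local_scores[v] for v in sampled_pos + sampled_neg}

    # Swapping (Eq. 8): with probability lam, exchange a positive item's score
    # with that of a random negative item, so the server cannot tell which of
    # the uploaded items were truly interacted with.
    for v in sampled_pos:
        if sampled_neg and random.random() < lam:
            w = random.choice(sampled_neg)
            upload[v], upload[w] = upload[w], upload[v]

    return upload
```

Raising λ hides more of the true interactions but, as reported in the experiments above, also degrades recommendation accuracy, which is the trade-off captured by the ∆F1/∆NDCG scores.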
With the growing concerns regarding user data privacy, Federated Recommender Systems (FedRecs) have garnered significant attention recently due to their privacy-preserving capabilities. Existing FedRecs generally adhere to a learning protocol in which a central server shares a global recommendation model with clients, and participants achieve collaborative learning by frequently communicating the model's public parameters. Nevertheless, this learning framework has two drawbacks that limit its practical usability: (1) It necessitates a globally shared recommendation model; however, in real-world scenarios, information related to the recommendation model, including its algorithm and parameters, constitutes the platforms' intellectual property. Hence, service providers are unlikely to release such information actively. (2) The communication costs of model parameter transmission are high, since the model parameters are usually high-dimensional matrices. As model sizes increase, this communication burden becomes the bottleneck for such traditional FedRecs. Given the above limitations, this paper introduces a novel parameter transmission-free federated recommendation framework, namely PTF-FedRec, that balances the protection of users' data privacy and platforms' model privacy. Unlike traditional FedRecs, participants in PTF-FedRec collaboratively exchange knowledge by sharing their predictions within a privacy-preserving mechanism. Through this approach, the central server can learn a recommender model without disclosing its model parameters or accessing clients' raw data, preserving both the server's model privacy and users' data privacy. Moreover, since clients and the central server only need to communicate prediction scores, which are just a few real numbers, the communication overhead is significantly reduced compared to traditional FedRecs. Extensive experiments conducted on three commonly used recommendation datasets with three recommendation models demonstrate the effectiveness, efficiency, and generalization of our proposed federated recommendation framework.
Hide Your Model: A Parameter Transmission-free Federated Recommender System
[ { "figure_caption": "Fig. 1 :1Fig. 1: Traditional parameter transmission-based FedRec v.s. parameter transmission-free FedRec.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: The details of PTF-FedRec.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig.4: The impact of α (i.e., the size of server dispersed dataset D i ) on model performance.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "List of important notations.", "figure_data": "D ithe local dataset for user u i .Dt ithe dataset created by user u i 's local model in t round.D ithe dataset created by server model for user u i .Uall users in the federated recommender system.U tselected training users in t round.Vall items in the federated recommender system.V t i Vt i V conf i V hard itrained items for user u i in t round. items selected to create dataset Dt i . items selected based on confidence to create dataset D t i . hard negative items selected to create dataset D t i .r ijthe preference score of user u i for item v j .rijthe predicted score for item v j by user u i 's local model.r ijthe predicted score of u i for item v j by server model.M t i M t suser u i 's model parameters in round t. server model parameters in round t.Fcusers' model algorithm.Fsserver model algorithm.αthe size of server created dataset.β t i γ t i λthe proportion of positive items selected to Dt i . the ratio of positive items and negative items in Dt i . the probability of swapping a positive item's scores.", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Client Prediction Upload. After local training, clients transfer their knowledge learned from dataset D i ∪ D i back to the central server. In PTF-FedRec, the knowledge is carried by the prediction results of the local model. Specifically, the client u i first selects a group of items Vt", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Algorithm 1 PTF-FedRec Input: global epoch T ; local epoch L; learning rate lr, . . . Output: server model M s 1: server initializes model M 0 s , clients initialize M 0", "figure_data": "4:sample a fraction of clients U t from U5:for u i ∈ U t in parallel do6:// execute on client sides7:Dt i ←CLIENTTRAIN(u i , D i )8:end for9:// execute on central server10: 11:receive client prediction datasets { Dt i } ui∈U t M t+1 s ← update server model using E.q. 512:update { D i } ui∈U t according to Section III-B313: end for14: function CLIENTTRAIN(u i , D i )15: 16: 17:M t+1 i construct Dt ← update local model using E.q. 3 i according to Section III-B2 return Dt", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Statistics of three datasets used in our experiments.", "figure_data": "DatasetMovieLens-100K Steam-200K Gowalla#Users9433,7538,392#Items1,6825,13410,086#Interactions100,000114,713391,238Avgerage Lengths1063146Density6.30%0.59%0.46%", "figure_id": "tab_3", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "The comparison of average communication costs per client for one round. The costs for PTF-FedRec(NeuMF), PTF-FedRec(NGCF), and PTF-FedRec(LightGCN) are the same, thus we report them as PTF-FedRec to avoid repetition. 
The most efficient costs are indicated by bold.", "figure_data": "MethodsMovieLens-100K Steam-200KGowallaFCF0.46MB1.31MB2.59MBFedMF7.32MB20.98MB41.43MBMetaMF0.54MB1.63MB3.22MBPTF-FedRec3.02KB1.21KB1.59KB", "figure_id": "tab_5", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "The F1 scores of Top Guess Attack and NGCF@20 of PTF-FedRec(NGCF) with privacy-preserving Dt i construction. Lower F1 scores imply better privacy protection. \"↓\" means the lower value is better, while \"↑\" indicates higher scores are better. The best performance is shown by bold.", "figure_data": "MovieLens-100KSteam-200KGowallaMethodsF1 Score↓ NDCG@20 ↑ F1 Score↓ NDCG@20↑F1 Score↓NDCG@20↑No Defense0.98360.19090.98380.24940.97100.0281LDP0.58730.15030.84230.21760.67820.0251Sampling0.51710.18340.47060.24090.49440.0274Sampling + Swapping0.45390.17750.40160.23060.42360.02680.190.190.18NDCG@200.17 0.18NDCG@200.170.170.160.160.160.15[0.1,1] [0.3,1] [0.5,1] [0.7,1] Different sample range of β0.15[1,4] Different sample range of γ [2,4] [3,4] [4,4]0.150.05 Different value of λ 0.1 0.150.2", "figure_id": "tab_6", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "The ∆F 1 ∆N DCG scores for each privacy-preserving methods. Higher values imply the method consumes fewer model performance to protect user data privacy.", "figure_data": "MethodsMovieLens-100KSteam-200K GowallaLDP9.74.4597.6Sampling62.260.3680.8Sampling+Swapping39.530.9421.1", "figure_id": "tab_7", "figure_label": "VI", "figure_type": "table" }, { "figure_caption": "The impact of different item selection methods in D i construction for PTF-FedRec performance.", "figure_data": "MovieLens-100KSteam-200KGowallaMethodsRecall@20 NDCG@20 Recall@20 NDCG@20 Recall@20 NDCG@20PTF-FedRec0.16230.17750.34840.23060.03450.0268-hard0.16110.17240.32940.21260.03340.0262-confidence0.16020.17060.32560.20590.03230.0243-confidence -hard0.15660.16740.31070.18950.03160.02470.240.02680.180NDCG@200.1760.230.02640.220.17210 Different α on MovieLens-100K 30 50 70 9010 Different α on Steam-200K 30 50 70 901030 Different α on Gowalla 50 7090", "figure_id": "tab_8", "figure_label": "VII", "figure_type": "table" }, { "figure_caption": "The performance (NDCG@20) of different model combinations for clients and the server on MovieLens-100K. Same observations can also be found on other two datasets.", "figure_data": "Server ModelNeuMF NGCF LightGCNNeuMF0.14820.17750.1739Client ModelNGCF0.13270.17110.1544LightGCN0.13860.16400.1549", "figure_id": "tab_9", "figure_label": "VIII", "figure_type": "table" } ]
Wei Yuan; Chaoqun Yang; † Liang Qu; Quoc Viet Hung Nguyen; Jianxin Li; Hongzhi Yin
[ { "authors": "J B Schafer; J A Konstan; J Riedl", "journal": "Data mining and knowledge discovery", "ref_id": "b0", "title": "E-commerce recommendation applications", "year": "2001" }, { "authors": "J Zhang; M Gao; J Yu; L Guo; J Li; H Yin", "journal": "", "ref_id": "b1", "title": "Double-scale self-supervised hypergraph learning for group recommendation", "year": "2021" }, { "authors": "X Zhou; D Qin; X Lu; L Chen; Y Zhang", "journal": "IEEE", "ref_id": "b2", "title": "Online social media recommendation over streams", "year": "2019" }, { "authors": "F Wu; Y Qiao; J.-H Chen; C Wu; T Qi; J Lian; D Liu; X Xie; J Gao; W Wu", "journal": "", "ref_id": "b3", "title": "Mind: A large-scale dataset for news recommendation", "year": "2020" }, { "authors": "W Wang; H Yin; S Sadiq; L Chen; M Xie; X Zhou", "journal": "IEEE", "ref_id": "b4", "title": "Spore: A sequential personalized spatial item recommender system", "year": "2016" }, { "authors": "Z Batmaz; A Yurekli; A Bilge; C Kaleli", "journal": "Artificial Intelligence Review", "ref_id": "b5", "title": "A review on deep learning for recommender systems: challenges and remedies", "year": "2019" }, { "authors": "E L Harding; J J Vanto; R Clark; L Hannah; S C Ji; Ainsworth", "journal": "Journal of Data Protection & Privacy", "ref_id": "b6", "title": "Understanding the scope and impact of the california consumer privacy act of 2018", "year": "2019" }, { "authors": "I Calzada", "journal": "Smart Cities", "ref_id": "b7", "title": "Citizens' data privacy in china: The state of the art of the personal information protection law (pipl)", "year": "2022" }, { "authors": "P Voigt; A Von; Bussche", "journal": "A Practical Guide", "ref_id": "b8", "title": "The eu general data protection regulation (gdpr)", "year": "2017" }, { "authors": "L Yang; B Tan; V W Zheng; K Chen; Q Yang", "journal": "", "ref_id": "b9", "title": "Federated recommendation systems", "year": "2020" }, { "authors": "M Ammad-Ud-Din; E Ivannikova; S A Khan; W Oyomno; Q Fu; K E Tan; A Flanagan", "journal": "", "ref_id": "b10", "title": "Federated collaborative filtering for privacy-preserving personalized recommendation system", "year": "2019" }, { "authors": "G Lin; F Liang; W Pan; Z Ming", "journal": "IEEE Intelligent Systems", "ref_id": "b11", "title": "Fedrec: Federated recommendation with explicit feedback", "year": "2020" }, { "authors": "D Chai; L Wang; K Chen; Q Yang", "journal": "IEEE Intelligent Systems", "ref_id": "b12", "title": "Secure federated matrix factorization", "year": "2020" }, { "authors": "J Yi; F Wu; C Wu; R Liu; G Sun; X Xie", "journal": "", "ref_id": "b13", "title": "Efficient-fedrec: Efficient federated learning framework for privacy-preserving news recommendation", "year": "2021" }, { "authors": "Z Liu; L Yang; Z Fan; H Peng; P S Yu", "journal": "ACM Transactions on Intelligent Systems and Technology (TIST)", "ref_id": "b14", "title": "Federated social recommendation with graph neural network", "year": "2022" }, { "authors": "Y Guo; F Liu; Z Cai; H Zeng; L Chen; T Zhou; N Xiao", "journal": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies", "ref_id": "b15", "title": "Prefer: Point-of-interest recommendation with efficiency and privacypreservation via federated edge learning", "year": "2021" }, { "authors": "Z Sun; Y Xu; Y Liu; W He; Y Jiang; F Wu; L Cui", "journal": "", "ref_id": "b16", "title": "A survey on federated recommendation systems", "year": "2022" }, { "authors": "S Zhang; W Yuan; H Yin", "journal": "IEEE Transactions on 
Knowledge and Data Engineering", "ref_id": "b17", "title": "Comprehensive privacy analysis on federated recommender system against attribute inference attacks", "year": "2023" }, { "authors": "B G Tekgul; Y Xia; S Marchal; N Asokan", "journal": "IEEE", "ref_id": "b18", "title": "Waffle: Watermarking in federated learning", "year": "2021" }, { "authors": "W Yuan; H Yin; F Wu; S Zhang; T He; H Wang", "journal": "", "ref_id": "b19", "title": "Federated unlearning for on-device recommendation", "year": "2023" }, { "authors": "Q Yang; A Huang; L Fan; C S Chan; J H Lim; K W Ng; D S Ong; B Li", "journal": "Machine Intelligence Research", "ref_id": "b20", "title": "Federated learning with privacy-preserving and model ip-right-protection", "year": "2023" }, { "authors": "H Zhang; F Luo; J Wu; X He; Y Li", "journal": "ACM Transactions on Information Systems", "ref_id": "b21", "title": "Lightfr: Lightweight federated recommendation with privacy-preserving matrix factorization", "year": "2023" }, { "authors": "V Kulkarni; M Kulkarni; A Pant", "journal": "IEEE", "ref_id": "b22", "title": "Survey of personalization techniques for federated learning", "year": "2020" }, { "authors": "H Chang; V Shejwalkar; R Shokri; A Houmansadr", "journal": "", "ref_id": "b23", "title": "Cronus: Robust and heterogeneous collaborative learning with black-box knowledge transfer", "year": "2019" }, { "authors": "D Li; J Wang", "journal": "", "ref_id": "b24", "title": "Fedmd: Heterogenous federated learning via model distillation", "year": "2019" }, { "authors": "Y J Cho; A Manoel; G Joshi; R Sim; D Dimitriadis", "journal": "", "ref_id": "b25", "title": "Heterogeneous ensemble knowledge transfer for training large models in federated learning", "year": "2022" }, { "authors": "G Hinton; O Vinyals; J Dean", "journal": "", "ref_id": "b26", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "F M Harper; J A Konstan", "journal": "Acm transactions on interactive intelligent systems (tiis)", "ref_id": "b27", "title": "The movielens datasets: History and context", "year": "2015" }, { "authors": "G Cheuque; J Guzmán; D Parra", "journal": "", "ref_id": "b28", "title": "Recommender systems for online video game platforms: The case of steam", "year": "2019" }, { "authors": "D Liang; L Charlin; J Mcinerney; D M Blei", "journal": "", "ref_id": "b29", "title": "Modeling user exposure in recommendation", "year": "2016" }, { "authors": "X He; L Liao; H Zhang; L Nie; X Hu; T.-S Chua", "journal": "", "ref_id": "b30", "title": "Neural collaborative filtering", "year": "2017" }, { "authors": "X Wang; X He; M Wang; F Feng; T.-S Chua", "journal": "", "ref_id": "b31", "title": "Neural graph collaborative filtering", "year": "2019" }, { "authors": "X He; K Deng; X Wang; Y Li; Y Zhang; M Wang", "journal": "", "ref_id": "b32", "title": "Lightgcn: Simplifying and powering graph convolution network for recommendation", "year": "2020" }, { "authors": "Y Lin; P Ren; Z Chen; Z Ren; D Yu; J Ma; M D Rijke; X Cheng", "journal": "", "ref_id": "b33", "title": "Meta matrix factorization for federated rating predictions", "year": "2020" }, { "authors": "C Wu; F Wu; L Lyu; T Qi; Y Huang; X Xie", "journal": "Nature Communications", "ref_id": "b34", "title": "A federated graph neural network framework for privacy-preserving personalization", "year": "2022" }, { "authors": "S Rendle; C Freudenthaler; Z Gantner; L Schmidt-Thieme", "journal": "", "ref_id": "b35", "title": "Bpr: Bayesian personalized ranking from implicit feedback", 
"year": "2009" }, { "authors": "B Mcmahan; E Moore; D Ramage; S Hampson; B A Arcas", "journal": "PMLR", "ref_id": "b36", "title": "Communication-efficient learning of deep networks from decentralized data", "year": "2017" }, { "authors": "R Mehta; K Rana", "journal": "IEEE", "ref_id": "b37", "title": "A review on matrix factorization techniques in recommender systems", "year": "2017" }, { "authors": "S Wu; F Sun; W Zhang; X Xie; B Cui", "journal": "ACM Computing Surveys", "ref_id": "b38", "title": "Graph neural networks in recommender systems: a survey", "year": "2022" }, { "authors": "T N Kipf; M Welling", "journal": "", "ref_id": "b39", "title": "Semi-supervised classification with graph convolutional networks", "year": "2016" }, { "authors": "C Buciluǎ; R Caruana; A Niculescu-Mizil", "journal": "", "ref_id": "b40", "title": "Model compression", "year": "2006" }, { "authors": "J Park; H Lim", "journal": "Applied Sciences", "ref_id": "b41", "title": "Privacy-preserving federated learning using homomorphic encryption", "year": "2022" }, { "authors": "L Sun; L Lyu", "journal": "", "ref_id": "b42", "title": "Federated model distillation with noise-free differential privacy", "year": "2021" }, { "authors": "J Wu; X Wang; F Feng; X He; L Chen; J Lian; X Xie", "journal": "", "ref_id": "b43", "title": "Selfsupervised graph learning for recommendation", "year": "2021" }, { "authors": "W Fan; T Derr; X Zhao; Y Ma; H Liu; J Wang; J Tang; Q Li", "journal": "IEEE", "ref_id": "b44", "title": "Attacking black-box recommendations via copying cross-domain user profiles", "year": "2021" }, { "authors": "J Chen; W Fan; G Zhu; X Zhao; C Yuan; Q Li; Y Huang", "journal": "", "ref_id": "b45", "title": "Knowledge-enhanced black-box attacks for recommendations", "year": "2022" }, { "authors": "Y Zhang; X Yuan; J Li; J Lou; L Chen; N.-F Tzeng", "journal": "", "ref_id": "b46", "title": "Reverse attack: Black-box attacks on collaborative recommendation", "year": "2021" }, { "authors": "S Zhang; H Yin; H Chen; C Long", "journal": "", "ref_id": "b47", "title": "Defense against model extraction attacks on recommender systems", "year": "2023" }, { "authors": "W Yuan; C Yang; Q V H Nguyen; L Cui; T He; H Yin", "journal": "", "ref_id": "b48", "title": "Interaction-level membership inference attack against federated recommender systems", "year": "2023" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b49", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "H Yin; L Qu; T Chen; W Yuan; R Zheng; J Long; X Xia; Y Shi; C Zhang", "journal": "", "ref_id": "b50", "title": "On-device recommender systems: A comprehensive survey", "year": "2024" }, { "authors": "L Qu; R Tang; Q V H Zheng; Z Nguyen; Y Huang; H Shi; Yin", "journal": "", "ref_id": "b51", "title": "Semi-decentralized federated ego graph learning for recommendation", "year": "2023" }, { "authors": "S Zheng; W Wang; J Qu; H Yin; W Chen; L Zhao", "journal": "IEEE", "ref_id": "b52", "title": "Mmkgr: Multi-hop multi-modal knowledge graph reasoning", "year": "2023" }, { "authors": "Q Wang; H Yin; T Chen; J Yu; A Zhou; X Zhang", "journal": "The VLDB Journal", "ref_id": "b53", "title": "Fastadapting and privacy-preserving federated recommender system", "year": "2021" }, { "authors": "Q V H Nguyen; C T Duong; T T Nguyen; M Weidlich; K Aberer; H Yin; X Zhou", "journal": "The VLDB Journal", "ref_id": "b54", "title": "Argument discovery via crowdsourcing", "year": "2017" }, { "authors": "W Yuan; L Qu; L Cui; Y Tong; X Zhou; H Yin", 
"journal": "", "ref_id": "b55", "title": "Hetefedrec: Federated recommender systems with model heterogeneity", "year": "2023" }, { "authors": "R Zheng; L Qu; T Chen; L Cui; Y Shi; H Yin", "journal": "", "ref_id": "b56", "title": "Decentralized collaborative learning with adaptive reference data for on-device poi recommendation", "year": "2024" }, { "authors": "J Long; T Chen; Q V H Nguyen; G Xu; K Zheng; H Yin", "journal": "", "ref_id": "b57", "title": "Model-agnostic decentralized collaborative learning for on-device poi recommendation", "year": "2023" }, { "authors": "G Ye; T Chen; Y Li; L Cui; Q V H Nguyen; H Yin", "journal": "IEEE Journal of Biomedical and Health Informatics", "ref_id": "b58", "title": "Heterogeneous collaborative learning for personalized healthcare analytics via messenger distillation", "year": "2023" }, { "authors": "K Muhammad; Q Wang; D O'reilly-Morgan; E Tragos; B Smyth; N Hurley; J Geraci; A Lawlor", "journal": "", "ref_id": "b59", "title": "Fedfast: Going beyond average for faster training of federated recommender systems", "year": "2020" }, { "authors": "L Qu; W Yuan; R Zheng; L Cui; Y Shi; H Yin", "journal": "", "ref_id": "b60", "title": "Towards personalized privacy: User-governed data contribution for federated recommendation", "year": "2024" }, { "authors": "S Zhang; H Yin; T Chen; Z Huang; Q V H Nguyen; L Cui", "journal": "", "ref_id": "b61", "title": "Pipattack: Poisoning federated recommender systems for manipulating item promotion", "year": "2022" }, { "authors": "W Yuan; Q V H Nguyen; T He; L Chen; H Yin", "journal": "", "ref_id": "b62", "title": "Manipulating federated recommender systems: Poisoning with synthetic users and its countermeasures", "year": "2023" }, { "authors": "W Yuan; S Yuan; K Zheng; Q V H Nguyen; H Yin", "journal": "", "ref_id": "b63", "title": "Manipulating visually-aware federated recommender systems and its countermeasures", "year": "2023" }, { "authors": "X Ma; J Zhu; Z Lin; S Chen; Y Qin", "journal": "Future Generation Computer Systems", "ref_id": "b64", "title": "A state-of-the-art survey on solving non-iid data in federated learning", "year": "2022" }, { "authors": "Z Jiang; Y Xu; H Xu; Z Wang; C Qiao; Y Zhao", "journal": "IEEE", "ref_id": "b65", "title": "Fedmp: Federated learning through adaptive model pruning in heterogeneous edge computing", "year": "2022" }, { "authors": "H Wang; S Marella; J Anderson", "journal": "IEEE", "ref_id": "b66", "title": "Fedadmm: A federated primaldual algorithm allowing partial participation", "year": "2022" }, { "authors": "E Diao; J Ding; V Tarokh", "journal": "", "ref_id": "b67", "title": "Heterofl: Computation and communication efficient federated learning for heterogeneous clients", "year": "2020" }, { "authors": "Z Zhu; J Hong; S Drew; J Zhou", "journal": "", "ref_id": "b68", "title": "Resilient and communication efficient learning for heterogeneous federated systems", "year": "2022" }, { "authors": "K Wang; Q He; F Chen; C Chen; F Huang; H Jin; Y Yang", "journal": "", "ref_id": "b69", "title": "Flexifed: Personalized federated learning for edge clients with heterogeneous model architectures", "year": "2023" }, { "authors": "R Liu; F Wu; C Wu; Y Wang; L Lyu; H Chen; X Xie", "journal": "", "ref_id": "b70", "title": "No one left behind: Inclusive federated learning over heterogeneous devices", "year": "2022" }, { "authors": "C He; M Annavaram; S Avestimehr", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b71", "title": "Group knowledge transfer: Federated 
learning of large cnns at the edge", "year": "2020" }, { "authors": "K Wei; J Li; M Ding; C Ma; H H Yang; F Farokhi; S Jin; T Q Quek; H V Poor", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b72", "title": "Federated learning with differential privacy: Algorithms and performance analysis", "year": "2020" }, { "authors": "Z Liu; J Guo; W Yang; J Fan; K.-Y Lam; J Zhao", "journal": "IEEE Transactions on Big Data", "ref_id": "b73", "title": "Privacypreserving aggregation in federated learning: A survey", "year": "2022" }, { "authors": "M Lansari; R Bellafqira; K Kapusta; V Thouvenot; O Bettan; G Coatrieux", "journal": "", "ref_id": "b74", "title": "When federated learning meets watermarking: A comprehensive overview of techniques for intellectual property protection", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 58.93, 592.36, 161.47, 14.07 ], "formula_id": "formula_0", "formula_text": "Let U = {u i } |U | i=1 and V = {v j } |V| j=1" }, { "formula_coordinates": [ 4, 118.38, 559.8, 181.64, 11.72 ], "formula_id": "formula_1", "formula_text": "rij = σ(h ⊤ MLP([u i , v j ]))(1)" }, { "formula_coordinates": [ 4, 106.94, 693.35, 193.09, 29.38 ], "formula_id": "formula_2", "formula_text": "u l i = propagate l (v l-1 j ; j ∈ N ui ) v l j = propagate l (u l-1 i ; j ∈ N vj )(2)" }, { "formula_coordinates": [ 5, 48.96, 560.09, 254.99, 51.7 ], "formula_id": "formula_3", "formula_text": "M t+1 i = argmin M t i L c (F c (M t i )|D i ∪ D i ) L c = - (ui,vj ,rij )∈Di∪ Di r ij log rij + (1 -r ij ) log(1 -rij )(3" }, { "formula_coordinates": [ 5, 123.24, 692.32, 176.78, 30.64 ], "formula_id": "formula_4", "formula_text": "Dt i = {(u i , v j , rij )} vj ∈ Vt i rij = F c (M t+1 i |(u i , v j ))(4)" }, { "formula_coordinates": [ 5, 317.09, 345.23, 245.94, 54.16 ], "formula_id": "formula_5", "formula_text": "M t+1 s = argmin M t s ui∈U t L s (F s (M t s )| Dt i ) L s = - (ui,vj ,rij )∈ Dt i rij log r ij + (1 -rij ) log(1 -r ij )(5)" }, { "formula_coordinates": [ 5, 386.17, 542.25, 176.86, 27.55 ], "formula_id": "formula_6", "formula_text": "D i = {(u i , v j , r ij )} vj ∈ Vi r ij = F s (M t+1 s |(u i , v j ))(6)" }, { "formula_coordinates": [ 6, 124.89, 641.25, 175.13, 13.14 ], "formula_id": "formula_7", "formula_text": "Vt i ← sample(V t i |β t i , γ t i )(7)" }, { "formula_coordinates": [ 6, 401.27, 92.5, 161.77, 13.14 ], "formula_id": "formula_8", "formula_text": "Dt i ← swap( Dt i |λ)(8)" }, { "formula_coordinates": [ 6, 311.98, 386.7, 27, 13.68 ], "formula_id": "formula_9", "formula_text": "V conf i ." }, { "formula_coordinates": [ 6, 346.66, 477.49, 212.5, 68.38 ], "formula_id": "formula_10", "formula_text": "V conf i ← argmax vi / ∈ Vt i ∧| V conf i |=µ * α f requency(V) V hard i ← argmax vj / ∈ Vt i ∧| V hard i |=(1-µ)α F s (M t+1 s |V) V i = V conf i ∪ V hard i (9" }, { "formula_coordinates": [ 6, 559.16, 507.33, 3.87, 8.64 ], "formula_id": "formula_11", "formula_text": ")" } ]
2023-12-21
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b4", "b17", "b1", "b21", "b5", "b26", "b17", "b15", "b25" ], "table_ref": [], "text": "Estimating the 3D structure of a scene holds importance across a variety of domains, including robotics, virtual reality (VR), and augmented reality (AR). The demand for real-time applications in these areas increases over time as such technologies proliferate. Man-made architectures and indoor environments, where the application end-users spend a significant amount of time, often consist of regular structures like planar surfaces, aligning well with the Manhattan world assumption that such surfaces typically exist on a regular 3D grid (Coughlan and Yuille, 2003). Estimating plane parameters directly can reduce noise for areas lying on a planar surface, which can be particularly useful for indoor scenes dominated by planar surfaces. It also holds relevance for outdoor scenarios, such as self-driving cars and outdoor AR applications, where streets and buildings often adhere to similar geometric principles.\nSeveral methods have been proposed to use deep learning to recover planes of indoor scenes from a single image (Liu et added information from scene semantics. Incorporating semantics provides an added layer of scene understanding, which can be useful in many applications. For instance, the semantic label for a planar surface can help a service robot in determining the correct behaviour (e.g. mopping floor vs. wiping table), or AR/VR experiences could offer semantics-dependent retexturization. Some models predict semantics along with plane parameters but are often too computationally intensive to meet the real-time requirements of practical applications (Liu et al., 2022).\nMulti-task learning, the technique of using a single model to learn multiple tasks concurrently, has shown promise in terms of data efficiency and improved generalization (Caruana, 1997). However, recent studies indicate that there is also added difficulty in jointly learning multiple tasks. While some tasks may benefit from being learned together, thereby boosting accuracy, others may interfere with each other, leading to worse performance (Standley et al., 2020).\nOur aim was to create a data-efficient model with improved run-time efficiency compared to existing models for planar reconstruction with semantics. We achieve the desired outcome via our model, SOLO-Planes (SOLOP), where we make use of multi-view guidance for improved data usage when acceptable ground truth plane segments differ across views, and made adjustments to the base architecture for improved efficiency. Multi-view warping is done in feature space, by warping plane features from neighbour to source view, decoding, then transforming the decoded plane parameters to the source view camera view for comparison with ground truth data during training. This additional warping guidance for plane features positively impacts the learning of segmentation masks, particularly when using a more limited dataset, while only requiring a single view at inference time.\nIn the context of our work, we found that multiview guidance using plane features leads to a notable improvement in segmentation results. We attribute this enhancement to our multi-task architecture and the use of a shared trunk, meaning a global feature extractor that is common to all tasks (Crawshaw, 2020). 
This architecture allows for loss propagation through shared features and common base networks, and may be particularly relevant in the case of incomplete or varying data across overlapping views.\nOur contributions include the following:\n1. An empirical demonstration of cross-task improvement using multi-view guidance by feature warping, with particular relevance in cases where ground truth data may be incomplete across neighboring views.\n2. A single-image planar reconstruction model, that can concurrently predict semantics for planar segments while achieving the best efficiency compared to other known planar reconstruction meth-ods at a processing speed of 43 FPS.\nOur approach may be a helpful method for other multi-task models limited in some forms of ground truth training data. The efficiency of the model makes it suitable for a range of real-world applications. Multi-view approaches The task of predicting 3D plane parameters from a single image is inherently ambiguous and challenging. Thus, several works have incorporated multi-view information, either as a loss guidance or by using multiple image inputs at infer-ence time. PlanarRecon (Xie et al., 2022) is a realtime model using multiple image frames which makes predictions directly in 3D by using a coarse-to-fine approach for building a sparse feature volume, then clustering occupied voxels for instance planes, and uses a tracking and fusion module to get a global plane representation. PlaneMVS (Liu et al., 2022) is the first to apply a deep multi-view stereo (MVS) approach to plane parameters. Although it achieves state-of-the-art results and also predicts class semantics, it is less computationally efficient due to the use of 3D convolutions and requires generation of plane hypotheses. PlaneRCNN incorporates a multi-view warping loss module that enforces consistency with nearby views by projecting the predictions to 3D and calculating the distance after transforming to the same camera coordinates (Liu et al., 2019). Unlike our approach, their warping module is applied directly on the predictions rather than in feature space. Another work enhances the PlaneAE model with multiview regularization by warping the plane embedding feature maps and using associative embeddings from multiple views to improve the final instance segmentation (Xi and Chen, 2019)." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b12", "b31", "b3", "b31", "b9", "b0", "b7" ], "table_ref": [], "text": "Feature Warping Feature warping is commonly done in deep Multi-View Stereo (MVS) approaches, as it was found that creating the cost volume using features is as effective for artificial neural networks and more computationally efficient due to reduced size (Im et al., 2019;Yao et al., 2018). While some approaches use a similarity function on the features, others simply concatenate the warped feature with the original and let the model learn the relation rather than calculate an explicit cost volume (Chen et al., 2020;Yao et al., 2018). The latter approach is used by PlaneMVS to construct a Feature/Cost volume, which is then processed by a 3D CNN to get the plane parameters. Deep MVS methods are more commonly used for depth estimation, and their application to plane parameter estimation is relatively novel. Other research suggests that calculating a feature error between frames is more robust than a photometric error (Guo et al., 2021). 
However, this cannot directly be applied to plane reconstruction, as the plane features contain information in different camera views when considering a video dataset. Takanori et al. use multi-frame attention via feature warping for the task of drone crowd tracking (Asanomi et al., 2023). Ding et al. take MVS as a feature matching task and use a Transformer model to aggregate long-range global context using warped feature maps (Ding et al., 2022).\nIn order to ensure differentiability, the warp to another view using depth values and camera parameters must be backprojected using bilinear interpola-tion. Most existing works involving feature warping do not specifically deal with plane features, which require transformation to the correct view when decoded. Additionally, the majority of planar reconstruction models do not offer semantic predictions for the scene.\nThe majority of existing works primarily focus on the geometric accuracy of planes without holistically addressing the more practical requirements of speed and semantic understanding of planar scenes. Our work aims to fill this gap by offering a unified framework for semantic planar reconstruction. We improve data efficiency during training and achieve cross-task improvement using multi-view guidance for plane features, while maintaining an inference speed that is suitable for real-time applications." }, { "figure_ref": [], "heading": "METHOD", "publication_ref": [], "table_ref": [], "text": "Our objective is to develop a real-time framework for the task of 3D semantic planar reconstruction. This section is organized as follows: Section 3.1 provides details of the framework, Section 3.2 elaborates on the loss terms, and Section 3.3 introduces our multi-view guidance." }, { "figure_ref": [ "fig_3" ], "heading": "Framework", "publication_ref": [ "b11", "b24", "b24" ], "table_ref": [], "text": "Our framework is built on a light version of the SOLOv2 instance segmentation model (Wang et al., 2020b) using a ResNet-50 backbone (He et al., 2016). SOLOv2 is a single-stage instance segmentation model that predicts instance masks and labels based on their spatial location. It achieves an execution speed of around 31 FPS using a single V100 GPU card (Wang et al., 2020b). The model employs dynamic convolution to generate the final segmentation mask, leveraging multi-scale features from the Feature Pyramid Network (FPN) (Lin et al., 2017a). Each level of the FPN output features are used to predict mask kernels and class semantics, with the features reshaped to square grids of varying sizes, with each responsible for predictions at a different scale. Each grid location predicts a kernel and semantic category scores. The mask feature is obtained through feature fusion of the first four levels of the FPN outputs via bilinear upsampling and convolution layers, and the final segmentation masks are obtained via convolution using the predicted kernels, with redundant masks suppressed using matrix Non-Maximum Supression (NMS) (Wang et al., 2020b). The mask and kernel features receive spatial awareness information from concatenated normalized coordinate, a method We extend the base architecture by introducing a plane feature branch that fuses the first two levels of the feature map, along with a plane prediction head that outputs per-pixel plane parameters via a convolution layer (see Fig. 2). This prediction is supervised by a set of loss functions that leverage geometrical constraints and ground truth depth information (detailed in Section 3.2). 
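As a rough sketch of this extension, the plane feature branch can be written as a small fusion module over the two finest FPN levels followed by a 3-channel convolutional head. The code below assumes PyTorch and standard 256-channel FPN outputs at 1/4 and 1/8 resolution; layer names, channel counts, and the exact fusion operation are illustrative choices rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlaneBranch(nn.Module):
    """Illustrative plane feature branch: fuse the two finest FPN levels and
    predict a per-pixel plane parameter p = n * d (3 channels)."""

    def __init__(self, in_channels=256, mid_channels=128):
        super().__init__()
        self.reduce_p2 = nn.Conv2d(in_channels, mid_channels, 3, padding=1)
        self.reduce_p3 = nn.Conv2d(in_channels, mid_channels, 3, padding=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(mid_channels, mid_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Plane head: per-pixel plane parameters at 1/4 of the input resolution.
        self.plane_head = nn.Conv2d(mid_channels, 3, 1)

    def forward(self, p2, p3):
        # p2: (B, C, H/4, W/4) and p3: (B, C, H/8, W/8) from the FPN.
        f2 = self.reduce_p2(p2)
        f3 = F.interpolate(self.reduce_p3(p3), size=f2.shape[-2:],
                           mode="bilinear", align_corners=False)
        plane_feat = self.fuse(f2 + f3)            # fused plane feature f
        return plane_feat, self.plane_head(plane_feat)
```

The 3-channel output corresponds to the combined parameterization p = n · d introduced in Section 3.2.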
The original architecture predicts the kernels and semantic categories using all five feature map levels of the FPN. Based on the findings of Chen et al. ( 2021), a divide-and-conquer approach is more crucial than leveraging multi-scale features for task-specific predictions, we experimented with using different feature levels and found that using fewer feature levels not only maintained comparable performance in multi-task planar segmentation but also improved the overall efficiency of the model.\nOur final architecture takes a single RGB image, I ∈ R H×W ×C , as input during inference, and outputs an arbitrary number of plane instance masks along with instance level semantics and per-pixel plane parameters. We obtain the final result by pooling perpixel parameter prediction using the predicted masks, and retaining per-pixel predictions in areas without a plane instance. " }, { "figure_ref": [], "heading": "Losses", "publication_ref": [ "b24", "b14", "b30" ], "table_ref": [], "text": "Mask & Category We retain the original loss functions from Wang et al. (2020b) for mask and category predictions. The Dice Loss, L M , guides mask prediction with the original loss weight w M = 3, and focal loss, L C , for semantic category prediction (Lin et al., 2017b). For full details, we refer readers to (Wang et al., 2020a). In order to address class imbalances due to dominating negative samples, we modified L C to only consider grid locations containing an instance.\nPlane Parameters Plane parameters are represented by the normal and offset of the plane, denoted as p = (n, d), which we combine into a single parameter p = n * d ∈ R 3 , with n normalized to unit length. Due to the complexity of predicting plane parameters containing both normal and depth information, we employ multiple loss functions for supervising per-pixel plane predictions. We use L 1 loss for direct comparison with ground truth plane parameters:\nL plane = 1 N N ∑ i=1 ∥p i -p * i ∥. (1\n)\nAn asterisk is used to denote predicted values, and N represents the total number of pixels. The cosine distance, denoted L sur f ace , is used to guide the learning of surface normals. Due to the way we represent plane parameter p, we get the equivalent result calculating cosine similarity on plane parameters directly.\nsim i = p i • p * i ∥p i ∥∥p * i ∥ , L sur f ace = 1 N N ∑ i=1 1 -sim i (2)\nDue to noisy and incomplete ground truth plane annotations, we also make use of ground truth depth data, D ∈ R H×W , for additional supervision. We calculate the plane induced depth at pixel location i by\nD * i = d * i n * T i • K -1 q i ,(3)\nwhere K represents the ground truth camera intrinsics of the scene and q i is the x and y index for pixel location i. The plane induced depth loss, L depth , is formulated as:\nL depth = 1 N N ∑ i=1 |D i -D * i |.(4)\nWe use the plane structure induced loss, first introduced by (Yang and Zhou, 2018) and which we denote by L geom , based on the principle that the dot product of a 3D point on a plane with the normal equals the offset, n T Q = d. We use ground truth depth and camera intrinsics to retrieve the 3D point at each pixel location. Q i = D i K -1 q i obtains the 3D point projected at one location.\nL geom = 1 N N ∑ i=1 n * T i • Q i -d * i (5)\nGradient Weighting We add gradient edge weighting as a model variation, weighting L depth and L geom to emphasize learning at edges, areas which are typically more difficult to learn. 
We choose to use the gradient of the image, G ∈ R H×W rather than depth, in order to better capture edges. Despite more noise at non-edge areas, it can capture more plane edges as some plane instances can have the same depth but still represent different surfaces (e.g. picture frame on a wall). This addition results in cross-task improvements for segmentation mask prediction in the case of the multi-view model (see Section 3.3).\nL depth,geom = 1 N N ∑ i=1 G i * L i (6)\nThe total loss for plane guidance is\nL P = L plane + L sur f ace + L geom + L depth ,(7)\nand the final combined losses:\nL total = L M * w M + L C + L P .(8)" }, { "figure_ref": [], "heading": "Multiview Plane Feature Guidance", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce our multi-view guidance approach, depicted in Fig 3 . We take neighbouring image pairs, which we denote by source and neighbouring view (I s , I n ), and extract the corresponding 2D features. The two finest pyramid feature maps are fused to generate plane features f ∈ R 1 4 H× 1 4 W ×C . We backproject the neighbouring feature f N to the corresponding location of the source view using bilinear interpolation. This process uses the ground truth depth, intrinsic parameters, and the relative transform between the views to obtain the warped 2D coordinates, from which we obtain the out-projection mask. We then decode the warped neighbouring feature fN with the plane prediction head to get the corresponding plane parameters. It is important to note that fN contains plane information of the neighbouring view, under the camera coordinates of I n . Therefore, we transform the decoded plane parameters to the source view's camera coordinates before comparing to ground truth. This transformation is given by:\nns = R n n , ds = d n + n n T • t,(9)\nwhere (R, t) represents the rotation matrix and translation vector from neighbour to source view, and\n(n n , d n ) are the normal and offset in the neighbouring view. We then calculate an additional plane loss L P using the transformed plane parameters decoded from the warped feature, excluding from the loss areas that are occluded or fall outside of the 2D image coordinates using the out-projection mask. " }, { "figure_ref": [], "heading": "Instance Plane Soft-Pooling", "publication_ref": [], "table_ref": [], "text": "To obtain the final instance level plane parameters, we use a soft-pooling technique which only considers per-pixel parameters within the area of the predicted instance. We found that restricting the pooling to this binary area yields better results compared to using soft-pooling across all pixel locations. We opted to not use an instance level plane loss as it negatively impacts the learning of mask segmentation. We generate a binary segmentation mask by applying a threshold to the predicted soft mask, denoted as m * ∈ [0, 1]. The instance level parameter can be retrieved by\np ins = ∑ M i=1 m * i p * i ∑ M i=1 m * i , (10\n)\nwhere M represents all the pixels falling within the region indicated by the binary segmentation mask, and p * i the predicted plane parameter at the corresponding location.\nWe evaluate various configurations of our model as well as comparison models. 
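Before detailing these configurations, the supervision of Section 3.2 can be summarized in a short sketch combining the plane parameter loss, the surface (cosine) loss, the plane-induced depth loss, and the plane-structure loss, with optional gradient edge weighting (Eqs. (1)–(6)). It assumes PyTorch tensors and the p = n · d parameterization; the numerical guards, tensor shapes, and function name are our own, and the code is illustrative rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def plane_losses(pred, gt_plane, gt_depth, K_inv, grad_weight=None):
    """pred / gt_plane: (B, 3, H, W) plane parameters p = n * d,
    gt_depth: (B, 1, H, W), K_inv: (B, 3, 3) inverse intrinsics,
    grad_weight: optional (B, 1, H, W) image-gradient weighting (Eq. 6)."""
    B, _, H, W = pred.shape

    # Eq. (1): L1 distance between predicted and ground-truth plane parameters.
    l_plane = (pred - gt_plane).abs().mean()

    # Eq. (2): cosine distance, equivalent to a surface-normal loss for p = n * d.
    l_surface = (1 - F.cosine_similarity(pred, gt_plane, dim=1)).mean()

    # Back-project the pixel grid: q = (u, v, 1), ray = K^-1 q.
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    q = torch.stack([u, v, torch.ones_like(u)], dim=0).float().to(pred.device)
    rays = torch.einsum("bij,jhw->bihw", K_inv, q)                  # (B, 3, H, W)

    # Decompose the prediction into offset d = |p| and unit normal n = p / |p|.
    d_pred = pred.norm(dim=1, keepdim=True).clamp_min(1e-6)
    n_pred = pred / d_pred

    # Eqs. (3)-(4): plane-induced depth D* = d / (n^T K^-1 q) and its L1 error.
    denom = (n_pred * rays).sum(dim=1, keepdim=True).abs().clamp_min(1e-6)
    depth_pred = d_pred / denom
    l_depth = (gt_depth - depth_pred).abs()

    # Eq. (5): plane-structure loss |n^T Q - d| with Q = D K^-1 q.
    Q = gt_depth * rays
    l_geom = ((n_pred * Q).sum(dim=1, keepdim=True) - d_pred).abs()

    if grad_weight is not None:    # Eq. (6): emphasize edge regions.
        l_depth, l_geom = grad_weight * l_depth, grad_weight * l_geom

    # Eq. (7): total plane supervision L_P.
    return l_plane + l_surface + l_depth.mean() + l_geom.mean()
```

In the full model, this L_P term is combined with the mask and category losses as in Eq. (8).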
The nomenclature for our model versions is as follows: SOLOP-5lvls is a single view version using the original 5 feature levels for prediction, SOLOP-SV refers to the singleview model trained on 60,000 samples, SOLOP-MV is the multi-view model trained on 30,000 pairs, and SOLOP-MV-gw incorporates gradient edge loss weighting into the multi-view model. Qualitative results are obtained using the last configuration, as it achieved the best performance." }, { "figure_ref": [], "heading": "Setup & Training details", "publication_ref": [ "b33" ], "table_ref": [], "text": "For comparison between different model versions, we train a base model initialized with a pretrained ResNet-50 backbone and employ a data augmentation scheme where each sample has a 15% chance of undergoing one of several augmentations, such as a) jitter of brightness, contrast, hue, saturation, b) Planckian jitter (Zini et al., 2023), c) Gaussian noise, or d) motion blur. We use learning rate warm-up for the first 2000 steps starting from a learning rate of 1e-6 and increases until 2e-4. After the initial warmup period, the learning rate is reduced by a factor of 0.1 given no improvement to the validation loss. For quicker and more fair comparison of model variations, a base model with the best validation loss was saved at epoch 9 and used as initialization to our main models, which were trained for 11 additional epochs. We employ early stopping if validation loss fails to improve for 5 consecutive epochs and save the model with best validation performance as well as the last checkpoint. For evaluation, we take the best of either saved model. The additional models trained using the base model initialization do not use data augmentation, and have 500 steps of learning rate warmup starting from 1e-6 to 1e-5. We use a batch size of 32 for the single view model with gradient accumulation to mitigate the higher instability associated with multi-task learning. We train the models on a single NVIDIA Ampere A100 GPU. For evaluation and FPS calculation, we use a single NVIDIA GeForce RTX 3090 GPU for all models." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b6", "b15", "b17" ], "table_ref": [], "text": "For training and evaluation, we use the ScanNet dataset which contains RGB-D images from video sequences, totalling 2.5 million views complete with camera parameters and instance level semantics (Dai et al., 2017). The ground truth plane instance anno-tations for instance masks and plane parameters are generated by the authors of PlaneRCNN, and we follow the same process for filtering and preprocessing the planes (Liu et al., 2019). We also obtain the corresponding plane instance semantics from the metadata of the plane annotations. The ground truth plane data often exhibited issues such as over-segmented, rough edges, or missing plane instances, as planes with a depth error above a 0.1 meter threshold were omitted. For multi-view guidance training, we take sample pairs which are 10 time-steps away. In some cases, a neighbouring ground truth plane image might contain a segment which is missing in the source view, and vice versa. For the single-view model, we use 60,000 random samples from the training set and 10,000 from the validation set. For the multi-view model, we use 30,000 neighboring pairs for training and 5,000 pairs for validation. 2021), primarily due to the speed of prediction and the fact that they predict plane parameters directly using a single image as input. 
Given the inconsistent quality of ground truth plane data, the authors of PlaneMVS manually selected stereo pairs for the test set, which contained samples with more complete plane annotations (Liu et al., 2022). We run our evaluations using the same test set. For a fair comparison, we train the PlaneAE model for a total of 20 epochs using a ResNet-50 backbone and the same data with an input size of 480 x 640. The original model was trained using an input size of 192 x 256, resulting in a higher FPS. To align with our training regimen, we train PlaneAE for 11 epochs using 60,000 samples and an additional 9 epochs with 100,000 random samples. We retain the original training configura- " }, { "figure_ref": [], "heading": "Comparison", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_6" ], "heading": "Evaluation Metrics", "publication_ref": [ "b32", "b15", "b8" ], "table_ref": [ "tab_1" ], "text": "We follow previous methods (Yu et al., 2019;Liu et al., 2019) and calculate the per-pixel depth recall at varying thresholds in meters, shown in Fig. 4. We also calculate standard depth and detection metrics for a comprehensive evaluation of model performance. Average Precision (AP) is used to assess the quality of the predicted masks, and Mean Average Precision (mAP) takes into account the semantic labels by averaging AP across class categories. For depth metrics, we use Absolute Relative Difference (AbsRel), Squared Relative Difference (SqRel), Root Mean Squared Error (RMSE), log RMSE, and delta accuracy (Eigen et al., 2014). We also calculate model efficiency using Frames Per Second (FPS). The results of these evaluations are summarized in Table 1, which shows a marked improvement using our architecture." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "The task of segmentation becomes more challenging when predicting multiple classes, as overlapping masks from different classes are less likely to be suppressed. The oversegmentation issue appears to be more pronounced in the single view model, whereas multi-view guidance using plane features helped to produce more complete and less oversegmented masks. This improvement is likely attributable to feature sharing and the correlation between ground truth plane instance masks and plane " }, { "figure_ref": [ "fig_6", "fig_9", "fig_10", "fig_10", "fig_9" ], "heading": "parameters.", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Despite using multi-view guidance on plane predictions, we observe an objective improvement in prediction of segmentation masks. We hypothesize that this is especially effective when adjacent views have disparate ground truth data, such as in the case of missing annotations. This would explain the similar performance with regards to depth metrics between SOLOP-SV and multiview variants, as the ground truth depth is fairly stable across views. Ground truth mask completeness can differ across neighbouring views due to lower quality segments being filtered out. Even though the variants using multi-view guidance saw a lower diversity of scenes compared to the single view version, it nevertheless outperforms the single view variant on the task of mask segmentation.\nQuantitative All results are obtained using the selected test set chosen by the authors of PlaneMVS. The authors Liu et al. (2022) manually selected a higher quality set to evaluate on due to the incomplete and imprecise nature of the ground truth plane annotations. 
The resulting test set contains 949 image pairs. Our quantitative findings from model comparisons, summarized in Table 1, indicate that our multi- view model variant not only matches the performance of the single-view model in depth metrics, but also shows a significant improvement in detection metrics. This demonstrates the efficacy and improved data efficiency in using multi-view guidance via warping in feature space, at least in the case of using shared features for multitask learning. Since all SOLOP variants use a single image at inference time, the FPS result is the same for the versions of the model using 3 feature levels (SOLOP-SV, SOLOP-MV, SOLOP-MV-gw), but significantly reduced for the version with the original 5 level architecture (SOLOP-5lvls). SOLOP-MV-gw achieves better depth recall comparatively (see Fig. 4), while all SOLOP variants outperform the comparison models on standard metrics.\nQualitative We display different types of visual results from our best model in Figures 1, 5, and6. In contrast to previous works that predicted a binary plane indication, the incorporation of multi-class semantics introduces an added complexity. The change made to the focal loss for category predictions (see Section 3.2) leads to more confident scoring as well as a potential increase in false positives, which is already exacerbated in the case of multi-class predictions. However, we found that raising the score threshold for the final masks partially mitigated this issue. See Fig. 6 for visual results. The structure of the scene is easier to predict than the exact depth, a challenge presented when using a single image for inference. Sample visualizations of the semantic pre-dictions can be found in Fig. 5. Cases of oversegmentation can occur due to prediction of different classes, or different plane orientation, as each mask represents a planar segment associated with a class label. Overall, our model demonstrates robust performance both visually and quantitatively for the task of planar reconstruction with semantic labels." }, { "figure_ref": [], "heading": "DISCUSSION AND FUTURE WORK", "publication_ref": [], "table_ref": [], "text": "In this work, we introduce SOLOPlanes, a real-time semantic planar reconstruction network which shows cross-task improvement when using multi-view guidance in feature space. The task of predicting plane parameters from a single image is non-trivial, and the complexity is further compounded by multi-task learning. Despite these challenges, our model competes favorably with other, less efficient methods in planar reconstruction that do not offer semantic predictions. To the best of our knowledge, our model also outperforms all other planar reconstruction models in computational efficiency, measured using FPS. Our work advances semantic plane instance segmentation without sacrificing computational efficiency, striking a balance between efficiency and performance. We hope it will serve as an inspiration or stepping stone for further research geared towards applications with real-world impact." } ]
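For reference, the evaluation protocol used above (per-pixel depth recall at metric thresholds, together with the standard depth metrics of Eigen et al. (2014)) can be sketched as follows; the threshold values and the implementation are generic assumptions for illustration, not the authors' evaluation code.

```python
import numpy as np

def depth_metrics(pred, gt, mask=None, recall_thresholds=(0.05, 0.1, 0.2, 0.3, 0.6)):
    """Per-pixel depth recall at absolute-error thresholds (in meters) plus
    standard depth metrics (AbsRel, SqRel, RMSE, log RMSE, delta accuracies).
    Threshold values here are placeholders."""
    if mask is None:
        mask = gt > 0                      # evaluate only on valid depth pixels
    pred, gt = pred[mask], gt[mask]

    abs_err = np.abs(pred - gt)
    recall = {t: float((abs_err < t).mean()) for t in recall_thresholds}

    abs_rel = float((abs_err / gt).mean())
    sq_rel = float(((pred - gt) ** 2 / gt).mean())
    rmse = float(np.sqrt(((pred - gt) ** 2).mean()))
    log_rmse = float(np.sqrt(((np.log(np.clip(pred, 1e-6, None)) - np.log(gt)) ** 2).mean()))

    ratio = np.maximum(pred / gt, gt / pred)
    deltas = {f"delta<1.25^{k}": float((ratio < 1.25 ** k).mean()) for k in (1, 2, 3)}

    return {"recall": recall, "abs_rel": abs_rel, "sq_rel": sq_rel,
            "rmse": rmse, "log_rmse": log_rmse, **deltas}
```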
Piece-wise planar 3D reconstruction simultaneously segments plane instances and recovers their 3D plane parameters from an image, which is particularly useful for indoor or man-made environments. Efficient reconstruction of 3D planes coupled with semantic predictions offers advantages for a wide range of applications requiring scene understanding and concurrent spatial mapping. However, most existing planar reconstruction models either neglect semantic predictions or do not run efficiently enough for real-time applications. We introduce SOLOPlanes, a real-time planar reconstruction model based on a modified instance segmentation architecture that simultaneously predicts semantics for each plane instance, along with plane parameters and piece-wise plane instance masks. We improve instance mask segmentation by including multi-view guidance for plane predictions in the training process. This cross-task improvement, where training for plane prediction improves mask segmentation, arises from the nature of feature sharing in multi-task learning. Our model predicts semantics from a single image at inference time while achieving real-time performance at 43 FPS.
Multi-task Planar Reconstruction with Feature Warping Guidance
[ { "figure_caption": "al., 2018a; Yu et al., 2019; Liu et al., 2019; Xie et al., 2021b,a). While existing works have made strides in predicting piece-wise instance masks and plane parameters, they often ignore the", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Comparison of SOLOPlanes output with ground truth (GT). 3D projections using predicted plane parameters (left) and GT depth (right). Textures use RGB (top), predicted semantics (bottom left), and GT semantics (bottom right).", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "fromLiu et al. (2018b).", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Simplified overview of SOLOPlanes architecture.", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "The model is trained on the largescale public ScanNet dataset containing indoor scenes from Dai et al. (2017), supplemented with ground truth plane annotations from Liu et al. (2019).", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Overview of the feature warping guidance.", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Per-pixel recall at varying depth thresholds in meters.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Our model is most comparable to the PlaneAE model from Yu et al. (2019) and PlaneTR model from Tan et al. (", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "tion of the authors(Yu et al., 2019). We use the same approach for retraining the PlaneTR model, and generate the required line segments using HAWPv3(Xue et al., 2023). While PlaneRCNN also takes a single image at inference time, its slower inference speed makes it a less direct comparison. We run evaluations on the provided model from authorsLiu et al. (2019).", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Visualization of semantic predictions using the SOLOP-MV-gw model.", "figure_data": "", "figure_id": "fig_9", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Qualitative results of instance plane and semantic prediction using model with best performance, SOLOP-MV-gw. From left to right: Input image, GT planes, predicted planes, GT depth, predicted depth, predicted semantics.", "figure_data": "", "figure_id": "fig_10", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Model comparison results on ScanNet dataset for variations of SOLOP model and other single-image planar reconstruction methods. 
AbsRel↓ SqRel↓ RMSE↓ log RMSE↓ δ < 1.25 ↑ δ 2 < 1.25 ↑ δ 3 < 1.25 ↑ AP mAP", "figure_data": "Method | AbsRel↓ | SqRel↓ | RMSE↓ | log RMSE↓ | δ<1.25↑ | δ2<1.25↑ | δ3<1.25↑ | AP | mAP | FPS
PlaneAE | 0.181 | 0.092 | 0.325 | 0.208 | 0.746 | 0.931 | 0.983 | - | - | 17
PlaneTR | 0.178 | 0.133 | 0.365 | 0.215 | 0.768 | 0.930 | 0.977 | - | - | 15
PlaneRCNN | 0.165 | 0.070 | 0.278 | 0.187 | 0.780 | 0.954 | 0.991 | 0.193 | - | 7
SOLOP-5lvls* | 0.143 | 0.059 | 0.276 | 0.185 | 0.813 | 0.960 | 0.990 | 0.416 | 0.314 | 38
SOLOP-SV* | 0.134 | 0.052 | 0.259 | 0.178 | 0.832 | 0.964 | 0.991 | 0.389 | 0.267 | 43
SOLOP-MV* | 0.136 | 0.054 | 0.261 | 0.177 | 0.832 | 0.962 | 0.991 | 0.427 | 0.344 | 43
SOLOP-MV-gw* | 0.133 | 0.052 | 0.259 | 0.177 | 0.833 | 0.964 | 0.992 | 0.434 | 0.347 | 43
(* = Ours; depth metrics: AbsRel through δ3; detection metrics: AP, mAP)", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
Wei Luan; Anna Hilsmann; Peter Eisert
[ { "authors": "T Asanomi; K Nishimura; R Bise", "journal": "", "ref_id": "b0", "title": "Multiframe attention with feature-level warping for drone crowd tracking", "year": "2023" }, { "authors": "R Caruana", "journal": "Machine learning", "ref_id": "b1", "title": "Multitask learning", "year": "1997" }, { "authors": "Q Chen; Y Wang; T Yang; X Zhang; J Cheng; J Sun", "journal": "", "ref_id": "b2", "title": "You only look one-level feature", "year": "2021" }, { "authors": "R Chen; S Han; J Xu; H Su", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b3", "title": "Visibilityaware point-based multi-view stereo network", "year": "2020" }, { "authors": "J M Coughlan; A L Yuille", "journal": "Neural Computation", "ref_id": "b4", "title": "Manhattan World: Orientation and Outlier Detection by Bayesian Inference", "year": "2003" }, { "authors": "M Crawshaw", "journal": "", "ref_id": "b5", "title": "Multi-task learning with deep neural networks: A survey", "year": "2020" }, { "authors": "A Dai; A X Chang; M Savva; M Halber; T Funkhouser; M Nießner", "journal": "IEEE", "ref_id": "b6", "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes", "year": "2017" }, { "authors": "Y Ding; W Yuan; Q Zhu; H Zhang; X Liu; Y Wang; X Liu", "journal": "", "ref_id": "b7", "title": "Transmvsnet: Global context-aware multi-view stereo network with transformers", "year": "2022" }, { "authors": "D Eigen; C Puhrsch; R Fergus", "journal": "", "ref_id": "b8", "title": "Depth map prediction from a single image using a multi-scale deep network", "year": "2014" }, { "authors": "E Guo; Z Chen; Y Zhou; D O Wu", "journal": "Sensors", "ref_id": "b9", "title": "Unsupervised learning of depth and camera pose with feature map warping", "year": "2021" }, { "authors": "K He; G Gkioxari; P Dollár; R Girshick", "journal": "", "ref_id": "b10", "title": "Mask r-cnn", "year": "2017" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b11", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "S Im; H.-G Jeon; S Lin; I.-S Kweon", "journal": "ICLR", "ref_id": "b12", "title": "Dpsnet: End-to-end deep plane sweep stereo", "year": "2019" }, { "authors": "T.-Y Lin; P Dollár; R Girshick; K He; B Hariharan; S Belongie", "journal": "", "ref_id": "b13", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "T.-Y Lin; P Goyal; R Girshick; K He; P Dollár", "journal": "", "ref_id": "b14", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "C Liu; K Kim; J Gu; Y Furukawa; J Kautz", "journal": "", "ref_id": "b15", "title": "Planercnn: 3d plane detection and reconstruction from a single image", "year": "2019" }, { "authors": "C Liu; J Yang; D Ceylan; E Yumer; Y Furukawa", "journal": "", "ref_id": "b16", "title": "Planenet: Piece-wise planar reconstruction from a single rgb image", "year": "2018" }, { "authors": "J Liu; P Ji; N Bansal; C Cai; Q Yan; X Huang; Y Xu", "journal": "", "ref_id": "b17", "title": "Planemvs: 3d plane reconstruction from multi-view stereo", "year": "2022" }, { "authors": "R Liu; J Lehman; P Molino; F Petroski Such; E Frank; A Sergeev; J Yosinski", "journal": "", "ref_id": "b18", "title": "An intriguing failing of convolutional neural networks and the coordconv solution", "year": "2018" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b19", "title": "", "year": "" }, { "authors": "Y Qian; Y Furukawa", "journal": "", "ref_id": "b20", 
"title": "Learning pairwise interplane relations for piecewise planar reconstruction", "year": "2020" }, { "authors": "T Standley; A Zamir; D Chen; L Guibas; J Malik; S Savarese", "journal": "PMLR", "ref_id": "b21", "title": "Which tasks should be learned together in multi-task learning?", "year": "2020" }, { "authors": "B Tan; N Xue; S Bai; T Wu; G.-S Xia", "journal": "", "ref_id": "b22", "title": "Planetr: Structure-guided transformers for 3d plane recovery", "year": "2021" }, { "authors": "X Wang; T Kong; C Shen; Y Jiang; L Li", "journal": "Springer", "ref_id": "b23", "title": "Solo: Segmenting objects by locations", "year": "2020-08-23" }, { "authors": "X Wang; R Zhang; T Kong; L Li; C Shen", "journal": "Advances in Neural information processing systems", "ref_id": "b24", "title": "Solov2: Dynamic and fast instance segmentation", "year": "2020" }, { "authors": "W Xi; X Chen", "journal": "Computational Visual Media", "ref_id": "b25", "title": "Reconstructing piecewise planar scenes with multi-view regularization", "year": "2019" }, { "authors": "Y Xie; M Gadelha; F Yang; X Zhou; H Jiang", "journal": "", "ref_id": "b26", "title": "Planarrecon: Real-time 3d plane detection and reconstruction from posed monocular videos", "year": "2022" }, { "authors": "Y Xie; J Rambach; F Shu; D Stricker", "journal": "IEEE", "ref_id": "b27", "title": "Planesegnet: Fast and robust plane estimation using a single-stage instance segmentation cnn", "year": "2021" }, { "authors": "Y Xie; F Shu; J R Rambach; A Pagani; D Stricker", "journal": "", "ref_id": "b28", "title": "Planerecnet: Multi-task learning with crosstask consistency for piece-wise plane detection and reconstruction from a single rgb image", "year": "2021" }, { "authors": "N Xue; T Wu; S Bai; F.-D Wang; G.-S Xia; L Zhang; P H S Torr", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b29", "title": "Holistically-attracted wireframe parsing: From supervised to self-supervised learning", "year": "2023" }, { "authors": "F Yang; Z Zhou", "journal": "Cham. Springer International Publishing", "ref_id": "b30", "title": "Recovering 3d planes from a single image via convolutional neural networks", "year": "2018" }, { "authors": "Y Yao; Z Luo; S Li; T Fang; L Quan", "journal": "", "ref_id": "b31", "title": "Mvsnet: Depth inference for unstructured multi-view stereo", "year": "2018" }, { "authors": "Z Yu; J Zheng; D Lian; Z Zhou; S Gao", "journal": "", "ref_id": "b32", "title": "Single-image piece-wise planar 3d reconstruction via associative embedding", "year": "2019" }, { "authors": "S Zini; A Gomez-Villa; M Buzzelli; B Twardowski; A D Bagdanov; J Van De Weijer", "journal": "", "ref_id": "b33", "title": "Planckian jitter: countering the color-crippling effects of color jitter on self-supervised training", "year": "2023-05-01" } ]
[ { "formula_coordinates": [ 4, 365.37, 332.81, 152.37, 27.26 ], "formula_id": "formula_0", "formula_text": "L plane = 1 N N ∑ i=1 ∥p i -p * i ∥. (1" }, { "formula_coordinates": [ 4, 517.74, 341.89, 3.87, 8.64 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 4, 314.54, 442.76, 207.08, 27.26 ], "formula_id": "formula_2", "formula_text": "sim i = p i • p * i ∥p i ∥∥p * i ∥ , L sur f ace = 1 N N ∑ i=1 1 -sim i (2)" }, { "formula_coordinates": [ 4, 377.25, 527.32, 144.36, 26.89 ], "formula_id": "formula_3", "formula_text": "D * i = d * i n * T i • K -1 q i ,(3)" }, { "formula_coordinates": [ 4, 365.37, 610.94, 156.25, 27.26 ], "formula_id": "formula_4", "formula_text": "L depth = 1 N N ∑ i=1 |D i -D * i |.(4)" }, { "formula_coordinates": [ 5, 125.79, 127.08, 160.53, 27.26 ], "formula_id": "formula_5", "formula_text": "L geom = 1 N N ∑ i=1 n * T i • Q i -d * i (5)" }, { "formula_coordinates": [ 5, 128.68, 305.45, 157.64, 27.26 ], "formula_id": "formula_6", "formula_text": "L depth,geom = 1 N N ∑ i=1 G i * L i (6)" }, { "formula_coordinates": [ 5, 101.99, 358.75, 184.33, 9.81 ], "formula_id": "formula_7", "formula_text": "L P = L plane + L sur f ace + L geom + L depth ,(7)" }, { "formula_coordinates": [ 5, 124.21, 395.62, 162.11, 9.81 ], "formula_id": "formula_8", "formula_text": "L total = L M * w M + L C + L P .(8)" }, { "formula_coordinates": [ 5, 116.01, 680.1, 170.31, 12.07 ], "formula_id": "formula_9", "formula_text": "ns = R n n , ds = d n + n n T • t,(9)" }, { "formula_coordinates": [ 5, 377.58, 625.37, 139.88, 27.49 ], "formula_id": "formula_10", "formula_text": "p ins = ∑ M i=1 m * i p * i ∑ M i=1 m * i , (10" }, { "formula_coordinates": [ 5, 517.46, 634.47, 4.15, 8.64 ], "formula_id": "formula_11", "formula_text": ")" } ]
2024-02-25
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b3", "b7", "b8", "b12", "b6", "b13", "b20", "b4", "b21", "b22", "b23", "b25", "b23", "b23", "b26", "b22", "b27", "b28", "b29", "b30", "b12", "b17", "b31", "b32", "b32", "b0", "b12", "b17" ], "table_ref": [], "text": "Registration is a fundamental step in many medical image applications [1], including atlas-based segmentation [2], [3], longitudinal lesion quantification [4], image-guided radiotherapy [5], [6], and computer-aided diagnosis with multi-modality fusion [7]. Its goal is to find the spatial correspondence L. Tian and M. Niethammer are with the University of North Carolina at Chapel Hill. This work was done during L. Tian's internship at Alibaba Group. Z. Li, X. Bai, L. Lu, K. Yan and D. Jin are with Alibaba Group. F. Liu is with Johns Hopkins University. J. Ge and X. Ye are with The First Affiliated Hospital of Zhejiang University.\nL. Tian and Z. Li have contributed equally to this work. Correspondence: K. Yan and D. Jin (email: yankethu@gmail.com and dakai.jin@gmail.com) between pairs or series of medical images. Based on the complexity of the deformation, spatial correspondences can be represented by a low dimensional parametric transformation [4] (e.g., for rigid or affine registration), a high-dimensional parametric/non-parametric transformation with many degrees of freedom [8] (e.g., represented by a spline, a displacement field, or a velocity field), or a composition of the two.\nThere are mainly two kinds of registration methods. One formulates image registration as an optimization problem [9]- [13], aiming at finding the optimal transformation parameters that minimize the dissimilarity between the warped image and the fixed image, subject to certain regularity constraints. The associated optimization problems are generally non-convex and lack a closed-form solution. Thus, iterative optimization algorithms (typically based on a form of gradient descent) are commonly adopted. To reduce the computation time, learningbased methods [7], [14]- [21] have recently been proposed. The training of these learning-based methods also relies on optimization guided by the similarity measure and typically a regularizer. For both optimization-based or learning based methods, the similarity measures are computed either on image intensities directly or on hand-crafted features (e.g. SSC [5], MIND [22] and attribute vectors [23]) that are designed to incorporate structural information. Another body of work [24]- [26] involves finding corresponding key-points in two images based on a similarity measure computed over local features (e.g., surface norm [24], curvature characteristics [24] and histogram of oriented gradients [27]), followed by estimating the deformation directly from the corresponding key-points.\nBoth types of methods rely on the similarity measures computed either directly on intensity values or using handcrafted features. Such measures may have limited capacity to accurately capture anatomical semantic similarity due to a lack of anatomical information in these measures. As a result, either optimizing non-convex registration energies or searching for corresponding key-points may lead to sub-optimal solutions where two voxels containing similar local structures but belonging to different anatomical regions are mismatched due to lack of anatomical semantic information. 
This could lead to severe issues when solving large deformation registration problems, where usually an affine transformation is estimated first and followed by a more flexible registration allowing for local deformations. In this case, a sub-optimal solution of the affine transformation estimated based on local features may not be able to provide a suitable initialization for the subsequent registration and may negatively affect the overall registration accuracy. The lack of anatomical information can be solved by extracting features from an anatomical segmentation (label) map [23]. But annotating segmentation maps is not a trivial task.\nTo overcome the discussed issues, we explore incorporating the Self-supervised Anatomical eMbedding (SAM) [28] approach into registration methods. Instead of relying on manually designing local features, SAM learns a unique embedding for each voxel in the image via self-supervised contrastive learning and provides both an anatomical semantic representation and a local structure representation without using of segmentation maps. It is capable of directly finding matches between anatomical key points of two images. Moreover, it transforms the input image from the intensity space to a common feature space reducing possible contrast differences due to various imaging protocols or acquisition devices. The most straightforward way to incorporate SAM into registration is to densely extract SAM embeddings from both moving and fixed images. For each voxel in the moving image, we can then search for the matching point in the fixed image based on the most similar SAM embedding, and calculate the coordinate offsets for each pair of matching voxels as the displacement. However, this approach is computationally expensive. E.g., there are millions of voxels in a typical 3D computed tomography (CT) image. Moreover, potential mismatched voxel pairs may greatly affect the regularity of the transformation and registration accuracy.\nWe propose SAM-Enhanced registration (SAME++), a four stage registration framework that is based on the selfsupervised pre-trained SAM. At each stage, we tailor the approach and utilize SAM in alignment with the specific traits of that stage, ultimately resulting in an accurate and fast registration framework. First, we introduce SAM-affine, which involves the extraction of a set of corresponding points based on SAM. These key points are subsequently used to compute the affine transformation. Following that, we present SAM coarse deformation step. This stage is focused on estimation of a coarse deformation field given the key points found in the previous step. Notably, this step requires no additional training and serves as a favorable initialization for subsequent stages. Next, we introduce SAM-deform, where we train a registration neural network to predict a dense transformation field. In this step, we enhance the network's capabilities by integrating SAM-based correlation features and leveraging a similarity measure within the SAM feature space. Lastly, we employ a SAM-based instance optimization module to counter the common generalization issue of learning-based registration methods caused by the small datasets in medical image registration.\nWe extensively evaluate SAME++ on more than 50 labeled organs in three challenging inter-subject registration tasks of different body parts (head & neck, chest, and abdomen). 
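The brute-force matching idea described above (for each moving-image voxel, pick the fixed-image voxel with the most similar SAM embedding and read off the coordinate offset) can be sketched as follows, which also makes clear why the dense variant is expensive: the similarity matrix grows with the number of query points times the number of fixed-image voxels. The PyTorch tensors, shapes, and function name are assumptions of this illustration, not the released SAM or SAME++ code.

```python
import torch
import torch.nn.functional as F

def match_in_sam_space(feat_m, feat_f, query_idx):
    """Brute-force SAM matching: for each queried moving-image voxel, return the
    fixed-image voxel with the highest embedding similarity (dot product of
    L2-normalized features) together with that similarity.

    feat_m, feat_f: (C, D, H, W) L2-normalized SAM feature maps.
    query_idx:      (N, 3) integer (z, y, x) voxel coordinates in the moving image.
    """
    C, D, H, W = feat_f.shape
    fixed_flat = feat_f.reshape(C, -1)                                      # (C, D*H*W)
    queries = feat_m[:, query_idx[:, 0], query_idx[:, 1], query_idx[:, 2]]  # (C, N)
    sim = queries.t() @ fixed_flat                                          # (N, D*H*W): the expensive part
    best_sim, best_flat = sim.max(dim=1)
    z = best_flat // (H * W)                                                # unravel flat index to (z, y, x)
    y = (best_flat % (H * W)) // W
    x = best_flat % W
    return torch.stack([z, y, x], dim=1), best_sim

# Toy usage with unit-norm random features on an 8x16x16 grid.
fm = F.normalize(torch.randn(32, 8, 16, 16), dim=0)
ff = F.normalize(torch.randn(32, 8, 16, 16), dim=0)
coords, sims = match_in_sam_space(fm, ff, torch.tensor([[2, 5, 7], [4, 9, 3]]))
```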
We compare SAME++ with two widely-used optimization-based methods (Elastix [29], DEEDS [30], [31]) and two registration methods (ConvexAdam [13] and LapIRN [18]) that achieved the top rankings in a recent Learn2Reg challenge [32]. Quantitative results show the superiority of SAME++: for affine registration, SAM-affine significantly outperforms the widelyused affine transformation techniques in terms of Dice score by at least 4.4%, 6.0%, and 8.5% for three inter-patient registration tasks, respectively. SAM-deform achieves the overall best performance as compared with four top-ranked conventional and learning-based deformable methods under the same prealignment condition. As a complete registration framework (from SAM-affine to SAM-deform), SAME++ markedly outperforms the leading methods in terms of Dice scores by 4.2% -8.2% averaged on three registration tasks.\nThis work extends our previous preliminary work SAME [33]. Compared with [33], substantial extensions are made in terms of methodology and comprehensive experimental evaluations: (1) we propose a stable sampling strategy based on cycle consistency to eliminate potential false correspondence matches of SAM embeddings in SAM-affine;\n(2) a regularization constraint in SAM-coarse is introduced to significantly reduce the folding rate while improving registration accuracy; (3) we incorporate diffeomorphic transformations, e.g., by using a stationary velocity field, in SAMdeform to guarantee desirable diffeomorphic properties; the deformation map is further finetuned by an auxiliary instance optimization module; (4) we conduct extensive experiments on three datasets (with more than 50 labeled organs) of different body parts to validate the performance and to compare to recent leading registration methods, such as ConvexAdam [13] and LapIRN [18]. The paper is organized as follows. Sec. II describes related work. Sec. III introduces the background and Sec. IV describes our framework. We show experimental results in Sec. V. Sec. VI provides conclusions." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Medical Image features", "publication_ref": [ "b4", "b21", "b22", "b4", "b21", "b22", "b33", "b34", "b36", "b37", "b38", "b39", "b27" ], "table_ref": [], "text": "Several hand-crafted descriptors [5], [22], [23] have been proposed for medical image registration. SSC [5] and MIND [22] aim at countering intensity differences between different modalities. In [23], an attribute vector containing the local edge types, image intensity, and geometric moment invariants is proposed to represent the geometric structure of the underlying anatomy. Long et al. [34] explore the intermediate features from a classification convolutional neural network (CNN) and found that the learned features can be used to find correspondence between object instances and performs on par with hand-crafted features. Further, more work [35]- [37] has been conducted to learn dense features in a supervised training scheme, demonstrating improvements of learned CNN features over hand-crafted features. This trend has been further pursued via self-supervised visual feature learning [38], where a pre-text task is specifically designed to learn the image representations with unlabelled data. Among which, VADeR [39] and DenseCL [40] have been developed to learn dense representations of natural images that can be used to discriminate instances across categories (e.g., dogs vs cats). 
Different from natural images, in medical images one would generally like to discriminate the anatomical structures instead of the subjects (patients). To achieve this goal, SAM [28] proposes to learn a dense representation for medical images that contains anatomical semantic information and shows the discriminativeness of the learned features by matching corresponding landmarks on several anatomical structures (e.g., chest, hand and pelvic)." }, { "figure_ref": [], "heading": "B. Affine Registration", "publication_ref": [ "b14", "b40", "b46", "b47", "b48", "b51", "b50", "b51", "b47", "b52", "b14", "b43", "b46", "b46" ], "table_ref": [], "text": "Affine registration has been extensively studied in medical image registration [15], [41]- [47], natural image matching [48] and point set registration [49]- [52]. In medical image registration, the problem is generally solved by formulating an optimization problem with the affine transformation being the parameters and an intensity-based similarity being the cost function. In the point set domain, iterative closest point (ICP) [51], [52] iterates over the following three steps: (1) finding a set of matched closest point pairs according to the Euclidean distance, (2) estimating the affine registration parameters via least square fitting, and (3) updating the source point sets. For natural images, hand-crafted local features [48], [53] have been used to find matched point pairs. These approaches rely on either local image intensities, local features or metrics that do not take the anatomical semantic information into consideration. Recently, a number of learning-based affine registration methods have been proposed [15], [44]- [47]. They show on par or better performance than conventional unsupervised affine registration approaches. However, they require training for each anatomical region or modality and show weaker generalizability than conventional method [47]." }, { "figure_ref": [], "heading": "C. Learning-based Registration", "publication_ref": [ "b8", "b9", "b12", "b6", "b13", "b14", "b16", "b18", "b43", "b44", "b13", "b16", "b6", "b14", "b17", "b18", "b43", "b44", "b53", "b54", "b55", "b16", "b55" ], "table_ref": [], "text": "Traditional deformable registration methods [9], [10], [13] solve an optimization problem and iteratively minimize a similarity measure, often jointly with a regularizer, to align a pair of images. Recently, learning-based deformable registration [7], [14], [15], [17]- [19], [44], [45], using deep networks, has been investigated. Compared with optimizationbased approaches learning-based methods are much faster at inference. Quicksilver [14] and Voxelmorph [17] were the initial approaches for supervised and unsupervised medical image registration, where a convolutional neural network predicts a vector-field to directly describe the displacements or to obtain a velocity field from which a transformation can be obtained by integration. To be able to capture large deformations, more recent methods [7], [15], [18], [19], [44], [45], [54], [55] focus on designing sophisticated networks using multi-step approaches, pyramids, cascaded structures or connecting registration to image synthesis or segmentation [56]. Compared to the extensive work on network structures, less work explores the similarity measure. Registration performance can be improved by using the segmentation (label) maps within the loss [17], [56]. 
Compared to similarity measures computed over image intensities, segmentation maps can provide anatomical information during training. However, anatomical segmentation maps are not always available. Different from using explicit anatomical segmentation maps, our work explores using pretrained SAM features, that contain discriminative anatomical semantic information." }, { "figure_ref": [], "heading": "III. BACKGROUND A. Problem formulation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Given two images", "publication_ref": [], "table_ref": [], "text": "I m : Ω m → R and I f : Ω f → R, Ω m ⊂ R n and Ω f ⊂ R n representing\nthe domains of the moving and fixed images respectively, our goal is to find the spatial transformation φ : Ω m → Ω f which makes the warped moving image I m • φ -1 as similar as possible to the fixed image I f . Note that the dimension n of the domains Ω ⊂ R n can be 2D or 3D. In the following sections, we assume n = 3 without loss of generality." }, { "figure_ref": [], "heading": "B. Self-supervised anatomical embedding (SAM) review", "publication_ref": [ "b27", "b56", "b57", "b27" ], "table_ref": [], "text": "SAM [28] is a voxel-wise contrastive learning framework to encode the semantic anatomical information of each voxel, so that the same anatomical location across different images will have similar embeddings. With a coarse-to-fine network structure and a hard-and-diverse negative sampling strategy, SAM learns one global and one local feature embedding per voxel in a given image. Each feature embedding expresses the deterministic semantic representation per voxel. The learned SAM feature has demonstrated efficacy in various downstream tasks, e.g., for anatomical point matching, landmark detection, and longitudinal lesion matching [57], [58].\nBecause of the semantic meaning of SAM features, they can directly be used in registration. For any image I with shape D × H × W , SAM extracts a global feature map and a local feature map with size\nC × D 2 × H 8 × W 8 and C × D 2 × H 2 × W\n2 respectively with C being the dimension of the feature embedding at each voxel. In our work, we adopt the pre-trained SAM from [28], resize the global feature map to the same size C × D 2 × H 2 × W 2 as the local feature map by linear interpolation and normalize the feature embedding via L2 normalization. Then we concatenate the resized global feature map with the local feature map along the channel dimension, resulting in the final SAM feature map that is used in our work. We denote the SAM feature maps of I m and I f as S m and S f , respectively, and define d(•, •) to measure the similarity between two SAM feature embeddings. Since feature embeddings are normalized before concatenation, we use the dot product as the measure d(•, •), which corresponds to cosine similarity. A higher score indicates that the pixels at corresponding locations are anatomically more similar. To be noted, the pre-trained SAM is kept frozen." }, { "figure_ref": [ "fig_0" ], "heading": "IV. METHODS", "publication_ref": [], "table_ref": [], "text": "As shown in Fig. 1, SAME++ consists of four consecutive steps. The initial step involves SAM-affine and SAM-coarse, which aim to seek the optimal affine transformation and a coarse displacement field that can match a set of corresponding points between I m and I f extracted based on a measure d(•, •) in SAM space. 
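A minimal sketch of the SAM feature preparation described above: the global map is trilinearly resized to the local map's grid, both maps are L2-normalized, and the two are concatenated so that the similarity d(·,·) becomes a plain dot product. The channel sizes and function names here are assumptions for illustration; the actual embeddings come from the frozen pre-trained SAM model.

```python
import torch
import torch.nn.functional as F

def fuse_sam_features(global_feat, local_feat):
    """Combine SAM global and local embeddings into one per-voxel descriptor:
    resize the coarser global map to the local map's grid, L2-normalize both,
    and concatenate along the channel axis.

    global_feat: (1, Cg, D/2, H/8, W/8), local_feat: (1, Cl, D/2, H/2, W/2).
    """
    global_up = F.interpolate(global_feat, size=local_feat.shape[2:],
                              mode="trilinear", align_corners=False)
    global_up = F.normalize(global_up, dim=1)   # unit length per voxel
    local_n = F.normalize(local_feat, dim=1)
    return torch.cat([global_up, local_n], dim=1)

def sam_similarity(fa, fb):
    """Voxel-wise similarity d(., .) between two fused SAM maps (dot product)."""
    return (fa * fb).sum(dim=1)

# Toy shapes only; real embedding dimensions come from the pre-trained SAM model.
g = torch.randn(1, 128, 16, 24, 24)
l = torch.randn(1, 128, 16, 96, 96)
fused = fuse_sam_features(g, l)
print(fused.shape)  # torch.Size([1, 256, 16, 96, 96])
```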
Following SAM-affine and SAM-coarse, SAMdeform leverages a neural network to predict a transformation field, which is then optimized using an auxiliary instance optimization module." }, { "figure_ref": [ "fig_0" ], "heading": "A. SAM-Affine", "publication_ref": [ "b6" ], "table_ref": [], "text": "SAM features allow us to extract corresponding points between I m and I f , and to estimate the affine transformation matrix given the corresponding points. To achieve this, we start by selecting an initial set of points {x m |x m ∈ Ω m , m = 0, 1...N } , which are evenly distributed in the domain of the moving image. The most straightforward approach of finding the corresponding points is to search for points x f ∈ Ω f that have the most similar SAM embeddings to the points x m . However, mismatched pairs of points could exist due to inaccurate SAM embeddings. In our preliminary work [7], we address this issue via thresholding the similarity d(•, •) between the corresponding points.\nIn this work, we further reduce the incorrect correspondences via cycle consistency (SSCC) that computes a set of stable SAM matched points from Ω m and Ω f . Specifically, for a point x m ∈ Ω m , we first find the matching point\nx f ∈ Ω f via FINDPOINTS({x m }, S m , S f ) (Alg. 1). Then we compute the corresponding point x ′ m ∈ Ω m to x f via FIND- POINTS({x f }, S f , S m ). Presumably, if (x m , x f ) is a correct matching pair, then x m and x ′\nm should be the same point. Otherwise, there is a high chance that (x m , x f ) is not a corresponding pair. This idea is illustrated in Fig. 1. To rule out the mismatched pairs, we substitute x m with x ′ m and repeat the process for K times. This stable sampling algorithm is outlined in Alg. 1. In Alg. 1, SELECTPOINTS(Ω) is a function to select a set of points in the given image domain. And FINDPOINTS({x k }, S k , S q ) searches on the grids of the query feature maps S q for the point that has the most similar SAM embedding to the feature vector of the key point S k (X k ). In practice, line 3 to line 9 in Alg. 1 are computed in parallel via a convolution operation.\nAfter determining the corresponding points\nX = {(x m , x f )|x m ∈ Ω m , x f ∈ Ω f },\nwe further remove the low-confidence matched pairs by filtering X with a similarity threshold ϵ, resulting in the final set\nX ϵ = {(x m , x f )|d(s m (x m ), s f (x f )) > ϵ, (x m , x f ) ∈ X}.\nWith X ϵ , one can solve the following linear system to obtain the affine transformation matrix:\nAx m = xf ,(1)\nwhere xm and xf are the homogeneous representations of x m and x f , respectively, and A ∈ R 4×4 is the affine transformation matrix." }, { "figure_ref": [], "heading": "B. SAM-coarse", "publication_ref": [ "b6" ], "table_ref": [], "text": "After SAM-affine, we introduce an additional SAM-coarse step to deform the moving image based on the corresponding point pairs X ϵ . Compared to an affine pre-alignment, SAMcoarse provides local warps that can serve as a better initialization for a following learning-based registration. SAM-coarse is implemented in a similar setting as SAM-affine but solves for a coarse displacement field (DVF). Given the set of SAM matched point pairs X ϵ computed in Sec. IV-A, we optimize over a coarse DVF φ -1\nc = x + u(x), φ -1 c : Ω f → Ω m ,\nwhich aims to bring the SAM matched points A -1 x f and x m closer. To be noted, φ -1 c is a coarse DVF with shape 3 × D stride × H stride × W stride . We set stride = 4 in the experiments. 
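Given the filtered matches X_ϵ, the affine matrix of Eq. (1) can be obtained by ordinary least squares on homogeneous coordinates, as sketched below. The cycle-consistency filtering of Alg. 1 is assumed to have been applied beforehand; the similarity-threshold filter is included for completeness. The function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_affine(x_m, x_f, sim=None, eps=0.7):
    """Least-squares affine fit A x_m ≈ x_f in homogeneous coordinates (Eq. 1).

    x_m, x_f: (N, 3) matched coordinates in moving / fixed image space.
    sim:      optional (N,) SAM similarities used to drop low-confidence pairs.
    Returns a 4x4 affine matrix acting on homogeneous coordinates.
    """
    if sim is not None:                        # keep only confident matches (threshold eps)
        keep = sim > eps
        x_m, x_f = x_m[keep], x_f[keep]
    ones = np.ones((x_m.shape[0], 1))
    Xm_h = np.hstack([x_m, ones])              # (N, 4) homogeneous moving points
    Xf_h = np.hstack([x_f, ones])              # (N, 4) homogeneous fixed points
    A_T, *_ = np.linalg.lstsq(Xm_h, Xf_h, rcond=None)   # solves Xm_h @ A.T = Xf_h
    return A_T.T

# Sanity check on a known transform: pure translation by (5, -3, 2).
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(50, 3))
A = fit_affine(pts, pts + np.array([5.0, -3.0, 2.0]))
print(np.round(A, 3))
```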
This step accounts for deformations that cannot be explained by an affine transformation matrix.\nIn our preliminary work [7], we proposed to directly estimate the coarse DVF from the displacement between the corresponding points. However, we found that this might introduce severe irregular deformations. This is because there may be mismatched pairs remaining in the corresponding point Initialize X q = {} 3:\nfor x k ∈ {x k } do 4: Initialize D = {} 5:\nfor Each point x q on the grids of S q do 6:\nAdd d(S k (x k ), S q (x q )) into D" }, { "figure_ref": [ "fig_0" ], "heading": "7:", "publication_ref": [ "b16", "b6", "b58", "b8" ], "table_ref": [], "text": "end for 8:\nAdd the x q corresponding to MAX(D) to X q 9: end for 10:\nreturn X q 11: end function 12: {x m } = SELECTPOINTS(Ω m ) 13: for k ← 1 to K do 14:\n{x f } ← FINDPOINTS({x m }, S m , S f ) 15: {x m } ← FINDPOINTS({x f }, S f , S m ) 16: end for 17: return {x m }, {x f }.\nset, especially when the structure at a point is not deterministic, despite our proposed SSCC approach to remove such pairs. Hence, to address this issue, we improve SAM-coarse via formulating it as an optimization problem and use a regularizer to obtain the coarse DVF. With such a design, SAM-coarse is formulated as\nφc -1 = arg min φc 1 |X| (xm,x f )∈Xϵ ||x m -φ -1 c (A -1 x f )|| 2 2 + 1 |Ω f | y∈Ω f ||∇u(y)|| 2 F .(2)\nC. SAM-deform SAM-affine and SAM-coarse estimate an affine transformation and a coarse displacement field, respectively. To improve the registration accuracy, SAM-deform is developed that aims at estimating a dense non-parametric transformation map φ -1 d (x) given the fixed image I f and a pre-aligned moving image I c m after SAM-affine and SAM-coarse. Deep neural networks often use pure pixel intensity-based features and similarity losses to learn φ -1 d (x), such as normalized crosscorrelation (NCC). However, the NCC loss only compares local image intensities, which may not be robust under CT contrast injection, pathological changes, and large or complex deformations in the two images. On the other hand, the SAM embeddings can uncover semantic similarities between two pixels. Hence, we improve them by leveraging the semantic information contained in SAM embeddings using SAM correlation features and a SAM loss.\nSpecifically, we train a neural network f θ (I f , I c m , S f , S c m ) to predict the dense transformation map, as illustrated in Fig. 1. To train such a network, we propose a loss containing three terms: a similarity term L sim that penalizes the appearance differences, a similarity term L SAM in SAM space, and a regularizer L reg that encourages spatial smoothness of the chosen transformation model. SAM-deform is independent on the choice of similarity measure used in L sim . In the experiment, we test two commonly used similarity measure: NCC and local normalized cross-correlation (LNCC [17]). L SAM is computed over the SAM features, which can be written as:\nL SAM = 1 -d(S c m • φ -1 d , S f ) .(3)\nPenalizing the dissimilarity between the fixed image and the warped moving image could yield a perfect match but with physically unfeasible deformations. Based on the transformation model adopted in the network, the regularizer is therefore required to obtain smooth transformations. In our preliminary work [7], we use DVF as the transformation model. To further advocate a regular transformation. In this work, we adopt the stationary velocity field (SVF)\n∂ϕ(t) ∂t = v(ϕ(t)), ϕ(0) = Id, φ = ϕ(1)(4)\nas the transformation. 
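Returning briefly to the SAM-coarse step, the objective of Eq. (2) can be written as a small optimization over a coarse displacement grid: the pre-aligned fixed points A^{-1}x_f are pulled toward their moving-image matches x_m while a finite-difference penalty keeps the field smooth. The sketch below is schematic: it reads the displacement of the nearest coarse cell rather than interpolating the field trilinearly, and the optimizer settings are assumptions rather than the paper's values.

```python
import torch

def sam_coarse(x_m, x_f_pre, vol_shape, stride=4, iters=200, lam=1.0):
    """Schematic SAM-coarse fit (Eq. 2) of a coarse displacement field u.

    x_m, x_f_pre: (N, 3) float voxel coordinates of matched pairs, x_f_pre = A^{-1} x_f.
    vol_shape:    (D, H, W) of the fixed image; the field lives on a stride-4 grid.
    """
    D, H, W = (s // stride for s in vol_shape)
    u = torch.zeros(3, D, H, W, requires_grad=True)                 # coarse DVF on a stride-4 grid
    idx = (x_f_pre / stride).round().long().clamp(min=0)            # nearest coarse cell per point
    idx[:, 0].clamp_(max=D - 1); idx[:, 1].clamp_(max=H - 1); idx[:, 2].clamp_(max=W - 1)
    opt = torch.optim.Adam([u], lr=0.1)
    for _ in range(iters):
        opt.zero_grad()
        disp = u[:, idx[:, 0], idx[:, 1], idx[:, 2]].t()            # (N, 3) displacement per matched point
        data = ((x_m - (x_f_pre + disp)) ** 2).sum(dim=1).mean()    # ||x_m - phi_c^{-1}(A^{-1} x_f)||^2
        reg = sum((u.diff(dim=dim) ** 2).mean() for dim in (1, 2, 3))  # smoothness of u
        (data + lam * reg).backward()
        opt.step()
    return u.detach()
```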
Govern by the ordinary differential equation constraint, ϕ(0) = Id is the identity transformation and t ∈ [0, 1] represents the integration time. We follow [59] to obtain the final registration field φ via integration using scaling and squaring [9], which recursively computes the solution in successive small time steps. We use seven steps in our experiments. We adopt the following regularization to advocate a smooth velocity field\nL reg = 1 |Ω| x∈Ω ||∇v(x)|| 2 F .(5)\nThe overall loss is\nL = λ 1 (L sim + L SAM ) + λ 2 L reg(6)\nwhere λ 1 and λ 2 is the hyper-parameters to balance the loss terms." }, { "figure_ref": [], "heading": "D. SAM Instance Optimization", "publication_ref": [ "b6" ], "table_ref": [], "text": "Upon our preliminary work [7], we further improve the registration performance by adding instance optimization module, which solves a conventional optimization problem\nφi = arg min φi 1-d(S c m •φ -1 i , S f )+ 1 |Ω| Ω ||∇u i (x)|| 2 2 ,(7)\nwhere the transformation is defined as φ -1 i (x) = x+u(x), and S c m and d(•, •) represent the SAM feature map of the warped image I c m and the dot product between SAM feature vectors. The resulting transformation of SAM-deform φ -1 d are used as the initial transformation in this step.\nThe final transformation field is computed via composition of A, φ c and φ i as defined\nφ -1 = φ -1 a • φ -1 c • φ -1 i , φ -1 a (x) = A -1 x .(8)\nWe use trilinear interpolation in the composition. " }, { "figure_ref": [], "heading": "V. EXPERIMENTS", "publication_ref": [ "b60" ], "table_ref": [], "text": "We evaluate SAME++ on three challenging inter-patient image registration tasks, each of which focuses on a specific body part. We first describe the datasets, preprocessing, evaluation metrics, and the implementation details. Then, a series of ablation results are presented to demonstrate the effectiveness of each step of SAME++. Comparisons with the leading registration methods are also reported. 2) Chest CT: A chest CT dataset of 94 subjects [61] was collected, each with a contrast-enhanced and a non-contrast scan. Each chest CT image has 35 anatomical structures manually labeled (including lung, heart, airway, esophagus, aorta, bones, muscles, arteries, and veins). We randomly split the patients into 74, 10, and 10 as training, validation, and test sets. For validation and testing, 90 image pairs are constructed for inter-patient registration, including intra-phase registration and cross-phase registration. Each image is resampled to an isotropic resolution of 2 mm and cropped to 208 × 144 × 192 (mainly contains the chest region) by removing black borders." }, { "figure_ref": [], "heading": "A. Datasets and", "publication_ref": [ "b31", "b61", "b8" ], "table_ref": [], "text": "3) Abdomen CT: We use the abdominal CT dataset [32], [62] to evaluate the inter-patient registration of abdominal CT images. 1 The dataset contains 30 CTs and we split it with 20 for training/validation and 10 for testing. Each image has 13 manually labeled anatomical structures: spleen, right kidney, left kidney, gall bladder, esophagus, liver, stomach, aorta, inferior vena cava, portal and splenic vein, pancreas, left adrenal gland and right adrenal gland. The images are resampled to the same voxel resolution of 2 mm and cropped to the spatial dimensions of 192×160×256 mainly containing the abdominal region.\nEvaluation metrics: To evaluate the accuracy, we use the average Dice score (DICE) over the labeled organs in each dataset. 
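The scaling-and-squaring integration of the stationary velocity field described in Sec. IV-C (seven steps in the paper) can be written compactly as repeated self-composition of a displacement field. The sketch below assumes the velocity field is stored in voxel units with channel order matching grid_sample's (x, y, z) convention; these conventions and the helper name are choices of this illustration, not details fixed by the paper.

```python
import torch
import torch.nn.functional as F

def exp_svf(v, n_steps=7):
    """Scaling-and-squaring integration of a stationary velocity field (Eq. 4).

    v: (1, 3, D, H, W) velocity field in voxel units, channels ordered (x, y, z).
    Returns a displacement field u of the same shape, with phi = id + u.
    """
    _, _, D, H, W = v.shape
    zs, ys, xs = torch.meshgrid(torch.linspace(-1, 1, D), torch.linspace(-1, 1, H),
                                torch.linspace(-1, 1, W), indexing="ij")
    id_grid = torch.stack([xs, ys, zs], dim=-1).unsqueeze(0).to(v.dtype)  # (1, D, H, W, 3) identity
    scale = 2.0 / torch.tensor([W - 1, H - 1, D - 1], dtype=v.dtype)      # voxels -> [-1, 1] units

    u = v / (2 ** n_steps)                                  # phi_(1/2^K) ~= id + v / 2^K
    for _ in range(n_steps):                                # phi <- phi o phi
        u_norm = u.permute(0, 2, 3, 4, 1) * scale           # displacement in normalized coordinates
        u_warped = F.grid_sample(u, id_grid + u_norm, align_corners=True,
                                 padding_mode="border")     # u evaluated at id + u
        u = u + u_warped
    return u
```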
To evaluate the plausibility of deformation fields, we compute the percentage of foldings (%|J|) inside the deformation field. A negative determinant of the Jacobian at a voxel indicates local folding [9]. Efficiency is measured based on model inference time on the same hardware." }, { "figure_ref": [], "heading": "B. Implementation Details", "publication_ref": [ "b27", "b62" ], "table_ref": [], "text": "We use a pre-trained SAM model [28]. This SAM model outputs a 128 dimensional global embedding and a 128 dimensional local embedding for each voxel and is the same in all four steps of SAME++. Image intensities in all datasets are normalized to [-1, 1] using a window of (-800, 400) Hounsfield Unit. In SAM-affine and SAM-coarse, the SAM similarity threshold is set to 0.7 based on the performance on ChestCT validation set and kept the same across all the datasets. In SAM-Deform, we use a 3D U-Net [63] as the backbone and concatenate the correlation feature and images before the convolutional block. We test NCC and LNCC " }, { "figure_ref": [ "fig_1" ], "heading": "C. SAM-Affine and SAM-coarse Registration Results", "publication_ref": [ "b28", "b21", "b28", "b21" ], "table_ref": [], "text": "We first evaluate the effectiveness of SAM-affine. Two widely-used affine registration methods are compared: (1) an intensity optimization-based method implemented in Elastix [29], and (2) a regression-based solution using MIND features [22]. The quantitative results are summarized in Table . I. The comparison reveals that SAM-affine significantly outperforms the affine registration in Elastix [29] with an improvement in DICE ranging from 4.4% to 15.6% across the three tasks. It is also markedly better than affine registration with the hand-crafted MIND [22] descriptor, achieving an average improvement of 6.8% in DICE across the three tasks. The superior of SAM-affine can be attributed to the better anatomical correspondence provided by SAM. As SAM-affine directly calculates the affine matrix by least squares fitting, it has an average inference time of 0.75s per paired image across the three datasets (more than 10 times faster compared to Elastix-affine (9.05s) or MIND-affine (8.96s)).\nTo better understand the alignment of detailed anatomical structures, we plot the boxplot of 13 organs in the most challenging abdomen affine registration task in Fig. 2. As shown, SAM-affine is consistently better than the two competing methods on every examined abdominal organ where large deformations and complex anatomical differences exist. We also observe that traditional affine methods may even perform worse than using no alignment (the initial condition) on several organs, e.g., spleen, pancreas, esophagus, and aorta. In contrast, SAM-affine can consistently improve the alignment. This demonstrates the importance of global semantic information (as provided by SAM) in affine registration.\nThe effectiveness of SAM-coarse is also illustrated in Table. I. After SAM-affine, SAM-coarse can further boost the registration accuracy significantly by 10.47% to 14.16% in DICE across different datasets. This is because SAM-coarse allows for local deformations that provide more degrees of freedom. This step can provide a good initialization to the following step that aims to find a dense deformation map." }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "D. 
SAM-deform Registration Results", "publication_ref": [ "b11", "b9", "b12", "b17", "b31" ], "table_ref": [], "text": "To evaluate the performance of SAM-deform, we compare it with four widely-used and top-performing non-rigid registration methods: NiftyReg [12], Deeds [10], and two top-ranked methods (ConvexAdam [13] and LapIRN [18]) from the recent Learn2Reg [32] registration challenge. For a fair comparison, note that all deformable methods in this subsection use the pre-alignment given by the SAM-affine and SAM-coarse transformations, which provides a better initialization than other commonly adopted affine methods.\nResults for the deformable registration methods are shown in Table II. Several conclusions can be drawn. First, SAM-deform outperforms the widely-used NiftyReg across the three datasets with a 3% average DICE improvement. Second, compared with the best traditional optimization-based method (Deeds), SAM-deform performs slightly better over the three datasets with a comparable folding rate, while being 10 times faster. Third, although the best learning-based method (LapIRN) has a comparable inference time to SAM-deform, LapIRN has a notably higher folding rate overall. Finally, SAM-deform achieves the overall best performance (54.61% mean DICE) compared to other deformable methods (50.75% to 54.16% mean DICE) with the fastest inference time and a comparable folding rate.\nTo better understand the performance of different deformable registration methods, we display organ-specific results in Fig. 3 and Fig. 4. For conciseness, in the head & neck CT dataset, we average the DICE of left and right organs into one score and calculate the median and interquartile range of DICE within each organ. In the chest CT dataset, we divide the 35 organs into 9 groups and calculate the median and interquartile range of DICE within each group. It is observed that SAM-deform surpasses the other methods in 13 out of 17 organs or organ groups. Some organs, such as the lens, nerves, arteries, and veins, display lower DICE for all methods; this may be because they are typically small or easily confused with surrounding tissues. Qualitative examples are also shown in Fig. 5, with a clearly improved alignment of various organs after registration." }, { "figure_ref": [], "heading": "E. Results of Complete SAME++ Framework", "publication_ref": [], "table_ref": [], "text": "We evaluate the performance of our complete registration method SAME++ and compare it with the complete registration pipelines (affine + deformable) of other leading registration methods, such as NiftyReg, Deeds, and ConvexAdam. Note that the comparison methods here adopt their own affine transformation, as SAM-affine and SAM-coarse do not exist in their pipelines. Table III summarizes the quantitative results. We see that our complete SAME++ method significantly outperforms the leading methods by improving DICE scores by 4.2% - 8.2% on average over the three registration tasks. Meanwhile, it has the lowest running time with a comparable folding rate." }, { "figure_ref": [], "heading": "F. Other Ablation Results", "publication_ref": [ "b0", "b1" ], "table_ref": [], "text": "We also conduct the following ablation studies to comprehensively understand the performance of each SAME++ component: (1) the effectiveness of SSCC in SAM-affine; (2) the importance of the regularization term in SAM-coarse; (3) the effect of using different transformation models; (4) the effectiveness of the SAM loss and SAM feature; and (5) the impact of adding instance optimization. Effectiveness of SSCC: In SAM-affine, SSCC helps to refine the matching accuracy between corresponding points in the moving and fixed images. With SSCC, we run the iteration 5 times to obtain a stable matching point set. As shown in Table
V, the stable sample strategy can improve the affine registration performance in the head & neck and chest CT datasets by reducing the inaccurate SAM mappings. Yet, in the abdomen CT dataset, the improvement is minor. This may be because anatomic context in some abdominal organs (e.g., intestine and colon) is not unique, which results in difficulties to learn accurate SAM embedding near these organs. Regularizer in SAM-coarse: We also conduct an experiment to examine the importance of adding the regularizer in SAM-coarse. As shown in Table . VI, adding the regularizer significantly reduces the folding percentage %|J| (on average 29.02% to 0.94%) in all three datasets. Moreover, it also helps to improve the registration accuracy. E.g., on average 6.84% DICE improvement is observed. The motivation for adding a regularizer in the SAM-coarse phase is that if there exists an inaccurate match between the sampled points in the moving and fixed images, it yields a significant perturbation in the displacement field. Thus, a regularizer is required to smooth the displacement to help reducing the potentially inaccurate deformations resulting from mismatched points.\nTransformations models: In this experiment, we study how our SVF transformation model benefits the registration performance more than the Displacement Vector Field (DVF) model used in the conference version in the SAM-deform step. The initial input in this experiment is based on the SAMcoarse results. We train two neural networks with DVF and SVF, respectively. The performance is shown in Table . VII.\nAs can be seen, when SAM-deform is trained with SVF, the folding rate in the estimated map is largely reduced, leading to a more physically plausible transformation field. Meanwhile, consistent with what was observed in Table . VI, less folding in the transformation field also leads to more accurate registration results, i.e., a 3.43% DICE improvement. Effectiveness of SAM loss and feature: The ablation study for SAM loss and SAM feature in SAM-deform is shown in Table . VIII. As shown, the best result is achieved when both the correlation feature and SAM loss are applied. We can see that the correlation feature calculated by SAM provides extra guidance for determining the deformation fields and the SAM loss provides a more semantically informed supervisory signal.\nInstance optimisation: We further examine how the instance optimization step affects the final performance. Table. IV lists the registration performance with and without the SAM-based instance optimization. We see that instance optimization can consistently improve registration performance. However, as it requires a number of iterations, the running time is slightly increased. It is a trade-off between the registration accuracy and running time, and can be used in practice according to the application requirements." }, { "figure_ref": [], "heading": "VI. CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "In this work, we introduced the fast general SAME++ framework for medical image registration based on SAM embeddings. Specifically, we decompose image registration into four steps: affine, coarse, deformable registration, and instance optimization, and enhance these steps by finding more coherent correspondences through the use of the SAM embeddings. Our SAM-affine and SAM-coarse approaches can be alternatives to optimization-based methods for registration initialization. 
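The ablations above repeatedly report the folding percentage %|J| alongside DICE; a minimal way to compute such metrics from a displacement field and label masks is sketched below, using finite differences to approximate the Jacobian as described in Sec. V-A. The exact implementation used in the paper may differ.

```python
import numpy as np

def folding_percentage(disp):
    """Percentage of voxels whose local Jacobian determinant is non-positive (%|J|),
    i.e. where phi(x) = x + u(x) locally folds. disp is (3, D, H, W) in voxel units."""
    grads = np.stack([np.stack(np.gradient(disp[c]), axis=0) for c in range(3)], axis=0)
    # grads[i, j] = d u_i / d x_j ; the Jacobian of phi is I + grad(u).
    jac = grads.transpose(2, 3, 4, 0, 1) + np.eye(3)
    det = np.linalg.det(jac)
    return 100.0 * np.mean(det <= 0)

def dice(mask_a, mask_b):
    """Dice overlap of two boolean label masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum() + 1e-8)
```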
The SAM correlation feature and SAM loss may also be combined with any learning-based deformable registration model to serve as SAM-deform. We further use SAM-based instance optimization for additional accuracy improvements; it can be used as a plug-and-play module with any other registration method. Extensive inter-patient image registration experiments using > 50 labeled organs on chest, abdomen, and head & neck CT datasets demonstrate the advantages of SAME++." } ]
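The SAM-based instance optimization of Eq. (7), highlighted above as a plug-and-play refinement, amounts to directly optimizing a displacement field against the SAM similarity plus a smoothness term. The sketch below initializes from a given field (e.g., the SAM-deform output) and assumes L2-normalized SAM feature maps and normalized grid_sample coordinates with channel order (x, y, z); the iteration count, learning rate, and weighting are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def instance_optimize(sam_warped_moving, sam_fixed, init_disp, iters=50, lam=1.0, lr=0.1):
    """Schematic SAM instance optimization (Eq. 7): refine a displacement field by
    maximizing SAM similarity between warped moving features and fixed features,
    with a diffusion-style smoothness penalty.

    sam_warped_moving, sam_fixed: (1, C, D, H, W) L2-normalized SAM feature maps.
    init_disp: (1, 3, D, H, W) displacement in normalized grid units, channels (x, y, z).
    """
    _, _, D, H, W = sam_fixed.shape
    zs, ys, xs = torch.meshgrid(torch.linspace(-1, 1, D), torch.linspace(-1, 1, H),
                                torch.linspace(-1, 1, W), indexing="ij")
    id_grid = torch.stack([xs, ys, zs], dim=-1).unsqueeze(0).to(init_disp.dtype)
    u = init_disp.clone().requires_grad_(True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        grid = id_grid + u.permute(0, 2, 3, 4, 1)
        warped = F.grid_sample(sam_warped_moving, grid, align_corners=True)
        sim = (warped * sam_fixed).sum(dim=1)                         # voxel-wise SAM similarity
        reg = sum((u.diff(dim=d) ** 2).mean() for d in (2, 3, 4))     # smoothness of u
        loss = (1.0 - sim).mean() + lam * reg
        loss.backward()
        opt.step()
    return u.detach()
```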
Image registration is a fundamental medical image analysis task. Ideally, registration should focus on aligning semantically corresponding voxels, i.e., the same anatomical locations. However, existing methods often optimize similarity measures computed directly on intensities or on hand-crafted features, which lack anatomical semantic information. These similarity measures may lead to sub-optimal solutions where large deformations, complex anatomical differences, or cross-modality imagery exist. In this work, we introduce a fast and accurate method for unsupervised 3D medical image registration building on top of a Self-supervised Anatomical eMbedding (SAM) algorithm, which is capable of computing dense anatomical correspondences between two images at the voxel level. We name our approach SAM-Enhanced registration (SAME++), which decomposes image registration into four steps: affine transformation, coarse deformation, deep non-parametric transformation, and instance optimization. Using SAM embeddings, we enhance these steps by finding more coherent correspondences and providing features with better semantic guidance. SAME++ is extensively evaluated using more than 50 labeled organs on three challenging inter-subject registration tasks of different body parts (head & neck, chest, and abdomen). Quantitative results show that SAM-affine significantly outperforms the widely-used affine registration methods with a Dice score improvement of at least 4.4%, 6.0%, and 8.5% for the three inter-patient registration tasks, respectively. For the non-parametric transformation step alone, SAM-deform achieves the overall best performance compared with top-ranked optimization-based and learning-based registration methods. As a complete registration framework, SAME++ markedly outperforms leading methods by 4.2% - 8.2% in terms of Dice score while being orders of magnitude faster than numerical optimization-based methods. Code is available at https://github.com/
SAME++: A medical image registration framework enhanced via self-supervised anatomical embeddings
[ { "figure_caption": "Fig. 1 .1Fig. 1. The framework of SAME++. Based on the SAM space, we break down image registration into four steps: keypoint-based affine transformation, coarse deformation, deep deformable registration, and instance optimization. (a) Illustration of one incorrect match based on SAM. (b) Eliminating false correspondence via cycle consistency matching.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Boxplot of the performance of different affine registration methods on 13 organs in the abdomen CT dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Applications 1 )1Head & Neck CT: A CT dataset containing 72 head & neck cancer patients [60] (denoted as Head & Neck CT) was utilized. 13 head & neck organs are manually labeled including the brainstem, left eye, right eye, left lens, right lens, optic chiasm, left optic nerve, right optic nerve, left parotid, right parotid, left temporomandibular joint (TMJ), right TMJ and spinal cord. We randomly split the dataset into 52, 10, and 10 for training, validation, and testing, respectively. For validation and testing, 90 image pairs are constructed for interpatient registration. Each image is resampled to an isotropic resolution of 2 mm and cropped to 256 × 128 × 224 (mainly contains the head & neck region) by removing black borders.", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Comparison of deformable registration methods on all organ groups on the neck dataset using boxplot.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Qualitative comparisons between SAME++ and the second best methods Deeds, LapIRN and Deeds for Neck, Chest, and Abdomen registrations, respectively. Top row: visualization of coronal head neck and warped CT slices. Middle row: Overlay of coronal chest CT (gray) and warped segmentation (color) slices. Bottom row: Differences between warped and fixed coronal abdominal scans.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 Stable Sampling via Cycle Consistency Input: S m , S f : SAM features of the moving and fixed image; K: The number of iterations. Output: {x m }: A list of points in Ω m ; {x f }: A list of matched points in Ω f . 1: function FINDPOINTS({x k }, S k , S q )", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "COMPARISON OF SAM-AFFINE AND SAM-COARSE IN THREE DATASETS.", "figure_data": "Chest CTAbdomen CTNeck CTDICE↑%|J| ↓Time(s)DICE↑%|J| ↓Time(s)DICE↑%|J| ↓Time(s)Initial12.99--25.88--11.83--MIND-affine28.24-6.8923.62-9.7018.31-10.28Elastix-affine28.68-7.7621.90-8.7325.37-10.67SAM-affine32.64-0.5729.67-1.0333.91-0.66SAM-A + SAM-C45.140.082.6240.141.876.3048.070.252.21", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "COMPARISON OF SAM-DEFORM AND OTHER DEFORMABLE REGISTRATION METHODS. NOTE THAT INPUTS FOR ALL DEFORMABLE METHODS ARE THE SAM-COARSE REGISTRATION RESULTS (A BETTER INITIAL ALIGNMENT THAN CONVENTIONAL AFFINE TRANSFORMATION). 
MDICEDENOTES THE MEAN DICE OVER THREE DATASETS.", "figure_data": "DICE↑Chest CT %|J| ↓Time(s)DICE↑Abdomen CT %|J| ↓Time(s)Head & Neck CT DICE↑ %|J| ↓ Time(s)mDICE↑Initial: SAM-coarse45.14--40.14--48.07--44.45NiftyReg51.580.04186.5443.140.04281.0757.540.01188.3850.75Deeds52.721.2857.8946.520.7545.2162.340.0342.0553.86ConvexAdam54.620.736.0644.442.178.8361.450.315.3353.50LapIRN55.874.332.6746.442.626.3960.161.592.2654.16SAM-deform55.360.404.0147.122.517.9161.350.354.0054.61", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "WITH WIDELY-USED LEADING REGISTRATION PIPELINES (FROM AFFINE TO DEFORMABLE TRANSFORMTION). THE UNALIGNED INITIAL DATA IS USED AS INPUT TO ALL METHODS. Comparison of deformable registration methods on all organ groups on the chest dataset using boxplot.", "figure_data": "Chest CTAbdomen CTHead & Neck CTDICE↑%|J| ↓Time(s)DICE↑%|J| ↓Time(s)DICE↑%|J| ↓Time(s)Initial12.99--25.88--11.83--NiftyReg51.650.02388.2833.390.09478.0559.920.01298.10Deeds52.320.82276.4248.311.24160.3256.320.25185.63ConvexAdam52.101.2713.8234.421.3617.5658.210.3016.00SAME++56.930.448.7449.272.8211.1663.220.339.87Fig. 4.", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "Table. III summarizes the quantitative results. We see that our complete SAME++ method significantly", "figure_data": "MovingFixedSecond bestSAME++", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "OF THE SAM INSTANCE OPTIMIZATION STEP. To be noted, the ablation study is conducted on the validation set of Head & Neck CT and Chest CT datasets. Because there is no validation set in Abdomen CT dataset, we use Abdomen CT test set in the ablation study.", "figure_data": "Initialw/o Instance Opt. DICE↑ %|J| ↓ Time(s)DICE↑w/ Instance Opt. %|J| ↓Time(s)Chest CT43.3349.862.801.0052.633.164.79Abdomen CT40.1448.095.931.6148.775.557.50HeadNeck CT59.9964.212.920.8965.582.496.03TABLE VTHE EFFECTIVENESS OF STABLE SAMPLING VIA CYCLE CONSISTENCY(SSCC) IN SAM-AFFINE.Initial w/o SSCCw/ SSCCChest CT9.6027.8928.93Abdomen CT25.8829.6729.73HeadNeck CT9.2131.5533.56(3) The effect of using different transformations; (4) Theeffectiveness of SAM loss and feature; and (5) The impact ofadding instance optimization.", "figure_id": "tab_5", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "STUDY FOR SAM LOSS AND SAM FEATURE IN SAM-DEFORM. ALL METHODS ARE INITIALIZED BY SAM-AFFINE.", "figure_data": "Chest CTSAM lossSAM featureDICE↑Initial✗✗48.79✓✗50.43SAM-deform✗✓51.37✓✓51.99", "figure_id": "tab_8", "figure_label": "VIII", "figure_type": "table" } ]
Lin Tian; Zi Li; Fengze Liu; Xiaoyu Bai; Jia Ge; Le Lu; Marc Niethammer; Xianghua Ye; Ke Yan; Dakai Jin
[ { "authors": "M A Viergever; J B A Maintz; S Klein; K Murphy; M Staring; J P W Pluim", "journal": "Medical Image Analysis", "ref_id": "b0", "title": "A survey of medical image registration -under review", "year": "2016" }, { "authors": "A J Asman; B A Landman", "journal": "Medical image analysis", "ref_id": "b1", "title": "Non-local statistical label fusion for multi-atlas segmentation", "year": "2013" }, { "authors": "T Rohlfing; R Brandt; R Menzel; C R Maurer", "journal": "NeuroImage", "ref_id": "b2", "title": "Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains", "year": "2004" }, { "authors": "C Buerger; T Schaeffter; A P King", "journal": "Medical Image Analysis", "ref_id": "b3", "title": "Hierarchical adaptive local affine registration for fast and robust respiratory motion estimation", "year": "2011" }, { "authors": "M P Heinrich; M Jenkinson; B W Papież; S M Brady; J A Schnabel", "journal": "Medical Image Computing and Computer-Assisted Intervention", "ref_id": "b4", "title": "Towards realtime multimodal fusion for image-guided interventions using self-similarities", "year": "2013" }, { "authors": "D Jin; D Guo; T.-Y Ho; A P Harrison; J Xiao; C.-K Tseng; L Lu", "journal": "Medical Image Analysis", "ref_id": "b5", "title": "Deeptarget: Gross tumor and clinical target volume segmentation in esophageal cancer radiotherapy", "year": "2021" }, { "authors": "R Liu; Z Li; X Fan; C Zhao; H Huang; Z Luo", "journal": "IEEE Transactions on Pattern Analysis Machine Intelligence", "ref_id": "b6", "title": "Learning deformable image registration from optimization: Perspective, modules, bilevel training and beyond", "year": "2022" }, { "authors": "A Sotiras; C Davatzikos; N Paragios", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b7", "title": "Deformable medical image registration: A survey", "year": "2013" }, { "authors": "J Ashburner", "journal": "NeuroImage", "ref_id": "b8", "title": "A fast diffeomorphic image registration algorithm", "year": "2007" }, { "authors": "M P Heinrich; M Jenkinson; M Brady; J A Schnabel", "journal": "", "ref_id": "b9", "title": "Globally optimal deformable registration on a minimum spanning tree using dense displacement sampling", "year": "2012" }, { "authors": "M P Heinrich; B W Papiez; J A Schnabel; H Handels", "journal": "WBIR", "ref_id": "b10", "title": "Nonparametric discrete registration with convex optimisation", "year": "2014" }, { "authors": "W Sun; W J Niessen; S Klein", "journal": "", "ref_id": "b11", "title": "Free-form deformation using lower-order b-spline for nonrigid image registration", "year": "2014" }, { "authors": "H Siebert; L Hansen; M P Heinrich", "journal": "Biomedical Image Registration, Domain Generalisation and Out-of-Distribution Analysis", "ref_id": "b12", "title": "Fast 3d registration with accurate optimisation and little learning for learn2reg 2021", "year": "2021" }, { "authors": "X Yang; R Kwitt; M Styner; M Niethammer", "journal": "NeuroImage", "ref_id": "b13", "title": "Quicksilver: Fast predictive image registration-a deep learning approach", "year": "2017" }, { "authors": "Z Shen; X Han; Z Xu; M Niethammer", "journal": "", "ref_id": "b14", "title": "Networks for joint affine and non-parametric image registration", "year": "2019" }, { "authors": "A Dalca; G Balakrishnan; J Guttag; M Sabuncu", "journal": "Medical Image Analysis", "ref_id": "b15", "title": "Unsupervised learning of probabilistic diffeomorphic registration for 
images and surfaces", "year": "2019" }, { "authors": "G Balakrishnan; A Zhao; M R Sabuncu; J V Guttag; A V Dalca", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b16", "title": "Voxelmorph: A learning framework for deformable medical image registration", "year": "2019" }, { "authors": "T C W Mok; A C S Chung", "journal": "Medical Image Computing and Computer Assisted Intervention", "ref_id": "b17", "title": "Large deformation diffeomorphic image registration with laplacian pyramid networks", "year": "2020" }, { "authors": "", "journal": "", "ref_id": "b18", "title": "Fast symmetric diffeomorphic image registration with convolutional neural networks", "year": "2020" }, { "authors": "H Greer; R Kwitt; F.-X Vialard; M Niethammer", "journal": "", "ref_id": "b19", "title": "Icon: Learning regular maps through inverse consistency", "year": "2021" }, { "authors": "L Tian; H Greer; F.-X Vialard; R Kwitt; R S J Estépar; M Niethammer", "journal": "", "ref_id": "b20", "title": "Gradicon: Approximate diffeomorphisms via gradient inverse consistency", "year": "2022" }, { "authors": "M P Heinrich; M Jenkinson; M Bhushan; T N Matin; F Gleeson; M Brady; J A Schnabel", "journal": "Medical Image Analysis", "ref_id": "b21", "title": "MIND: modality independent neighbourhood descriptor for multi-modal deformable registration", "year": "2012" }, { "authors": "D Shen; C Davatzikos", "journal": "IEEE Transactions on medical imaging", "ref_id": "b22", "title": "Hammer: hierarchical attribute matching mechanism for elastic registration", "year": "2002" }, { "authors": "J Ehrhardt; R Werner; A Schmidt-Richberg; H Handels", "journal": "Medical Image Analysis for the Clinic-A Grand Challenge, MICCAI", "ref_id": "b23", "title": "Automatic landmark detection and non-linear landmark-and surface-based registration of lung ct images", "year": "2010" }, { "authors": "T Brox; J Malik", "journal": "IEEE Transactions on pattern analysis and machine intelligence", "ref_id": "b24", "title": "Large displacement optical flow: descriptor matching in variational motion estimation", "year": "2010" }, { "authors": "X Han", "journal": "Medical image analysis for the clinic", "ref_id": "b25", "title": "Feature-constrained nonlinear registration of lung ct images", "year": "2010" }, { "authors": "B K Horn; B G Schunck", "journal": "Artificial intelligence", "ref_id": "b26", "title": "Determining optical flow", "year": "1981" }, { "authors": "K Yan; J Cai; D Jin; S Miao; D Guo; A P Harrison; Y Tang; J Xiao; J Lu; L Lu", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b27", "title": "Sam: Self-supervised learning of pixel-wise anatomical embeddings in radiological images", "year": "2022" }, { "authors": "S Klein; M Staring; K Murphy; M A Viergever; J P W Pluim", "journal": "IEEE Trans. 
on Medical Imaging", "ref_id": "b28", "title": "elastix: A toolbox for intensity-based medical image registration", "year": "2010" }, { "authors": "M P Heinrich; M Jenkinson; M Brady; J A Schnabel", "journal": "IEEE transactions on medical imaging", "ref_id": "b29", "title": "Mrfbased deformable registration and ventilation estimation of lung ct", "year": "2013" }, { "authors": "M P Heinrich; O Maier; H Handels", "journal": "", "ref_id": "b30", "title": "Multi-modal multi-atlas segmentation using discrete optimisation and self-similarities", "year": "2015" }, { "authors": "A Hering; L Hansen; T C Mok; A C Chung; H Siebert; S Häger; A Lange; S Kuckertz; S Heldmann", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b31", "title": "Learn2reg: comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning", "year": "2022" }, { "authors": "F Liu; K Yan; A P Harrison; D Guo; L Lu; A L Yuille; L Huang; G Xie; J Xiao; X Ye; D Jin", "journal": "Medical Image Computing and Computer Assisted Intervention", "ref_id": "b32", "title": "SAME: deformable image registration based on self-supervised anatomical embeddings", "year": "2021" }, { "authors": "J L Long; N Zhang; T Darrell", "journal": "Advances in neural information processing systems", "ref_id": "b33", "title": "Do convnets learn correspondence?", "year": "2014" }, { "authors": "C B Choy; J Gwak; S Savarese; M Chandraker", "journal": "Advances in neural information processing systems", "ref_id": "b34", "title": "Universal correspondence network", "year": "2016" }, { "authors": "X Han; T Leung; Y Jia; R Sukthankar; A C Berg", "journal": "", "ref_id": "b35", "title": "Matchnet: Unifying feature and metric learning for patch-based matching", "year": "2015" }, { "authors": "S Zagoruyko; N Komodakis", "journal": "", "ref_id": "b36", "title": "Learning to compare image patches via convolutional neural networks", "year": "2015" }, { "authors": "L Jing; Y Tian", "journal": "IEEE Transactions on pattern analysis and machine intelligence", "ref_id": "b37", "title": "Self-supervised visual feature learning with deep neural networks: A survey", "year": "2020" }, { "authors": "P O Pinheiro; A Almahairi; R Benmalek; F Golemo; A C Courville", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b38", "title": "Unsupervised learning of dense visual representations", "year": "2020" }, { "authors": "X Wang; R Zhang; C Shen; T Kong; L Li", "journal": "", "ref_id": "b39", "title": "Dense contrastive learning for self-supervised visual pre-training", "year": "2021" }, { "authors": "B B Avants; N Tustison; G Song", "journal": "Insight j", "ref_id": "b40", "title": "Advanced normalization tools (ants)", "year": "2009" }, { "authors": "M Jenkinson; S Smith", "journal": "Medical image analysis", "ref_id": "b41", "title": "A global optimisation method for robust affine registration of brain images", "year": "2001" }, { "authors": "T Butz; J.-P Thiran", "journal": "Medical Image Computing and Computer-Assisted Intervention", "ref_id": "b42", "title": "Affine registration with feature space mutual information", "year": "2001" }, { "authors": "W Huang", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b43", "title": "A coarse-to-fine deformable transformation framework for unsupervised multi-contrast MR image registration with dual consistency constraint", "year": "2021" }, { "authors": "S Zhao; T F Lau; J Luo; E I Chang; Y Xu", "journal": "IEEE JBHI", "ref_id": "b44", 
"title": "Unsupervised 3d end-to-end medical image registration with volume tweening network", "year": "2020" }, { "authors": "E M Yu; A Q Wang; A V Dalca; M R Sabuncu", "journal": "", "ref_id": "b45", "title": "Keymorph: Robust multi-modal affine registration via unsupervised keypoint detection", "year": "2022" }, { "authors": "T C W Mok; A C S Chung", "journal": "", "ref_id": "b46", "title": "Affine medical image registration with coarse-to-fine vision transformer", "year": "2022" }, { "authors": "D G Lowe", "journal": "Ieee", "ref_id": "b47", "title": "Object recognition from local scale-invariant features", "year": "1999" }, { "authors": "J Feldmar; N Ayache", "journal": "", "ref_id": "b48", "title": "Rigid, affine and locally affine registration of free-form surfaces", "year": "1994" }, { "authors": "S Du; N Zheng; G Meng; Z Yuan", "journal": "IEEE Signal Processing Letters", "ref_id": "b49", "title": "Affine registration of point sets using icp and ica", "year": "2008" }, { "authors": "P J Besl; N D Mckay", "journal": "Spie", "ref_id": "b50", "title": "Method for registration of 3-d shapes", "year": "1992" }, { "authors": "Z Zhang", "journal": "International journal of computer vision", "ref_id": "b51", "title": "Iterative point matching for registration of free-form curves and surfaces", "year": "1994" }, { "authors": "E Karami; S Prasad; M Shehata", "journal": "", "ref_id": "b52", "title": "Image matching using sift, surf, brief and orb: performance comparison for distorted images", "year": "2017" }, { "authors": "R Liu; Z Li; Y Zhang; X Fan; Z Luo", "journal": "", "ref_id": "b53", "title": "Bi-level probabilistic feature learning for deformable image registration", "year": "2020" }, { "authors": "X Fan; Z Li; Z Li; X Wang; R Liu; Z Luo; H Huang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b54", "title": "Automated learning for deformable medical image registration by jointly optimizing network architectures and objective functions", "year": "2023" }, { "authors": "Z Xu; M Niethammer", "journal": "Medical Image Computing and Computer Assisted Intervention", "ref_id": "b55", "title": "Deepatlas: Joint semi-supervised learning of image registration and segmentation", "year": "2019" }, { "authors": "J Cai; Y Tang; K Yan; A P Harrison; J Xiao; G Lin; L Lu", "journal": "", "ref_id": "b56", "title": "Deep lesion tracker: monitoring lesions in 4d longitudinal imaging studies", "year": "2021" }, { "authors": "A Hering; F Peisen; T Amaral; S Gatidis; T Eigentler; A Othman; J H Moltz", "journal": "", "ref_id": "b57", "title": "Whole-body soft-tissue lesion tracking and segmentation in longitudinal ct imaging studies", "year": "2021" }, { "authors": "A V Dalca; G Balakrishnan; J Guttag; M R Sabuncu", "journal": "Medical Image Computing and Computer Assisted Intervention", "ref_id": "b58", "title": "Unsupervised learning for fast probabilistic diffeomorphic registration", "year": "2018" }, { "authors": "X Ye; D Guo; J Ge", "journal": "Nature Communication", "ref_id": "b59", "title": "Comprehensive and clinically accurate head and neck organs at risk delineation via stratified deep learning: A large-scale multi-institutional study", "year": "2021" }, { "authors": "D Guo; X Ye; J Ge; X Di; L Lu; L Huang; G Xie; J Xiao; Z Lu; L Peng", "journal": "Springer", "ref_id": "b60", "title": "Deepstationing: thoracic lymph node station parsing in ct scans using anatomical context encoding and key organ auto-search", "year": "2021-10-01" }, { "authors": "Z Xu; C P Lee; M P Heinrich; M 
Modat; D Rueckert; S Ourselin; R G Abramson; B A Landman", "journal": "IEEE Transactions on Biomedical Engineering", "ref_id": "b61", "title": "Evaluation of six registration methods for the human abdomen on clinically acquired ct", "year": "2016" }, { "authors": "Ö Çiçek; A Abdulkadir; S S Lienkamp; T Brox; O Ronneberger", "journal": "Medical Image Computing and Computer-Assisted Intervention", "ref_id": "b62", "title": "3d u-net: Learning dense volumetric segmentation from sparse annotation", "year": "2016" } ]
[ { "formula_coordinates": [ 3, 311.98, 703.21, 251.06, 21.61 ], "formula_id": "formula_0", "formula_text": "I m : Ω m → R and I f : Ω f → R, Ω m ⊂ R n and Ω f ⊂ R n representing" }, { "formula_coordinates": [ 4, 48.96, 303.49, 251.06, 25.6 ], "formula_id": "formula_1", "formula_text": "C × D 2 × H 8 × W 8 and C × D 2 × H 2 × W" }, { "formula_coordinates": [ 4, 311.98, 177.25, 251.06, 45.52 ], "formula_id": "formula_2", "formula_text": "x f ∈ Ω f via FINDPOINTS({x m }, S m , S f ) (Alg. 1). Then we compute the corresponding point x ′ m ∈ Ω m to x f via FIND- POINTS({x f }, S f , S m ). Presumably, if (x m , x f ) is a correct matching pair, then x m and x ′" }, { "formula_coordinates": [ 4, 311.98, 356.65, 251.06, 21.61 ], "formula_id": "formula_3", "formula_text": "X = {(x m , x f )|x m ∈ Ω m , x f ∈ Ω f }," }, { "formula_coordinates": [ 4, 311.98, 392.51, 251.06, 21.61 ], "formula_id": "formula_4", "formula_text": "X ϵ = {(x m , x f )|d(s m (x m ), s f (x f )) > ϵ, (x m , x f ) ∈ X}." }, { "formula_coordinates": [ 4, 413.69, 447.35, 149.35, 9.65 ], "formula_id": "formula_5", "formula_text": "Ax m = xf ,(1)" }, { "formula_coordinates": [ 4, 405.93, 617.89, 157.11, 12.19 ], "formula_id": "formula_6", "formula_text": "c = x + u(x), φ -1 c : Ω f → Ω m ," }, { "formula_coordinates": [ 5, 54.72, 140.34, 111.74, 32.51 ], "formula_id": "formula_7", "formula_text": "for x k ∈ {x k } do 4: Initialize D = {} 5:" }, { "formula_coordinates": [ 5, 50.73, 271.92, 188.27, 45.52 ], "formula_id": "formula_8", "formula_text": "{x f } ← FINDPOINTS({x m }, S m , S f ) 15: {x m } ← FINDPOINTS({x f }, S f , S m ) 16: end for 17: return {x m }, {x f }." }, { "formula_coordinates": [ 5, 61.72, 421.34, 238.3, 59.01 ], "formula_id": "formula_9", "formula_text": "φc -1 = arg min φc 1 |X| (xm,x f )∈Xϵ ||x m -φ -1 c (A -1 x f )|| 2 2 + 1 |Ω f | y∈Ω f ||∇u(y)|| 2 F .(2)" }, { "formula_coordinates": [ 5, 364.09, 159.19, 198.95, 13.38 ], "formula_id": "formula_10", "formula_text": "L SAM = 1 -d(S c m • φ -1 d , S f ) .(3)" }, { "formula_coordinates": [ 5, 353.61, 286.99, 209.43, 22.31 ], "formula_id": "formula_11", "formula_text": "∂ϕ(t) ∂t = v(ϕ(t)), ϕ(0) = Id, φ = ϕ(1)(4)" }, { "formula_coordinates": [ 5, 372.12, 415.89, 190.91, 26.8 ], "formula_id": "formula_12", "formula_text": "L reg = 1 |Ω| x∈Ω ||∇v(x)|| 2 F .(5)" }, { "formula_coordinates": [ 5, 367.5, 471.08, 195.54, 9.65 ], "formula_id": "formula_13", "formula_text": "L = λ 1 (L sim + L SAM ) + λ 2 L reg(6)" }, { "formula_coordinates": [ 5, 319.39, 590.31, 243.65, 26.8 ], "formula_id": "formula_14", "formula_text": "φi = arg min φi 1-d(S c m •φ -1 i , S f )+ 1 |Ω| Ω ||∇u i (x)|| 2 2 ,(7)" }, { "formula_coordinates": [ 5, 348.7, 716.77, 214.34, 13.15 ], "formula_id": "formula_15", "formula_text": "φ -1 = φ -1 a • φ -1 c • φ -1 i , φ -1 a (x) = A -1 x .(8)" } ]
2023-11-25
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_7" ], "heading": "INTRODUCTION", "publication_ref": [ "b52", "b31", "b42", "b48", "b4", "b35", "b0", "b41", "b45", "b48", "b35", "b19", "b6", "b47", "b20", "b37", "b47", "b37", "b47", "b25", "b49", "b3", "b45", "b15", "b42", "b48", "b4", "b38", "b33", "b30", "b35", "b0", "b37", "b19", "b44", "b22", "b14", "b41", "b35", "b48", "b0", "b45", "b31", "b35" ], "table_ref": [], "text": "Neural fields (also known as coordinate-based or implicit neural representations) have attracted great attention (Xie et al., 2022) in representing various types of signals, such as image (Chen et al., 2021b;Mehta et al., 2021), video (Rho et al., 2022;Chen et al., 2022b), 3D shape (Tancik et al., 2020;Chabra et al., 2020), and novel view synthesis (Mildenhall et al., 2020;Barron et al., 2021;2022). These methods typically use a multi-layer perceptron (MLP), mapping low-dimensional inputs (coordinates) to output quantities, as shown in Fig. 1-(a). It has achieved a very compact representation by representing signals with the dense connections of weights and biases in the MLP architecture. However, a notable drawback of MLPs is their inherent spectral bias (Rahaman et al., 2019), which leads them to learn towards lower-frequency or smoother patterns, often missing the finer and high-frequency details. Despite the recent progress, such as frequency-based activation functions (Sitzmann et al., 2020) and positional encoding (Tancik et al., 2020), deeper MLP structures and extensive training duration are needed to achieve desirable performances for high-frequency signals (Mildenhall et al., 2020).\nWith fast training and inference time, the conventional grid-based representations (Fig. 1-(b)) have been recently repopularized in neural fields literature. They can represent high-frequency signals effectively (w/o MLPs or w/ small MLPs, hence no architectural bias), achieving promising reconstruction quality (Fridovich-Keil et al., 2022;Chan et al., 2022;Takikawa et al., 2022). However, the grid structures (typically representing volume features with high resolution and large channels) cause a dramatic increase in memory footprints. Although many recent works have explored reducing the memory usage through grid factorization (Chen et al., 2022a;Fridovich-Keil et al., 2023), hash encoding (Müller et al., 2022), or vector quantization (Takikawa et al., 2022), constructing compact yet powerful grid representation remains a challenge.\nA typical approach of leveraging both grids and MLPs is to combine them sequentially (Müller et al., 2022;Yu et al., 2021a;Takikawa et al., 2022), extracting the feature from the grid representations first and feeding them to MLPs. MLPs in these approaches play a secondary role in representing signals, and the small-size MLPs are generally used to finalize or refine the features from the grids. Therefore, the grids represent most of the signals' contents, and higher resolutions of the grids are required to achieve better performance, resulting in significant memory requirements.\nIn this work, we propose a novel way of exploiting grid representations in neural fields. Based on MLP architectures, we suggest a coordinate-aware modulation (CAM), which modulates intermediate features of the neural networks using the grids (Fig. 1-(c)). 
More specifically, CAM extracts scale and shift parameters from the grid representations given the input coordinates, then multiplies the extracted scale parameters to the intermediate features in MLPs and adds the shift parameters. Since CAM utilizes an interpolation scheme commonly used in recent grid representations, it can extract scale and shift parameters at any arbitrary location. The main idea behind the proposed CAM is to inject spectral bias-free representations into the intermediate features in MLPs. It will assist in mitigating any remaining potential biases in MLPs and help them quickly learn high-frequency components.\nIn addition, we found that feature normalization techniques (Ioffe & Szegedy, 2015;Ulyanov et al., 2016) proved to be effective when applied in conjunction with the proposed CAM. Normalizing intermediate features in neural fields has yet to show meaningful gains in the representation performance. However, without normalization techniques, training deep neural networks in general often requires careful learning rate schedules and other hyperparameter searches (Bjorck et al., 2018), and we observed similar phenomena in training neural fields. As shown in Fig. 8-(a), given the same network architecture and task (training Mip-NeRF), different learning rate schedules resulted in significant performance variations (a learning rate schedule over 1,000K iterations vs. 500K iterations). We have demonstrated that CAM benefits from the feature normalizations, showing fast and stable convergence with superior performance.\nWe have extensively tested the proposed method on various tasks. The experimental results show that CAM improves the performance and robustness in training neural fields. First, we demonstrate the effectiveness of CAM in simple image fitting and generalization tasks, where CAM improved the baseline neural fields by a safe margin. Second, we tested CAM on video representation, applying CAM to one of the best-performing frame-wise video representation methods, and the resulting method set a new state-of-the-art compression performance among the methods using neural fields and frame-wise representations. We also tested CAM on novel view synthesis tasks. For static scenes, CAM has achieved state-of-the-art performance on real scenes (360 dataset) and also showed the best performance under a 1MB memory budget on synthetic scenes (NeRF synthetic dataset). Finally, we also tested on dynamic scenes, and CAM outperformed the existing methods with the least number of parameters and fast training speed (D-NeRF dataset). such as image representation (Sitzmann et al., 2020;Dupont et al., 2021), video representation (Rho et al., 2022;Chen et al., 2021a), 3D shape representation (Tancik et al., 2020;Chabra et al., 2020;Park et al., 2019;Mescheder et al., 2019;Martel et al., 2021), novel view synthesis (Mildenhall et al., 2020;Barron et al., 2021;Müller et al., 2022;Fridovich-Keil et al., 2022;Yu et al., 2021a;Chen et al., 2022a;Yu et al., 2021b), and novel view image generation (Schwarz et al., 2020;Chan et al., 2021;Gu et al.;Deng et al., 2022). Neural networks (typically using MLPs in neural fields) tend to be learned towards low-frequency signals due to the spectral bias (Rahaman et al., 2019). Several studies have been conducted to mitigate this issue by proposing frequency encodings (Mildenhall et al., 2020;Tancik et al., 2020;Barron et al., 2021) or periodic activations (Sitzmann et al., 2020;Mehta et al., 2021). 
Nevertheless, this challenge persists in the literature, demanding the use of complex MLPs and extensive training time to effectively represent high-frequency signals (Mildenhall et al., 2020)." }, { "figure_ref": [ "fig_0" ], "heading": "RELATED WORKS", "publication_ref": [ "b19", "b6", "b47", "b20", "b19", "b43", "b37", "b47", "b51", "b39", "b21", "b10", "b36", "b23", "b16", "b31", "b18", "b55" ], "table_ref": [], "text": "An emerging alternative to this MLP-dependent paradigm is the use of an auxiliary data structure, typically grids, incorporated with interpolation techniques. Such approach has notably reduced training times without sacrificing the reconstruction quality (Fridovich-Keil et al., 2022;Chan et al., 2022;Takikawa et al., 2022). However, these grid frameworks, usually designed with high-resolution volumetric features, demand extensive memory consumption as shown in Fig. 1-(b).\nWhile numerous studies have made efforts to minimize memory usage via grid factorization (Chen et al., 2022a;Fridovich-Keil et al., 2023), pruning (Fridovich-Keil et al., 2022;Rho et al., 2023), hashing (Müller et al., 2022), or vector quantization (Takikawa et al., 2022), the pursuit of memoryefficient grid representation remains an ongoing focus in the field of neural fields research.\nCombination of an MLP and grid representation. The aforementioned grid-based methods generally use a small MLP to obtain the final output from the grid feature. In other words, the grid structure and an MLP are sequentially deployed. Most recently, NFFB (Wu et al., 2023) proposed combining two architectures in a different way, by designing each of multiple sets of MLPs and grids to represent different frequency signals, similar to the concept of wavelets. Nonetheless, it is worth noting that NFFB demands task-specific designs for individual models. In contrast, CAM is a plug-and-play solution that can be easily deployed without the need for any modifications to the original model configurations.\nModulation in neural fields. Feature modulation in neural networks has been a well-established concept, spanning across diverse domains including visual reasoning (Perez et al., 2018), image generation (Ghiasi et al., 2017;Chen et al., 2019), denoising (Mohan et al., 2021), and restoration (He et al., 2019). They typically employ an additional network (or linear transform) to represent modulation parameters, learning a well-conditional impact on the intermediate features of the base network. Neural fields literature follows the paradigm by representing modulation parameters with the function of noise vector (Pi-GAN (Chan et al., 2021)), datapoints (COIN++ (Dupont et al., 2022)), patch-wise latent feature (ModSiren (Mehta et al., 2021)), or input coordinate (MFN, FINN (Fathony et al., 2021;Zhuang, 2024)). In contrast to other methods that integrate periodic functions into their approach, both FINN and our proposed method utilize coordinate-dependent parameters to directly influence the intermediate features. However, while FINN acts as a filter by using the same vector for all layers, our model represents different scale and shift (scalar) values in each layer. Furthermore, the utilization of grid representation for scale and shift parameters in our model avoids introducing any network architectural bias. This is distinct from all the aforementioned methods, which can induce architectural bias by incorporating a separate linear layer following positional encoding. 
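To make this distinction concrete, the following is a minimal PyTorch sketch of a single MLP layer whose hidden features are modulated by scalar scale and shift values read from small single-channel grids via bilinear interpolation; it corresponds to the general scale-and-shift form given in Eq. (1) immediately below, with feature normalization omitted. The class name, grid resolution, activation, and the [-1, 1] coordinate convention expected by grid_sample are illustrative choices rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CAMLayer(nn.Module):
    """One MLP layer whose hidden features are modulated by grid-sampled scale/shift.

    Each grid stores a single value per cell; bilinear interpolation makes the
    scale and shift queryable at arbitrary continuous coordinates.
    """
    def __init__(self, in_dim, out_dim, grid_res=32):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.scale_grid = nn.Parameter(torch.ones(1, 1, grid_res, grid_res))
        self.shift_grid = nn.Parameter(torch.zeros(1, 1, grid_res, grid_res))

    def forward(self, x, coords):
        # coords: (N, 2) in (x, y) order, each in [-1, 1]; reshape to grid_sample's layout
        g = coords.view(1, 1, -1, 2)
        gamma = F.grid_sample(self.scale_grid, g, align_corners=True).view(-1, 1)  # (N, 1)
        beta = F.grid_sample(self.shift_grid, g, align_corners=True).view(-1, 1)   # (N, 1)
        h = self.linear(x)                        # (N, out_dim)
        return torch.relu(gamma * h + beta)       # scale, then shift, then activation
```

Because the grids hold one scalar per cell, the extra parameter count stays negligible compared with the MLP weights, which is the compactness argument made throughout the paper.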
\nFn,c = γ n (X; Γ)F n,c + β n (X; B),(1)\nwhere Coordinate priority for CAM. The grids are adopted to parameterize single-channel features (modulation parameters), but they can face challenges with the curse of dimensionality, especially with high-dimensional input coordinates (e.g., 6-dimensional coordinates for dynamic NeRFs). To avoid the complex dimension of grids, we strategically prioritize which coordinates to use for representing modulation parameters into grids, among diverse coordinates of each task. Visual signals can have several dimensions, including space, viewing direction, and time. At the core, spatial components construct distinct scenes, where the viewing directions determine which aspect of the scene becomes visible, and the temporal coordinates represent dynamic movements in the scene. Among the view direction and time coordinates, we empirically found that considering temporal coordinates is more beneficial for CAM. This can be interpreted that a visible scene determined by spatiality and view direction, is the basis of effectively defining a time-varying scene. We establish this hierarchy of coordinates, prioritizing the highest-level components among the coordinates (denoted as X (•) ) to be regarded for modulation (e.g., temporal coordinates X (t) for dynamic NeRFs and view direction coordinates X (ϕ,θ) for NeRFs). Given that image and frame-wise video representations involve only spatial and time coordinates, respectively, we use the complete input coordinate by denoting it as X, in the following sections.\nF, F ∈ R N ×C are\nFeature normalization. We standardize the intermediate feature F with its mean and variance before applying the modulation. Although general neural representation methods cannot take advantage of feature normalization due to its regularizing property for fitting, we empirically found that normalization integrated with CAM facilitates and stabilizes model convergence. We hypothesize that the enforcing diverse distribution of standardized features acts as de-regularization, which stands for fitting signals. We compute the mean and variance along with as many dimensions as possible, excluding the batch dimension.\nAlthough CAM serves as a universal method that can be applied to any neural fields for a wide variety of tasks, each task possesses its unique characteristics and intermediate feature shapes. In the following sections, we will provide a more in-depth explanation of the CAM approach for each specific task." }, { "figure_ref": [ "fig_1" ], "heading": "IMAGE", "publication_ref": [ "b45", "b48" ], "table_ref": [], "text": "We can formulate a neural field as a function of a 2-dimensional coordinate that outputs the corresponding color in order to represent images (Sitzmann et al., 2020;Tancik et al., 2020). When a stack of 2-dimensional coordinates X ∈ R N ×2 pass through a neural network, CAM normalizes and modulates a latent feature F l ∈ R N ×C of each layer l, where C is the channel (or feature) size (we will omit superscript l for brevity). As images only have spatial coordinates, we obtain the modulation parameters corresponding to these coordinates. More precisely, Fig. 2-(a) illustrates Preprint how CAM works in the task, CAM can be formally written as follows:\nFn,c = γ n (X; Γ) F n,c -µ n (F ) σ 2 n (F ) + ϵ + β n (X; B),(2)\nµ n (F ) = 1 C c F n,c , σ 2 n (F ) = 1 C c (F n,c -µ n (F )) 2 ,(3)\nwhere F, F ∈ R N ×C are latent and modulated latent feature tensors, respectively. 
The mean and variance functions µ(•), σ 2 (•) : R N ×C → R N normalize features over every dimension except for the batch dimension. Similarly, the scale and shift functions γ(•; Γ), β(•; B) : R N ×2 → R N output scalar values for each coordinate, and γ n (•; Γ), β n (•; B) denote each scale and shift factor for batch n. We can extract values from the grid representations for scale and shift parameters (Γ, B ∈ R dx×dy , d x and d y are the grid resolutions) by bilinearly interpolating values using neighboring input coordinates X." }, { "figure_ref": [ "fig_1" ], "heading": "NOVEL VIEW SYNTHESIS", "publication_ref": [ "b35", "b40" ], "table_ref": [], "text": "Neural radiance fields (NeRFs). A NeRF model uses an MLP architecture to model a function of a volume coordinate (x, y, z) and a view direction (ϕ, θ) that outputs RGB color c and density d. To calculate the color of each pixel (camera ray), a NeRF samples S points along the ray and aggregates color and density values of the sampled points using the volume rendering equation (Mildenhall et al., 2020). Since outputs of sampled points in a ray will be merged to get the color of a ray, we view a pack of points per ray as a single unit. It constructs an input coordinate tensor X ∈ R N ×S×5 , and latent features F ∈ R N ×S×C . Based on the proposed priority, CAM is applied for NeRFs according to the view directional coordinates of N ray units X (ϕ,θ) ∈ R N ×2 (Fig. 2-(b)), formally defined as follows:\nFn,s,c = γ n (X (ϕ,θ) ; Γ) F n,s,c -µ n (F ) σ 2 n (F ) + ϵ + β n (X (ϕ,θ) ; B),(4)\nµ n (F ) = 1 SC s,c F n,s,c , σ 2 n (F ) = 1 SC s,c (F n,s,c -µ n (F )) 2 , (5\n)\nwhere µ(•), σ 2 (•) : R N ×S×C → R N denote mean and variance functions, and µ n (F ) and σ 2 n (F ) represent the mean and variance for ray n when F is given. As mentioned in Sec. 3, we normalize over all dimensions except for the batch size. γ(•; Γ), β(•; B) : R N ×2 → R N are scale and shift functions, parameterized by two grid representations Γ, B ∈ R d ϕ ×d θ ; d ϕ and d θ are resolutions of azimuth ϕ and elevation θ dimension, respectively. Similar to µ n (F ) and σ 2 n (F ), the scalars γ n (X; Γ) and B n (X; B) denote the scale and shift value, respectively, for ray n.\nDynamic NeRFs build upon the static NeRFs concept by introducing the ability to model timevarying or dynamic scenes, representing 4D scenes that change over time (Pumarola et al., 2021). This is achieved by adding a time coordinate t to the input of the NeRFs. Therefore, the overall process for CAM follows as in Eq. 4, except that the modulation parameters are obtained corresponding to time coordinates X (t) ∈ R N ×1 , from two 1-dimensional grids Γ, B ∈ R dt (d t is the resolution of the temporal dimension)." }, { "figure_ref": [], "heading": "VIDEO", "publication_ref": [ "b26" ], "table_ref": [], "text": "Videos can be represented as a function of temporal and spatial coordinates. However, this pixelwise neural representation demands significant computational resources and time, limiting its practical use (Chen et al., 2021a). To tackle the challenges associated with high computational costs and slow training/inference times, NeRV (Chen et al., 2021a) and its variations (Li et al., 2022b;Lee et al., 2022) adopted a frame-wise representation approach and use neural fields as a function of only the temporal coordinate t. This not only accelerated training and inference time but also improved compression and representation performance (Chen et al., 2021a). 
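Before the frame-wise video formulation is spelled out below, here is a hedged sketch of the ray-unit modulation written out in Eq. (4)-(5) above: each ray's S x C feature block is standardized over its sample and channel dimensions and then scaled and shifted by scalars bilinearly interpolated from the azimuth/elevation grids Gamma and B. Tensor names, the eps value, and the assumption that view directions have already been mapped to [-1, 1] are illustrative.

```python
import torch
import torch.nn.functional as F

def cam_ray_modulation(feat, view_dirs, scale_grid, shift_grid, eps=1e-5):
    """Ray-unit modulation sketched from Eq. (4)-(5).

    feat:       (N, S, C) features of S samples along each of N rays
    view_dirs:  (N, 2) azimuth/elevation per ray, pre-normalized to [-1, 1]
    scale_grid, shift_grid: (1, 1, d_phi, d_theta) single-channel grids
    """
    # normalize each ray over its sample and channel dimensions (Eq. (5))
    mu = feat.mean(dim=(1, 2), keepdim=True)
    var = feat.var(dim=(1, 2), keepdim=True, unbiased=False)
    feat_hat = (feat - mu) / torch.sqrt(var + eps)

    # one scale and one shift scalar per ray, bilinearly read from the grids
    g = view_dirs.view(1, 1, -1, 2)
    gamma = F.grid_sample(scale_grid, g, align_corners=True).view(-1, 1, 1)  # (N, 1, 1)
    beta = F.grid_sample(shift_grid, g, align_corners=True).view(-1, 1, 1)
    return gamma * feat_hat + beta
```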
These frame-wise video representation models leverage convolutional layers to generate a video frame per temporal coordinate t. More precisely, an input coordinate tensor X ∈ R N, C, H and W denote the number of frames or batch size, the number of channels, the feature's height and width, respectively. Then, we can define CAM as follows:\nFn,c,h,w = γ n,c (X; Γ) F n,c,h,w -µ n,c (F ) σ 2 n,c (F ) + ϵ + β n,c (X; B),(6)\nµ n,c (F ) = 1 HW h,w F n,c,h,w , σ 2 n,c (F ) = 1 HW h,w (F n,c,h,w -µ n,c (F )) 2 , (7\n)\nwhere µ(•), σ 2 (•) : R N ×C×H×W → R N ×C denote mean and variance functions. The reason for not normalizing over every dimension except the batch dimension is to keep the computational costs affordable (see App. " }, { "figure_ref": [ "fig_0", "fig_5" ], "heading": "EXPERIMENTS", "publication_ref": [ "b20", "b40", "b46", "b24" ], "table_ref": [], "text": "We initially assessed the effectiveness of CAM in terms of mitigating spectral bias. Then, we evaluated our proposed method on various signal representation tasks, including image, video, 3D scene, and 3D video representations. Finally, we delved into the reasons behind its superior performance, conducting comprehensive analyses. All baseline models were implemented under their original configurations, and CAM was applied in a plug-and-play manner. CAM includes feature normalization throughout the experiments, except for efficient NeRFs (e.g., NerfAcc (Li et al., 2022a)), where we found that the normalization is ineffective for pre-sampled inputs. We provide implementation details for each task in App. A. Novel view synthesis on static scene. We first present the superiority of CAM over representations based on an MLP or grid with a small MLP using the NeRF synthetic dataset. As the baseline models, we adopted NerfAcc (Li et al., 2022a) and K-planes (Fridovich-Keil et al., 2023) for MLPand grid-based representations (Fig. 1-(a),(b)), respectively. We modulate the intermediate features of NerfAcc, utilizing modulation parameters represented by tri-plane factorized grids with a singular channel. For a fair comparison with K-planes, here we refrained from implementing our proposed priority and used spatial coordinates to represent modulation parameters. As shown in Tab. 2, CAM outperforms other baselines, resulting in the best visual quality with compactness and comparable training duration, validating its efficiency.\nWe also evaluated with more powerful baseline models, Mip-NeRF and Mip-NeRF 360. Tab. 3, 4 show the qualitative results for the NeRF synthetic, NSVF, LLFF, and real 360 datasets. Throughout all the datasets, CAM showcases significant improvement in PSNR, with a negligible increase Preprint in the number of parameters. Especially for the 360 dataset, CAM achieves state-of-the-art performance. We also tested on lower bit precision; we quantized every weight parameter including Γ and B. As Tab. 3 shows, CAM exhibits robustness to lower bit precision and remains effective. Furthermore, the CAM-applied 8-bit model consistently outperforms the 32-bit original Mip-NeRF. Consequently, CAM achieves state-of-the-art performance under a 1MB memory budget on NeRF synthetic dataset, as shown in Fig. 5. As the qualitative results using Lego (Fig. 4-(b)) shows, the baseline performs poor reconstruction containing an incorrectly illuminated area while the CAMapplied model reconstructs accurately. This indicates that modulation according to view directions results in robustness for representing view-dependent components. Dynamic scene. 
We used the D-NeRF dataset (Pumarola et al., 2021) to evaluate CAM for novel view synthesis under dynamic scenes, as shown in Tab. 5. CAM is applied on NerfAcc (Li et al., 2022a) for T-NeRF (a variant of D-NeRF). CAM sets a new benchmark, outperforming the previous state-of-the-art by more than 1 PSNR, even while using the least parameters. Furthermore, our model is time-efficient, needing only an hour for training, thanks to its foundation on Nerfacc that boasts rapid processing due to efficient sampling. Video. In Fig. 4-(b), the qualitative results for video representation highlight the enhanced visual quality achieved by CAM. We offer detailed results of video representation performance in Appendix C.1, and here, we focus on showcasing video compression performance, a central and practical task for videos. Fig. 6 visualize the rate-distortion for video compression. In the range from low to high BPP, CAM improves compression performance compared to the baseline FFNeRV by a significant margin. It achieves comparable performance with HM, the reference software of HEVC (Sullivan et al., 2012). Distinct from HEVC, a commercial codec designed under the consideration of time efficiency, HM shows significantly high performance under heavy computations. HM has a decoding rate of around 10 fps using a CPU (Hu et al., 2023), while our model is built on FFNeRV (Lee et al., 2022), a neural representation capable of fast decoding, allowing for real-time processing with a GPU (around 45 fps at 0.1 BPP). To our knowledge, our compression performance is state-of-the-art among methods that have the capability for real-time decoding." }, { "figure_ref": [ "fig_6", "fig_6" ], "heading": "ANALYSIS AND ABLATION STUDIES ON CAM", "publication_ref": [ "b25" ], "table_ref": [], "text": "Motivation. We analyzed the intermediate feature distribution in the image generalization task, where the features can be visualized straightforwardly, as depicted in Fig. 7-(a). CAM shows a high variance of pixel-wise features while improving the visual quality. This observation underscores the idea that the representation power can be boosted when the features of different coordinates become more distinct from each other. CAM is a strategic approach to achieve this, while it maintains compactness by representing only modulating scalar factors into grids.\nMitigating spectral bias. We visualized the error map in the frequency domain (Fig. 7-(b)) to validate that CAM is actually capable of representing high-frequency components. CAM reduces the errors in high frequency noticeably with only negligible grid parameters (263K for the MLP vs. 3K for the grid in Tab. 1), indicating its effective mitigation of the MLP's spectral bias. Effect of feature normalization. As shown in Tab. 7, normalization with CAM consistently enhances the performance for diverse tasks, while naively applying normalization typically degrades performance. In addition, CAM allows one of the known advantages of normalization, decreasing the magnitude of gradients and improving convergence speed (Ioffe & Szegedy, 2015), further discussed in App. C.2. " }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "We have proposed a Coordinate-Aware Modulation (CAM), a novel combination of neural networks and grid representations for neural fields. CAM modulates the intermediate features of neural networks with scale and shift parameters, which are represented in the grids. 
This can exploit the strengths of the MLP while mitigating any architectural biases, resulting in the effective learning of high-frequency components. In addition, we empirically found that the feature normalizations, previously unsuccessful in neural field literature, are notably effective when integrated with CAM.\nExtensive experiments have demonstrated that CAM improves the performance of neural representations and enhances learning stability across a wide range of data modalities and tasks. We renew state-of-the-art performance across various tasks while maintaining compactness. We believe it opens up new opportunities for designing and developing neural fields in many other valuable applications.\ni=1 sin(2πk i x + ϕ i ), where k i ∈ {5, 10, ..., 50}, ϕ i ∼ U (0, 2π) and we uniformly sampled x in the range of [0, 1]. The learning rate was set to 10 -3 and we trained for 1500 iterations using the Adam optimizer. We used a 4-layer MLP with 64 channels as the baseline and also set the grid resolution of 64. When applying PE, we enlarged the single-channel coordinate to 32 channels, and we concatenated the original and enlarged inputs." }, { "figure_ref": [ "fig_6" ], "heading": "A.3 IMAGE", "publication_ref": [ "b48", "b48", "b37" ], "table_ref": [], "text": "For the 2D image representation task, we used Natural and Text image datasets (Tancik et al., 2020), which include 512 × 512 images, respectively. The resolution of the grids (d x and d y ) was set to 32 × 32. Using two subtasks, we assessed the ability to regress the training data points and to generalize well on unseen data points. The first subtask is to accurately represent a target image at a resolution of 512 × 512, using the same image for training, and it aims to measure the ability for fitting signals. Another subtask trains neural fields using a smaller image with a resolution of 256 × 256, but evaluates using the original image with a resolution of 512 × 512.\nWe used FFN (Tancik et al., 2020) as the baseline model, which was originally developed in Jax a few years back. Due to its older environment, we opted for a Pytorch implementation to simplify the experimental process. We constructed a baseline model following the original paper (MLP with 4 layers, 256 hidden channels, ReLU activation, and sigmoid output). Each model was trained for 2000 iterations using the Adam optimizer. The learning rate was initially set to 10 -3 and 10 -2 for neural networks and grids, respectively, multiplied by 0.1 at 1000 and 1500 iterations. The manually tuned parameters for each dataset in FFN were also used in this experiment, where the gaussian scale factor was set to 10 and 14 for Natural and Text, respectively.\nFor I-NGP Müller et al. (2022), we used hash grids with 2-channel features across 16 different resolutions (16 to 256) and a following 2-layer 64-channel MLP. The maximum hash map size was set to 2 15 .\nThe variance in Fig. 7(a) denotes the mean of the variance of all pixels at the same channel. Formally, the variance v of H × W pixel-wise C-channel features X ∈ R C×H×W can be expressed as, v = 1 C C ch=1 var (H,W ) (X ch ), where var (H,W ) (•) : R H×W → R computes the variance of H × W values and X ch ∈ R H×W is features at the channel ch." 
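The variance v defined in the sentence above (the channel-averaged spatial variance of the pixel-wise features, reported alongside Fig. 7-(a)) reduces to a few lines of code. The sketch below assumes a biased variance estimator and a (C, H, W) feature layout; both are choices made for illustration rather than details stated in the text.

```python
import torch

def pixelwise_feature_variance(x):
    """Channel-averaged spatial variance, v = (1/C) * sum_ch Var(X_ch).

    x: (C, H, W) pixel-wise features of one layer.
    """
    C = x.shape[0]
    per_channel_var = x.reshape(C, -1).var(dim=1, unbiased=False)  # Var over H*W values
    return per_channel_var.mean()
```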
}, { "figure_ref": [], "heading": "A.4 NOVEL VIEW SYNTHESIS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Static scene.", "publication_ref": [ "b35", "b29", "b34", "b1", "b0", "b1", "b43", "b35", "b43", "b37", "b2", "b40", "b40", "b40", "b17", "b20" ], "table_ref": [], "text": "We used synthetic (NeRF (Mildenhall et al., 2020), NSVF (Liu et al., 2020)), forward-facing (LLFF (Mildenhall et al., 2019)), and real-world unbounded (360 (Barron et al., 2022)) datasets for evaluating novel view synthesis performance. As a baseline model, we used Mip-NeRF (Barron et al., 2021) for single-scale scenes, except for 360 dataset, where we used Mip-NeRF 360 (Barron et al., 2022). We implemented CAM based on Mip-NeRF and Mip-NeRF 360 official codes in the Jax framework. While following all the original configurations, we incorporated CAM into every MLP linear layer until the view direction coordinates were directly inputted. For the scale and shift grids (Γ, B), the values of d θ and d ϕ were set to 4 and 3 for forward-facing scenes, and 10 and 3 for other scenes, respectively. For quantization, we applied layer-wise min-max quantizationaware training (QAT), as in Rho et al. (2023). We compared our method with NeRF (Mildenhall et al., 2020), TensoRF (Chen et al., 2022a), Rho et al. (2023) for NeRF synthetic, NSVF, and LLFF datasets in Tab. 3, and with I-NGP (Müller et al., 2022) and Zip-NeRF (Barron et al., 2023) for 360 dataset in Tab 4.\nDynamic scene. We used the D-NeRF dataset (Pumarola et al., 2021) to evaluate CAM for novel view synthesis under dynamic scenes. CAM was implemented on NerfAcc (Li et al., 2022a) with the grid resolution d t of 10. NerfAcc for dynamic scene was originally based on T-NeRF (Pumarola et al., 2021), which deploys deformation network and canonical network. We incorporated CAM into every linear layer in the canonical network, until the view direction coordinates were directly inputted. We compared our approach with the baseline NerfAcc and recent state-of-the-art algorithms for dynamic NeRF (D-NeRF (Pumarola et al., 2021), TiNeuVox (Fang et al., 2022), and K-planes. (Fridovich-Keil et al., 2023))." }, { "figure_ref": [], "heading": "A.5 VIDEO", "publication_ref": [ "b32", "b26", "b26" ], "table_ref": [], "text": "Video representation. To measure the video representation performance of neural fields, we used the UVG dataset (Mercat et al., 2020), which is one of the most popular datasets in neural field-based video representation. The UVG dataset contains seven videos with a resolution of 1920 × 1080. Among video representing neural fields (Chen et al., 2021a;Li et al., 2022b;Lee et al., 2022), we used FFNeRV (Lee et al., 2022) as our baseline model because of its compactness and representation performance. We implemented CAM based on FFNeRV official codes in the Pytorch framework. To ensure consistency, we maintained all the original configurations including QAT, with the exception of applying CAM between the convolutional and activation layers of each FFNeRV convolution block. In regards to the scale and shift grids (Γ, B), we set d T to 60 for both the 32-bit and 8-bit models, and 30 for the 6-bit model." }, { "figure_ref": [], "heading": "Compression comparison.", "publication_ref": [ "b50" ], "table_ref": [], "text": "For video compression results, we followed the compression pipeline used in FFNeRV, which includes QAT, optional weight pruning, and entropy coding. 
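The static-scene details above mention layer-wise min-max quantization-aware training (QAT), and the compression pipeline just described starts from the same QAT step. The snippet below sketches one common way to realize min-max fake quantization with a straight-through estimator; the exact scheme used in the paper (symmetric vs. asymmetric ranges, how layer statistics are updated, and so on) may differ, so treat the function as an assumption-laden illustration rather than the reference implementation.

```python
import torch

def fake_quantize_minmax(w, num_bits=8):
    """Layer-wise min-max fake quantization for quantization-aware training.

    Weights are linearly mapped to integers in [0, 2^b - 1] using the layer's
    min/max, rounded, and mapped back; the straight-through estimator keeps
    gradients flowing through the rounding step.
    """
    qmax = 2 ** num_bits - 1
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min).clamp(min=1e-8) / qmax
    q = torch.round((w - w_min) / scale).clamp(0, qmax)
    w_q = q * scale + w_min
    # straight-through estimator: forward uses w_q, backward behaves like identity
    return w + (w_q - w).detach()
```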
Although FFNeRV quantized to 8-bit width for model compression, we further lowered the bit width to 6bit, except for the last head layer. This was done because CAM exhibits robust performance even with 6-bit, where the baseline FFNeRV shows poor performance, as shown in Tab. 10. Fig. 6 of the main paper depicts the rate-distortion performance of our approach compared with widely-used video codecs (H.264 (Wiegand et al., 2003) . This result demonstrates that extended representational capacity in the temporal dimension due to grids surely improves performance in representing time-varying information. In addition, the performance gap between video representations with and without CAM widened as the bit precision decreased (from 0.25 to 0.60). These results imply that our method can be useful for neural fields designed for storage-constrained situations." }, { "figure_ref": [ "fig_6" ], "heading": "C.2 EFFECT OF NORMALIZATION", "publication_ref": [ "b25" ], "table_ref": [], "text": "In addition to the result in Fig. 7, we analyze the actual benefits of CAM using Mip-NeRF. One of the known advantages of normalization is that it decreases the magnitude of gradients and prevents them from diverging, which allows the use of a higher learning rate and improved convergence speed (Ioffe & Szegedy, 2015). As shown by the decreased level of gradients in We report the inference speed and GPU memory requirements of the models in Tab. 2, evaluated on the 'Mic' scene. As shown in Tab. 11, K-Planes requires small memory while showing slow inference. CAM reduces the original NerfAcc's speed when testing chunk size is small. However, increasing the testing chunk size reduces the speed gap between using CAM and not using it. Intriguingly, CAM even lowers memory usage under these conditions. We interpret that CAM facilitates a more effectively trained occupancy grid and helps bypass volume sampling, offsetting the additional computational demands introduced by CAM itself.\nC.4 PER-SCENE RESULTS.\nWe evaluated the performance on various datasets for novel view synthesis. We provide per-scene results for NeRF synthetic (Tab. 12), NSVF synthetic(Tab. 13), and LLFF (Tab. 14), 360 (Tab. 15), and D-NeRF (Tab. 16) datasets. " }, { "figure_ref": [], "heading": "APPENDIX A IMPLEMENTATION DETAILS", "publication_ref": [], "table_ref": [], "text": "In this section, we provide a brief explanation of the functional form of grids and specify our implementation details for diverse tasks.\nA.1 FUNCTION OF Γ AND B Γ and B are grid structures to represent the scale and shift factors, where each grid is trained as a function of coordinates with infinite resolution, outputting coordinate-corresponding components. The output in infinite resolution is aggregated by nearby features in the grid based on the distance between the coordinates of the input and neighboring features." }, { "figure_ref": [], "heading": "A.2 1D SIGNAL", "publication_ref": [ "b41", "b13" ], "table_ref": [], "text": "We conducted the experiment for regression 1D periodic function, following the previous works (Rahaman et al., 2019;Cho et al., 2022). We constructed the target function f (x) =" } ]
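The appendix text above ends mid-expression ("We constructed the target function f (x) ="); combined with the fragment of the same setup that appears earlier in this record (k_i in {5, 10, ..., 50}, phi_i drawn from U(0, 2*pi), and x sampled uniformly in [0, 1]), the 1D regression target can be reproduced as sketched below. Whether x is an evenly spaced grid or a set of uniform random samples is not specified, so the evenly spaced grid used here is an assumption.

```python
import numpy as np

def target_1d_signal(num_points=1024, seed=0):
    """1D regression target: f(x) = sum_i sin(2*pi*k_i*x + phi_i)."""
    rng = np.random.default_rng(seed)
    ks = np.arange(5, 55, 5)                         # k_i in {5, 10, ..., 50}
    phis = rng.uniform(0.0, 2.0 * np.pi, size=ks.shape)
    x = np.linspace(0.0, 1.0, num_points)            # x in [0, 1]
    f = np.sin(2.0 * np.pi * np.outer(x, ks) + phis).sum(axis=1)
    return x, f
```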
Neural fields, which map low-dimensional input coordinates to corresponding signals, have shown promising results in representing various signals. Numerous methodologies have been proposed, and techniques employing MLPs and grid representations have achieved substantial success. MLPs allow compact representations with high expressibility, yet often suffer from spectral bias and slow convergence. On the other hand, methods using grids are free from spectral bias and achieve fast training, albeit at the expense of high spatial complexity. In this work, we propose a novel way of exploiting both MLPs and grid representations in neural fields. Unlike the prevalent methods that combine them sequentially (extract features from the grids first and feed them to the MLP), we inject spectral bias-free grid representations into the intermediate features of the MLP. More specifically, we suggest a Coordinate-Aware Modulation (CAM), which modulates the intermediate features using scale and shift parameters extracted from the grid representations. This maintains the strengths of MLPs while mitigating any remaining potential biases, facilitating the rapid learning of high-frequency components. In addition, we empirically found that feature normalization, which has not previously been successful in the neural field literature, proves effective when applied in conjunction with the proposed CAM. Experimental results demonstrate that CAM enhances the performance of neural representations and improves learning stability across a range of signals. In the novel view synthesis task in particular, we achieve state-of-the-art performance with the fewest parameters and fast training for dynamic scenes, and the best performance under a 1MB memory budget for static scenes. CAM also outperforms the best-performing video compression methods based on neural fields by a large margin.
COORDINATE-AWARE MODULATION FOR NEURAL FIELDS
[ { "figure_caption": "Figure 1 :1Figure 1: Feature representations based on the (a) MLP, (b) Grid → MLP, (c) CAM. The dot in CAM means a Hadamard product.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Visualization of CAM on different domains.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "an intermediate feature tensor and the modulated output feature (N : batch size, C: channel size), and n, c denote the batch and channel index of the feature, respectively. γ(•; Γ), β(•; B) : R N ×D → R N are the scale and shift function of input coordinates X ∈ R N ×D (D: input coordinate dimension), outputting scalar values from the single-channel grids Γ, B given each coordinate. γ n (•; Γ), β n (•; B) denote each scale and shift factor for batch n.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "B). Motivated byUlyanov et al. (2016), we exclude the channel dimension, and represent channel-wise modulation parameters by scale and shift functions γ(•; Γ), β(•; B) : R N ×1 → R N ×C . The grids for scales and shifts are denoted by Γ and B, where Γ and B are of size R dt×C , respectively. Here, d t represents the grid resolution in the time dimension, and C represents the channel size which is the same as the channel size in the feature tensor F . Fig.2-(c) illustrates how CAM works in frame-wise video representation neural fields.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Figure 4 :34Figure 3: Performance on 1D signal regression. The yellow dotted line represents GT.", "figure_data": "", "figure_id": "fig_4", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The rate-distortion curve evaluated on NeRF synthetic dataset.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: (a) Visualization of the pixel-wise distribution before and after applying CAM on the final feature. The same color indicates the same distribution (mean and variance). We provide variance between pixels of each feature (described in App. A) and output PSNR. (b) Error map in the frequency domain: A more centralized pixel of the maps indicates an error in the lower frequency.Coordinate priority. As shown in Tab. 6, CAM with the highest-level coordinates based on the proposed priority achieves the optimal performance. CAM with spatial coordinates is effective for modalities with only spatial coordinates (images), as we have shown in Tab. 1. However, when the input modality becomes more complex in NeRFs and dynamic NeRFs, spatiality-aware modulation can be meaningless in spite of the requirement of large additional memory (even with the factorized grids). Furthermore, although using both time and view direction coordinates increases performance compared to the baseline in D-NeRF, a single prioritized component demonstrates the most efficient result.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Analysis on convergence using Lego scene. (a) Train PSNRs with different learning schedules, while quantization-aware trained to 8-bit. 
(b) Gradient norm of weights during training.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Neural fields, or implicit neural representations, use neural networks to represent signals based on coordinates. Recent studies on neural fields have shown promising results in a variety of vision tasks", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "N ×1 associated with N temporal coordinates is supplied to the neural network to generate intermediate feature tensors F ∈ R N ×C×H×W , where Preprint Performance evaluation for image regression and generalization measured in PSNR.", "figure_data": "Method #ParamsRegression Natural TextGeneralization Natural TextI-NGP237K32.9841.9426.1132.37FFN263K30.3034.4427.4830.04+ CAM266K32.21 (+1.91)50.17 (+15.73)28.19 (+0.71)33.09 (+3.05)", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Effectiveness in the NeRF task. * denotes the reported value in the original paper.", "figure_data": "Method#Params Time PSNRNerfAcc0.6M38 m31.55K-planes37M38* m 32.36CAM3.7M 13M51 m 54 m32.18 32.60", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Qualitative results evaluated on NeRFs. The sizes are measured in megabytes (MB).", "figure_data": "Bit MethodNeRF Synthetic NSVF SyntheticLLFFSizePSNRSize PSNRSize PSNRNeRF5.0031.015.0030.815.0026.5032TensoRF71.933.14≈ 7036.52179.7 26.73Mip-NeRF 2.3433.092.3435.832.3426.86+ CAM2.3433.422.3436.562.3427.17Rho et al.1.6932.241.8835.117.4926.648TensoRF16.932.7817.836.1144.726.66Mip-NeRF 0.5832.860.5835.520.5826.64+ CAM0.5833.270.5836.300.5826.88", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Performance evalua-tion on the 360 dataset, whichcomprises unbounded real scenes.Among 9 scenes, we evaluate 7publicly available scenes. CAMis applied on Mip-NeRF 360.Method#Params PSNRMip-NeRF0.6M25.12I-NGP84M27.06Zip-NeRF84M29.82Mip-NeRF 3609M29.11+ CAM9M29.98", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Performance evaluation of dynamic NeRFs.", "figure_data": "Method#Params PSNRD-NeRF1.1M29.67TiNeuVox12M32.67K-planes37M31.61NerfAcc0.6M32.22+ CAM0.6M33.78", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "", "figure_data": "BaselineCAM coord. #Params PSNRS D T0.6M32.22NerfAcc✓13.1M32.57(D-NeRF)✓0.6M32.44✓ ✓0.6M32.49✓0.6M33.78-0.6M33.09Mip-NeRF✓-13.1M32.70✓-0.6M33.42", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Ablation study on the feature normalization, evaluated on Natural images, Ready video, and Lego scene. 
CAM-N indicates CAM without normalization.", "figure_data": "TaskBase BNLNINCAM-N CAMImage 30.3 23.6 30.8-30.932.2Video 31.6 22.1-31.531.932.3NeRF 35.7 35.2 35.4-35.936.2", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Compression performance evaluated on UVG videos at various levels.", "figure_data": "Video (#frames)Beauty (600)Bospho (600)Honey (600)Ready (600)Jockey (600)Shake (300)Yacht (600)Avg.PSNR33.6534.5938.8933.8927.133.4328.7632.86BPP0.0144 0.01490.0148 0.0142 0.0145 0.0115 0.0148 0.0144PSNR34.2138.3939.5837.2931.6235.1832.5335.57BPP0.0454 0.04590.0442 0.0439 0.0448 0.0355 0.0455 0.0442PSNR34.5139.8739.7138.3233.6436.6534.3536.73BPP0.0752 0.07510.0728 0.0721 0.0735 0.0743 0.0748 0.0739PSNR34.7840.9139.8638.9235.2337.2435.8437.56BPP0.1122 0.11090.1087 0.1068 0.1089 0.0980 0.1108 0.1088PSNR35.0641.7140.0139.336.537.7137.1338.24BPP0.1563 0.1530 0.15114 0.1480 0.1508 0.1249 0.1535 0.1500", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Performance evaluation under different settings for frame-wise video representation. generally proposed to compute the mean and variance along with as many dimensions as possible excluding the batch dimension, and represent scalar features in grids Γ, B. However, we introduce some adaptations for 4D intermediate tensors in frame-wise video representation: excluding also the channel dimension and representing channel-wise modulation factors in the grids. This is because of heavy computation from the large normalization unit, which causes a dramatic increase in training time (about 50%), as shown in Tab. 9. When we exclude channel axis, representing channel-wise modulation factors shows better than representing scalar factors. It is worth noting that our general proposal achieves the best performance, highlighting the flexibility of CAM where we can trade performance and complexity.", "figure_data": "Norm Unit Γ, B shape PSNR Params (M) Time/Epoch (sec)(H, W )R dt×C32.2511.469.1(H, W )R dt31.9311.368.8(C, H, W )R dt×C32.3711.4102.8(C, H, W )R dt32.3911.3104.0B ADAPTATION FOR 4D TENSOR", "figure_id": "tab_12", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "PSNR on video representation. The leftmost column denotes the bit precision of neural networks. BPP denotes \"bits per pixel\". the video representation performance, measured in PSNR. The CAM-applied models consistently beat the baselines, regardless of videos. The performance gap is much wider for fastmoving videos (e.g., Ready and Jockey) than it is for static videos (e.g., Beauty and Honey)", "figure_data": "Bit MethodBeauty Bospho Honey JockeyReadyShakeYachtAvg.BPP32FFNeRV + CAM34.28 34.29 (+0.01)38.67 38.86 (+0.19)39.70 39.69 (-0.01)37.48 37.82 (+0.34)31.55 32.25 (+0.70)35.45 35.47 (+0.02)32.65 33.03 (+0.38)35.70 35.95 (+0.25)0.2870 0.28948FFNeRV + CAM34.21 34.27 (+0.06)38.41 38.82 (+0.41)39.60 39.67 (+0.07)37.29 37.63 (+0.34)31.48 32.12 (+0.64)35.26 35.39 (+0.13)32.48 32.90 (+0.42)35.55 35.86 (+0.31)0.0718 0.07236FFNeRV + CAM34.09 34.21 (+0.12)37.26 38.25 (+0.99)39.13 39.21 (+0.08)36.63 37.16 (+0.53)30.47 31.57 (+1.10)34.54 35.02 (+0.48)31.65 32.50 (+0.85)34.85 35.45 (+0.60)0.0538 0.0540C.1 VIDEO REPRESENTATIONTab. 10 shows", "figure_id": "tab_13", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Inference speed and GPU memory requirement of CAM compared to MLP-based and grid-based methods, using Mic scene. Method Test chunk PSNR Inf. FPS Inf. 
Mem.", "figure_data": "K-Planes-34.100.253.8 GBNerfacc +CAM102433.77 36.030.51 0.264.4 GB 4.7 GBNerfacc +CAM409633.77 36.031.19 0.6710.5 GB 8.8 GBNerfacc +CAM819233.77 36.031.45 1.0119.5 GB 16.4 GBC.3 INFERENCE SPEED AND MEMORY", "figure_id": "tab_14", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Per-scene performance on the NeRF synthetic dataset measured in PSNR.", "figure_data": "BitMethodChair Drums Ficus Hotdog Lego MaterialsMicShipAvg.32Mip-NeRF 35.14 25.48 33.29 + CAM 35.24 25.74 34.0737.48 37.8935.70 36.2430.71 31.4836.51 30.41 33.09 36.04 30.64 33.428Mip-NeRF 34.68 25.48 33.20 + CAM 34.98 25.80 33.7737.28 37.7735.29 35.9530.52 31.4836.18 30.28 32.86 35.96 30.47 33.27", "figure_id": "tab_15", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Per-scene performance on the LLFF dataset measured in PSNR.", "figure_data": "BitMethodFern Flower Fortress Horns Leaves Orchids Room TrexAvg.32Mip-NeRF 24.97 27.83 + CAM 25.06 28.3931.73 31.7328.01 28.7621.00 21.4020.07 20.4033.22 28.02 26.86 33.40 28.22 27.178Mip-NeRF 24.95 27.56 + CAM 25.06 27.7231.27 31.4527.66 28.1820.88 21.2720.07 20.3732.97 27.73 26.64 33.13 27.88 26.88", "figure_id": "tab_16", "figure_label": "14", "figure_type": "table" }, { "figure_caption": "Per-scene performance on the 360 dataset measured in PSNR.", "figure_data": "MethodBicycle Bonsai Counter Garden Kitchen Room Stump Avg.Mip-NeRF 36024.3733.4629.5526.9832.2331.63 26.40 29.23+ CAM24.3035.4430.6226.9933.6032.91 26.03 29.98", "figure_id": "tab_17", "figure_label": "15", "figure_type": "table" }, { "figure_caption": "Per-scene performance on the D-NeRF dataset measured in PSNR.", "figure_data": "Method BallsHell Hook Jacks Lego Mutant Standup TrexAvg.NerfAcc 39.49 25.58 31.86 32.73 24.3235.5535.9032.33 32.22+ CAM 41.52 27.86 33.20 33.89 25.0936.2937.5734.81 33.78", "figure_id": "tab_18", "figure_label": "16", "figure_type": "table" } ]
Joo Chan Lee; Daniel Rho; Seungtae Nam; Jong Hwan Ko; Eunbyung Park
[ { "authors": "Jonathan T Barron; Ben Mildenhall; Matthew Tancik; Peter Hedman; Ricardo Martin-Brualla; Pratul P Srinivasan", "journal": "", "ref_id": "b0", "title": "Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields", "year": "2021" }, { "authors": "Jonathan T Barron; Ben Mildenhall; Dor Verbin; P Pratul; Peter Srinivasan; Hedman", "journal": "", "ref_id": "b1", "title": "Mip-nerf 360: Unbounded anti-aliased neural radiance fields", "year": "2022" }, { "authors": "Jonathan T Barron; Ben Mildenhall; Dor Verbin; Peter Pratul P Srinivasan; Hedman", "journal": "", "ref_id": "b2", "title": "Zip-nerf: Anti-aliased grid-based neural radiance fields", "year": "2023" }, { "authors": "Nils Bjorck; Carla P Gomes; Bart Selman; Kilian Q Weinberger", "journal": "", "ref_id": "b3", "title": "Understanding batch normalization", "year": "2018" }, { "authors": "Rohan Chabra; Jan E Lenssen; Eddy Ilg; Tanner Schmidt; Julian Straub; Steven Lovegrove; Richard Newcombe", "journal": "", "ref_id": "b4", "title": "Deep local shapes: Learning local sdf priors for detailed 3d reconstruction", "year": "2020" }, { "authors": "Marco Eric R Chan; Petr Monteiro; Jiajun Kellnhofer; Gordon Wu; Wetzstein", "journal": "", "ref_id": "b5", "title": "pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis", "year": "2021" }, { "authors": "Eric R Chan; Connor Z Lin; Matthew A Chan; Koki Nagano; Boxiao Pan; Shalini De Mello; Orazio Gallo; Leonidas J Guibas; Jonathan Tremblay; Sameh Khamis; Tero Karras; Gordon Wetzstein", "journal": "", "ref_id": "b6", "title": "Efficient geometry-aware 3d generative adversarial networks", "year": "2022" }, { "authors": "Anpei Chen; Zexiang Xu; Andreas Geiger; Jingyi Yu; Hao Su", "journal": "", "ref_id": "b7", "title": "Tensorf: Tensorial radiance fields", "year": "2022" }, { "authors": "Bo Hao Chen; Hanyu He; Yixuan Wang; Ren; Nam Ser; Abhinav Lim; Shrivastava", "journal": "", "ref_id": "b8", "title": "Nerv: Neural representations for videos", "year": "2021" }, { "authors": "Matthew Hao Chen; Ser-Nam Gwilliam; Abhinav Lim; Shrivastava", "journal": "", "ref_id": "b9", "title": "Hnerv: A hybrid neural representation for videos", "year": "2023" }, { "authors": "Ting Chen; Mario Lucic; Neil Houlsby; Sylvain Gelly", "journal": "", "ref_id": "b10", "title": "On self modulation for generative adversarial networks", "year": "2019" }, { "authors": "Yinbo Chen; Sifei Liu; Xiaolong Wang", "journal": "", "ref_id": "b11", "title": "Learning continuous image representation with local implicit image function", "year": "2021" }, { "authors": "Zeyuan Chen; Yinbo Chen; Jingwen Liu; Xingqian Xu; Vidit Goel; Zhangyang Wang; Humphrey Shi; Xiaolong Wang", "journal": "", "ref_id": "b12", "title": "Videoinr: Learning video implicit neural representation for continuous space-time super-resolution", "year": "2022" }, { "authors": "Junwoo Cho; Seungtae Nam; Daniel Rho; Jong Hwan Ko; Eunbyung Park", "journal": "", "ref_id": "b13", "title": "Streamable neural fields", "year": "2022" }, { "authors": "Yu Deng; Jiaolong Yang; Jianfeng Xiang; Xin Tong", "journal": "", "ref_id": "b14", "title": "Gram: Generative radiance manifolds for 3d-aware image generation", "year": "2022" }, { "authors": "Emilien Dupont; Adam Goliński; Milad Alizadeh; Yee Whye Teh; Arnaud Doucet", "journal": "", "ref_id": "b15", "title": "Coin: Compression with implicit neural representations", "year": "2021" }, { "authors": "Emilien Dupont; Hrushikesh Loya; Milad Alizadeh; Adam Golinski; Yee Whye 
Teh; Arnaud Doucet", "journal": "Transactions on Machine Learning Research", "ref_id": "b16", "title": "COIN++: Neural compression across modalities", "year": "2022" }, { "authors": "Jiemin Fang; Taoran Yi; Xinggang Wang; Lingxi Xie; Xiaopeng Zhang; Wenyu Liu; Matthias Nießner; Qi Tian", "journal": "", "ref_id": "b17", "title": "Fast dynamic radiance fields with time-aware neural voxels", "year": "2022" }, { "authors": "Rizal Fathony; Anit Kumar Sahu; Devin Willmott; J Zico Kolter", "journal": "", "ref_id": "b18", "title": "Multiplicative filter networks", "year": "2021" }, { "authors": "Sara Fridovich-Keil; Alex Yu; Matthew Tancik; Qinhong Chen; Benjamin Recht; Angjoo Kanazawa", "journal": "", "ref_id": "b19", "title": "Plenoxels: Radiance fields without neural networks", "year": "2022" }, { "authors": "Sara Fridovich-Keil; Giacomo Meanti; Frederik Rahbaek Warburg; Benjamin Recht; Angjoo Kanazawa", "journal": "", "ref_id": "b20", "title": "K-planes: Explicit radiance fields in space, time, and appearance", "year": "2023" }, { "authors": "Golnaz Ghiasi; Honglak Lee; Manjunath Kudlur; Jonathon Vincent Dumoulin; Shlens", "journal": "", "ref_id": "b21", "title": "Exploring the structure of a real-time, arbitrary neural artistic stylization network", "year": "2017" }, { "authors": "Jiatao Gu; Lingjie Liu; Peng Wang; Christian Theobalt", "journal": "", "ref_id": "b22", "title": "Stylenerf: A style-based 3d aware generator for high-resolution image synthesis", "year": "" }, { "authors": "Jingwen He; Chao Dong; Yu Qiao", "journal": "", "ref_id": "b23", "title": "Modulating image restoration with continual levels via adaptive feature modification layers", "year": "2019-06" }, { "authors": "Zhihao Hu; Dong Xu; Guo Lu; Wei Jiang; Wei Wang; Shan Liu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b24", "title": "Fvc: An end-to-end framework towards deep video compression in feature space", "year": "2023" }, { "authors": "Sergey Ioffe; Christian Szegedy", "journal": "", "ref_id": "b25", "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "year": "2015" }, { "authors": "Chan Joo; Daniel Lee; Jong Hwan Rho; Eunbyung Ko; Park", "journal": "", "ref_id": "b26", "title": "Ffnerv: Flow-guided frame-wise neural representations for videos", "year": "2022" }, { "authors": "Ruilong Li; Matthew Tancik; Angjoo Kanazawa", "journal": "", "ref_id": "b27", "title": "Nerfacc: A general nerf acceleration toolbox", "year": "2022" }, { "authors": "Zizhang Li; Mengmeng Wang; Huaijin Pi; Kechun Xu; Jianbiao Mei; Yong Liu", "journal": "", "ref_id": "b28", "title": "E-nerv: Expedite neural video representation with disentangled spatial-temporal context", "year": "2022" }, { "authors": "Lingjie Liu; Jiatao Gu; Kyaw Zaw Lin; Tat-Seng Chua; Christian Theobalt", "journal": "", "ref_id": "b29", "title": "Neural sparse voxel fields", "year": "2020" }, { "authors": "N P Julien; David B Martel; Connor Z Lindell; Eric R Lin; Marco Chan; Gordon Monteiro; Wetzstein", "journal": "ACM Transactions on Graphics", "ref_id": "b30", "title": "Acorn: adaptive coordinate networks for neural scene representation", "year": "2021" }, { "authors": "Ishit Mehta; Michaël Gharbi; Connelly Barnes; Eli Shechtman; Ravi Ramamoorthi; Manmohan Chandraker", "journal": "", "ref_id": "b31", "title": "Modulated periodic activations for generalizable local functional representations", "year": "2021" }, { "authors": "Alexandre Mercat; Marko Viitanen; Jarno Vanne", 
"journal": "", "ref_id": "b32", "title": "Uvg dataset: 50/120fps 4k sequences for video codec analysis and development", "year": "2020" }, { "authors": "Lars Mescheder; Michael Oechsle; Michael Niemeyer; Sebastian Nowozin; Andreas Geiger", "journal": "", "ref_id": "b33", "title": "Occupancy networks: Learning 3d reconstruction in function space", "year": "2019" }, { "authors": "Ben Mildenhall; P Pratul; Rodrigo Srinivasan; Nima Ortiz-Cayon; Ravi Khademi Kalantari; Ren Ramamoorthi; Abhishek Ng; Kar", "journal": "ACM Transactions on Graphics", "ref_id": "b34", "title": "Local light field fusion: Practical view synthesis with prescriptive sampling guidelines", "year": "2019" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "", "ref_id": "b35", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Sreyas Mohan; Joshua L Vincent; Ramon Manzorro; Peter Crozier; Carlos Fernandez-Granda; Eero Simoncelli", "journal": "Advances in neural information processing systems", "ref_id": "b36", "title": "Adaptive denoising via gaintuning", "year": "2021" }, { "authors": "Thomas Müller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "ACM Trans. Graph", "ref_id": "b37", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2022" }, { "authors": "Jeong Joon Park; Peter Florence; Julian Straub; Richard Newcombe; Steven Lovegrove", "journal": "", "ref_id": "b38", "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "year": "2019" }, { "authors": "Ethan Perez; Florian Strub; Harm De Vries; Vincent Dumoulin; Aaron Courville", "journal": "", "ref_id": "b39", "title": "Film: Visual reasoning with a general conditioning layer", "year": "2018" }, { "authors": "Albert Pumarola; Enric Corona; Gerard Pons-Moll; Francesc Moreno-Noguer", "journal": "", "ref_id": "b40", "title": "D-nerf: Neural radiance fields for dynamic scenes", "year": "2021" }, { "authors": "Aristide Nasim Rahaman; Devansh Baratin; Felix Arpit; Min Draxler; Fred Lin; Yoshua Hamprecht; Aaron Bengio; Courville", "journal": "", "ref_id": "b41", "title": "On the spectral bias of neural networks", "year": "2019" }, { "authors": "Daniel Rho; Junwoo Cho; Jong Hwan Ko; Eunbyung Park", "journal": "", "ref_id": "b42", "title": "Neural residual flow fields for efficient video representations", "year": "2022" }, { "authors": "Daniel Rho; Byeonghyeon Lee; Seungtae Nam; Chan Joo; Jong Hwan Lee; Eunbyung Ko; Park", "journal": "", "ref_id": "b43", "title": "Masked wavelet representation for compact neural radiance fields", "year": "2023" }, { "authors": "Katja Schwarz; Yiyi Liao; Michael Niemeyer; Andreas Geiger", "journal": "", "ref_id": "b44", "title": "Graf: Generative radiance fields for 3d-aware image synthesis", "year": "2020" }, { "authors": "Julien Vincent Sitzmann; Alexander Martel; David Bergman; Gordon Lindell; Wetzstein", "journal": "", "ref_id": "b45", "title": "Implicit neural representations with periodic activation functions", "year": "2020" }, { "authors": "Jens-Rainer Gary J Sullivan; Woo-Jin Ohm; Thomas Han; Wiegand", "journal": "IEEE Transactions on circuits and systems for video technology", "ref_id": "b46", "title": "Overview of the high efficiency video coding (hevc) standard", "year": "2012" }, { "authors": "Towaki Takikawa; Alex Evans; Jonathan Tremblay; Thomas Müller; Morgan Mcguire; Alec Jacobson; Sanja Fidler", 
"journal": "", "ref_id": "b47", "title": "Variable bitrate neural fields", "year": "2022" }, { "authors": "Matthew Tancik; Pratul Srinivasan; Ben Mildenhall; Sara Fridovich-Keil; Nithin Raghavan; Utkarsh Singhal; Ravi Ramamoorthi; Jonathan Barron; Ren Ng", "journal": "", "ref_id": "b48", "title": "Fourier features let networks learn high frequency functions in low dimensional domains", "year": "2020" }, { "authors": "Dmitry Ulyanov; Andrea Vedaldi; Victor Lempitsky", "journal": "", "ref_id": "b49", "title": "Instance normalization: The missing ingredient for fast stylization", "year": "2016" }, { "authors": "Thomas Wiegand; Gary J Sullivan; Gisle Bjontegaard; Ajay Luthra", "journal": "IEEE Transactions on circuits and systems for video technology", "ref_id": "b50", "title": "Overview of the h. 264/avc video coding standard", "year": "2003" }, { "authors": "Zhijie Wu; Yuhe Jin; Kwang Moo; Yi ", "journal": "", "ref_id": "b51", "title": "Neural fourier filter bank", "year": "2023" }, { "authors": "Yiheng Xie; Towaki Takikawa; Shunsuke Saito; Or Litany; Shiqin Yan; Numair Khan; Federico Tombari; James Tompkin; Vincent Sitzmann; Srinath Sridhar", "journal": "Computer Graphics Forum", "ref_id": "b52", "title": "Neural fields in visual computing and beyond", "year": "2022" }, { "authors": "Alex Yu; Ruilong Li; Matthew Tancik; Hao Li; Ren Ng; Angjoo Kanazawa", "journal": "", "ref_id": "b53", "title": "Plenoctrees for real-time rendering of neural radiance fields", "year": "2021" }, { "authors": "Alex Yu; Vickie Ye; Matthew Tancik; Angjoo Kanazawa", "journal": "", "ref_id": "b54", "title": "pixelnerf: Neural radiance fields from one or few images", "year": "2021" }, { "authors": "Yixin Zhuang", "journal": "", "ref_id": "b55", "title": "Preprint Table 13: Per-scene performance on the NSVF dataset measured in PSNR", "year": "2024" } ]
[ { "formula_coordinates": [ 4, 237.19, 236.78, 266.81, 12.17 ], "formula_id": "formula_0", "formula_text": "Fn,c = γ n (X; Γ)F n,c + β n (X; B),(1)" }, { "formula_coordinates": [ 4, 134.71, 257.81, 72.8, 11.81 ], "formula_id": "formula_1", "formula_text": "F, F ∈ R N ×C are" }, { "formula_coordinates": [ 5, 189.02, 102.91, 314.98, 25.58 ], "formula_id": "formula_2", "formula_text": "Fn,c = γ n (X; Γ) F n,c -µ n (F ) σ 2 n (F ) + ϵ + β n (X; B),(2)" }, { "formula_coordinates": [ 5, 186.79, 131.32, 317.21, 26.35 ], "formula_id": "formula_3", "formula_text": "µ n (F ) = 1 C c F n,c , σ 2 n (F ) = 1 C c (F n,c -µ n (F )) 2 ,(3)" }, { "formula_coordinates": [ 5, 176.21, 384.28, 327.79, 25.58 ], "formula_id": "formula_4", "formula_text": "Fn,s,c = γ n (X (ϕ,θ) ; Γ) F n,s,c -µ n (F ) σ 2 n (F ) + ϵ + β n (X (ϕ,θ) ; B),(4)" }, { "formula_coordinates": [ 5, 173.98, 412.68, 326.15, 26.35 ], "formula_id": "formula_5", "formula_text": "µ n (F ) = 1 SC s,c F n,s,c , σ 2 n (F ) = 1 SC s,c (F n,s,c -µ n (F )) 2 , (5" }, { "formula_coordinates": [ 5, 500.13, 419.74, 3.87, 8.64 ], "formula_id": "formula_6", "formula_text": ")" }, { "formula_coordinates": [ 6, 152.73, 235.32, 351.27, 27.9 ], "formula_id": "formula_7", "formula_text": "Fn,c,h,w = γ n,c (X; Γ) F n,c,h,w -µ n,c (F ) σ 2 n,c (F ) + ϵ + β n,c (X; B),(6)" }, { "formula_coordinates": [ 6, 150.5, 269.7, 349.63, 26.88 ], "formula_id": "formula_8", "formula_text": "µ n,c (F ) = 1 HW h,w F n,c,h,w , σ 2 n,c (F ) = 1 HW h,w (F n,c,h,w -µ n,c (F )) 2 , (7" }, { "formula_coordinates": [ 6, 500.13, 276.76, 3.87, 8.64 ], "formula_id": "formula_9", "formula_text": ")" } ]
2023-11-28
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b9", "b14", "b14", "b15", "b16", "b17", "b18", "b19", "b10", "b20", "b21", "b22", "b23", "b24", "b20", "b25", "b26", "b26", "b6", "b21", "b10", "b6" ], "table_ref": [], "text": "Shellfish production can be viewed as a sustainable alternative for protein production in a world with increasing demand for seafood and increasing challenges to feed a growing world population [1,2]. Shellfish, here seen as bivalve molluscs, are filterfeeding organisms that obtain their energy to grow from particles available in the water column, namely phytoplankton cells. Therefore, there is no need to add feed or any input of fresh/drinkable water. No fertilizers, pesticides, or antibiotics are used as well. Moreover, shellfish with their relevant filtration rates can have a positive impact on water transparency and even be cultivated combined with fish farms in an Integrated Multi-Trophic Aquaculture, which has been a strategy to implement more sustainable systems [3,4].\nHowever, from the thousands of phytoplankton species in the oceans that act as primary producers, a few are able to produce toxins, which under certain environmental conditions may bloom and reach a high number of cells in the seawater, leading to shellfish contamination. Although aquaculture, and particularly shellfish farming, has been a growing business sector, this activity is severely impacted by harmful algal blooms (HAB).\nHAB are recurrent natural phenomena. Depending on the phytoplankton species involved, notably high cell concentrations may emerge within the water column in response to favorable oceanographic conditions. In specific instances, these occurrences might even alter the color of the sea surface. As a consequence of HAB events, shellfish may accumulate toxins exceeding the regulatory and safety limits for human consumption. Under these circumstances, the food safety agencies temporarily close shellfish harvesting and prohibit the sale of shellfish in the market [5].\nAccording to the EU regulations, the member states that produce shellfish must implement a monitoring program and an audited Official Control of their shellfish production. Each shellfish production area should be classified in terms of microbiological contaminants, and shellfish should be regularly tested, on a weekly basis, for HABtoxins determination. Phytoplankton analysis of the seawater should be performed simultaneously [6]. Whenever the monitoring program reaches a positive result (i.e., toxin levels exceeding the regulatory limits) the precautionary closure of the shellfish harvesting is set in order to minimize the risk of acute intoxication.\nThe implemented strategy with precautionary closures to shellfish harvesting is well established and is considered an effective approach to minimize the health risk as the number of cases of shellfish poisoning has been highly reduced. However, this system is complex, requires intensive and expensive field sample collection and laboratory analysis, and only provides a reactive response to the problem. To minimize the economic impact on shellfish producers a more proactive system allowing anticipating the shellfish contamination is of great importance [7,8]. 
In this sense, remote sensing based on satellite imagery has been a key research tool for early detection of algal blooms, including harmful events [9,10,11], either in marine or freshwater environments [12,13], or even in polar ice sheets [14].\nAlthough there are some limitations for accurate satellite observations, such as unfavorable atmospheric conditions (clouds) or the presence of suspended material and coloured dissolved organic matter in coastal regions [10], the vast potential of satellite imagery for the detection of algal blooms has increased deeply over the last years, leading the International Ocean Colour Coordinating Group [15] of the Intergovernmental Oceanographic Commission of UNESCO to elaborate a report on \"Observation of Harmful Algal Blooms with Ocean Colour Radiometry\", where some strategies are pointed out to distinguish harmful blooms from a background of harmless phytoplankton, including dinoflagellate blooms associated with paralytic shellfish poisoning, blooms of the toxic diatom genus Pseudo-nitzschia, blooms of the neurotoxin dinoflagellate Karenia brevis, and cyanobacterial blooms [15].\nThe role of artificial neural network (ANN) models has recently been evaluated for HAB prediction. In particular, multilayer perceptron (MLP) (e.g., [16,17,18,19,20]), convolutional neural networks (CNNs) [11,21,22] and long short-term memory (LSTM) networks [23,24,25,21,26], with the latter being increasingly used to overcome the limitations of the former. Although still valid as a proxy for shellfish contamination and for complementing decision-making tools, HAB forecasts may not represent an optimal solution. There is no direct relationship between HAB events and shellfish contamination, and not every HAB event translates into contamination. Unlike for HAB forecasting, only a few studies attempted to forecast shellfish contamination, which arises from HABs and directly impacts industry and public safety management. Grasso et al. (2019) [27] used an MLP with one hidden layer to predict closures to shellfish harvesting areas due to paralytic shellfish poisoning (PSP) toxins in blue mussels, one to ten weeks in advance. Cruz et al. (2022) [7] developed several MLP, CNN, and LSTM to predict contamination of mussels by diarrhetic shellfish poisoning (DSP) toxins up to 4 weeks in advance. Several biological and environmental time-series variables involved in HABs' formation and shellfish contamination were used for model building, including chlorophyll a (chl-a), sea surface temperature (SST), toxic phytoplankton cell counts, and meteorologic variables (wind, atmospheric temperature, and rainfall).\nDespite the acknowledged potential of satellite imagery to enhance the prediction of HABs and shellfish contamination, the predictive ability of multispectral or hyperspectral image data has rarely been investigated in ANN-based forecasting models, and exclusively aimed at HAB forecasting. Pyo et al. [22] developed a CNN model with a convolutional block attention module to predict cyanobacterial cell concentrations, based on in situ data, simulated hydrodynamic features, and chl-a distribution maps obtained from airborne-generated hyperspectral images. The incorporation of chl-a maps in the attention module was shown to contribute to improving the model prediction accuracy at certain periods. 
CNNs and LSTMs were also successfully evaluated as part of a HAB detection system based on historical records and remote sensing-based datacubes obtained from MODIS sensors (e.g., SST, chl-a, and several spectral bands) to classify and discriminate between HAB and non-HAB events [11].\nIn this work, we evaluate the contribution of satellite data on a multivariate forecasting model to predict diarrhetic shellfish poisoning (DSP) in shellfish species across Portuguese production areas. We take as variables past values of contamination in shellfish and a time series of satellite images for the areas studied. Given the high dimensionality of the satellite data, we propose a methodology encompassing a prior step for dimensionality reduction to extract a small number of relevant satellite-based features using autoencoders. Following the successful implementation of ANN models for biotoxin contamination forecasting in mussels [7], MLP, CNN, and LSTM architectures were trained on contamination data and the extracted remote sensing features from 2016 to 2020 and validated in 2021. Model performance improvements in predicting contamination events in 2022 were obtained on a case-by-case basis regarding the forecasting horizons (t+1 to t+4 weeks) and the different areas evaluated. Our approach shows the usefulness of incorporating available inexpensive information from a highdimension data source like remote sensing, configuring a promising tool to improve available mechanisms used by shellfish farmers for production management." }, { "figure_ref": [], "heading": "Materials and methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Data", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Contamination Data", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Data regarding DSP contamination of several shellfish species across the Portuguese shellfish production areas is available for public access on the website of the Portuguese Institute of the Sea and Atmosphere (IPMA) at https://www.ipma.pt/pt/bivalves/ zonas. Data has been available since August 2014 and updated periodically, with measurements being performed on a weekly basis with some irregularity. The time frame selected for this data is the same considered for satellite images, between 26-04-2016 and 31-12-2022. The mussel species Mytilus galloprovincialis is the most commonly monitored species, given its faster toxin accumulation rate, therefore being considered as an indicator species. However several other species are also monitored depending on their commercial importance in the corresponding production area. In order to increase the size of the dataset for data modeling purposes, all available species were considered for this work.\nFrom the IPMA measurement sites on the western coast of Portugal, we selected for study the areas that featured a minimum amount of weekly data points (more than 200), while keeping a reasonable balance between contaminated and not contaminated samples (Table 1). The selected areas were the following: L1, L2, L3, RIAV (RIAV1, RIAV2, RIAV3 and RIAV4), L5B, and L6 (Figure 1). A closer inspection of the contamination values reveals that contamination events are very likely to occur in consecutive weeks, and likewise for non-contamination. For this reason, we deal with missing values by assigning the previous value observed in the area, assuming it will maintain its current state until a future change." 
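The weekly series construction and missing-value handling just described (one value per area and week, carrying the previous observation forward) can be sketched as follows. This is a minimal pandas illustration rather than the actual pipeline; the column names `date`, `area`, and `toxin` and the CSV file name are assumptions.

```python
import pandas as pd

def weekly_contamination(df: pd.DataFrame) -> pd.DataFrame:
    """Build one weekly value per production area from raw DSP measurements.

    Assumes columns: 'date', 'area', 'toxin' (µg OA equiv. kg-1). Keeps the
    maximum value measured in each area and week, reindexes to a regular
    weekly grid, and forward-fills gaps, i.e., an area is assumed to keep its
    last observed state until a new measurement arrives.
    """
    df = df.copy()
    df["week"] = pd.to_datetime(df["date"]).dt.to_period("W").dt.start_time
    weekly = (df.groupby(["area", "week"])["toxin"].max()
                .unstack("area")
                .asfreq("W-MON"))       # regular weekly index (Mondays)
    return weekly.ffill()               # previous value carried forward

# usage sketch
# raw = pd.read_csv("ipma_dsp_measurements.csv")
# series = weekly_contamination(raw)
```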
}, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Satellite Data", "publication_ref": [ "b27", "b29" ], "table_ref": [], "text": "Satellite data was obtained from the Copernicus' SENTINEL-3 mission, a part of the European Union's space programme for earth observation, where several collections and levels of data processing are available for public access. The Ocean and Land Color Instrument (OLCI) provides useful information on the ocean and coastal areas with 2 processing levels available. Level 1B provides top of atmosphere radiances while level 2 further processes this data to water leaving reflectances and bio-optical variables (e.g., chlorophyll). Level 2 includes several atmospheric corrections as well as anomaly detection (e.g., masks for clouds and invalid pixels) hence being the most suitable data for our study. Satellite images come in multispectral data frames with a side of approximately 1440 km and when at full resolution each pixel covers an area of 300×300m. Data was selected from two collections: EO:EUM:DAT:0407 and EO:EUM:DAT:0556. The first one is the operational collection which allows access to the latest data available (typically one year). For older archive information one must access the reprocessed collection, the second one used. Several years of full-resolution multispectral images were gathered, starting on the first available date (26/04/2016) until the end of 2022, roughly 6 and a half years. The data was obtained using the eumdac Python library, made available by Europe's meteorological satellite agency (EUMETSAT). It provides simple access to EUMETSAT data from a variety of satellite missions, with several useful command-line utilities for data search, download, and processing [28].\nAfter obtaining the frames from both collections, all daily information was aggregated. Ideally, full information would be retrieved over our target area (Western Portugal). However, the amount of coverage in each frame varies with satellite orbit (Figure 2a). The mission comprises two satellites, A and B, which may each hold none, 1 or 2 contiguous frames overlapping the target area. Obtaining the largest possible coverage of the target area for each particular date requires a careful frame selection. Frames are gathered from the satellite covering more area in each date, using the strategy as follows: i) remove any frames with less than 1.5% of coverage; ii) remove all frames from a given satellite if their combined coverage is under 20% of the target area; iii) select remaining frames from the same satellite with most total coverage (e.g., two frames from satellite 3-B), with contiguous frames being merged.\nAs the goal was to obtain a set of images related to the contamination measurement sites from IPMA, further processing was required. To create these images we considered the coordinates of the measurement site to be in the center of the eastern edge of the image because we are considering sampling points along the western coast of Portugal (Figure 2b). For each location, we select the closest point in the frame as reference to extract a 64x64 pixel multispectral image, corresponding to an area of 19,2×19,2km. We expect this image size to be informative for our models and small enough to allow effective feature extraction. Image pixels referring to clouds, land, or several kinds of anomalies should be filtered out, so it was necessary to compute flags for invalid pixels for each image channel, based on the recommendations provided in the sentinel-3 manual [30]. 
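The frame-selection heuristic (steps i-iii) and the per-site 64×64 extraction described above can be sketched as follows. This is an illustrative reimplementation, not the eumdac or SNAP API: each candidate frame is assumed to arrive as a dictionary with a precomputed fractional coverage of the target area, and the band array and validity mask are assumed to be NumPy arrays already derived from the level-2 product and its quality flags.

```python
def pick_frames(frames, min_frame_cov=0.015, min_sat_cov=0.20):
    """Frame selection for one date: drop frames covering <1.5% of the target
    area, drop a satellite entirely if its combined coverage is <20%, and keep
    the frames of the satellite with the largest total coverage.
    Each frame is assumed to be a dict with 'satellite' and 'coverage' keys."""
    frames = [f for f in frames if f["coverage"] >= min_frame_cov]
    by_sat = {}
    for f in frames:
        by_sat.setdefault(f["satellite"], []).append(f)
    by_sat = {s: fs for s, fs in by_sat.items()
              if sum(f["coverage"] for f in fs) >= min_sat_cov}
    if not by_sat:
        return []
    best = max(by_sat, key=lambda s: sum(f["coverage"] for f in by_sat[s]))
    return by_sat[best]

def extract_site_patch(bands, valid_mask, row, col, size=64, min_valid=0.10):
    """Cut a size×size multispectral patch whose eastern edge is centred on the
    measurement site at (row, col), extending westward over the ocean; return
    None if too few valid pixels. `bands` has shape (channels, H, W) and
    `valid_mask` has shape (H, W)."""
    half = size // 2
    r0, r1 = row - half, row + half
    c0, c1 = col - size, col            # site sits at the centre of the eastern edge
    if r0 < 0 or c0 < 0 or r1 > bands.shape[1] or c1 > bands.shape[2]:
        return None
    patch, mask = bands[:, r0:r1, c0:c1], valid_mask[r0:r1, c0:c1]
    return patch if mask.mean() >= min_valid else None
```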
Images with 10% or less valid pixels were discarded from the dataset. Images were extracted directly from the frame without resampling, which may lead to slight orientation differences depending on the orbit of the satellite on a particular date." }, { "figure_ref": [], "heading": "Time Series", "publication_ref": [ "b30" ], "table_ref": [], "text": "To compare the performance of models with or without satellite information, two different inputs were evaluated: a univariate case, which includes only information from contamination, and a multivariate case, where satellite features were also considered. Each time step corresponds to the maximum weekly value of contamination. Mussels show higher values of contamination overall, hence being selected in most cases when measurements from several species are present. Features extracted from satellite data for the multivariate model correspond to the same or closest previous date to the measurement date. Each feature, as well as the contamination values, were rescaled separately to the [0, 1] range using min-max normalization. Forecasts took as input the last 12 time steps, corresponding to the last three months, and targeted the upcoming 4-time steps. This input value was selected based on previous work performed on the same dataset, where an input of 12-time steps was found to provide the best results [31]." }, { "figure_ref": [], "heading": "Artificial Neural Network models", "publication_ref": [ "b30", "b31", "b32", "b33", "b33", "b34", "b33", "b35" ], "table_ref": [ "tab_1" ], "text": "The forecasting models developed are based on artificial neural networks (ANNs) commonly used for time series that proved successful in forecasting shellfish contamination [31]. Several models were selected and compared, i.e., Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM).\nMLPs were the first ANN used for time series forecasting [32]. They consist of layers of neurons connected with weighted links, including an input layer, an output layer, and hidden layers. Neurons perform calculations and adjust their weights through training to optimize the network's response, typically using the back-propagation algorithm [33,34].\nA CNN is a specialized type of ANN designed for processing grid-like data, such as images and time series. Instead of matrix multiplication, CNNs perform convolutions that involve sliding a weight matrix (kernel or filter) of a given size over the input data to create feature maps. The main idea is for each layer to learn a weight matrix that extracts important features from the input [34].\nLSTM networks [35], a type of Recurrent Neural Network, excel in capturing long data patterns by preserving gradient information over time, proving to be a successful approach in machine learning [34]. In an LSTM, there are two distinct flows of information. At each time step, it processes the current input element and receives the hidden state and cell state from the previous time step. The hidden state results from non-linear transformations of the input, while the cell state is a function of linear transformations [36]. 
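Returning briefly to the input construction: the windowed dataset described in the Time Series subsection above (the last 12 weekly steps of all variables as input, the next 4 contamination values as targets, with per-variable min-max scaling) can be sketched as follows. This is a minimal illustration under those assumptions, not the exact training pipeline.

```python
import numpy as np

def minmax_scale(x):
    """Scale each column of x (T, D) to [0, 1] independently."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / np.where(hi > lo, hi - lo, 1.0)

def make_windows(series, n_in=12, n_out=4, target_col=0):
    """Build supervised samples from a (T, D) multivariate series: the last
    n_in time steps of all D variables as input, the next n_out values of the
    target variable (contamination) as output."""
    X, y = [], []
    for t in range(n_in, len(series) - n_out + 1):
        X.append(series[t - n_in:t])                 # shape (n_in, D)
        y.append(series[t:t + n_out, target_col])    # shape (n_out,)
    return np.array(X), np.array(y)

# usage sketch: column 0 = contamination, columns 1-4 = autoencoder features
# data = minmax_scale(raw)        # raw: (T, 5) array
# X, y = make_windows(data)       # X: (samples, 12, 5), y: (samples, 4)
```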
This separation allows the cell state to better preserve gradients and to handle both long and short time dependencies; hence, this model is expected to perform best overall in our tests.\nDetails on the ANN architectures developed are presented in Table 2, covering the number of batches, optimizer (e.g., Adam and RMSProp), kernel size, activation functions (e.g., sigmoid, ELU, ReLU, leaky ReLU) and layer dimensions tested. The ANN models were built and trained with the Keras library of the TensorFlow machine learning platform. " }, { "figure_ref": [], "heading": "Model Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "The problem under study was treated as a regression problem and evaluated based on standard regression metrics. Both the Mean Absolute Error (MAE) and the Root Mean Squared Error (RMSE) were considered, given by\nMAE = \frac{1}{n}\sum_{i=1}^{n} |y_i - \hat{y}_i|, \qquad RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2},\nwith y_i representing the true value and \hat{y}_i the predicted value. MAE was used as the loss function for training since it is easier to interpret in the context of this work and is less sensitive to outliers.\nThe classification above or below the contamination limit is essential for regulating production areas, therefore classification accuracy was also computed, indicating the percentage of correctly classified cases, as follows:\nAccuracy = \frac{TP + TN}{\text{Total Cases}},\nwith TP representing the true positive and TN the true negative cases." }, { "figure_ref": [ "fig_4" ], "heading": "Feature Extraction", "publication_ref": [ "b10" ], "table_ref": [ "tab_2", "tab_2" ], "text": "Due to the inherently high dimensionality of the image data, a crucial step of the methodology proposed was to extract a small set of features retaining relevant information from the data. We propose to accomplish this in an unsupervised manner using autoencoders that are capable of ignoring all non-valid pixels. For this purpose, we focused on convolutional autoencoders, which are better equipped to deal with visual data, and evaluated several architectures. As the information extracted is expected to be specific to the geographical location, a different autoencoder was trained for each location. The multispectral images included 16 water reflectance bands corresponding to different wavelengths and several bio-optical products. Previous work applied feature classification using a Random Forest to imaging products from NASA's MODIS satellite, estimating the importance of bands and products available for HAB forecasting [11]. Following the authors' conclusions, and given that the available products are similar to the ones from SENTINEL-3 satellites, we selected CHL OC4ME and CHL NN (chlorophyll concentration estimation products) and Photosynthetically Active Radiation (PAR) as the most promising bio-optical products for our experiments.\nFigure 3 shows the chosen architecture design for the convolutional autoencoders. Encoding and decoding blocks follow one of two patterns: conv-pool or conv-conv-pool (Table 3). Initial testing showed better performance for models with 3 layers. Parameters optimized include kernel size and number of filters (Table 4)."
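A minimal Keras sketch of the conv-pool encoder-decoder pattern and of a reconstruction loss that ignores non-valid pixels is given below. It is an illustrative reimplementation, not the authors' code: the number of input bands, the use of a dense bottleneck to produce the small feature code, and the masking convention (a validity-mask channel appended to the target) are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_BANDS = 3      # e.g., CHL_OC4ME, CHL_NN, PAR (assumption)
N_FEATURES = 4   # small bottleneck, as used later in the text

def masked_mse(y_true, y_pred):
    """Reconstruction error over valid pixels only. y_true carries the bands
    plus a trailing validity-mask channel (1 = valid, 0 = cloud/land/flagged)."""
    bands, mask = y_true[..., :N_BANDS], y_true[..., N_BANDS:]
    sq_err = tf.square(bands - y_pred) * mask
    n_valid = tf.reduce_sum(mask) * N_BANDS + 1e-8
    return tf.reduce_sum(sq_err) / n_valid

def build_autoencoder(kernels=(3, 5, 5), filters=(32, 32, 32)):
    """Three conv-pool encoder blocks, a dense bottleneck of N_FEATURES units
    (how the 4 features are produced is an assumption), and a mirrored decoder."""
    inp = layers.Input((64, 64, N_BANDS))
    x = inp
    for k, f in zip(kernels, filters):
        x = layers.Conv2D(f, k, padding="same", activation="relu")(x)
        x = layers.MaxPool2D(2)(x)
        x = layers.BatchNormalization()(x)
    code = layers.Dense(N_FEATURES, name="code")(layers.Flatten()(x))  # 4 features per image
    x = layers.Dense(8 * 8 * filters[-1], activation="relu")(code)
    x = layers.Reshape((8, 8, filters[-1]))(x)
    for k, f in zip(reversed(kernels), reversed(filters)):
        x = layers.UpSampling2D(2)(x)
        x = layers.Conv2D(f, k, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
    out = layers.Conv2D(N_BANDS, 3, padding="same", activation="sigmoid")(x)
    autoencoder = Model(inp, out)
    encoder = Model(inp, code)   # used afterwards to extract the per-image features
    autoencoder.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss=masked_mse)
    return autoencoder, encoder
```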
}, { "figure_ref": [], "heading": "Architecture", "publication_ref": [ "b31", "b31", "b31", "b31" ], "table_ref": [], "text": "Block Layers Kernel Filters\nConv-Pool Encoder Conv2D K N MaxPool 2×2 - BatchNormalization - - Decoder UpSampling2D 2×2 16 Conv2D K N BatchNormalization - - Conv-Conv-Pool Encoder Conv2D K N Conv2D K N MaxPool 2×2 - BatchNormalization - - Decoder UpSampling2D 2×2 16 Conv2D K N Conv2D K N BatchNormalization - -\nModel selection was performed based on the MSE, disregarding invalid pixels. A different model was selected for each area.\nAutoencoders were trained with Adam optimizer, learning rate=0.001, batch size=16, and 1000 epochs with early stopping (patience=100). Similarly to the ANN models, autoencoders were built and trained with the Keras library of the TensorFlow machine learning platform and tensorflow-gpu was used for increased performance. Data from 2016-2020 was used for training and 2021-2022 for validation. Both architectures were relevant when selecting the best models, (3×3, 5×5, 5×5) and (3×3, 5×5, 7×7) for kernel size and (32,32,32), (32,64,128) for filter number being the best combinations for these parameters amongst all areas. The best models selected were retrained using the full dataset before extracting features. Only four features were extracted in the middle layer, as they proved sufficient to improve results in several areas." }, { "figure_ref": [ "fig_6", "fig_8", "fig_10" ], "heading": "Results and discussion", "publication_ref": [], "table_ref": [ "tab_3", "tab_3", "tab_3" ], "text": "This section provides a summary of the findings of our experiments, comparing the performance of the different networks across the several shellfish production areas studied and regarding the various forecasting horizons and metrics considered.\nThe results obtained for all regions studied are displayed in Table 5, referring to the metrics evaluated on the test set (2022). The univariate case respects to using only past values of contamination for forecasting, whereas the multivariate case also incorporates extracted features from satellite data. Forecasting horizons range from t+1 to t+4, corresponding to one to four weeks ahead.\nA different model was selected for each forecasting horizon, as this greatly improved the model performance metrics in comparison to single models with four outputs. LSTM networks were selected more often as the best model, i.e., on 32 occasions, while MLP and CNN were selected 15 and 17 times, respectively, the latter showing a larger expression in lagoon areas (RIAV areas).\nThe prediction errors tend to increase week by week while the accuracy decreases, as expected, with the exception of the L6 area where there are few cases of contamination resulting in similar performances for all weeks. In areas L1 and L2, the univariate condition outperforms the multivariate in most metrics. In L5B, multivariate performs better on the first two forecasting horizons, but worse on t+3 and similarly on t+4. Results are similar on L6, with a slight advantage for the multivariate case. In the other four areas (RIAV1 to RIAV4) the multivariate case performs best for most metrics and forecasting horizons, except for t+1 on RIAV1 and RIAV4, where there is a loss in MAE. This shows that some areas benefit from the integration of satellite data, improving the models' predictions. 
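Since the accuracy values discussed in this section come from thresholding predictions at the regulatory limit, the conversion from regression outputs to the above/below classification and the confusion-matrix counts can be sketched as follows; this assumes predictions have already been rescaled back to µg OA equiv. kg-1.

```python
import numpy as np

LIMIT = 160.0  # µg OA equiv. kg-1, regulatory limit for DSP toxins

def contamination_accuracy(y_true, y_pred, limit=LIMIT):
    """Binary accuracy of 'above/below the regulatory limit' derived from the
    regression predictions, plus the confusion-matrix counts."""
    t = np.asarray(y_true) > limit
    p = np.asarray(y_pred) > limit
    tp = int(np.sum(t & p))
    tn = int(np.sum(~t & ~p))
    fp = int(np.sum(~t & p))
    fn = int(np.sum(t & ~p))
    accuracy = (tp + tn) / t.size
    return accuracy, {"TP": tp, "TN": tn, "FP": fp, "FN": fn}
```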
Figures 4 and 5 provide graphical representations of the predictions obtained by the best models in two different regions, one oceanic (L2) and the other a lagoon area (RIAV1). In the first case, for which the inclusion of satellite information did not result in an increase in the predictive ability of the model (Table 5), an increase in the model errors can be observed when this type of information is considered, especially for the contamination cases below the legal safety limit (160 µg OA equiv. kg-1). Even though several years of information are used, sampling occurs, at best, once per week, so only hundreds of data points are available to train each model. Therefore, in order to improve predictions outside the training set, the features extracted from the satellite must be sufficiently informative to compensate for the increase in overfitting due to using more attributes. Despite these limitations, this was observed in many cases, with evident forecasting improvements. For the lagoon case (RIAV1), for which an increase in the model prediction performance was obtained when considering satellite information, the metrics mostly improve across all forecasting horizons, especially for long-term predictions, suggesting that a two-week horizon (at least) is informative of a future contamination event. Despite the expected increase in the model prediction errors, such a model might represent a long-term warning to shellfish farmers in this area.\nRegarding the accuracy of classification, i.e., below or above the regulatory limit, the results are presented in the form of confusion matrices (Figures 6 and 7). The accuracy obtained is generally high, indicating an overall good classification. A decrease in accuracy from the univariate to the multivariate case for t+1 forecasts can be observed for the L2 area, whereas an increase in accuracy for t+2 and longer forecasting horizons was obtained for the RIAV1 area (Table 5)." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "The main objective of this work was to test an approach for extracting useful features from satellite data and to evaluate the impact of its integration on forecasting models for predicting toxin concentrations in shellfish. We hypothesize that information gathered from satellite images of a given shellfish production area, particularly those covering the neighbouring oceanographic area, might contain relevant information regarding the marine conditions that lead to contamination events. We investigated how informative these features were by including them as additional features in several ANN-based time-series forecasting models to predict biotoxin contamination in shellfish.\nThe results obtained by the proposed approach show that including satellite data features improves the prediction of contamination values in most areas, particularly for the 2-week and longer forecasting horizons in the lagoon shellfish production areas (RIAV), and for the 1-week and 2-week horizons in the L5B area. These findings support the use of autoencoders for feature extraction on satellite data, suggesting that it is possible to include information from a high-dimension data source without losing the ability to generalize outside the training set. Improvements were obtained for most areas studied by integrating only four features, which fosters further investigation on the usefulness of remotely sensed data for shellfish contamination forecasting.
Further work might also include refining the selection of image locations, as well as experimenting with different image sizes (current images considered 20×20km) and evaluating its impact on the quality of the information generated. Testing more spectral bands may also be a path for further improvement.\nTo the best of our knowledge, this work is the first to report improvements in the use of satellite imagery for directly forecasting shellfish contamination. This contribution may pave the way for the development of more robust and accurate predictive systems in the future, anticipating an effective impact on shellfish production management." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by national funds through the Fundação para a Ciência e a Tecnologia (FCT, I.P.) through the project MATISSE: A Machine Learning-Based Forecasting Systems for Shellfish (DSAIPA/DS/0026/2019), projects UIDB/04516/2020 (NOVA LINCS), UIDB/00297/2020 and UIDP/00297/2020 (NOVA Math), UIDB/00667 /2020 and UIDP/00667/2020 (UNIDEMI), and also CEECINST/00042/2021." } ]
Shellfish production constitutes an important sector for the economy of many Portuguese coastal regions, yet the challenge of shellfish biotoxin contamination poses both public health concerns and significant economic risks. Thus, predicting shellfish contamination levels holds great potential for enhancing production management and safeguarding public health. In our study, we utilize a dataset with years of Sentinel-3 satellite imagery for marine surveillance, along with shellfish biotoxin contamination data from various production areas along Portugal's western coastline, collected under the Portuguese official control program. Our goal is to evaluate the integration of satellite data in forecasting models for predicting toxin concentrations in shellfish for forecasting horizons of up to four weeks, which implies extracting a small set of useful features and assessing their impact on the predictive models. We framed this challenge as a time-series forecasting problem, leveraging historical contamination levels and satellite images for designated areas. While contamination measurements occurred weekly, satellite images were accessible multiple times per week. Unsupervised feature extraction was performed using autoencoders able to handle non-valid pixels caused by factors like cloud cover, land, or anomalies. Finally, several Artificial Neural Network models were applied to compare univariate (contamination only) and multivariate (contamination and satellite data) time-series forecasting. Our findings show that incorporating these features enhances predictions, especially beyond one week in lagoon production areas.
Satellite-based feature extraction and multivariate time-series prediction of biotoxin contamination in shellfish
[ { "figure_caption": "Figure 1 :1Figure 1: Selected locations (adapted from IPMA website at https://www.ipma.pt/pt/ bivalves/zonas/). Areas RIAV1 to RIAV4 are lagoon areas adjacent to the L3 oceanic area.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "(a) Example of satellite frames. (b) Image selection.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: a) Satellite frame exploration on 25/04/2021 performed with SNAP software [29]. Frames 1 and 2 refer to satellite 3-A, while the remaining come from satellite 3-B (notice that the orbits and overlaps of both satellites differ); b) Example of final image selection areas for L2 and RIAV1 (red dots correspond to IPMA measurement sites and blue squares indicate the 64×64 extracted images.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 : 3 -33Figure 3: 3-Layer architecture used for the convolutional autoencoders.", "figure_data": "", "figure_id": "fig_3", "figure_label": "33", "figure_type": "figure" }, { "figure_caption": "Table 3 :3Conv-Pool and Conv-Conv-Pool schemes, with the layers used for encoder and decoder blocks. Values tested for kernel sizes and number of filters are displayed in Table", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Contamination predictions obtained in area L2 for one week forecasting (t+1). The black line represents the legal limit for contamination (160 µg OA equiv. kg -1 ), and the dashed lines represent the data split into training, validation, and test sets.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Contamination predictions obtained in area RIAV1 for two week forecasting (t+2). The black line represents the legal limit for contamination (160 µg OA equiv. kg -1 ), and the dashed lines represent the data split into training, validation, and test sets.", "figure_data": "", "figure_id": "fig_8", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "1 Figure 6 :16Figure 6: Confusion matrices obtained in area L2 for one week forecasting (t+1).", "figure_data": "", "figure_id": "fig_9", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Confusion matrices obtained in area RIAV1 for two week forecasting (t+2).", "figure_data": "", "figure_id": "fig_10", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Number of available weekly measurements for the selected areas, including the number of missing values and counts below or over contamination limits (Cont = contamination). 
The full dataset includes years 2016 to 2020, comprising a total of 349 weeks.", "figure_data": "Area Measured Missing No Cont Cont Cont (%)L12161331298725%L23183120111734%L5B250991519928%L6261882085315%RIAV13272214218553%RIAV2346316518152%RIAV33381123810029%RIAV42151341417421%", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Description of the models and parameters tested when optimizing the time-series forecasts.", "figure_data": "ModelDescriptionLayer DimensionsMLPBatch 8, Adam optimizer, ReLU activation, final sigmoid12,24,36,48,60CNNBatch 8, Adam optimizer, kernel size 1, leaky ReLU activation, final sigmoid12,24,36,48,60LSTMStateful (batch 1), RMSProp optimizer, final activation ELU12,24,36,48,60", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Parameters tested when optimizing autoencoders, resulting in 16 possible combinations. The best parameter combinations are highlighted.", "figure_data": "Kernel Sizes (K)Filter Numbers (N)(3×3, 3×3, 3×3)(16, 16, 16)(5×5, 5×5, 5×5)(32, 32, 32)(3×3, 5×5, 5×5)(16, 32, 64)(3×3, 5×5, 7×7)(32, 64, 128)", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Summary of the metrics obtained for all areas (best values between conditions highlighted). A different model was selected for each forecasting horizon.", "figure_data": "Area MetricsUnivariateMultivariatet+1t+2t+3t+4t+1t+2t+3t+4mae4680898661868393L1rmse accuracy 0.89 79112 0.75126 0.74128 0.7292 0.85131 0.77133 0.72144 0.74modelLSTM MLPLSTM MLPLSTM MLPMLPCNNmae112142155165117154161164L2rmse accuracy 0.77 154183 0.66219 0.60233 0.58156 0.68207 0.62217 0.49235 0.57modelLSTM LSTM LSTM LSTMLSTM LSTM LSTM CNNmae738292106717297107L5Brmse accuracy 0.92 110115 0.94133 0.94151 0.87105 0.9699 0.96147 0.92154 0.87modelLSTM MLPCNNCNNMLPLSTM CNNMLPmae4547474745454446L6rmse accuracy 0.87 7279 0.8780 0.8782 0.8773 0.8771 0.8773 0.8776 0.87modelMLPLSTM LSTM LSTMLSTM LSTM LSTM LSTMmae5389112136678610493RIAV1rmse accuracy 0.91 107127 0.83147 0.79172 0.75101 0.89120 0.85138 0.87128 0.77modelLSTM CNNCNNCNNMLPCNNCNNLSTMmae9711913614294110123140RIAV2rmse accuracy 0.77 147160 0.74178 0.75186 0.74138 0.75148 0.81163 0.74185 0.74modelLSTM CNNMLPCNNLSTM CNNMLPCNNmae8410312313082104116115RIAV3rmse accuracy 0.87 141175 0.83203 0.79213 0.77139 0.89168 0.81194 0.79189 0.81modelLSTM CNNCNNLSTMLSTM LSTM MLPLSTMmae811021131139010010198RIAV4rmse accuracy 0.83 162151 0.79168 0.75178 0.75142 0.79152 0.79149 0.79144 0.81modelLSTM LSTM MLPCNNLSTM MLPMLPLSTM", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" } ]
Sérgio Tavares; Pedro R Costa; Ludwig Krippahl; Marta B Lopes
[ { "authors": "F M Suplicy", "journal": "Reviews in Aquaculture", "ref_id": "b0", "title": "A review of the multiple benefits of mussel farming", "year": "2020" }, { "authors": "A S Olivier; L Jones; L L Vay; M Christie; J Wilson; S K Malham", "journal": "Reviews in Aquaculture", "ref_id": "b1", "title": "A global review of the ecosystem services provided by bivalve aquaculture", "year": "2020" }, { "authors": "B A Macdonald; S M C Robinson; K A Barrington", "journal": "Aquaculture", "ref_id": "b2", "title": "Feeding activity of mussels (mytilus edulis) held in the field at an integrated multi-trophic aquaculture (imta) site (salmo salar) and exposed to fish food in the laboratory", "year": "2011" }, { "authors": "M S Park; J K Kim; S Shin; B H Min; P Samanta", "journal": "Aquaculture", "ref_id": "b3", "title": "Trophic fractionation in an integrated multi-trophic aquaculture off tongyoung coast: A stable isotope approach", "year": "2021" }, { "authors": "A C Braga; S M Rodrigues; H M Lourenço; P R Costa; P S ", "journal": "Toxins", "ref_id": "b4", "title": "Bivalve shellfish safety in portugal: Variability of faecal levels, metal contaminants and marine biotoxins during the last decade (2011-2020)", "year": "2023" }, { "authors": " Eu", "journal": "The Official Journal of the European Union L", "ref_id": "b5", "title": "Commission regulation (EC) no 853/2004 of the european parliament and of the council of 29 april 2004 laying down specific hygiene rules for on the hygiene of foodstuffs", "year": "2004" }, { "authors": "R C Cruz; P R Costa; L Krippahl; M B Lopes", "journal": "Knowledge-Based Systems", "ref_id": "b6", "title": "Forecasting biotoxin contamination in mussels across production areas of the portuguese coast with artificial neural networks", "year": "2022" }, { "authors": "R C Cruz; P R Costa; S Vinga; L Kripphal; M B Lopes", "journal": "Journal of Marine Science and Engineering", "ref_id": "b7", "title": "Review of recent machine learning advances for forecasting harmful algal blooms and shellfish contamination", "year": "2021" }, { "authors": "Y H Ahn; P Shanmugam", "journal": "Remote Sensing of Environment", "ref_id": "b8", "title": "Detecting the red tide algal blooms from satellite ocean color observations in optically complex northeast-asia coastal waters", "year": "2006" }, { "authors": "I Caballero; R Fernández; O M Escalante; L Mamán; N G ", "journal": "Scientific Reports", "ref_id": "b9", "title": "New capabilities of sentinel-2A/B satellites combined with in situ data for monitoring small harmful algal blooms in complex coastal waters", "year": "2020" }, { "authors": "P R Hill; A Kumar; M Temimi; D R Bull", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b10", "title": "HABNet: Machine learning, remote sensing-based detection of harmful algal blooms", "year": "2020" }, { "authors": "S Mishra; R P Stumpf; B A Schaeffer; P J Werdell; K A Loftin; A Meredith", "journal": "Scientific Reports", "ref_id": "b11", "title": "Measurement of cyanobacterial bloom magnitude using satellite remote sensing", "year": "2019" }, { "authors": "X Hou; L Feng; Y Dai; C Hu; L Gibson; J Tang; Z Lee; Y Wang; X Cai; J Liu; Y Zheng; C Zheng", "journal": "Nature Geoscience", "ref_id": "b12", "title": "Global mapping reveals increase in lacustrine algal blooms over the past decade", "year": "2022" }, { "authors": "S Wang; M Tedesco; M Xu; P M Alexander", "journal": "Geophysical Research Letters", "ref_id": "b13", "title": "Mapping ice 
algal blooms in southwest greenland from space", "year": "2018" }, { "authors": "S Bernard; R Kudela; L Robertson Lain; G C Pitcher", "journal": "", "ref_id": "b14", "title": "Observation of harmful algal blooms with ocean colour radiometry", "year": "2021" }, { "authors": "F Recknagel; M French; P Harkonen; K.-I Yabunaka", "journal": "Ecological Modelling", "ref_id": "b15", "title": "Artificial neural network approach for modelling and prediction of algal blooms", "year": "1997" }, { "authors": "J H W Lee; Y Huang; M Dickman; A W Jayawardena", "journal": "Ecological Modelling", "ref_id": "b16", "title": "Neural network modelling of coastal algal blooms", "year": "2003" }, { "authors": "L Velo-Suárez; J Gutiérrez-Estrada", "journal": "Harmful Algae", "ref_id": "b17", "title": "Artificial neural network approaches to onestep weekly prediction of Dinophysis acuminata blooms in huelva (western andalucía, spain)", "year": "2007" }, { "authors": "X Li; J Yu; Z Jia; J Song", "journal": "", "ref_id": "b18", "title": "Harmful algal blooms prediction with machine learning models in tolo harbour", "year": "2014" }, { "authors": "J Kim; H Kim; K Kim; J M Ahn", "journal": "Water", "ref_id": "b19", "title": "Research on the development and application of a deep learning model for effective management and response to harmful algal blooms", "year": "2023" }, { "authors": "F N Yussof; N Maan; M N M Reba", "journal": "International Journal of Environmental Research and Public Health", "ref_id": "b20", "title": "LSTM networks to improve the prediction of harmful algal blooms in the west coast of sabah", "year": "2021" }, { "authors": "J Pyo; K H Cho; K Kim; S.-S Baek; G Nam; S Park", "journal": "Water Research", "ref_id": "b21", "title": "Cyanobacteria cell prediction using interpretable deep learning model with observed, numerical, and sensing data assemblage", "year": "2021" }, { "authors": "S Lee; D Lee", "journal": "Int. J. Environ. Res. 
Public Health", "ref_id": "b22", "title": "Improved prediction of harmful algal blooms in four major south korea's rivers using deep learning models", "year": "2018" }, { "authors": "H Cho; U.-J Choi; H Park", "journal": "WIT Transactions on Ecology and the Environment", "ref_id": "b23", "title": "Deep learning application to time series prediction of daily chlorophyll-a concentration", "year": "2018" }, { "authors": "H Cho; H Park", "journal": "IOP Conference Series: Earth and Environmental Science", "ref_id": "b24", "title": "Merged-LSTM and multistep prediction of daily chlorophyll-a concentration for algal bloom forecast", "year": "2019" }, { "authors": "T Kim; J Shin; D Lee; Y Kim; E Na; J Park; C Lim; Y Cha", "journal": "Water Research", "ref_id": "b25", "title": "Simultaneous feature engineering and interpretation: Forecasting harmful algal blooms using a deep learning approach", "year": "2022" }, { "authors": "I Grasso; S D Archer; C Burnell; B Tupper; C Rauschenber; K Kanwit; N R Record", "journal": "Ecosphere", "ref_id": "b26", "title": "The hunt for red tides: Deep learning algorithm forecasts shellfish toxicity at site scales in coastal maine", "year": "2019" }, { "authors": "", "journal": "", "ref_id": "b27", "title": "Eumdac -eumetsat data access client v2", "year": "2022-10" }, { "authors": "", "journal": "", "ref_id": "b28", "title": "Snap -esa sentinel application platform v9", "year": "2022" }, { "authors": " Eumetsat", "journal": "Sentinel-3 OLCI Marine User Handbook", "ref_id": "b29", "title": "", "year": "2021-03-12" }, { "authors": "R C Cruz; P R Costa; L Krippahl; M B Lopes", "journal": "Knowledge-Based Systems", "ref_id": "b30", "title": "Forecasting biotoxin contamination in mussels across production areas of the portuguese coast with artificial neural networks", "year": "2022" }, { "authors": "F Liu; Y Lu; M Cai", "journal": "IEEE Access", "ref_id": "b31", "title": "A hybrid method with adaptive sub-series clustering and attention-based stacked residual LSTMs for multivariate time series forecasting", "year": "2020" }, { "authors": "D E Rumelhart; G E Hinton; R J Williams", "journal": "nature", "ref_id": "b32", "title": "Learning representations by backpropagating errors", "year": "1986" }, { "authors": "J Heaton; Ian Goodfellow", "journal": "Genetic programming and evolvable machines", "ref_id": "b33", "title": "yoshua bengio, and aaron courville: Deep learning: The mit press", "year": "2016" }, { "authors": "S Hochreiter; J Schmidhuber", "journal": "Neural computation", "ref_id": "b34", "title": "Long short-term memory", "year": "1997" }, { "authors": "H Hewamalage; C Bergmeir; K Bandara", "journal": "International Journal of Forecasting", "ref_id": "b35", "title": "Recurrent neural networks for time series forecasting: Current status and future directions", "year": "2021" } ]
[ { "formula_coordinates": [ 9, 155.33, 331.03, 293.7, 33.71 ], "formula_id": "formula_0", "formula_text": "MAE = 1 n n i=1 |y i -y i | , RMSE = 1 n n i=1 (y i -ŷi ) 2 ," }, { "formula_coordinates": [ 10, 144.78, 450.88, 300.54, 175.07 ], "formula_id": "formula_1", "formula_text": "Conv-Pool Encoder Conv2D K N MaxPool 2×2 - BatchNormalization - - Decoder UpSampling2D 2×2 16 Conv2D K N BatchNormalization - - Conv-Conv-Pool Encoder Conv2D K N Conv2D K N MaxPool 2×2 - BatchNormalization - - Decoder UpSampling2D 2×2 16 Conv2D K N Conv2D K N BatchNormalization - -" } ]
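Purely as a worked illustration of the error metrics defined in formula_0 above, a minimal NumPy sketch of MAE and RMSE is given below; the array values are made up for the example and are not taken from the paper.

```python
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Mean absolute error: average of |y_i - y_hat_i|.
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Root mean squared error: square root of the mean squared residual.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

if __name__ == "__main__":
    y_true = np.array([10.0, 20.0, 30.0])   # made-up reference values
    y_pred = np.array([12.0, 18.0, 33.0])   # made-up forecasts
    print(mae(y_true, y_pred))   # 2.333...
    print(rmse(y_true, y_pred))  # ~2.38
```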
2023-11-30
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b48", "b3", "b32", "b47", "b51", "b25", "b45", "b3", "b32", "b32", "b3", "b9", "b9", "b17", "b31", "b17", "b31", "b19", "b16", "b8", "b17", "b31", "b0", "b37", "b32", "b29", "b9", "b19", "b9", "b54", "b53", "b20", "b30", "b6", "b18", "b42" ], "table_ref": [], "text": "Pre-training & fine-tuning paradigm [49] can perform impressive transfer learning between homo-modal tasks, as has been demonstrated in computer vision (CV) [14,33] and natural language processing (NLP) [36,45,48]. Pretrained models are often trained by well-resourced and experienced teams with large amounts of clean data [52,53].\nFigure 1. Comparisons of our method with full fine-tuning and recent delta-tuning art on representative visual tasks. Red dashed line is the performance of full fine-tuning on ADE20K and COCO. The proposed Mona outperforms full fine-tuning on representative visual tasks, which promotes the upper limit of previous delta-tuning art. The results demonstrate that the adapter-tuning paradigm can replace full fine-tuning and achieve better performance in most visual tasks. Full fine-tuning may no longer be the only preferred solution for transfer learning in the future.\nExceptional pre-trained models can help hardware-and datalimited teams save plenty of training costs and train wellperforming deep models on new tasks [3,26,42,46]. In the era of large models, the efficiency of tuning pre-trained models is an important issue. Full fine-tuning [14,33] has been widely used with great success in CV tasks, which tunes all parameters in the pre-trained backbone as well as additional task-specific heads/necks during the training process. Many impressive CV art (e.g., Swin [33], EVA [14], etc.) broaden the upper limit of visual tasks in this way. However, is full fine-tuning still the best way to fine-tune visual tasks now? Our answer is NO.\nApart from full fine-tuning, Delta tuning [10,23] has recently attracted attention in NLP and CV tasks. Delta tuning comes from NLP, which tunes only part of the backbone network or extra lightweight structures for efficient transfer learning [10]. Delta tuning methods generally fix most backbone parameters and achieve comparable or even better performance than full fine-tuning on simple tasks (including classification tasks in NLP [41,57] and CV [6,18,25,32]). VPT [25] is the first to explore the potential of prompt-tuning on visual classification tasks. LoRand [53] pioneers adaptertuning on dense predictions and reduces the gap between delta tuning and full fine-tuning on visual dense tasks. However, existing methods cannot outperform full fine-tuning on dense prediction tasks, including semantic segmentation and instance segmentation.\nTo challenge the dominance of full fine-tuning in CV, we propose Mona-tuning, a novel tuning paradigm based on Multi-cognitive visual adapters (Mona). We analyse recent art and summarise two problems in existing visual adapters. First, the designs of existing CV adapters [6, 18,32] follow linear adapters in NLP [20,22]. Indeed, visual tasks process visual signals, which are significantly different from linguistic signals and have unique convolutional operations [2, 17,29]. Our experiments show that convolution-based filters can better transfer visual knowledge from pre-trained models to other tasks, so we propose a novel convolutionbased adapter for visual tasks. 
Second, most existing adapters compress the upstream features into a single dimension and fit them to the new task via simple nonlinearity [6,18,32]. Previous works claim that models have different cognitions of features at different filter scales [1,4,38]. As a result, we employ multiple convolutional filters behind the adapter's reduction layer to enhance the cognitive abilities of the adapters. We demonstrate the generality and superiority of Mona-tuning on a range of representative visual tasks, including image classification, object detection, semantic segmentation, and instance segmentation. We employ the Swin Transformer [33] series trained on ImageNet-22k [9] as pre-trained models. Extensive experiments indicate that the proposed method outperforms the traditional full fine-tuning paradigm both on simple image classification tasks and on complex dense prediction tasks. For example, Mona-tuning outperforms full fine-tuning on the COCO dataset [30] by 1% mAP. The results suggest that full fine-tuning may no longer be the optimal choice for visual tasks. Adapter-based tuning is a better-performing and more efficient paradigm for visual transfer learning. Moreover, Mona is the only method that surpasses full fine-tuning on semantic segmentation and instance segmentation. Figure 1 illustrates the superiority of the proposed method on the challenging instance segmentation and semantic segmentation tasks. Our contributions can be three-fold:
• We demonstrate that adapter-based tuning can replace full fine-tuning on common visual tasks and achieve better performance with fewer new parameters.
Delta tuning [10,20,22,23] (or parameter-efficient fine-tuning, PEFT) is dedicated to improving the efficiency of fine-tuning. Delta-tuning methods can be divided into three groups [10]. The first group fixes most of the parameters in the pre-trained backbone and fine-tunes a small number of them, e.g., BitFit [55] tunes biases, Norm Tuning [16] tunes norm layers, and Partial-1 [54] only tunes the last block. The second group reparameterises some parameters in the pre-trained model, e.g., LoRA [21] optimises low-rank subspaces. The third group fixes the pre-trained backbone's original parameters and adds additional trainable structures, including the prompt series [25,31,58] and the adapter series [7,19,43]. Our experiments compare Mona with these three groups." }, { "figure_ref": [], "heading": "Computer Vision Meets Delta-tuning", "publication_ref": [ "b6", "b18" ], "table_ref": [], "text": "Although derived from NLP, delta tuning is also explored in CV. VPT [25] is the first to introduce delta-tuning (prompt-tuning) to visual classification tasks. AdaptFormer [7] designs a parallel adapter structure to improve delta-tuning performance on visual classification. KAdaptation [19] optimises the adapter through the Kronecker product. The above art is the pioneer in visual tasks, revealing the potential of delta-tuning on visual classification. LoRand [53] brings impressive performance on dense prediction tasks via multi-branch low-rank adapters but still cannot surpass full fine-tuning on all dense prediction tasks. 
Recent art indicates that delta-tuning cannot completely replace full fine-tuning on vision tasks. Therefore, we propose Mona-tuning, an alternative to full fine-tuning for more visual tasks, which outperforms full fine-tuning in both new parameter sizes and performance." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "In this section, we present the proposed method in four parts, including the adapter-tuning paradigm (Section 3.1), Mona (Section 3.2), the design process (Section 3.3), and parameter analysis (Section 3.4)." }, { "figure_ref": [ "fig_0" ], "heading": "Adapter-tuning", "publication_ref": [], "table_ref": [], "text": "Previous work [53] discussed adapter fine-tuning, and we briefly introduce related concepts here. Figure 2 illustrates the difference between full fine-tuning (exemplified by Swin-Block) and the adapter-tuning paradigm (exemplified by Mona). Full fine-tuning updates all parameters in the pretrained backbone, while adapter-tuning fixes the pre-trained parameters and updates the parameters in adapters. For dataset\nD = {(x i , y i )} N i=1\n, the optimization process of full fine-tuning and adapter-tuning can be expressed as Equation 1and Equation 2:\nθ ← arg min loss(D, θ) θ ,(1)\nω ← arg min loss(D, θ F , ω) ω , (2\n)\nwhere loss is the training loss, θ represents parameters of the whole framework, and θ F is the fixed parameters in adaptertuning. ω represents updated parameters in adapter-tuning, including parameters in adapters and parameters outside the backbone." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Mona", "publication_ref": [ "b17", "b31", "b19", "b27", "b34", "b23" ], "table_ref": [], "text": "Multi-Cognitive Visual Filters. Previous CV adapter art [6, 18,25,32] is based on linear structures, mainly including down projection, nonlinear activation, up projection, and skip connections. Vanilla adapter [20] is for natural language signals and is not optimized for visual signals. Given the limitations of vanilla adapters, we focus on how to make the adapter better at transferring visual knowledge. For visual cognition, human eyes process visual signals from different scales and integrate them for better understanding [28,35,44]. Adapters should also process upstream features from multiple cognitive perspectives for better performance on downstream tasks. We introduce multiple convolutional filters to Mona to increase the cognitive dimension. Depth-Wise Convolutions [44] (DWConv) instead of standard convolutions are employed in Mona to minimize additional parameters. Specifically, the upstream features go through three DWConv filters after down projection. Convolution kernels are 3×3, 5×5 and 7×7. We compute the average results from three filters and aggregate features with the 1×1 convolution. Then, features are nonlinearized by GeLU. We add skip-connections at multiple places in Mona to minimize feature losses during convolution (see Figure 3). Finally, the feature dimension is recovered by up projection. Input Optimization. For the adapter, the input features come from fixed layers. Fixed layers cannot adjust their parameter spaces according to new tasks' sample space, so the pre-training task affects inputs from the fixed layer. Vanilla adapter directly downscales inputs from fixed layers, which is not a good choice for visual tasks. As a result, we enable Mona to adjust the input distributions and the proportion of inputs from the fixed layer itself. 
Specifically, we add a norm layer and two learnable weights, S 1 and S 2 , to the top end of Mona to adjust the input distribution. Previous work indicates that normalization [51] helps to stabilize the forward input distribution and the backpropagated gradient. We find in practice that LayerNorm (LN) [51] is better than Batch-Norm [24], so we employ LN in Mona. Figure 3 illustrates our design." }, { "figure_ref": [ "fig_2" ], "heading": "Design Process", "publication_ref": [], "table_ref": [], "text": "The design of Mona undergoes several iterations, as depicted in Figure 4. The initial version incorporates multiple convolutional filters into the vanilla adapter to enhance its ability to process visual signals. This design outperforms previous adapter-based approaches and surpasses full fine-tuning in most tasks. We seek ways to optimise Mona to exceed full fine-tuning in all our tasks. Convolutional filters after down projection effectively enhance Mona's ability to alter visual knowledge in low-dimensional space. However, this version cannot optimise the input features originating from the fixed layers. We hope to send \"controllable\" and \"clean\" features to convolutional filters. As discussed in Section 3.2, prior art demonstrates that LN can effectively adjust the distribution of features. In view of this, we try to enhance Mona by incorporating LN at various locations. Initially, we add Scaled LNs after down projection and before 1×1 convolution (2 nd version). However, this design does not yield results as promising as the initial version. We then average the summation of the DWConvs (3 rd version), which slightly improves the second version's performance but still falls short of the first version. We consider that optimising the inputs of the subspace is not sufficient to improve the inputs of Mona. Therefore, we place the Scaled LN at the beginning of the entire adapter. In this version, Mona outperforms full fine-tuning across all tasks, and the final version also retains the positive averaging operation from the 3 rd version." }, { "figure_ref": [], "heading": "Parameter Analysis", "publication_ref": [], "table_ref": [], "text": "The parameters of Mona come from LN, scaling factors, linear layers, DWconv and 1×1 conv. Assuming that the input dimension of the adapter is m and the dimension after down projection is n, the parameters of the LN and scaling factors are 2m + 2, the parameters of the two linear layers are 2mn + m + n, the parameters of the DWConv layer are (3 2 + 5 2 + 7 2 )n = 83n, and the PWConv is n 2 . The total parameter of each Mona module are\n(2n + 3)m + n 2 + 84n + 2.\nFor each block, all Mona parameters are: 2 × ((2n + 3)m + n 2 + 84n + 2). We set the value of n to a constant (64) to reduce parameters in Mona." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We " }, { "figure_ref": [], "heading": "Pretrained models", "publication_ref": [ "b32" ], "table_ref": [], "text": "The Swin Transformer series [33] " }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_3", "tab_3", "tab_3", "tab_4", "tab_4" ], "text": "Table 1 shows the results of the proposed method and baselines on the COCO dataset. Instance segmentation is the most challenging task of all the experimental tasks, and COCO is also the largest of all the experimental datasets.\nResults on COCO can better demonstrate the potential of adapter-tuning in visual tasks compared to other tasks. 
From Table 1, we find that Mona, based on multi-cognitive visual filters, outperforms all baselines. Moreover, Mona is the only method that outperforms full fine-tuning, resulting in a significant gain of 1%. COCO experiments effectively demonstrate the capability of the proposed method and show a better option than full fine-tuning in terms of storage and performance. Among delta-tuning methods, most baselines without extra structure can save more new parameters (except Partial-1), but their average performance is lower than that with extra structure. For baselines with additional structure, the adapter-based approach is superior to the reparameterization-based approach (LoRA). LoRA is one of the recent representative delta-tuning approaches widely used in NLP tasks, but its performance weaknesses render it unsuitable for computer vision tasks. Table 1 indicates that the performance of delta-tuning is not directly proportional to parameter sizes. Partial-1 has the most updated parameters among all baselines, but its performance is significantly lower than that of adapter-based baselines. This result suggests that superior module design can effectively enhance the transferring efficiency of pre-trained models while reducing massive storage consumption. We show results in Table 2 for two tasks, namely object detection on Pascal VOC and semantic segmentation on ADE20K. The proposed Mona outperforms all baseline methods on these two representative vision tasks. Mona produces a performance gain of 3.6% and 0.18% on the two tasks compared to full fine-tuning. Table 2 again indicates that full fine-tuning is not the best choice for visual transfer learning. For other baselines, conclusions on COCO are confirmed again on VOC and ADE20K. Interestingly, it is different from COCO and ADE20K that all baselines exceed full fine-tuning on VOC. The VOC dataset has relatively little data, which may lead to overfitting when full fine-tuning a 198M Swin-Large pretained model. Compared to full fine-tuning, other methods fix most pre-trained parameters, so the model performance is less likely to collapse severely during tuning. NLP scholars treat similar cases as" }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Flowers102", "publication_ref": [ "b10", "b39" ], "table_ref": [ "tab_3", "tab_4" ], "text": "OxfordPets VOC2007 Average top-1 acc. top-5 acc. top-1 acc. top-5 acc. top-1 acc. top-5 acc. top-1 acc. top-5 acc. Table 3. Results of baselines and our methods on three classification datasets. Swin-L is employed as the pre-trained model here. We present top-1 accuracy (%) and top-5 accuracy (%) of each dataset. The best result in each column is bolded low-resource cases [11,40]. The object detection results here can be considered as a low-resource case in CV. For ADE20K, the performance gaps between baselines without additional and adapter-based baselines are more significant than VOC and COCO. For parameter sizes, most methods in Tables 1 and2 (except Partial-1) produce less than 5% new backbone parameters, which is the characteristic and advantage of delta-tuning. Despite the slight increase in parameters, Mona still outperforms the previous art and breaks the full fine-tuning performance ceiling by a wide margin." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b17" ], "table_ref": [], "text": "We have shown the individual and average results on three classification datasets in Table 3. 
Mona outperforms all the baselines on Flowers102, OxfordPets, and outperforms the average results of all baselines. Table 3 indicates that Mona has a high transfer efficiency on relatively simple tasks. In addition, we find that the average results of all delta-tuning methods surpass full fine-tuning, which is similar to conclusions in previous art [18]. The pre-trained model we used is Swin-Large (198M parameters) trained on ImageNet-22K, whose powerful knowledge has enabled Flower102 and OxfordPets to achieve very high scores. Compared to classification tasks, more complex dense prediction tasks (object detection, semantic segmentation, instance segmentation) are more suitable for reflecting the differences between different fine-tuning paradigms.\nIn summary, the results of Tables 1 to 3 can be summarized in two aspects: 1) As to performance, the widely used full fine-tuning paradigm in art like Swin and EVA is no longer the optimal choice for visual tasks. The proposed Mona-tuning surpasses the performance ceiling of full finetuning in representative tasks such as instance segmentation, semantic segmentation, object detection, and image classification. Specifically, Mona achieves a 1% AP gain over full fine-tuning in the challenging COCO instance segmentation task. 2) Mona, based on multi-cognitive visual filtering, surpasses recent remarkable baselines in most tasks. Mona comprehensively enhances the practicality and generality of delta-tuning in visual tasks. Mona-tuning not only significantly reduces storage costs, but also further elevates the performance ceiling of visual tasks." }, { "figure_ref": [], "heading": "Loss Analysis", "publication_ref": [], "table_ref": [], "text": "We present the loss converging process for Mona and five representative baselines on the object detection task (Pascal VOC) in Figure 5. The proposed method yields a significant advantage in the convergence process compared to full fine-tuning, which explains its better performance on VOC. Mona also converges faster than other delta-tuning methods, suggesting that multi-cognitive visual filters can better process visual features and accelerate the convergence of transfer learning. Convergence analysis again demonstrates that the proposed method is a highly competitive visual transfer learning method and full fine-tuning is no longer the optimal choice for visual tasks." }, { "figure_ref": [], "heading": "Ablations", "publication_ref": [ "b49", "b33" ], "table_ref": [ "tab_8", "tab_8", "tab_8" ], "text": "In this section, we conduct several ablation experiments to discuss some detailed issues of the model, including the impact of intermediate dimensions on the model results and the relationship between model sizes and Mona-tuning. For the fairness and clarity of the experiment, all ablation experiments are conducted on Pascal VOC.\nThe workflow of the adapter is to compress the input In addition to the above issue, we are also very concerned about Mona's performance on models of different sizes. We change the size of the backbone network under the same settings, and the model candidates are 29M Swin-T, 88M Swin-B, and 197M Swin-L. Table 5 shows the results of full fine-tuning and Mona-tuning under three settings. We can draw the following three conclusions from Table 5. First, the more parameters the backbone network has, the smaller the proportion of Mona parameters for the same Mona setting. This result indicates that Mona-tuning can save more parameters when the backbone gets larger. 
Existing visual models are getting larger and larger. InternImage-H [50] reaches 1.08B parameters, and SwinV2-G [34] reaches 3B. Parameter-efficient Mona-tuning can save billions of parameters and massive storage costs in the era of large models. Second, Mona surpasses full fine-tuning on three model settings, and its performance improves when the model size grows. Table 5 shows that Mona-tuning can improve training efficiency and performance in smaller models. We just discussed Mona's advantages for large models. However, more resource-limited research teams and project groups use small models. Mona-tuning also has the potential to help resource-limited visual researchers effectively leverage high-performance large models in their own applications. Third, the proposed method is more capable of stimulating the potential of large models compared to full fine-tuning. From Swin-T to Swin-L, full fine-tuning brings a 3.6% performance gain, while Mona brings 3.8%. In other words, Mona can achieve better performance as the model gets larger and help further increase the upper bound for performance-sensitive tasks." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper proposes a novel visual fine-tuning method, the multi-cognitive visual adapter (Mona) tuning, which effectively enhances the efficiency and performance of visual fine-tuning. Comprehensive experiments demonstrate that the proposed Mona outperforms the traditional full fine-tuning paradigm and other delta-tuning methods across representative tasks, including instance segmentation, semantic segmentation, object detection, and image classification. In the era of large models, full fine-tuning is no longer the optimal choice for visual tasks. We hope that Mona-tuning can effectively improve the transferring efficiency of large models and bring performance breakthroughs on more visual tasks." } ]
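To make the Mona design described in Section 3.2 more concrete (scaled LayerNorm with two learnable weights S1 and S2, down projection, 3×3/5×5/7×7 depth-wise convolutions that are averaged and then aggregated by a 1×1 convolution, GeLU, and up projection), a minimal PyTorch sketch is given below. The class and argument names are our own, and the exact placement of the four internal skip-connections and of the final residual addition is an assumption rather than a detail taken from the paper.

```python
import torch
import torch.nn as nn

class MonaSketch(nn.Module):
    """Minimal sketch of a multi-cognitive visual adapter (dims: m -> n -> m)."""

    def __init__(self, in_dim: int, mid_dim: int = 64):
        super().__init__()
        self.norm = nn.LayerNorm(in_dim)
        # Two learnable scalars weighting the normalized and the raw input (S1, S2).
        self.s1 = nn.Parameter(torch.ones(1))
        self.s2 = nn.Parameter(torch.ones(1))
        self.down = nn.Linear(in_dim, mid_dim)             # down projection to n channels
        # Multi-cognitive depth-wise filters at three kernel sizes.
        self.dwconvs = nn.ModuleList([
            nn.Conv2d(mid_dim, mid_dim, k, padding=k // 2, groups=mid_dim)
            for k in (3, 5, 7)
        ])
        self.aggregate = nn.Conv2d(mid_dim, mid_dim, 1)    # 1x1 aggregation filter
        self.act = nn.GELU()
        self.up = nn.Linear(mid_dim, in_dim)               # up projection back to m channels

    def forward(self, tokens: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # tokens: (B, H*W, C) patch features coming from a frozen block.
        x = self.s1 * self.norm(tokens) + self.s2 * tokens
        z = self.down(x)                                   # (B, H*W, n)
        b, l, n = z.shape
        z2d = z.transpose(1, 2).reshape(b, n, h, w)        # back to 2D for the convolutions
        z2d = sum(f(z2d) for f in self.dwconvs) / 3.0 + z2d    # averaged filters + skip
        z2d = self.aggregate(z2d) + z2d                    # 1x1 aggregation + skip
        z = z2d.reshape(b, n, l).transpose(1, 2)
        z = self.act(z) + z                                # nonlinearity + skip (assumed placement)
        return tokens + self.up(z)                         # adapter output added to the input (assumed)
```

For example, `MonaSketch(768)(torch.randn(2, 49, 768), h=7, w=7)` processes two 7×7 token windows and returns features of the same shape, so such a module can sit behind a frozen block without changing downstream interfaces.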
Pre-training & fine-tuning can enhance the transferring efficiency and performance in visual tasks. Recent deltatuning methods provide more options for visual classification tasks. Despite their success, existing visual delta-tuning art fails to exceed the upper limit of full fine-tuning on challenging tasks like instance segmentation and semantic segmentation. To find a competitive alternative to full fine-tuning, we propose the Multi-cognitive Visual Adapter (Mona) tuning, a novel adapter-based tuning method. First, we introduce multiple vision-friendly filters into the adapter to enhance its ability to process visual signals, while previous methods mainly rely on language-friendly linear filters. Second, we add the scaled normalization layer in the adapter to regulate the distribution of input features for visual filters. To fully demonstrate the practicality and generality of Mona, we conduct experiments on multiple representative visual tasks, including instance segmentation on COCO, semantic segmentation on ADE20K, object detection on Pascal VOC, and image classification on several common datasets. Exciting results illustrate that Mona surpasses full fine-tuning on all these tasks and is the only delta-tuning method outperforming full fine-tuning on instance segmentation and semantic segmentation tasks. For example, Mona achieves a 1% performance gain on the COCO dataset compared to full finetuning. Comprehensive results suggest that Mona-tuning is more suitable for retaining and utilizing the capabilities of pre-trained models than full fine-tuning. The code will be released at https://github.com/Leiyi-Hu/mona.
Adapter is All You Need for Tuning Visual Tasks
[ { "figure_caption": "Figure 2 .2Figure2. Left: All parameters in the classic full fine-tuning paradigm need to be updated. We employ the remarkable Swin Transformer series as backbones. Right: The proposed Monatuning. We add Mona after MSA and MLP in each SwinBlock. The proposed method fixes the parameters of pre-trained layers and updates the parameters of Mona.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Details of Mona. Mona has a scaled LayerNorm before the down projection. A multi-cognitive convolutional filter group and an aggregation filter are behind the down projection. We add skip-connections at four places inside Mona to strengthen its adaptation capabilities. Mona enables the adapter-based fine-tuning paradigm to outperform full fine-tuning in typical visual tasks comprehensively.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Design Iterations. Mona surpasses full fine-tuning on typical visual tasks after many iterations. The first version introduces multi-cognitive visual filters and outperforms previous adapter-based tuning art on visual tasks. We add LNs at different places and incorporate averaging operations to optimise Mona further. In fact, Mona finally succeeds in achieving our desired goal through more than 15 versions.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "implement sufficient experiments on multiple representative visual tasks to demonstrate the superiority of Monatuning. Sections 4.1 ∼ 4.3 present the experimental settings, including datasets, pre-trained models and baselines. Section 4.4 shows the experimental results and analysis. Section 4.5 analyses the convergence processes of different approches. We design ablation experiments in Section 4.6 to illustrate some details and generalisability of the proposed method. Hyperparameters and detailed settings for model training are shown in the Supplementary Material. Pascal VOC 0712 [13] has 16k/5k training/validation images and is used for object detection tasks. We employ Swin-Large + RetinaNet for training. The evaluation metric for object detection task is the most commonly used AP box . Semantic Segmentation. ADE20K[56] is the most widely used semantic segmentation dataset containing 20K training and 2K validation images. We employ Swin-Large + UperNet for experiments on semantic segmentation. The evaluation metric here is the most commonly used mIoU.", "figure_data": "4.1. DatasetsObject Detection. Instance Segmentation. MS COCO [30] is a representativeinstance segmentation dataset with 118k training images and5k validation images. We employ Swin-Base + CascadeMask RCNN for training. Evaluation metrics for instancesegmentation task are AP box and AP M ask .Image Classification. Classification tasks have been wellstudied in previous art. To increase the broadness of ourexperiments, we also demonstrate Mona's generality on sev-eral widely used classification datasets. Specifically, weconduct our experiments on the Oxford 102 Flower Dataset[37], the Oxford-IIIT Pet Dataset [39], and the VOC 2007Classification Challenge dataset [12]. Oxford 102 FlowerDataset has 102 categories of flowers, with between 40 and258 images per category. Oxford-IIIT Pet Dataset has 37categories of pets, with about 200 images per category. 
VOC2007 Classification Challenge Dataset contains about 10kimages and 20 labeled categories. The top-1 and top-5 ac-curacies are used as evaluation metrics. We also report theaverage performance of each method.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Results of baselines and our methods on COCO benchmarks. Swin-B is employed as the pre-trained model here. We present the numbers and percentages of trainable backbone parameters on the left and all the performences on the right. * denotes the trainable parameters in backbones. The best AP in each column is bolded.", "figure_data": "Swin-B (89M)Trained * Params%∆ F ullExtra StructureAP BoxCOCO (Cascade Mask R-CNN) ∆ F ull AP Mask∆ F ullBaselinesFULL89.14 M100.00 %-✗52.40 %-45.10 %-FIXED0.00 M0.00 %-100.00 %✗48.00 %-4.40 %41.60 %-3.50 %BITFIT0.21 M0.23 %-99.77 %✗50.10 %-2.30 %43.60 %-1.50 %NORMTUNING0.06 M0.07 %-99.93 %✗50.10 %-2.30 %43.50 %-1.60 %PARTIAL-112.95 M14.53 %-85.47 %✗50.60 %-1.80 %43.70 %-1.40 %ADAPTER3.19 M3.58 %-96.42 %✓52.10 %-0.30 %45.00 %-0.10 %LORA3.06 M3.43 %-96.57 %✓50.40 %-2.00 %43.90 %-1.20 %ADAPTFORMER1.60 M1.79 %-98.21 %✓51.70 %-0.70 %44.60 %-0.50 %LORAND1.20 M1.34 %-98.66 %✓51.00 %-1.40 %43.90 %-1.20 %Our MethodMONA4.16 M4.67 %-95.33 %✓53.4 %+ 1.00 %46.00 %+ 0.90 %Swin-L (198M)Trained * Params%∆ F ullExtra StructurePascal VOC (RetinaNet) AP Box ∆ F ullADE20K (UperNet) mIoU ∆ F ullBaselinesFULL198.58 M100.00 %-✗83.70 %-51.18 %-FIXED0.00 M0.00 %-100.00 %✗83.80 %+ 0.10 %46.84 %-4.34 %BITFIT0.30 M0.15 %-99.85 %✗85.40 %+ 1.70 %48.37 %-2.81 %NORMTUNING0.10 M0.05 %-99.95 %✗85.50 %+ 1.80 %47.89 %-3.29 %PARTIAL-128.77 M14.53 %-85.47 %✗85.50 %+ 1.80 %47.44 %-3.74 %ADAPTER4.61 M2.33 %-97.67 %✓86.70 %+ 3.00 %50.78 %-0.40 %LORA4.57 M2.31 %-97.69 %✓85.40 %+ 1.70 %50.34 %-0.84 %ADAPTFORMER2.34 M1.18 %-98.82 %✓86.60 %+ 2.90 %50.83 %-0.35 %LORAND1.31 M0.66 %-99.34 %✓86.80 %+ 3.10 %50.76 %-0.42 %Our MethodMONA5.08 M2.56 %-97.44 %✓87.30 %+ 3.60 %51.36 %+ 0.18 %", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results of baselines and our methods on Pascal VOC and ADE20K benchmarks. Swin-L is employed as the pre-trained model here. We present the numbers and percentages of trainable backbone parameters on the left and all the performences on the right. * denotes the trainable parameters in backbones. The best AP/mIoU in each column is bolded.", "figure_data": "", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Loss curves. Among all the methods, the proposed method converges faster and significantly exceeds the full finetuning.from pre-trained layers into a low-dimensional feature space and transfer the knowledge in pre-trained layers to the new models by updating the adapter's parameters. Therefore, the intermediate dimension of the adapter is an important factor for model performance. We ablate the intermediate dimension of Mona and present the results in Table4. We only change the dimensions of Mona and fix other settings.", "figure_data": "AdaptFormerAdaptFormer Fixed AdaptFormer Fixed1.35 1.35FixedAdapter Adapter LoRA LoRA Full FullAdapter LoRAMona MonaFull1.05 1.05 LossMona0.75 0.75 Training0.45 0.45151 156 151 156 0161 161166 166171 171176 181 176 181 1800 186 186191 191196 1960.15 0.1516 11162126 300 3136 41465156 600 6166717681 Iterations 86 91 96 101 106 111 900116 1200 121 1261311361411500 146 151 156 161 166 171 176 181 186 191 196 1800Figure 5. Dimension candidates are 32, 64, and 128. 
The results ofTable 4 show that the 64-dimension result surpasses thatof smaller 32-dimension and larger 128-dimension, whichis interesting. Chen et al. [7] also study the intermediatedimension of AdaptFormer. They find that the 64-dimensionAdaptFormer surpasses its 32-and 256-dimension versionsin visual classification tasks, which is consistent with ourconclusion. The results of Table 4 and Chen et al. indi-cate that the intermediate dimension of the adapter is notproportional to the performance, which means that a largernumber of adapter parameters does not necessarily lead tobetter results.", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Ablations of intermediate dimensions. We fix other settings and evaluate the performance of Mona with 32, 64, and 128 intermediate dimensions on the VOC dataset. The configuration with 64 intermediate dimensions achieves the best performance and is consequently chosen as the general setting for Mona. * denotes the trainable parameters in backbones.", "figure_data": "Intermediate DimensionsTrained Params*AP Box321.35 %86.8 %642.56 %87.3 %1285.22 %87.1 %ModelFULL (VOC)MONA (VOC)Param % (Mona)Swin-T80.1 %83.5 %4.87 %Swin-B81.6 %86.5 %4.06 %Swin-L83.7 %87.3 %2.56 %", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Performance of mona on models with different sizes. Mona outperforms full fine-tuning in all configurations, indicating that the model size does not constrain Mona's superiority. Mona can save more new parameters on larger models, which is significant in the era of large models.", "figure_data": "", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" } ]
Dongshuo Yin; Leiyi Hu; Bin Li; Youqun Zhang
[ { "authors": "Abhinav Agrawal; Namita Mittal", "journal": "The Visual Computer", "ref_id": "b0", "title": "Using cnn for facial expression recognition: a study of the effects of kernel size and number of filters on accuracy", "year": "2020" }, { "authors": "Tareq Abed Saad Albawi; Saad Mohammed; Al-Zawi", "journal": "Ieee", "ref_id": "b1", "title": "Understanding of a convolutional neural network", "year": "2017" }, { "authors": "Caisse Amisse; Mario Ernesto Jijón-Palma; Jorge Antonio; Silva Centeno", "journal": "Boletim de Ciências Geodésicas", "ref_id": "b2", "title": "Fine-tuning deep learning models for pedestrian detection", "year": "2021" }, { "authors": "Danupon Chansong; Siriporn Supratid", "journal": "IEEE", "ref_id": "b3", "title": "Impacts of kernel size on different resized images in object recognition based on convolutional neural network", "year": "2021" }, { "authors": "Kai Chen; Jiaqi Wang; Jiangmiao Pang; Yuhang Cao; Yu Xiong; Xiaoxiao Li; Shuyang Sun; Wansen Feng; Ziwei Liu; Jiarui Xu", "journal": "", "ref_id": "b4", "title": "Mmdetection: Open mmlab detection toolbox and benchmark", "year": "2019" }, { "authors": "Shoufa Chen; Zhan Ge Chongjian; Jiangliu Tong; Yibing Wang; Jue Song; Ping Wang; Luo", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b5", "title": "Adaptformer: Adapting vision transformers for scalable visual recognition", "year": "" }, { "authors": "Shoufa Chen; Chongjian Ge; Zhan Tong; Jiangliu Wang; Yibing Song; Jue Wang; Ping Luo", "journal": "", "ref_id": "b6", "title": "Adaptformer: Adapting vision transformers for scalable visual recognition", "year": "2022" }, { "authors": "Tianrun Chen; Lanyun Zhu; Chaotao Deng; Runlong Cao; Yan Wang; Shangzhan Zhang; Zejian Li; Lingyun Sun; Ying Zang; Papa Mao", "journal": "", "ref_id": "b7", "title": "Sam-adapter: Adapting segment anything in underperformed scenes", "year": "2023" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "Ieee", "ref_id": "b8", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Ning Ding; Yujia Qin; Guang Yang; Fuchao Wei; Zonghan Yang; Yusheng Su; Shengding Hu; Yulin Chen; Chi-Min Chan; Weize Chen", "journal": "Nature Machine Intelligence", "ref_id": "b9", "title": "Parameter-efficient fine-tuning of largescale pre-trained language models", "year": "2023" }, { "authors": "Jesse Dodge; Gabriel Ilharco; Roy Schwartz; Ali Farhadi; Hannaneh Hajishirzi; Noah Smith", "journal": "", "ref_id": "b10", "title": "Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping", "year": "2020" }, { "authors": "M Everingham; L Van Gool; C K I Williams; J Winn; A Zisserman", "journal": "", "ref_id": "b11", "title": "The PASCAL Visual Object Classes Challenge", "year": "2007" }, { "authors": "Mark Everingham; Luc Eslami; Van Gool; K I Christopher; John Williams; Andrew Winn; Zisserman", "journal": "International journal of computer vision", "ref_id": "b12", "title": "The pascal visual object classes challenge: A retrospective", "year": "2015" }, { "authors": "Yuxin Fang; Wen Wang; Binhui Xie; Quan Sun; Ledell Wu; Xinggang Wang; Tiejun Huang; Xinlong Wang; Yue Cao", "journal": "", "ref_id": "b13", "title": "Eva: Exploring the limits of masked visual representation learning at scale", "year": "2023" }, { "authors": "Luciano Floridi; Massimo Chiriatti", "journal": "Minds and Machines", "ref_id": "b14", "title": "Gpt-3: Its nature, scope, limits, and 
consequences", "year": "2020" }, { "authors": "Angeliki Giannou; Shashank Rajput; Dimitris Papailiopoulos", "journal": "", "ref_id": "b15", "title": "The expressive power of tuning only the norm layers", "year": "2023" }, { "authors": "Jiuxiang Gu; Zhenhua Wang; Jason Kuen; Lianyang Ma; Amir Shahroudy; Bing Shuai; Ting Liu; Xingxing Wang; Gang Wang; Jianfei Cai", "journal": "Pattern recognition", "ref_id": "b16", "title": "Recent advances in convolutional neural networks", "year": "2018" }, { "authors": "Xuehai He; Chunyuan Li; Pengchuan Zhang; Jianwei Yang; Xin Eric; Wang ", "journal": "", "ref_id": "b17", "title": "Parameter-efficient fine-tuning for vision transformers", "year": "2022" }, { "authors": "Xuehai He; Chunyuan Li; Pengchuan Zhang; Jianwei Yang; Xin Eric; Wang ", "journal": "", "ref_id": "b18", "title": "Parameter-efficient model adaptation for vision transformers", "year": "2023" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "PMLR", "ref_id": "b19", "title": "Parameter-efficient transfer learning for nlp", "year": "2019" }, { "authors": "J Edward; Phillip Hu; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b20", "title": "Lowrank adaptation of large language models", "year": "" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b21", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Shengding Hu; Zhen Zhang; Ning Ding; Yadao Wang; Yasheng Wang; Zhiyuan Liu; Maosong Sun", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Sparse structure search for delta tuning", "year": "2022" }, { "authors": "Sergey Ioffe; Christian Szegedy", "journal": "", "ref_id": "b23", "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "year": "2015" }, { "authors": "Menglin Jia; Luming Tang; Bor-Chun Chen; Claire Cardie; Serge Belongie; Bharath Hariharan; Ser-Nam Lim", "journal": "", "ref_id": "b24", "title": "Visual prompt tuning", "year": "2022" }, { "authors": "Christoph Käding; Erik Rodner; Alexander Freytag; Joachim Denzler", "journal": "Springer", "ref_id": "b25", "title": "Fine-tuning deep neural networks in continuous learning scenarios", "year": "2016" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo", "journal": "", "ref_id": "b26", "title": "Segment anything", "year": "2023" }, { "authors": "Jane F Koretz; George H Handelman", "journal": "Scientific American", "ref_id": "b27", "title": "How the human eye focuses", "year": "1988" }, { "authors": "Zewen Li; Fan Liu; Wenjie Yang; Shouheng Peng; Jun Zhou", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b28", "title": "A survey of convolutional neural networks: analysis, applications, and prospects", "year": "2021" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b29", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Xiao Liu; Kaixuan Ji; Yicheng Fu; Weng Tam; Zhengxiao Du; Zhilin Yang; Jie Tang", "journal": "", "ref_id": "b30", 
"title": "P-tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks", "year": "2022" }, { "authors": "Yen-Cheng Liu; Chih-Yao Ma; Junjiao Tian; Zijian He; Zsolt Kira", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "Polyhistor: Parameter-efficient multi-task adaptation for dense vision tasks", "year": "" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b32", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Ze Liu; Han Hu; Yutong Lin; Zhuliang Yao; Zhenda Xie; Yixuan Wei; Jia Ning; Yue Cao; Zheng Zhang; Li Dong", "journal": "", "ref_id": "b33", "title": "Swin transformer v2: Scaling up capacity and resolution", "year": "2022" }, { "authors": "Susana Martinez-Conde; Stephen L Macknik; David H Hubel", "journal": "Nature reviews neuroscience", "ref_id": "b34", "title": "The role of fixational eye movements in visual perception", "year": "2004" }, { "authors": "Bonan Min; Hayley Ross; Elior Sulem; Amir Pouran; Ben Veyseh; Thien Huu Nguyen; Oscar Sainz; Eneko Agirre; Ilana Heintz; Dan Roth", "journal": "ACM Computing Surveys", "ref_id": "b35", "title": "Recent advances in natural language processing via large pre-trained language models: A survey", "year": "2023" }, { "authors": "Maria-Elena Nilsback; Andrew Zisserman", "journal": "IEEE", "ref_id": "b36", "title": "Automated flower classification over a large number of classes", "year": "2008" }, { "authors": "Şaban Öztürk; Umut Özkaya; Bayram Akdemir; Levent Seyfi", "journal": "IEEE", "ref_id": "b37", "title": "Convolution kernel size effect on convolutional neural network in histopathological image processing applications", "year": "2018" }, { "authors": "Andrea Omkar M Parkhi; Andrew Vedaldi; Zisserman; Jawahar", "journal": "IEEE", "ref_id": "b38", "title": "Cats and dogs", "year": "2012" }, { "authors": "Sebastian Matthew E Peters; Noah A Ruder; Smith", "journal": "", "ref_id": "b39", "title": "To tune or not to tune? 
adapting pretrained representations to diverse tasks", "year": "2019" }, { "authors": "Janani Himashi Rathnayake; Raveesha Sumanapala; Surangika Rukshani; Ranathunga", "journal": "Knowledge and Information Systems", "ref_id": "b40", "title": "Adapter-based fine-tuning of pre-trained multilingual language models for code-mixed and code-switched text classification", "year": "2022" }, { "authors": "Chompunuch Sarasaen; Soumick Chatterjee; Mario Breitkopf; Georg Rose; Andreas Nürnberger; Oliver Speck", "journal": "Artificial Intelligence in Medicine", "ref_id": "b41", "title": "Finetuning deep learning model parameters for improved superresolution of dynamic mri with prior-knowledge", "year": "2021" }, { "authors": "Yi-Lin Sung; Jaemin Cho; Mohit Bansal", "journal": "", "ref_id": "b42", "title": "Vl-adapter: Parameter-efficient transfer learning for vision-and-language tasks", "year": "2022" }, { "authors": "Christian Szegedy; Wei Liu; Yangqing Jia; Pierre Sermanet; Scott Reed; Dragomir Anguelov; Dumitru Erhan; Vincent Vanhoucke; Andrew Rabinovich", "journal": "", "ref_id": "b43", "title": "Going deeper with convolutions", "year": "2015" }, { "authors": "Robert Tinn; Hao Cheng; Yu Gu; Naoto Usuyama; Xiaodong Liu; Tristan Naumann; Jianfeng Gao; Hoifung Poon", "journal": "Patterns", "ref_id": "b44", "title": "Finetuning large neural language models for biomedical natural language processing", "year": "2023" }, { "authors": "Edna Chebet Too; Li Yujian; Sam Njuki; Liu Yingchun", "journal": "Computers and Electronics in Agriculture", "ref_id": "b45", "title": "A comparative study of fine-tuning deep learning models for plant disease identification", "year": "2019" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale", "journal": "", "ref_id": "b46", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "" }, { "authors": "Rosalia Tufano; Simone Masiero; Antonio Mastropaolo; Luca Pascarella; Denys Poshyvanyk; Gabriele Bavota", "journal": "", "ref_id": "b47", "title": "Using pre-trained models to boost code review automation", "year": "2022" }, { "authors": "Jindong Wang; Yiqiang Chen", "journal": "Springer", "ref_id": "b48", "title": "Pre-training and fine-tuning", "year": "2022" }, { "authors": "Wenhai Wang; Jifeng Dai; Zhe Chen; Zhenhang Huang; Zhiqi Li; Xizhou Zhu; Xiaowei Hu; Tong Lu; Lewei Lu; Hongsheng Li", "journal": "", "ref_id": "b49", "title": "Internimage: Exploring large-scale vision foundation models with deformable convolutions", "year": "2023" }, { "authors": "Jingjing Xu; Xu Sun; Zhiyuan Zhang; Guangxiang Zhao; Junyang Lin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b50", "title": "Understanding and improving layer normalization", "year": "2019" }, { "authors": "Dongshuo Yin; Xueting Han; Bin Li; Hao Feng; Jing Bai", "journal": "", "ref_id": "b51", "title": "Parameter-efficient is not sufficient: Exploring parameter, memory, and time efficient adapter tuning for dense predictions", "year": "2023" }, { "authors": "Dongshuo Yin; Yiran Yang; Zhechao Wang; Hongfeng Yu; Kaiwen Wei; Xian Sun", "journal": "", "ref_id": "b52", "title": "1% vs 100%: Parameter-efficient low rank adapter for dense predictions", "year": "2023" }, { "authors": "Jason Yosinski; Jeff Clune; Yoshua Bengio; Hod Lipson", "journal": "", "ref_id": "b53", "title": "How transferable are features in deep neural networks? 
Advances in neural information processing systems", "year": "2014" }, { "authors": "Elad Ben Zaken; Shauli Ravfogel; Yoav Goldberg", "journal": "", "ref_id": "b54", "title": "Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models", "year": "2021" }, { "authors": "Bolei Zhou; Hang Zhao; Xavier Puig; Sanja Fidler; Adela Barriuso; Antonio Torralba", "journal": "", "ref_id": "b55", "title": "Scene parsing through ade20k dataset", "year": "2017" }, { "authors": "Xin Zhou; Ruotian Ma; Yicheng Zou; Xuanting Chen; Tao Gui; Qi Zhang; Xuan-Jing Huang; Rui Xie; Wei Wu", "journal": "", "ref_id": "b56", "title": "Making parameter-efficient tuning more efficient: A unified framework for classification tasks", "year": "2022" }, { "authors": "Beier Zhu; Yulei Niu; Yucheng Han; Yue Wu; Hanwang Zhang", "journal": "", "ref_id": "b57", "title": "Prompt-aligned gradient for prompt tuning", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 339.41, 206.31, 74.07, 14.11 ], "formula_id": "formula_0", "formula_text": "D = {(x i , y i )} N i=1" }, { "formula_coordinates": [ 3, 376.92, 257.72, 168.86, 17.15 ], "formula_id": "formula_1", "formula_text": "θ ← arg min loss(D, θ) θ ,(1)" }, { "formula_coordinates": [ 3, 367.45, 283.09, 174.46, 16.63 ], "formula_id": "formula_2", "formula_text": "ω ← arg min loss(D, θ F , ω) ω , (2" }, { "formula_coordinates": [ 3, 541.91, 283.4, 3.87, 8.64 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 4, 369.34, 702.12, 115.28, 10.81 ], "formula_id": "formula_4", "formula_text": "(2n + 3)m + n 2 + 84n + 2." } ]
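As a quick arithmetic check of the per-module parameter count in formula_4 above ((2n + 3)m + n² + 84n + 2, cf. Section 3.4), the sketch below recomputes it term by term. Only the bottleneck width n = 64 comes from the paper; the channel widths m used in the loop are illustrative assumptions.

```python
def mona_params(m: int, n: int = 64) -> int:
    """Per-module parameter count of Mona, following the breakdown in Section 3.4."""
    norm_and_scales = 2 * m + 2            # LayerNorm weight/bias plus the two scaling factors
    linears = 2 * m * n + m + n            # down- and up-projection weights and biases
    dwconvs = (3**2 + 5**2 + 7**2) * n     # 3x3, 5x5 and 7x7 depth-wise kernels = 83n
    pwconv = n * n                         # 1x1 aggregation convolution
    total = norm_and_scales + linears + dwconvs + pwconv
    assert total == (2 * n + 3) * m + n * n + 84 * n + 2   # closed form from the paper
    return total

for m in (96, 192, 384, 768):              # illustrative channel widths, not taken from the paper
    print(m, mona_params(m), 2 * mona_params(m))   # per Mona module / per block (two modules)
```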
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b45", "b110", "b56", "b14", "b107", "b26", "b65", "b100", "b41", "b70", "b79", "b106", "b35", "b60", "b70", "b76", "b85", "b53", "b115", "b53", "b33", "b54", "b39", "b53", "b62", "b111", "b76", "b79", "b75", "b106", "b17", "b73", "b85", "b27", "b50" ], "table_ref": [], "text": "Visual salient object detection (SOD) and camouflaged object detection (COD) are two interconnected yet unique tasks. The goal of SOD is to identify prominent objects within an image that significantly contrast with their surroundings [5], which can be used to promote segmentation [46,111], detection [94], and Part-Object Relational visual saliency [57,58]. In contrast, COD focuses on identifying objects concealed within their environment; these objects intentionally blend in by sharing structural or textural similarities with their surroundings [15]. Despite the seemingly different definitions of SOD and COD, they both belong to the realm of binary segmentation and share some vital fundamental similarities, such as objectness and structuredness.
To cater to various scenarios, both SOD and COD have given rise to several sub-tasks with different modalities, including RGB SOD [83,108], RGB COD [27,66,101], RGB-D SOD [42,71], RGB-D COD [92], and RGB-T SOD [80,107]. By leveraging optical flow maps, Video SOD (VSOD) [44,91] and VCOD [10,36] tasks can also be seen as a combination of two modalities. The relationship of SOD, COD, and multimodal tasks is shown in Figure 1, where each specific task can be considered as a combination of two dimensions, i.e., domain and task. Although these multimodal tasks differ in the complementary cues they employ, these modalities share some key commonalities. For instance, depth, thermal, and optical flow maps often show obvious objectness as in RGB images.
Although previous CNN-based [10,61,71,77,86,109] and transformer-based [54,116] approaches have effectively addressed these tasks and achieved favorable results, they usually rely on meticulously designed models to tackle each task individually. Designing models specifically for individual tasks can be problematic since the training data of one task is typically limited.
Figure 2. Overall architecture of our VSCode model. We use VST [54] as the foundation model to acquire commonalities among multimodal SOD and COD tasks. For each task, we integrate 2D prompts to aggregate peculiarities along the domain dimension and the task dimension, including four domain-specific prompts and two task-specific prompts.
Task-specific specialist models may be overly adapted to a particular task and overfitted to a specific training data distribution, which ultimately sacrifices generalization ability and results in suboptimal performance. One solution may be to use more data; however, data annotation is costly and time-consuming. To this end, jointly learning a generalist model emerges as a more promising option, as it allows for the maximum use of all data and the effective learning of the commonalities of all tasks, hence significantly reducing the risk of overfitting and enhancing the generalization capability [34,55]. However, jointly learning multiple tasks is not straightforward. On one hand, simultaneously handling both commonalities and peculiarities of all tasks poses a significant challenge as the incompatibility among different tasks easily leads to a decline in performance with simple joint training [40]. 
On the other hand, it usually introduces additional complexity, computational costs, and parameters.\nIn this paper, we present a general Visual Salient and Camouflaged object detection (VSCode) model which encapsulates both commonalities and peculiarities of different tasks with a simple but effective design, as illustrated in Figure 2. On one hand, we adopt VST [54] as the shared segmentation foundation model to assimilate commonalities of different tasks by leveraging its simple and pure-transformerbased architecture. On the other hand, inspired by the recent emergence of the parameter-efficient prompting technique [31,63,112], we propose 2D prompts to capture task peculiarities. Specifically, we decompose these peculiarities along the domain dimension and the task dimension, and consequently design domain-specific prompts and task-specific prompts to comprehend the differences among diverse domains and tasks, respectively. These 2D prompts can effectively disentangle domain and task peculiarities, making our model easily adaptable by combining them to tackle specific tasks and even unseen ones. Furthermore, we present a prompt discrimination loss to encourage the 2D prompts to focus on acquiring adequate peculiarities and enable the foundational model to concentrate on commonality learning.\nFinally, we train our VSCode model on four SOD tasks and two COD tasks, demonstrating its effectiveness against state-of-the-art methods. What's more, we carry out evaluations on a reserved task and reveal remarkable zero-shot generalization ability of our model, which has never been explored in previous works. The main contributions in this work can be summarized as follows:\n• We present VSCode, the first generalist model for multimodal SOD and COD tasks. • We propose to use a foundation segmentation model to aggregate commonalities and introduce 2D prompts to learn peculiarities along the domain and task dimensions, respectively. • A prompt discrimination loss is proposed to effectively enhance the learning of peculiarities and commonalities for 2D prompts and the foundation model, respectively. • Our VSCode model surpasses all existing state-of-theart models across all tasks on 26 datasets and showcases its ability to generalize to unseen tasks, further emphasizing the superiority of our approach. [77,80] and multi-level fusion [76,107] to excavate the relationship between RGB and thermal features. Regarding the VSOD task, some works [18,21,30,74,86] mined spatial-temporal and appearance cues. More recently, there was a growing trend where various research [28,44,51,73] endeavored to incorporate optical flow for combining motion cues with appearance details. Consistent with recent studies, we treat optical flow as a form of modality information and view VSOD as a multimodal SOD task." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b60", "b100", "b65", "b113", "b14", "b3", "b35" ], "table_ref": [], "text": "COD. Currently, COD has RGB COD, RGB-D COD, and VCOD tasks. RGB COD methods can be broadly categorized as multi-task-based approaches [61,101], multiinput-based approaches [66,114], and refinement-based approaches [15,32]. The RGB-D COD task was initially introduced in [92], where depth inference models are adapted for object segmentation. For VCOD, prior studies segmented the moving camouflaged objects via dense optical flow [3,4] or well-designed models [10,36]. For a more comprehensive literature review, please refer to [16]. 
" }, { "figure_ref": [], "heading": "Prompt in Computer Vision", "publication_ref": [ "b4", "b114" ], "table_ref": [], "text": "Prompt was initially introduced in the field of NLP [6] and has been successfully integrated into computer vision tasks [25]. VPT [31] introduced a small number of trainable parameters as prompts in the input space. ViPT [115] put forth the idea of modality-complementary prompts for task-oriented multi-modal tracking. Prior research has primarily focused on specific tasks, such as classification or tracking.\nIn this paper, we propose to use 2D prompts for assembling different multimodal tasks and enabling zero-shot generalization on unseen tasks, which has not been explored before." }, { "figure_ref": [], "heading": "Generalist Segmentation Architecture", "publication_ref": [ "b0", "b116", "b94", "b86", "b55" ], "table_ref": [], "text": "Recently, several generalist frameworks have emerged for a range of segmentation tasks using a variety of prompts [1]. On one hand, X-Decoder [117] utilized generic non-semantic queries and semantic queries to decode different pixel-level and token-level outputs. UNINEXT [95] introduced three types of prompts, namely category names, language expressions, and reference annotations. On the other hand, Painter [87] and SegGPT [88] leveraged image-mask pairs from the same task as prompts. Unlike the approaches mentioned above, which mainly concentrate on task differences, our VSCode dissects unique characteristics along both the domain and task dimensions, leading to a more versatile design.\nIn the field of SOD and COD, EVP [56] introduced adaptors into the encoder and trained each task individually for various foreground segmentation tasks. Different from them, we consider not only multiple tasks but also multiple modalities, and we train all tasks simultaneously." }, { "figure_ref": [ "fig_2" ], "heading": "Methodology", "publication_ref": [ "b53" ], "table_ref": [], "text": "In this work, we propose VSCode with the aim of jointly training SOD and COD tasks in an efficient and effective way. We allow VST [54] to incorporate commonalities (Section 3.1), and utilize 2D prompts, which comprise domain-specific (Section 3.2) and task-specific prompts (Section 3.3), to encapsulate peculiarities. To accurately disentangle domain and task peculiarities in the 2D prompts and encourage commonality learning in VST, we introduce a prompt discrimination loss (Section 3.5). Figure 3 shows the overall architecture of our proposed VSCode." }, { "figure_ref": [], "heading": "Foundation Model", "publication_ref": [ "b53" ], "table_ref": [], "text": "To achieve a more comprehensive integration of commonalities from SOD and COD tasks, we select VST [54] as our foundation model. VST was originally proposed for RGB and RGB-D SOD and comprises three primary components, i.e., a transformer encoder, a transformer convertor, and a multi-task transformer decoder. It initially employs the transformer encoder to capture long-range dependencies within the image features f E i ∈ R li×ci , where i ∈ [0, 1, 2, 3] indicates the index of blocks in the encoder, and l i and c i denote the length of the patch sequence and the channel number of f E i .
Subsequently, the transformer convertor integrates the complement between RGB and depth features via crossattention for RGB-D SOD or uses self-attention for RGB SOD. In the decoder, which is composed of a sequence of self-attention layers, VST predicts saliency maps and boundary maps simultaneously via a saliency token, a boundary token, and decoder features f D j ∈ R lj ×d , where j corresponds the index of blocks in the decoder. Here j ∈ [2, 1, 0] for descending order and d =384. Due to the simple and puretransformer-based architecture, VST can be easily used for other multimodal tasks and COD tasks without the need for model redesign. As a result, it emerges as a superior choice for constructing a generalist model for general multimodal SOD and COD.\nIn pursuit of improved outcomes and a more suitable structure, we introduce modifications to VST. First, we select Swin transformer [59] as our backbone due to its efficiency and high performance. Second, to maintain a unified structure for both RGB tasks and other multimodal tasks, we utilize the RGB convertor in VST, which comprises standard transformer layers. For multimodal tasks, we simply concatenate the supplementary modality's features with the RGB features along the channel dimension and employ a multilayer perceptron (MLP) to project them from 2d channels to d channels. For RGB tasks, no alterations are made. Third, we incorporate certain extensions from VST++ [50], specifically including the token-supervised prediction loss." }, { "figure_ref": [ "fig_2" ], "heading": "Domain-specific Prompt", "publication_ref": [ "b99", "b22" ], "table_ref": [], "text": "Within the encoder, lower layers are dedicated to extracting low-level features, encompassing edges, colors, and textures, which exhibit distinct characteristics in various domains [100]. For instance, depth maps are typically rendered in grayscale, while thermal maps present a broader color spectrum. Higher layers, on the other hand, capture semantic information from modality features, which is crucial for all tasks. Consequently, we introduce domain-specific prompts p d i at each block i in the encoder and design four kinds of domain-specific prompts for RGB, depth, thermal, and optical flow, respectively, to highlight the disparities among domains, as shown in Figure 3.\nGiven the image features f E i from a specific block in the Swin transformer encoder, we use window-attention [59] and partition the feature\nf E i into window features f E i w ∈ R li/M 2 ×M 2 ×ci\n, where M represents the window size and l i /M 2 is the number of windows. Then, we replicate the prompts p d i ∈ R Ni×ci for each window and obtain p d ′ i ∈ R li/M 2 ×Ni×ci , where N i represents the number of learnable prompt tokens. Next, we append them to the patch feature tokens in each window and perform self-attention within each window, which can be defined as\np d ′ i+1 f E i w ← MLP(SW/W-MSA( p d ′ i f E i w )),(1)\nwhere W-MSA and SW-MSA are multi-head self-attention modules with regular and shifted windowing configurations, respectively. Here we omit the residual connection [23], and layer normalization [2]. Next, we segment p d ′ i+1 from each window and calculate the average of them to obtain p d i+1 , and then reassemble the output window feature\nf E i w to f E i+1\nfor the next block." 
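To make the windowed prompt attention of Eq. (1) more concrete, a minimal PyTorch-style sketch is given below. It is not the authors' released code: the module name PromptedWindowBlock, the toy dimensions, and the use of a plain multi-head attention in place of Swin's shifted-window attention with relative position bias are illustrative assumptions, and the residual connections and layer normalizations that the notation omits are written out explicitly.

import torch
import torch.nn as nn

class PromptedWindowBlock(nn.Module):
    # Window attention over [domain prompts; patch tokens], loosely following Eq. (1).
    def __init__(self, dim, num_heads, num_prompts=1):
        super().__init__()
        self.num_prompts = num_prompts
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, windows, prompts):
        # windows: (num_windows, M*M, C) window-partitioned patch tokens
        # prompts: (N, C) learnable domain-specific prompt tokens of this block
        nw = windows.size(0)
        p = prompts.unsqueeze(0).expand(nw, -1, -1)                   # replicate prompts per window
        tokens = torch.cat([p, windows], dim=1)                       # append prompts to each window
        x = self.norm1(tokens)
        tokens = tokens + self.attn(x, x, x, need_weights=False)[0]   # joint window self-attention
        tokens = tokens + self.mlp(self.norm2(tokens))
        new_prompts = tokens[:, :self.num_prompts].mean(dim=0)        # average prompt copies over windows
        new_windows = tokens[:, self.num_prompts:]                    # reassembled for the next block
        return new_windows, new_prompts

# toy usage: 16 windows of 7x7 tokens, one domain prompt token per block
block = PromptedWindowBlock(dim=96, num_heads=3)
windows = torch.randn(16, 49, 96)
prompt = nn.Parameter(torch.zeros(1, 96))
new_windows, new_prompt = block(windows, prompt)

The averaging in the last lines of forward mirrors how the updated prompt copies are pooled across windows before the window features are merged back for the next block.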
}, { "figure_ref": [ "fig_3" ], "heading": "Task-specific Prompt", "publication_ref": [ "b38", "b38" ], "table_ref": [], "text": "Prior research [39] has traditionally regarded SOD and COD as opposing tasks, emphasizing the disparities between the features extracted by the SOD encoder and the COD encoder as much as possible. However, we believe that SOD and COD share significant commonalities in their features, such as low-level cues, high-level objectness, and spatial structuredness. As a result, we introduce task-specific prompts to learn the peculiarities while retaining the primary stream parameters shared to capture commonalities. We add the task-specific prompts in both VST encoder and decoder, and the overall impact of adding these prompts is illustrated in Figure 4.\nEncoder. Although the encoder primarily focuses on domain-specific features with domain prompts, semantic features still play a pivotal role in distinguishing SOD and COD tasks. Semantic features from the encoder typically emphasize the most relevant region for a particular task and allocate more attention accordingly. In the case of the SOD task, the foreground region receives greater attention, whereas for the COD task, the background usually gains large importance since objects are typically concealed within it. Hence, it is essential to incorporate task-specific prompts to encourage learning task-related features in the encoder. Otherwise, we risk initially activating the wrong objects before the decoding process. Following the pattern of domain-specific prompts, we introduce task-specific prompts p te i ∈ R Ni×ci in each encoder block and use them in the same way as how domain-specific prompts are used.\nDecoder. Camouflaged objects typically exhibit more intricate and detailed boundaries compared to salient objects. This complexity arises because concealed objects often share color or textual similarities with their surroundings, resulting in imperceptible boundaries. Therefore, solely introducing task-specific prompts in the encoder may not be adequate, as camouflaged objects require a more refined process within the decoder. We incorporate task-specific prompts in the decoder to allocate distinct attention for reconstructing both the boundary and object regions based on the features extracted by the encoder. In contrast, previous research [39] has not adequately explored the differences between these two tasks in the decoder, as they typically use a single decoder to handle both.\nRegarding task-specific prompts in the decoder, we simply append learnable prompts p td j+1 ∈ R N ×d to the decoder feature tokens f D j+1 from a specific block j +1 in the decoder. Then, we apply the self-attention as follows:\np td j f D j ← MLP(MSA( p td j+1 f D j+1 )),(2)\nwhere MSA denotes the multi-head self-attention. Here we omit the saliency and boundary tokens in the VST decoder for conciseness. Please note that our task-specific prompts differ from saliency and boundary tokens since we do not introduce any supervision for them." }, { "figure_ref": [ "fig_4" ], "heading": "Prompts Layout and Discussion", "publication_ref": [], "table_ref": [], "text": "To incorporate the aforementioned prompts within the encoder-decoder architecture, inspired by VPT [31], we offer two prompt inserting versions. In the deep version, new prompts are introduced at the start of each transformer block, whereas the shallow version involves proposing prompts at first and updating them across all blocks. 
To unveil the specific relationship among different domains and tasks at varying network depths, we employ the deep version for both domain and task-specific prompts within the encoder.\nBased on VST's design, which introduces a saliency and a boundary token at the beginning of the decoder, we use the shallow version for task-specific prompts in the decoder.\nWe calculate the correlations of different domain and task prompt pairs at different blocks in Figure 5. It is evident that depth, thermal, and optical flow exhibit relatively strong correlations in low-level features, as all of them usually show obvious low-level contrast between target objects and backgrounds in terms of color or luminance. However, at higher levels, most domains exhibit lower correlations, highlighting the distinctions among them. Additionally, as for task-specific prompts, it is clear that SOD prompts and COD prompts exhibit more shared knowledge in the lower layers. As we progress to higher layers, the correlation decreases, indicating that high-level features gradually learn unrelated information. This observation urges us to implement the deep version of domain-specific prompts and task-specific prompts in the encoder in our final design, as different blocks acquire distinct knowledge. Moreover, the gradually decreased correlation values along with the increase of the network depth encourage us to use a progressively larger number of prompt tokens, as lower correlation means larger peculiarities and hence requires more parameters to learn." }, { "figure_ref": [ "fig_4" ], "heading": "Loss Function", "publication_ref": [ "b53" ], "table_ref": [], "text": "The design principle of our model is to use 2D prompts for encompassing peculiarities while integrating commonalities into the foundation model. However, this is not straightforward for freely learned prompts. As shown in Figure 5, they still suggest certain correlations. This indicates that the learned prompts are entangled, risking the model's capacity to differentiate among various domains and tasks and resulting in suboptimal optimization. Hence, we propose a prompt discrimination loss to minimize the correlation among the prompts of the same type, guaranteeing that each prompt acquires unique domain or task knowledge. Specifically, we aggregate prompts of the same domain/task into a single embedding and then perform discrimination. First, we average the input prompt tokens of each same prompt type at each block and use linear projections to align the channel numbers to d. Subsequently, for each type of prompt, we concatenate the averaged prompts of different blocks \nwhere L and A represent the linear and average operation, respectively, with l ∈ {depth, thermal, f low, rgb}, and k ∈ {SOD, COD}. Since the task-specific prompts in the decoder are shallow, we simply average them. Afterward, we calculate the cosine similarity between prompt pairs, resulting in eight types of cosine similarity results CS m . Here m means the combination of domains/tasks, namely {RD, RT, RF, DF, DT, T F } for domain-aggregated prompts and {SC EN , SC DE } for taskaggregated prompts in the encoder and decoder, respectively. Finally, we minimize the correlation within these prompt pairs and define our prompt discrimination loss as\nL dis = m ln(1 + |CS m |),(4)\nwhich is further combined with the segmentation losses and boundary losses [54] to train our model." 
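A minimal sketch of the prompt discrimination loss of Eqs. (3) and (4) is shown below, assuming Swin-T channel widths for the four encoder blocks. It is not the official code, the helper names are invented, and only the four domain-aggregated prompts are illustrated; the task-aggregated prompts in the encoder and decoder would be handled in exactly the same way.

import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 384
block_dims = [96, 192, 384, 768]                          # assumed per-block channel widths (Swin-T)
proj = nn.ModuleList([nn.Linear(c, d) for c in block_dims])
fuse = nn.Sequential(nn.Linear(4 * d, d), nn.GELU(), nn.Linear(d, d))

def aggregate(per_block_prompts):
    # per_block_prompts: list of 4 tensors, each (N_i, c_i), for one domain or one task
    pooled = [proj[i](p.mean(dim=0)) for i, p in enumerate(per_block_prompts)]   # average, then linear
    return fuse(torch.cat(pooled, dim=-1))                                       # overall prompt, Eq. (3)

def prompt_discrimination_loss(aggregated):
    # aggregated: dict mapping a domain (or task) name to its aggregated prompt embedding
    loss = torch.zeros(())
    for a, b in itertools.combinations(sorted(aggregated), 2):
        cs = F.cosine_similarity(aggregated[a], aggregated[b], dim=0)
        loss = loss + torch.log1p(cs.abs())                                      # ln(1 + |CS|), Eq. (4)
    return loss

# toy usage over the four domain-specific prompts
domains = {k: aggregate([torch.randn(1, c) for c in block_dims])
           for k in ["rgb", "depth", "thermal", "flow"]}
l_dis = prompt_discrimination_loss(domains)

Driving the pairwise cosine similarities toward zero in this way pushes each prompt type toward its own domain- or task-specific directions, so that shared structure is forced into the foundation model instead.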
}, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets and Evaluation Metrics", "publication_ref": [ "b80", "b32", "b76", "b14", "b60", "b12", "b13" ], "table_ref": [], "text": "For RGB SOD , we evaluate our proposed model using six commonly used benchmark datasets, i.e. DUTS [81] NJUD [33] VT5000 [77] SegV2 COD10K [15], CAMO [37], and NC4K [61]. For VCOD , we utilize two widely accepted benchmark datasets: CAD [3] and MoCA-Mask [10]. To ensure a consistent evaluation across all SOD and COD tasks, we employ three commonly used evaluation metrics to assess model performance: structure-measure S m [13], maximum enhanced-alignment measure E m [14], and maximum F-measure F m . image to 384 × 384 pixels and then randomly crop them to 352 × 352 image regions for training. Our training process employs the Adam optimizer [35] with an initial learning rate of 0.0001, which is reduced by a factor of 10 at half and three-quarters of the total training steps. We conduct a total of 150,000 training steps using a 3090 GPU. We mix the above six tasks in each training iteration with two samples for each task, leading to a total batch size of 12.\n(M) Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑\n[41] CAMO[37] CAD[3] (M) Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ shallow" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_2", "tab_3", "tab_3" ], "text": "Architecture Design. To demonstrate the efficacy of various components in our VSCode model, we report the quantitative results in Table 1. We start by performing special training (ST) on each task individually and then conduct general training (GT) on all SOD tasks. Please note that here we do not consider COD tasks since no task prompt is used. We observe improved performance on RGB-T SOD and VSOD, demonstrating the significant benefit of shared knowledge in different tasks, especially for those with limited training data diversity. However, the results of RGB SOD and RGB-D SOD do not show a significant increase. Our hypothesis is that amalgamating the training of multi- modal images within a shared model might prevent further optimization on those well-learned tasks. Based on this, we introduce domain-specific prompts p d , resulting in substantial improvements across all datasets, which demonstrates the efficacy of domain-specific prompts in consolidating peculiarities within their respective domains. Subsequently, we introduce task-specific prompts p t in the encoder-decoder architecture, enabling the capability to handle COD tasks. This brings slightly improved performance on some SOD tasks, however, significantly improves the performance on all COD tasks compared with the ST baseline, which probably owes to the well-learned commonalities from different tasks. 
Moreover, the incorporation of the prompt discrimination loss L dis leads to improved performance on most tasks, reaffirming its effectiveness in disentangling peculiarities.\nTo further evaluate the effectiveness of the task-specific prompts in the encoder and decoder, we remove them individually, resulting in a performance decrease in both cases. This indicates that using task prompts in both the encoder and the decoder is necessary. We also observe that our 2D prompts only bring around 0.03M parameters, which makes our model much more parameter-efficient than the traditional special training scheme. † Please note that our model shares parameters across six tasks, in contrast to EVP, which uses task-specific training. Therefore, comparing the parameters of our model with EVP may not be completely fair owing to the differences in training strategies and backbone utilization.\nPrompt Location. Following VPT [31], we design other forms of prompt layout based on Section 3.4. Table 2 reveals that employing the shallow version of task-specific prompts in the decoder and the deep version of domain-specific and task-specific prompts in the encoder yields the best results. One plausible rationale is that each block aggregates distinct-level features within the encoder; thus, it is better to propose unique prompts for each block. In our decoder, we follow VST and use skip connections to fuse decoder features with encoder features, which have already utilized deep task prompts for distinction. Hence, using more task prompts in the decoder may not be essential, and the shallow version seems to be a more fitting choice.\nPrompt Length. We perform experiments with varying lengths for the three kinds of prompts. As shown in Table 2, for domain-specific prompts, using one prompt token at each block achieves better performance than using more tokens. This suggests that it is possible to effectively capture domain distinctions using only a small number of prompts, which matches the observed relatively large correlation within domain prompts in Figure 5. Regarding task-specific prompts within the encoder, a prompt layout of 1,1,5,10 tokens at the four blocks is found to be optimal on COD tasks, highlighting the importance of high-level semantic features over low-level features in distinguishing between SOD and COD tasks. This observation matches Figure 5 as well, in which the correlations of SC in deep blocks are smaller than those in shallow blocks. Regarding the number of task-specific prompts in the decoder, performance starts to decline when it exceeds 10, which emphasizes that blindly increasing the number of prompts does not guarantee improved performance.\nTable 9. Comparison with the SOTA RGB-D COD method on three benchmark datasets. \"ZS\" indicates zero-shot.\nPrevious generalist research has largely neglected the capacity of such models for generalizing to novel tasks. Therefore, we employ the RGB-D COD task, which is not used in training, to further investigate the zero-shot generalization capabilities of our model. Specifically, we utilize our well-trained model and combine the depth and COD prompts to tackle the RGB-D COD task. As shown in Table 9, our VSCode model significantly outperforms the state-of-the-art specialist model PopNet [92], although ours works in a pure zero-shot way. This demonstrates the superior zero-shot generalization ability of our proposed method.
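The zero-shot experiment just described works by pairing already-learned prompts from the two axes: the depth domain prompts and the COD task prompts. A hypothetical prompt bank along these lines is sketched below; the class name, the Swin-T channel widths, and the prompt counts (one domain token per block, a 1,1,5,10 task layout in the encoder) are illustrative assumptions based on the ablation settings, not the released implementation.

import torch
import torch.nn as nn

class PromptBank(nn.Module):
    def __init__(self, block_dims=(96, 192, 384, 768), n_domain=1, n_task=(1, 1, 5, 10)):
        super().__init__()
        self.domain = nn.ModuleDict({
            dom: nn.ParameterList([nn.Parameter(torch.zeros(n_domain, c)) for c in block_dims])
            for dom in ("rgb", "depth", "thermal", "flow")})
        self.task = nn.ModuleDict({
            t: nn.ParameterList([nn.Parameter(torch.zeros(n, c)) for n, c in zip(n_task, block_dims)])
            for t in ("sod", "cod")})

    def select(self, domain, task):
        # per-block (domain prompt, task prompt) pairs handed to the shared encoder
        return list(zip(self.domain[domain], self.task[task]))

bank = PromptBank()
# training covers pairs such as ("rgb", "sod"), ("depth", "sod"), ("rgb", "cod"), ("flow", "cod");
# the unseen RGB-D COD task is served by simply recombining two already-learned axes:
zero_shot_prompts = bank.select("depth", "cod")

Because the domain and task peculiarities live in separate prompt sets, no retraining is needed for such a new combination; only the shared foundation model and the two selected prompt sets are involved at inference time.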
We also present the results of our model using only RGB information, which yields considerably lower performance compared to the zero-shot RGB-D results. This validates that our zero-shot performance is not reliant on the utilization of seen RGB COD information but on the effectiveness of consolidating domain- and task-specific knowledge, which allows for the straightforward combination of various domain- and task-specific prompts for unseen tasks." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we present VSCode, a novel generalist and parameter-efficient model that tackles general multimodal SOD and COD tasks. Concretely, we use a foundation model to assimilate commonalities and 2D prompts to learn domain and task peculiarities. Furthermore, a prompt discrimination loss is introduced to help effectively disentangle specific knowledge and learn better shared knowledge. Our experiments demonstrate the effectiveness of VSCode on six training tasks and one unseen task." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by the Key R&D Program of Shaanxi Province under Grant 2021ZDLGY01-08; the National Natural Science Foundation of China under Grants 62136007, U20B2065, 62036005, 62322605; the Key Research and Development Program of Jiangsu Province under Grant BE2021093; the Institute of Artificial Intelligence, Hefei Comprehensive National Science Center Project under Grant 21KT008; and by the MBZUAI-WIS Joint Program for AI Research under Grants WIS P008 and P009." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Our code has been made available at https://github.com/Sssssuperior/VSCode." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b69", "b64", "b17", "b17" ], "table_ref": [], "text": "Column headers of the VSOD comparison table: Method, Params, DAVIS [70], FBMS [65], ViSal [85], SegV2 [41], DAVSOD-Easy [18], and DAVSOD-Normal [18]." }, { "figure_ref": [], "heading": "Comparison with State-of-the-Art Methods", "publication_ref": [ "b53", "b115", "b37", "b53", "b102", "b10", "b75", "b27", "b50", "b104", "b38", "b100", "b97", "b55", "b55", "b92" ], "table_ref": [], "text": "Due to space limitations, we only report the performance comparison of our method against the most competitive state-of-the-art methods, including 4 specialist RGB SOD models [50,54,89,116], 5 specialist RGB-D SOD models [38,50,54,68,103], 5 specialist RGB-T SOD models [7,11,50,68,76], 3 specialist VSOD models [28,51,105], 4 specialist RGB COD models [22,32,39,101], and 4 specialist VCOD models [10,26,96,98]. Two generalist models [56,75] are also reported. To ensure a relatively fair comparison with EVP [56], which utilizes SegFormer-B4 [93] as its backbone (64.1M parameters), we switch our backbone to Swin-S [59] since it has a similar number of parameters (50M). As shown in Table 3, Table 4, Table 5, Table 6, Table 7, and Table 8, our VSCode significantly outperforms all specialist methods and the two generalist models across all six tasks, underscoring the effectiveness of our specially designed 2D prompts and prompt discrimination loss. The supplementary material displays visual comparison results among the top-performing models."
}, { "figure_ref": [], "heading": "Analysis of Generalization Ability", "publication_ref": [ "b55" ], "table_ref": [], "text": "Previous generalist research [56,75] " } ]
Salient object detection (SOD) and camouflaged object detection (COD) are related yet distinct binary mapping tasks. These tasks involve multiple modalities, sharing commonalities and unique cues. Existing research often employs intricate task-specific specialist models, potentially leading to redundancy and suboptimal results. We introduce VS-Code, a generalist model with novel 2D prompt learning, to jointly address four SOD tasks and three COD tasks. We utilize VST as the foundation model and introduce 2D prompts within the encoder-decoder architecture to learn domain and task-specific knowledge on two separate dimensions. A prompt discrimination loss helps disentangle peculiarities to benefit model optimization. VSCode outperforms state-ofthe-art methods across six tasks on 26 datasets and exhibits zero-shot generalization to unseen tasks by combining 2D prompts, such as RGB-D COD.
VSCode: General Visual Salient and Camouflaged Object Detection with 2D Prompt Learning
[ { "figure_caption": "Figure 1 .1Figure 1. Relationship of SOD, COD, and multimodal tasks. Each specific task is seen as a combination of two dimensions, i.e. domain (RGB/Depth/Thermal/Flow) and task (SOD/COD).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure2. Overall architecture of our VSCode model. We use VST[54] as the foundation model to acquire commonalities among multimodal SOD and COD tasks. For each task, we integrate 2D prompts to aggregate peculiarities along the domain dimension and the task dimension, including four domain-specific prompts and two task-specific prompts.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Overall framework of our proposed VSCode model with 2D prompt learning. Based on the VST[54] foundation model, we insert the respective domain-specific prompts and task-specific prompts in the attention windows in the Swin transformer [59] encoder layers to learn domain and task-specific encoder features. The convertor is used for multimodal feature fusion. Within the transformer decoder layers, task-specific prompts are appended to image feature tokens to perform task-specific decoding. We also provide detailed structures of an encoder layer (i = 0) and a decoder layer (j = 0).", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Illustration of the influence of using different task prompts.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Correlation of prompt pairs at each encoder block.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": ", and use MLP to obtain the overall domain-specific prompt p d all l and task-specific encoder prompt p te all k : p d all l = MLP[LA(p d 0 ); LA(p d 1 ); LA(p d 2 ); LA(p d 3 )], p te all k = MLP[LA(p te 0 ); LA(p te 1 ); LA(p te 2 ); LA(p te 3 )],", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Building on prior research [10, 22, 51, 54, 77], we employ the following datasets to train our model concurrently: the training set of DUTS for RGB SOD , the training sets of NJUD, NLPR, and DUTLF-Depth for RGB-D SOD , the training set of VT5000 for RGB-T SOD , the training sets of DAVIS and DAVSOD for VSOD , the training sets of COD10K and CAMO for RGB COD , and the training set of MoCA-Mask for VCOD . To ensure a fair comparison with previous works [22, 38, 56, 76, 116], we resize each * The parameters for our specialized training methods amount to 53.61M for the RGB task and 54.06M for the multimodal task, resulting in a total of 323.46M parameters for all six tasks.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Em ↑ ST 323.46 * .900 .885 .940 .927 .928 .958 .900 .863 .938 .896 .870 .952 .793 .751 .871 .686 .522 .787 GT 54.06 .898 .884 .941 .924 .922 .954 .903 .886 .942 .930 .+p t 54.09 .904 .892 .945 .931 .931 .961 .906 .892 .946 .925 .910 .970 .804 .776 .876 .759 .639 .808 GT+p d +p t +L dis 54.09 .909 .899 .948 .935 .938 .965 .912 .882 .950 .943 .930 .984 .811 .782 .884 .736 .Ablation studies of our VSCode on the Swin-T [59] backbone with 224 × 224 image size. We conduct evaluations on one representative dataset for each task. 
\"ST\" indicates special training, \"GT\" means general training, p d represents domain-specific prompts, and p t is task-specific prompts, which consists of p te in the encoder and p td in the decoder. L dis is our prompt discrimination loss. The best results under each setting are labeled in bold.", "figure_data": "911 .972------", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "or deep version for domain-specific prompts shallow 54.83 .900 .887 .943 .926 .926 .955 .905 .873 .944 .931 .version for task-specific prompts in the encoder shallow 54.45 .902 .888 .943 .928 .928 .958 .902 .870 .940 .927 .905 .964 .793 .763 .866 .747 .616 .798 deep 54.08 .903 .890 .944 .934 .934 .962 .905 .874 .943 .924 .903 .960 .804 .772 .881 .759 .651 .831 shallow or deep version for task-specific prompts in the decoder deep 54.10 .903 .891 .945 .930 .932 .960 .905 .888 .943 .922 .903 .966 .802 .774 .881 .738 .605 .801 shallow 54.09 .904 .892 .945 .930 .931 .961 .906 .892 .946 .925 .910 .970 .804 .776 .876 .759 .639 .808 number of domain-specific prompts at four blocks 1,1,1,1 54.06 .902 .890 .945 .931 .932 .962 .909 .877 .947 .931 .917 .975 -specific prompts in the encoder at four blocks 5,5,5,5 54.07 .903 .893 .947 .928 .930 .959 .903 .870 .940 .931 .918 .975 .795 .766 .866 .739 .600 .799 1,1,5,10 54.08 .903 .890 .944 .934 .934 .962 .905 .874 .943 .924 .903 .960 .804 .772 .881 .759 .651 .831 number of task-specific prompts in the decoder 5 54.08 .904 .890 .946 .929 .931 .957 .904 .890 .943 .931 .911 .969 .807 .782 .881 .746 .626 .805 10 54.09 .904 .892 .945 .930 .931 .961 .906 .892 .946 .925 .910 .970 .804 .776 .876 .759 .639 .808 15 54.09 .903 .889 .944 .929 .933 .956 .904 .890 .942 .932 .913 .974 .798 .771 .875 .743 .621 .791 Ablation studies of different designs of prompt layout.", "figure_data": "917 .972------", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "54] 44.48 .896 .877 .939 .932 .944 .964 .928 .937 .968 .873 .850 .900 .850 .800 .888 .854 .866 .902 ICON-R[116] 33.09 .890 .876 .931 .928 .943 .960 .920 .931 .960 .862 .844 .888 .845 .799 .884 .848 .861 .899 VST-T++ [50] 53.60 .901 .887 .943 .937 .949 .968 .930 .939 .968 .878 .855 .901 .853 .804 .892 .853 .866 .899 MENet[89] 27.83 .905 .895 .943 .927 .938 .956 .927 .939 .965 .871 .848 .892 .850 .792 .879 .841 .847 .884 VSCode-T 54.09 .917 .910 .954 .945 .957 .971 .935 .946 .970 .878 .852 .900 .869 .830 .910 .863 .879 .908Quantitative comparison of our VSCode with other 5 SOTA RGB SOD methods on six benchmark datasets. \"-R\", \"-T\" and \"-S\" mean the ResNet50[23], Swin-T, and Swin-S[59] backbones, respectively. '-' indicates the code is not available. 
The best performance under all settings is bolded, and the best results under each setting are labeled in bold.", "figure_data": "EVP[56]64.52 † .917 .910 .956 .936 .949 .965 .935 .945 .971 .880 .859 .902 .864 .822 .902 .854 .873 .901VSCode-S74.72 † .926 .922 .960 .949 .959 .974 .940 .951 .974 .887 .864 .904 .877 .840 .912 .870 .882 .910MethodParamsNJUD [33]NLPR[69]DUTLF-Depth[71]ReDWeb-S[53]STERE[64]SIP[17]", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "103] 188.12 .929 .934 .957 .932 .922 .963 .912 .913 .938 .725 .726 .800 .918 .916 .951 .899 .Quantitative comparison of our VSCode with other 5 SOTA RGB-D SOD methods on six benchmark datasets.", "figure_data": "910 .939", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Quantitative comparison of our VSCode with other 5 SOTA RGB-T SOD methods on three benchmark datasets.", "figure_data": "", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" } ]
Ziyang Luo; Nian Liu; Wangbo Zhao; Xuguang Yang; Dingwen Zhang; Deng-Ping Fan; Fahad Khan; Junwei Han
[ { "authors": "Muhammad Awais; Muzammal Naseer; Salman Khan; Rao Muhammad Anwer; Hisham Cholakkal; Mubarak Shah; Ming-Hsuan Yang; Fahad Shahbaz Khan", "journal": "", "ref_id": "b0", "title": "Foundational models defining a new era in vision: A survey and outlook", "year": "2023" }, { "authors": "Jimmy Lei Ba; Jamie Ryan Kiros; Geoffrey E Hinton", "journal": "", "ref_id": "b1", "title": "Layer normalization", "year": "2016" }, { "authors": "Pia Bideau; Erik Learned-Miller", "journal": "Springer", "ref_id": "b2", "title": "It's moving! a probabilistic model for causal motion segmentation in moving camera videos", "year": "2016" }, { "authors": "Pia Bideau; Erik Rakesh R Menon; Learned-Miller", "journal": "", "ref_id": "b3", "title": "Moa-net: self-supervised motion segmentation", "year": "2018" }, { "authors": "Ali Borji; Ming-Ming Cheng; Qibin Hou; Huaizu Jiang; Jia Li", "journal": "CVMJ", "ref_id": "b4", "title": "Salient object detection: A survey", "year": "2019" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "NeurIPS", "ref_id": "b5", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Gang Chen; Feng Shao; Xiongli Chai; Hangwei Chen; Qiuping Jiang; Xiangchao Meng; Yo-Sung Ho", "journal": "IEEE TCSVT", "ref_id": "b6", "title": "Cgmdrnet: Cross-guided modality difference reduction network for rgbt salient object detection", "year": "2022" }, { "authors": "Shuhan Chen; Yun Fu", "journal": "", "ref_id": "b7", "title": "Progressively guided alternate refinement network for rgb-d salient object detection", "year": "2020" }, { "authors": "Zuyao Chen; Runmin Cong; Qianqian Xu; Qingming Huang", "journal": "IEEE TIP", "ref_id": "b8", "title": "Dpanet: Depth potentiality-aware gated attention network for rgb-d salient object detection", "year": "2020" }, { "authors": "Xuelian Cheng; Huan Xiong; Deng-Ping Fan; Yiran Zhong; Mehrtash Harandi; Tom Drummond; Zongyuan Ge", "journal": "", "ref_id": "b9", "title": "Implicit motion handling for video camouflaged object detection", "year": "2022" }, { "authors": "Runmin Cong; Kepu Zhang; Chen Zhang; Feng Zheng; Yao Zhao; Qingming Huang; Sam Kwong", "journal": "IEEE TMM", "ref_id": "b10", "title": "Does thermal really always matter for rgb-t salient object detection", "year": "2022" }, { "authors": "Zijun Deng; Xiaowei Hu; Lei Zhu; Xuemiao Xu; Jing Qin; Guoqiang Han; Pheng-Ann Heng", "journal": "", "ref_id": "b11", "title": "R3net: Recurrent residual refinement network for saliency detection", "year": "2018" }, { "authors": "Deng-Ping Fan; Ming-Ming Cheng; Yun Liu; Tao Li; Ali Borji", "journal": "", "ref_id": "b12", "title": "Structure-measure: A new way to evaluate foreground maps", "year": "2017" }, { "authors": "Deng-Ping Fan; Cheng Gong; Yang Cao; Bo Ren; Ming-Ming Cheng; Ali Borji", "journal": "", "ref_id": "b13", "title": "Enhanced-alignment Measure for Binary Foreground Map Evaluation", "year": "2018" }, { "authors": "Deng-Ping Fan; Ge-Peng Ji; Guolei Sun; Ming-Ming Cheng; Jianbing Shen; Ling Shao", "journal": "", "ref_id": "b14", "title": "Camouflaged object detection", "year": "2020" }, { "authors": "Deng-Ping Fan; Ge-Peng Ji; Peng Xu; Ming-Ming Cheng; Christos Sakaridis; Luc Van Gool", "journal": "VI", "ref_id": "b15", "title": "Advances in deep concealed scene understanding", "year": "2023" }, { "authors": "Deng-Ping Fan; Zheng Lin; Zhao Zhang; Menglong Zhu; Ming-Ming Cheng", 
"journal": "IEEE TNNLS", "ref_id": "b16", "title": "Rethinking rgb-d salient object detection: Models, data sets, and large-scale benchmarks", "year": "2020" }, { "authors": "Deng-Ping Fan; Wenguan Wang; Ming-Ming Cheng; Jianbing Shen", "journal": "", "ref_id": "b17", "title": "Shifting more attention to video salient object detection", "year": "2019" }, { "authors": "Deng-Ping Fan; Yingjie Zhai; Ali Borji; Jufeng Yang; Ling Shao", "journal": "", "ref_id": "b18", "title": "Bbs-net: Rgb-d salient object detection with a bifurcated backbone strategy network", "year": "2020" }, { "authors": "Chaowei Fang; Haibin Tian; Dingwen Zhang; Qiang Zhang; Jungong Han; Junwei Han", "journal": "SCIS", "ref_id": "b19", "title": "Densely nested top-down flows for salient object detection", "year": "2022" }, { "authors": "Yuchao Gu; Lijuan Wang; Ziqin Wang; Yun Liu; Ming-Ming Cheng; Shao-Ping Lu", "journal": "", "ref_id": "b20", "title": "Pyramid constrained selfattention network for fast video salient object detection", "year": "2020" }, { "authors": "Chunming He; Kai Li; Yachao Zhang; Longxiang Tang; Yulun Zhang; Zhenhua Guo; Xiu Li", "journal": "", "ref_id": "b21", "title": "Camouflaged object detection with feature decomposition and edge reconstruction", "year": "2023" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b22", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": " Hou; X Cheng; Hu; Z Borji; Tu; Torr", "journal": "IEEE TPAMI", "ref_id": "b23", "title": "Deeply supervised salient object detection with short connections", "year": "2018" }, { "authors": "Guyue Hu; Bin He; Hanwang Zhang", "journal": "MIR", "ref_id": "b24", "title": "Compositional prompting video-language models to understand procedure in instructional videos", "year": "2023" }, { "authors": "Ge-Peng Ji; Yu-Cheng Chou; Deng-Ping Fan; Geng Chen; Huazhu Fu; Debesh Jha; Ling Shao", "journal": "Springer", "ref_id": "b25", "title": "Progressively normalized self-attention network for video polyp segmentation", "year": "2021" }, { "authors": "Ge-Peng Ji; Deng-Ping Fan; Yu-Cheng Chou; Dengxin Dai; Alexander Liniger; Luc Van Gool", "journal": "MIR", "ref_id": "b26", "title": "Deep gradient learning for efficient camouflaged object detection", "year": "2023" }, { "authors": "Ge-Peng Ji; Deng-Ping Fan; Keren Fu; Zhe Wu; Jianbing Shen; Ling Shao", "journal": "CVMJ", "ref_id": "b27", "title": "Full-duplex strategy for video object segmentation", "year": "2023" }, { "authors": "Wei Ji; Jingjing Li; Miao Zhang; Yongri Piao; Huchuan Lu", "journal": "", "ref_id": "b28", "title": "Accurate rgb-d salient object detection via collaborative learning", "year": "2020" }, { "authors": "Yuzhu Ji; Haijun Zhang; Zequn Jie; Lin Ma; Jonathan Wu", "journal": "IEEE TNNLS", "ref_id": "b29", "title": "Casnet: A cross-attention siamese network for video salient object detection", "year": "2020" }, { "authors": "Menglin Jia; Luming Tang; Bor-Chun Chen; Claire Cardie; Serge Belongie; Bharath Hariharan; Ser-Nam Lim", "journal": "Springer", "ref_id": "b30", "title": "Visual prompt tuning", "year": "2022" }, { "authors": "Qi Jia; Shuilian Yao; Yu Liu; Xin Fan; Risheng Liu; Zhongxuan Luo", "journal": "", "ref_id": "b31", "title": "Segment, magnify and reiterate: Detecting camouflaged objects the hard way", "year": "2022" }, { "authors": "Ran Ju; Ling Ge; Wenjing Geng; Tongwei Ren; Gangshan Wu", "journal": "", "ref_id": "b32", "title": "Depth saliency based on anisotropic centersurround 
difference", "year": "2014" }, { "authors": "Alex Kendall; Yarin Gal; Roberto Cipolla", "journal": "", "ref_id": "b33", "title": "Multi-task learning using uncertainty to weigh losses for scene geometry and semantics", "year": "2018" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b34", "title": "Adam: A method for stochastic optimization", "year": "" }, { "authors": "Hala Lamdouar; Charig Yang; Weidi Xie; Andrew Zisserman", "journal": "ACCV", "ref_id": "b35", "title": "Betrayed by motion: Camouflaged object discovery via motion segmentation", "year": "2020" }, { "authors": "Trung-Nghia Le; Tam V Nguyen; Zhongliang Nie; Minh-Triet Tran; Akihiro Sugimoto", "journal": "CVIU", "ref_id": "b36", "title": "Anabranch network for camouflaged object segmentation", "year": "2019" }, { "authors": "Minhyeok Lee; Chaewon Park; Suhwan Cho; Sangyoun Lee", "journal": "Springer", "ref_id": "b37", "title": "Spsn: Superpixel prototype sampling network for rgb-d salient object detection", "year": "2022" }, { "authors": "Aixuan Li; Jing Zhang; Yunqiu Lv; Bowen Liu; Tong Zhang; Yuchao Dai", "journal": "", "ref_id": "b38", "title": "Uncertainty-aware joint salient object and camouflaged object detection", "year": "2021" }, { "authors": "Aixuan Li; Jing Zhang; Yunqiu Lv; Tong Zhang; Yiran Zhong; Mingyi He; Yuchao Dai", "journal": "", "ref_id": "b39", "title": "Joint salient object detection and camouflaged object detection via uncertaintyaware learning", "year": "2023" }, { "authors": "Fuxin Li; Taeyoung Kim; Ahmad Humayun; David Tsai; James M Rehg", "journal": "", "ref_id": "b40", "title": "Video segmentation by tracking many figure-ground segments", "year": "2013" }, { "authors": "Gongyang Li; Zhi Liu; Haibin Ling", "journal": "IEEE TIP", "ref_id": "b41", "title": "Icnet: Information conversion network for rgb-d based salient object detection", "year": "2020" }, { "authors": "Gongyang Li; Zhi Liu; Linwei Ye; Yang Wang; Haibin Ling", "journal": "", "ref_id": "b42", "title": "Cross-modal weighting network for rgb-d salient object detection", "year": "2020" }, { "authors": "Guanbin Li; Yuan Xie; Tianhao Wei; Keze Wang; Liang Lin", "journal": "", "ref_id": "b43", "title": "Flow guided recurrent neural encoder for video salient object detection", "year": "2018" }, { "authors": "Guanbin Li; Yizhou Yu", "journal": "", "ref_id": "b44", "title": "Visual saliency based on multiscale deep features", "year": "2015" }, { "authors": "Hao Li; Dingwen Zhang; Nian Liu; Lechao Cheng; Yalun Dai; Chao Zhang; Xinggang Wang; Junwei Han", "journal": "", "ref_id": "b45", "title": "Boosting low-data instance segmentation by unsupervised pretraining with saliency prompt", "year": "2023" }, { "authors": "Yin Li; Xiaodi Hou; Christof Koch; James M Rehg; Alan L Yuille", "journal": "", "ref_id": "b46", "title": "The secrets of salient object segmentation", "year": "2014" }, { "authors": "Nian Liu; Junwei Han", "journal": "", "ref_id": "b47", "title": "Dhsnet: Deep hierarchical saliency network for salient object detection", "year": "2016" }, { "authors": "Nian Liu; Junwei Han; Ming-Hsuan Yang", "journal": "", "ref_id": "b48", "title": "Picanet: Learning pixel-wise contextual attention for saliency detection", "year": "2018" }, { "authors": "Nian Liu; Ziyang Luo; Ni Zhang; Junwei Han", "journal": "", "ref_id": "b49", "title": "Vst++: Efficient and stronger visual saliency transformer", "year": "2023" }, { "authors": "Nian Liu; Kepan Nan; Wangbo Zhao; Xiwen Yao; Junwei Han", "journal": "IEEE TNNLS", "ref_id": "b50", 
"title": "Learning complementary spatial-temporal transformer for video salient object detection", "year": "2023" }, { "authors": "Nian Liu; Ni Zhang; Junwei Han", "journal": "", "ref_id": "b51", "title": "Learning selective self-mutual attention for rgb-d saliency detection", "year": "2020" }, { "authors": "Nian Liu; Ni Zhang; Ling Shao; Junwei Han", "journal": "IEEE TPAMI", "ref_id": "b52", "title": "Learning selective mutual attention and contrast for rgb-d saliency detection", "year": "2021" }, { "authors": "Nian Liu; Ni Zhang; Kaiyuan Wan; Ling Shao; Junwei Han", "journal": "", "ref_id": "b53", "title": "Visual saliency transformer", "year": "2008" }, { "authors": "Shikun Liu; Edward Johns; Andrew J Davison", "journal": "", "ref_id": "b54", "title": "Endto-end multi-task learning with attention", "year": "2019" }, { "authors": "Weihuang Liu; Xi Shen; Chi-Man Pun; Xiaodong Cun", "journal": "", "ref_id": "b55", "title": "Explicit visual prompting for universal foreground segmentations", "year": "2023" }, { "authors": "Yi Liu; Dingwen Zhang; Nian Liu; Shoukun Xu; Jungong Han", "journal": "IEEE TIP", "ref_id": "b56", "title": "Disentangled capsule routing for fast part-object relational saliency", "year": "2022" }, { "authors": "Yi Liu; Dingwen Zhang; Qiang Zhang; Jungong Han", "journal": "TPAMI", "ref_id": "b57", "title": "Part-object relational visual saliency", "year": "2021" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b58", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Zhengyi Liu; Song Shi; Quntao Duan; Wei Zhang; Peng Zhao", "journal": "IJON", "ref_id": "b59", "title": "Salient object detection for rgb-d image by single stream recurrent convolution neural network", "year": "2019" }, { "authors": "Yunqiu Lv; Jing Zhang; Yuchao Dai; Aixuan Li; Bowen Liu; Nick Barnes; Deng-Ping Fan", "journal": "", "ref_id": "b60", "title": "Simultaneously localize, segment and rank the camouflaged objects", "year": "2021" }, { "authors": "Vida Movahedi; James H Elder", "journal": "", "ref_id": "b61", "title": "Design and perceptual validation of performance measures for salient object segmentation", "year": "2010" }, { "authors": "Xing Nie; Bolin Ni; Jianlong Chang; Gaomeng Meng; Chunlei Huo; Zhaoxiang Zhang; Shiming Xiang; Qi Tian; Chunhong Pan", "journal": "", "ref_id": "b62", "title": "Pro-tuning: Unified prompt tuning for vision tasks", "year": "2022" }, { "authors": "Yuzhen Niu; Yujie Geng; Xueqing Li; Feng Liu", "journal": "", "ref_id": "b63", "title": "Leveraging stereopsis for saliency analysis", "year": "2012" }, { "authors": "Peter Ochs; Jitendra Malik; Thomas Brox", "journal": "IEEE TPAMI", "ref_id": "b64", "title": "Segmentation of moving objects by long term video analysis", "year": "2013" }, { "authors": "Youwei Pang; Xiaoqi Zhao; Tian-Zhu Xiang; Lihe Zhang; Huchuan Lu", "journal": "", "ref_id": "b65", "title": "Zoom in and out: A mixed-scale triplet network for camouflaged object detection", "year": "2022" }, { "authors": "Youwei Pang; Xiaoqi Zhao; Lihe Zhang; Huchuan Lu", "journal": "", "ref_id": "b66", "title": "Multi-scale interactive network for salient object detection", "year": "2020" }, { "authors": "Youwei Pang; Xiaoqi Zhao; Lihe Zhang; Huchuan Lu", "journal": "IEEE TIP", "ref_id": "b67", "title": "Caver: Cross-modal view-mixed transformer for bi-modal salient object detection", "year": "2023" }, { "authors": "Houwen Peng; 
Bing Li; Weihua Xiong; Weiming Hu; Rongrong Ji", "journal": "", "ref_id": "b68", "title": "Rgbd salient object detection: A benchmark and algorithms", "year": "2014" }, { "authors": "Federico Perazzi; Jordi Pont-Tuset; Brian Mcwilliams; Luc Van Gool; Markus Gross; Alexander Sorkine-Hornung", "journal": "", "ref_id": "b69", "title": "A benchmark dataset and evaluation methodology for video object segmentation", "year": "2016" }, { "authors": "Yongri Piao; Wei Ji; Jingjing Li; Miao Zhang; Huchuan Lu", "journal": "", "ref_id": "b70", "title": "Depth-induced multi-scale recurrent attention network for saliency detection", "year": "2019" }, { "authors": "Xuebin Qin; Zichen Zhang; Chenyang Huang; Chao Gao; Masood Dehghan; Martin Jagersand", "journal": "", "ref_id": "b71", "title": "Basnet: Boundaryaware salient object detection", "year": "2019" }, { "authors": "Sucheng Ren; Chu Han; Xin Yang; Guoqiang Han; Shengfeng He", "journal": "Springer", "ref_id": "b72", "title": "Tenet: Triple excitation network for video salient object detection", "year": "2020" }, { "authors": "Hongmei Song; Wenguan Wang; Sanyuan Zhao; Jianbing Shen; Kin-Man Lam", "journal": "", "ref_id": "b73", "title": "Pyramid dilated deeper convlstm for video salient object detection", "year": "2018" }, { "authors": "Yukun Su; Jingliang Deng; Ruizhou Sun; Guosheng Lin; Hanjing Su; Qingyao Wu", "journal": "IEEE TMM", "ref_id": "b74", "title": "A unified transformer framework for group-based segmentation: Co-segmentation, cosaliency detection and video salient object detection", "year": "2023" }, { "authors": "Zhengzheng Tu; Zhun Li; Chenglong Li; Yang Lang; Jin Tang", "journal": "IEEE TIP", "ref_id": "b75", "title": "Multi-interactive dual-decoder for rgb-thermal salient object detection", "year": "2021" }, { "authors": "Zhengzheng Tu; Yan Ma; Zhun Li; Chenglong Li; Jieming Xu; Yongtao Liu", "journal": "IEEE TMM", "ref_id": "b76", "title": "Rgbt salient object detection: A largescale dataset and benchmark", "year": "2022" }, { "authors": "Zhengzheng Tu; Tian Xia; Chenglong Li; Xiaoxiao Wang; Yan Ma; Jin Tang", "journal": "IEEE TMM", "ref_id": "b77", "title": "Rgb-t image saliency detection via collaborative graph learning", "year": "2019" }, { "authors": "Guizhao Wang; Chenglong Li; Yunpeng Ma; Aihua Zheng; Jin Tang; Bin Luo", "journal": "Springer", "ref_id": "b78", "title": "Rgb-t saliency detection benchmark: Dataset, baselines, analysis and a novel approach", "year": "2018" }, { "authors": "Jie Wang; Kechen Song; Yanqi Bao; Liming Huang; Yunhui Yan", "journal": "IEEE TCSVT", "ref_id": "b79", "title": "Cgfnet: Cross-guided fusion network for rgb-t salient object detection", "year": "2021" }, { "authors": "Lijun Wang; Huchuan Lu; Yifan Wang; Mengyang Feng; Dong Wang; Baocai Yin; Xiang Ruan", "journal": "", "ref_id": "b80", "title": "Learning to detect salient objects with image-level supervision", "year": "2017" }, { "authors": "Linzhao Wang; Lijun Wang; Huchuan Lu; Pingping Zhang; Xiang Ruan", "journal": "IEEE TPAMI", "ref_id": "b81", "title": "Salient object detection with recurrent fully convolutional networks", "year": "2018" }, { "authors": "Tiantian Wang; Ali Borji; Lihe Zhang; Pingping Zhang; Huchuan Lu", "journal": "", "ref_id": "b82", "title": "A stagewise refinement model for detecting salient objects in images", "year": "2017" }, { "authors": "Wenguan Wang; Jianbing Shen; Xingping Dong; Ali Borji", "journal": "", "ref_id": "b83", "title": "Salient object detection driven by fixation prediction", "year": "2018" }, { 
"authors": "Wenguan Wang; Jianbing Shen; Ling Shao", "journal": "IEEE TIP", "ref_id": "b84", "title": "Consistent video saliency using local gradient flow optimization and global refinement", "year": "2015" }, { "authors": "Wenguan Wang; Jianbing Shen; Ling Shao", "journal": "IEEE TIP", "ref_id": "b85", "title": "Video salient object detection via fully convolutional networks", "year": "2017" }, { "authors": "Xinlong Wang; Wen Wang; Yue Cao; Chunhua Shen; Tiejun Huang", "journal": "", "ref_id": "b86", "title": "Images speak in images: A generalist painter for in-context visual learning", "year": "2023" }, { "authors": "Xinlong Wang; Xiaosong Zhang; Yue Cao; Wen Wang; Chunhua Shen; Tiejun Huang", "journal": "", "ref_id": "b87", "title": "Seggpt: Segmenting everything in context", "year": "2023" }, { "authors": "Yi Wang; Ruili Wang; Xin Fan; Tianzhu Wang; Xiangjian He", "journal": "", "ref_id": "b88", "title": "Pixels, regions, and objects: Multiple enhancement for salient object detection", "year": "2023" }, { "authors": "Jun Wei; Shuhui Wang; Zhe Wu; Chi Su; Qingming Huang; Qi Tian", "journal": "", "ref_id": "b89", "title": "Label decoupling framework for salient object detection", "year": "2020" }, { "authors": "Lina Wei; Shanshan Zhao; Omar Farouk Bourahla; Xi Li; Fei Wu; Yueting Zhuang; Junwei Han; Mingliang Xu", "journal": "IEEE TNNLS", "ref_id": "b90", "title": "End-to-end video saliency detection via a deep contextual spatiotemporal network", "year": "2020" }, { "authors": "Zongwei Wu; Danda Pani Paudel; Deng-Ping Fan; Jingjing Wang; Shuo Wang; Cédric Demonceaux; Radu Timofte; Luc Van Gool", "journal": "", "ref_id": "b91", "title": "Source-free depth for object pop-out", "year": "2023" }, { "authors": "Enze Xie; Wenhai Wang; Zhiding Yu; Anima Anandkumar; Jose M Alvarez; Ping Luo", "journal": "NeurIPS", "ref_id": "b92", "title": "Segformer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "Jin Xie; Hisham Cholakkal; Rao Muhammad Anwer; Fahad Shahbaz Khan; Yanwei Pang; Ling Shao; Mubarak Shah", "journal": "Springer", "ref_id": "b93", "title": "Count-and similarity-aware r-cnn for pedestrian detection", "year": "2020" }, { "authors": "Bin Yan; Yi Jiang; Jiannan Wu; Dong Wang; Ping Luo; Zehuan Yuan; Huchuan Lu", "journal": "", "ref_id": "b94", "title": "Universal instance perception as object discovery and retrieval", "year": "2023" }, { "authors": "Pengxiang Yan; Guanbin Li; Yuan Xie; Zhen Li; Chuan Wang; Tianshui Chen; Liang Lin", "journal": "", "ref_id": "b95", "title": "Semi-supervised video salient object detection using pseudo-labels", "year": "2019" }, { "authors": "Qiong Yan; Li Xu; Jianping Shi; Jiaya Jia", "journal": "", "ref_id": "b96", "title": "Hierarchical saliency detection", "year": "2013" }, { "authors": "Charig Yang; Hala Lamdouar; Erika Lu; Andrew Zisserman; Weidi Xie", "journal": "", "ref_id": "b97", "title": "Self-supervised video object segmentation by motion grouping", "year": "2021" }, { "authors": "Chuan Yang; Lihe Zhang; Huchuan Lu; Xiang Ruan; Ming-Hsuan Yang", "journal": "", "ref_id": "b98", "title": "Saliency detection via graph-based manifold ranking", "year": "2013" }, { "authors": "D Matthew; Rob Zeiler; Fergus", "journal": "Springer", "ref_id": "b99", "title": "Visualizing and understanding convolutional networks", "year": "2014" }, { "authors": "Qiang Zhai; Xin Li; Fan Yang; Chenglizhao Chen; Hong Cheng; Deng-Ping Fan", "journal": "", "ref_id": "b100", "title": "Mutual graph learning for camouflaged 
object detection", "year": "2021" }, { "authors": "Dingwen Zhang; Junwei Han; Yu Zhang; Dong Xu", "journal": "IEEE TPAMI", "ref_id": "b101", "title": "Synthesizing supervision for learning deep saliency network without human annotation", "year": "2019" }, { "authors": "Jing Zhang; Deng-Ping Fan; Yuchao Dai; Xin Yu; Yiran Zhong; Nick Barnes; Ling Shao", "journal": "", "ref_id": "b102", "title": "Rgb-d saliency detection via cascaded mutual information minimization", "year": "2021" }, { "authors": "Lu Zhang; Jianming Zhang; Zhe Lin; Huchuan Lu; You He", "journal": "", "ref_id": "b103", "title": "Capsal: Leveraging captioning to boost semantics for salient object detection", "year": "2019" }, { "authors": "Miao Zhang; Jie Liu; Yifei Wang; Yongri Piao; Shunyu Yao; Wei Ji; Jingjing Li; Huchuan Lu; Zhongxuan Luo", "journal": "", "ref_id": "b104", "title": "Dynamic context-sensitive filtering network for video salient object detection", "year": "2021" }, { "authors": "Miao Zhang; Weisong Ren; Yongri Piao; Zhengkun Rong; Huchuan Lu", "journal": "", "ref_id": "b105", "title": "Select, supplement and focus for rgb-d saliency detection", "year": "2020" }, { "authors": "Qiang Zhang; Nianchang Huang; Lin Yao; Dingwen Zhang; Caifeng Shan; Jungong Han", "journal": "IEEE TIP", "ref_id": "b106", "title": "Rgb-t salient object detection via fusing multi-level cnn features", "year": "2019" }, { "authors": "Xiaoning Zhang; Tiantian Wang; Jinqing Qi; Huchuan Lu; Gang Wang", "journal": "", "ref_id": "b107", "title": "Progressive attention guided recurrent network for salient object detection", "year": "2018" }, { "authors": "Jia-Xing Zhao; Yang Cao; Deng-Ping Fan; Ming-Ming Cheng; Xuan-Yi Li; Le Zhang", "journal": "", "ref_id": "b108", "title": "Contrast prior and fluid pyramid integration for rgbd salient object detection", "year": "2019" }, { "authors": "Jia-Xing Zhao; Jiang-Jiang Liu; Deng-Ping Fan; Yang Cao; Jufeng Yang; Ming-Ming Cheng", "journal": "", "ref_id": "b109", "title": "Egnet:edge guidance network for salient object detection", "year": "2019" }, { "authors": "Wangbo Zhao; Kepan Nan; Songyang Zhang; Kai Chen; Dahua Lin; Yang You", "journal": "", "ref_id": "b110", "title": "Learning referring video object segmentation from weak annotation", "year": "2023" }, { "authors": "Wangbo Zhao; Jiasheng Tang; Yizeng Han; Yibing Song; Kai Wang; Gao Huang; Fan Wang; Yang You", "journal": "", "ref_id": "b111", "title": "Dynamic tuning towards parameter and inference efficiency for vit adaptation", "year": "2024" }, { "authors": "Xiaoqi Zhao; Youwei Pang; Lihe Zhang; Huchuan Lu; Lei Zhang", "journal": "", "ref_id": "b112", "title": "Suppress and balance: A simple gated network for salient object detection", "year": "2020" }, { "authors": "Dehua Zheng; Xiaochen Zheng; Laurence T Yang; Yuan Gao; Chenlu Zhu; Yiheng Ruan", "journal": "", "ref_id": "b113", "title": "Mffn: Multi-view feature fusion network for camouflaged object detection", "year": "2023" }, { "authors": "Jiawen Zhu; Simiao Lai; Xin Chen; Dong Wang; Huchuan Lu", "journal": "", "ref_id": "b114", "title": "Visual prompt multi-modal tracking", "year": "2023" }, { "authors": "Mingchen Zhuge; Deng-Ping Fan; Nian Liu; Dingwen Zhang; Dong Xu; Ling Shao", "journal": "IEEE TPAMI", "ref_id": "b115", "title": "Salient object detection via integrity learning", "year": "2022" }, { "authors": "Xueyan Zou; Zi-Yi Dou; Jianwei Yang; Zhe Gan; Linjie Li; Chunyuan Li; Xiyang Dai; Harkirat Behl; Jianfeng Wang; Lu Yuan", "journal": "", "ref_id": "b116", "title": 
"Generalized decoding for pixel, image, and language", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 50.11, 358.58, 236.25, 24.29 ], "formula_id": "formula_0", "formula_text": "f E i into window features f E i w ∈ R li/M 2 ×M 2 ×ci" }, { "formula_coordinates": [ 4, 91.75, 466.01, 195.28, 25.94 ], "formula_id": "formula_1", "formula_text": "p d ′ i+1 f E i w ← MLP(SW/W-MSA( p d ′ i f E i w )),(1)" }, { "formula_coordinates": [ 4, 237.26, 561.44, 48.6, 12.33 ], "formula_id": "formula_2", "formula_text": "f E i w to f E i+1" }, { "formula_coordinates": [ 4, 366.68, 657.45, 179.1, 24.66 ], "formula_id": "formula_3", "formula_text": "p td j f D j ← MLP(MSA( p td j+1 f D j+1 )),(2)" }, { "formula_coordinates": [ 5, 372.43, 478.28, 173.35, 19.61 ], "formula_id": "formula_5", "formula_text": "L dis = m ln(1 + |CS m |),(4)" }, { "formula_coordinates": [ 6, 126.31, 84.56, 390.33, 7.17 ], "formula_id": "formula_6", "formula_text": "(M) Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑" }, { "formula_coordinates": [ 6, 54.76, 232.04, 483.71, 26.18 ], "formula_id": "formula_7", "formula_text": "[41] CAMO[37] CAD[3] (M) Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ shallow" }, { "formula_coordinates": [ 7, 52.45, 80.76, 488.33, 17.03 ], "formula_id": "formula_8", "formula_text": "(M) Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ VST[" }, { "formula_coordinates": [ 7, 52.45, 215.66, 488.33, 17.03 ], "formula_id": "formula_9", "formula_text": "(M) Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ Sm ↑ Fm ↑ Em ↑ CMINet[" } ]
2023-11-25
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b2", "b19", "b35", "b8", "b15", "b16", "b17", "b29", "b42", "b19", "b13", "b14", "b14", "b14", "b17" ], "table_ref": [], "text": "Interpretability in deep learning provides insights into the complex operations of deep neural networks (DNNs), which often seem like \"black boxes\" due to their intricate structures. There's a growing demand for interpreters, tools that decode the influence of input features on a DNN's decisions, especially in critical areas like healthcare and autonomous vehicles. Effective explanations enhances user trust, highlight model biases and also its strengths, fostering wider acceptance of these systems [3,20,35].\nWithin this field, perturbation-based methods are those which attempt to explain the machine learning model by connecting input modifications with output changes to construct an explanation heatmap, i.e., a 2D attribution matrix indicating the responsibility of each input pixel to the model prediction [9,[16][17][18]. In that sense, occlusion is one of such methods, measuring the responsibility of each pixel by replacing image regions with a given baseline, e.g., setting it to zero, and measuring output variations [29,42]. Nevertheless, careless occlusion likely generates images which are outside of the training data's distribution, leading to unfair comparisons and fragile visualizations [20].\nIn order to address this shortcoming, we propose a novel interpretability framework that integrates naïve occlusion with other common image augmentations employed during model training. Our proposal hinges on a simple premise: if data augmentations are pivotal in model training, they can be equally instrumental in enhancing interpretability as the model's reaction to augmentations is a viable path to understand its decision-making process. However, seamlessly integrating these augmentations is not trivial. For example, if jittering the color of an image changes the model output, how to pinpoint which region was most affected by it?\nThus, a challenge arises when trying to determine the specific impact of an augmentation. Our approach relies on the DNN deep feature vector from the final layer before the classification head. We feed both the original images and their augmented variants (with or without occlusion) to CNNs or Vision Transformers. This yields two sets of deep feature vectors: one from original/augmented images without occlusion and another from their occluded counterparts, as depicted in Fig. 1.\nWe then compactly represent each set as a lowdimensional subspace in the deep feature vector space by applying Principal Component Analysis (PCA) without data centering to the set. Two subspaces V M and V are generated from the sets extracted from images with and without occlusion, respectively. The core idea of our proposal is to measure the small perturbation due to the occlusion by the structural similarity Sim between V M and V, which is defined using the multiple canonical angles {θ} between . OSA-DAS Overview: Subspace V is derived from the augmented input image, while VM originates from its occluded counterpart. Both are derived from the principal component analysis (PCA) of a DNN's deep feature vector. The orthogonal degree [14,15] between V and VM quantifies the occlusion's effect and shapes the explanation heatmap. Multiple occlusion augmentation subspaces are used to capture diverse facets of the input's representation. 
Their combined relationships offer a holistic view of occlusion impacts, producing a detailed heatmap.\nthe subspaces [15]. A larger subspace distance (orthogonal degree), 1 -Sim, signifies that the occluded region is crucial for classification. This subspace representation method streamlines the process of merging multiple augmentation influences, offering a straightforward and robust metric of structural difference in the deep feature vector space. Overall, our contributions are as follows:\n1. We introduce a novel interpretability framework able to leverage any data augmentation to improve DNNs prediction explanation, shown in Fig. 1.\n2. We leverage subspace representations [15] with the deep feature vector in explanation methods. This approach facilitates a more granular understanding of the model's behavior and offers a robust explanation.\n3. We optimize our algorithm by designing a better random masking routine, which proposes better occlusions, allowing for a faster convergence.\n4. We present a new interpretability metric named minimal size, which relies on causality theory [18] to measure how close the explanation heatmap is to the actual cause of the model prediction." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b32", "b34", "b5", "b26", "b6", "b31", "b33", "b2", "b21", "b10", "b23", "b24", "b39", "b0", "b42", "b11", "b12", "b29", "b37", "b8", "b19", "b36", "b40", "b41", "b38", "b3" ], "table_ref": [], "text": "The visualization of deep learning models decisionmaking process has become a vital research area, given the complex and often opaque nature of neural networks. Many methods have been introduced to shed light on how DNNs arrive at specific predictions. Gradient-based methods generate visualizations from the model output derivative with respect to the input image [32,34]. Activation-based methods [6,26,27,31,33] build upon gradients but take into consideration common properties of the network structure, which improves output. These techniques can compute heatmaps quite fast yet many times lack explainability, showing many similarities to an edge detector [3,21].\nAdditionally, given the recent developments in transformers [11,23,24,39], a new family of attention-based interpreters has been proposed [1,7], in which the attention weights from multiple layers are used to compute explanations. These methods demonstrate elevated interpretability capacity, but they are architecture-specific.\nOn the other hand, perturbation-based methods make minimal assumptions about the nature of the model itself and exactly for that reason show increased ability in explaining any kind of machine learning model. The basic perturbation method, Occlusion Sensitivity Analysis (OSA) [42], is actually quite straightforward. First, it measures the slight variation of the class score to occlusion in different regions of an input image using small perturbations of the image. Then, the resultant variation of each region is summarized as a heatmap of the input image. Other methods propose extensions to this idea by introducing new ways to generate the optimal occlusions [12,13,29,37] or on how to compute their contributions [9].\nNevertheless, these methods are unable to explain the whole range of possibilities that can lead to a prediction, and have been criticized for analyzing the model on a different data distribution on which it was trained [20]. 
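For concreteness, the sliding-window occlusion procedure discussed above can be summarized with the following minimal sketch. It is an illustration rather than any released implementation: the callable predict_prob (returning the probability of the class of interest) and the patch, stride and baseline values are placeholder assumptions.

```python
# Minimal sketch of sliding-window occlusion sensitivity analysis (OSA).
# Assumption: `predict_prob(image)` is any callable returning the scalar
# probability of the class of interest for an HxWxC numpy array.
import numpy as np

def occlusion_sensitivity(image, predict_prob, patch=32, stride=16, baseline=0.0):
    h, w = image.shape[:2]
    p_full = predict_prob(image)                       # score of the unoccluded image
    heatmap = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline  # replace region with a baseline
            drop = p_full - predict_prob(occluded)         # score drop = region responsibility
            heatmap[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1.0
    return heatmap / np.maximum(counts, 1.0)               # average overlapping windows
```

Because every region is replaced by a constant baseline, the occluded inputs drift away from the distribution the model was trained on, which is precisely the shortcoming that motivates coupling occlusion with the augmentations used during training.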
In that sense, the robustness of visual explanations to common data augmentation techniques, such as occlusions, has been studied. [36] analyzed the response of post-hoc visual explanations to natural data transformations. They found significant differences in robustness depending on the type of transformation, with some techniques demonstrating more stability. Similarly, [40] explored the relationship between data augmentation strategies and model interpretability, revealing that models trained with mixed sample data augmentation showed lower interpretability, particularly with Cut-Mix [41] and SaliencyMix [38] augmentations. Moreover, [4] proposes an augmentation method leveraging multiple interpreters, thereby enhancing model robustness against noise or occlusions. This highlights the complex relationship between augmentation techniques and interpretability, raising caution for their adoption in critical applications. However, it's noteworthy that while these works analyze the impact of augmentations on explanations, as far as we know, none proposes an interpreter that leverages augmentation specifically to improve explanation trustworthiness." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce our original method and metric. Details can be found in the supplementary material." }, { "figure_ref": [], "heading": "Occlusion with Augmentation Subspaces", "publication_ref": [ "b29", "b42", "b9", "b28" ], "table_ref": [], "text": "Traditional occlusion sensitivity analysis (OSA) computes explanation heatmaps by replacing image regions with a given baseline (masking it to 0), and measuring the score difference in the output [29,42]. While this technique is cost-effective, occluded images originate from a distinct distribution from the one which the model was trained on. Thus, discerning whether the performance dip arises from this distributional shift or due to the responsibility of the occluded regions becomes ambiguous.\nOn the other hand, data augmentation (including random occlusions) have been used in most state-of-the-art models during training [5,10,28]. Therefore, we expect a more accurate interpretation could be performed if the model uses augmentations closer to the real training distribution.\nWith that in mind, we devise a technique that adapts OSA to using any data augmentation routine in an independent way by leveraging subspaces of deep feature vectors." }, { "figure_ref": [], "heading": "Data Augmentation Methods", "publication_ref": [ "b9", "b28", "b9", "b28" ], "table_ref": [], "text": "Occlusion Sensitivity Analysis with Deep Feature Augmentation Subspace (OSA-DAS) utilizes data augmentation methods to foster more distinctive deep feature vectors that can be leveraged for enhanced interpretability.\nIn the realm of data augmentation, there exist prominent state-of-the-art routines that have revolutionized the process. For instance, RandAugment [10] is an automated data augmentation approach that streamlines the selection of transformations through two hyperparameters: n ops , denoting the number of sequential augmentation transformations, and mag, representing the magnitude of these transformations. The transformations span from simple affine transformations, such as rotation and translation to more intricate operations such as color jittering and auto contrast.\nOn the other hand, TrivialAugment [28] presents an elegant yet powerful approach to automatic augmentation. 
It stands out due to its simplicity, requiring no parameters and applying a singular augmentation to each image. Despite its minimalist design, it has demonstrated its prowess, outperforming more complex augmentation techniques.\nCentral to our method is its adaptability and versatility. We chose the aforementioned augmentations in our experiments given they represent the pinnacle of current techniques, but our proposed framework is inherently flexible. It is designed to seamlessly integrate with any data augmentation routine, be it RandAugment [10], TrivialAugment [28] or else that best fits the explanation goal of the task at hand." }, { "figure_ref": [], "heading": "Deep Feature Augmentation Subspace", "publication_ref": [], "table_ref": [], "text": "The addition of any data augmentation to perturbationbased interpretability is not trivial, and we opt to use sets of augmented inputs around each occlusion.\nConsider that an image x is fed into the model f (). In this paper, the output v = f (x) is referred to as a deep feature vector in a k-dimensional vector space. For each occlusion M, we generate a set of deep feature vectors corresponding to augmented images with occlusions, and then represent compactly the set by a subspace V M ⊂ R k for the specific occlusion. The same is performed for the original input image, which builds the reference subspace V ⊂ R k .\nThe orthonormal basis, V and V M ∈ R k×d , of the d-dimensional subspaces V and V M are calculated by applying Principal Component Analysis (PCA) without data centering to each set of deep feature vectors. More concretely, they can be obtained as the eigenvectors corresponding to several largest eigenvalues of auto-correlation matrix\nm i=1 v i v T i ∈ R k×k\n, where m is the number of applied augmentation types." }, { "figure_ref": [], "heading": "Structural Similarity between Two Subspaces", "publication_ref": [ "b14", "b13", "b14", "b14" ], "table_ref": [], "text": "The relationship between two d-dimensional subspaces in R k is defined by a set of d canonical angles {θ i } d i=1 between them. They can be obtained by applying singular value decomposition (SVD) to V T V M , where V and V M ∈ R k×d are the orthonormal basis [15]. The cos θ i of the i-th smallest canonical angle θ i is the i-th largest singular value:\ncos θ i = σ i V T V M ,(1)\nwhere σ i (•) returns the matrix i-th largest singular value. The structural similarity between two subspaces is defined as the sum of the square of the cosines of the first n c canonical angles, where n c is a hyperparameter indicating how much information from each subspace is to be considered [14,15]. However, in our method, we need a measurement of subspace distance, which can be used as a proxy for the degree of responsibility r of each occlusion. Thus, we introduce the subspace distance, i.e., orthogonal degree, [15] defined by the following equation:\nr (M) = 1 - nc i σ i V T V M 2 .\n( " }, { "figure_ref": [ "fig_1" ], "heading": "Speedup by improved masking", "publication_ref": [ "b37", "b33", "b34" ], "table_ref": [], "text": "OSA-DAS enhances OSA by incorporating more information, albeit at a higher computational cost. Essentially, perturbation-based interpretability is akin to a Monte Carlo approach for estimating machine learning models. The efficiency of this method can be improved by proposing better masks, thereby reducing the number of required masks, as seen in [37]. 
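Before turning to how the masks themselves are proposed, the sketch below makes the subspace comparison of the previous subsections concrete, showing one way the reference and occlusion subspaces and the orthogonal degree of Eqs. (1)-(2) could be computed from deep feature vectors. It is illustrative only: extract_feature, augmentations, the subspace dimension d and the mask format are assumptions standing in for the model's deep feature extractor, the augmentation routine, the PCA rank and the binary occlusion mask.

```python
# Sketch of the subspace construction and orthogonal degree of Sec. 3.1 (Eqs. 1-2).
# Assumptions: `extract_feature(img)` returns the k-dimensional deep feature vector,
# `augmentations` is a list of image transforms, and `mask` is an HxW binary array.
import numpy as np

def subspace_basis(feature_vectors, d):
    # Uncentered PCA: the leading left singular vectors of the stacked, length-normalized
    # features are the eigenvectors of the autocorrelation matrix sum_i v_i v_i^T.
    V = np.stack([v / np.linalg.norm(v) for v in feature_vectors], axis=1)  # k x m
    U, _, _ = np.linalg.svd(V, full_matrices=False)
    return U[:, :d]                                      # k x d orthonormal basis

def orthogonal_degree(V, V_M, n_c):
    # Singular values of V^T V_M are the cosines of the canonical angles (Eq. 1).
    cosines = np.linalg.svd(V.T @ V_M, compute_uv=False)
    return 1.0 - np.sum(cosines[:n_c] ** 2)              # r(M), as written in Eq. 2

def occlusion_responsibility(image, mask, extract_feature, augmentations, d=8, n_c=4):
    feats_ref = [extract_feature(t(image)) for t in augmentations]
    feats_occ = [extract_feature(t(image * mask[..., None])) for t in augmentations]
    V = subspace_basis(feats_ref, d)
    V_M = subspace_basis(feats_occ, d)
    return orthogonal_degree(V, V_M, n_c)
```

A larger r(M) means the occlusion perturbs the structure of the feature subspace more strongly, i.e., the masked region carries more responsibility; accumulating (1 - M) r(M) over the sampled masks produces the explanation heatmap, as in Algorithm 1.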
One straightforward strategy to devise superior masks is to utilize the model's gradient concerning the input image as weights. Albeit the simplicity, gradients are know to be noisy and not always indicate the most relevant features [33,34], yet can be leveraged to sample the mask anchor points using a multinomial distribution. However, direct sampling often results in highly overlapping masks. To address this, we filter out those with substantial overlapping mask areas, as illustrated in Fig. 2." }, { "figure_ref": [], "heading": "Algorithmic generation of Explanation heatmaps", "publication_ref": [], "table_ref": [], "text": "The presented ideas for the basis of our method is fully presented in Algorithm 1. Our OSA-DAS begins by sampling a set of augmentations on the original image. It then constructs a subspace, V, which captures the model outputs for these augmented images. For each occlusion applied to the image, a similar subspace, V M , is formed. The goal is then to compare the two subspaces, V and V M , to understand the significance of the occluded region.\n1. Initialization: Let f be a deep learning model that outputs a k-dimensional deep feature vector extracted from an input image. Let x be an input image and {τ i (x)} na i=1 a set of its augmentations. Besides, set parameters for the number of masks n m , augmentations n a , number of canonical angles n c , and mask size l.\n2. Construct the Reference Subspace V: For i-th augmentation τ i :\n(a) Feed augmented image τ i (x) into the model f .\n(b) Normalize the length and store the deep feature vector f τ i (x) ∈ R k in an array." }, { "figure_ref": [], "heading": "Algorithm 1 Occlusion Sensitivity Analysis with Deep", "publication_ref": [], "table_ref": [], "text": "Feature Augmentation Subspace (OSA-DAS)\nRequire: x ← image, f ←model, τ i ←i-th augmentation n m ← number of masks n a ← number of augmentations n c ← number of canonical angles l ← mask size V ← {} for i ← 1 to n a do x t ← τ i (x) Insert the normalized f (x t ) ∈ R k in V end for V ← P CA (V) H ← 0 for i ← 1 to n m do M ← mask (i, x.shape, l) V M ← {} x M ← x ⊙ M for j ← 1 to n a do x M t ← τ j x M Insert the normalized f x M t ∈ R k in V M end for V M ← P CA (V M ) r ← 1 - nc k σ k V T V M 2 H ← H + (1 -M) r end for return H H\nWe conduct the above process over all the augmentations, and then compute the orthonormal basis V ∈ R k×d of the V subspace from the set of deep feature vectors {f τ i (x) } na i=1 ." }, { "figure_ref": [], "heading": "Sample Masks and Construct Occluded Subspaces:", "publication_ref": [], "table_ref": [], "text": "For each mask generated:\n(a) Create occlusions in the image using the mask.\n(b) For each occlusion, compute a basis\nV M ∈ R k×d of subspace V M from the set of the k- dimensional feature vectors, {f τ i x M } na i=1\n, following the process in Step 2. " }, { "figure_ref": [], "heading": "Explanation and Metrics", "publication_ref": [ "b1" ], "table_ref": [], "text": "Even though the interpretability goal is to build clear visualizations of the machine learning model decisionmaking process, the comparison of interpreters at scale re-quires the application of metrics that can accurately measure the quality of the explanations [2]." }, { "figure_ref": [], "heading": "Explanations", "publication_ref": [ "b8", "b8", "b17", "b34" ], "table_ref": [], "text": "Given an input image x, S = x ⊙ M indicates a masked subset of the input, where M is a binary mask and ⊙ is the Hadamard product. 
Then, the explanation E is the minimal subset which has the same output as the original input.\nE f |x = min |S| S : f (S) = f (x) , with |S| > 0, (3)\nwhere | • | counts the number of unmasked pixels.\nEq. ( 3) is a general definition, and the nature of the model's output can vary depending on the algorithm. In this work, we want to build a class-agnostic method using deep feature vectors ∈ R k , which are extracted from the final layer before the classification head.\nHowever, to compute the precise explanation using only Eq. (3) would require testing all possible subsets of pixels to ensure we have the minimal one [9]. In that sense, real interpreters provide an approximate explanation heatmap Ẽ f |x . This map is usually taken to be a description on how the model's predictions are influenced by each pixel [9,18,34]. In this work, we interpret these explanation heatmaps as probability distributions: they indicate the probability of each pixel in x belonging to the ideal explanation E f |x . See the supplementary material for details." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b29" ], "table_ref": [], "text": "Many metrics have been proposed in interpretability literature, each offering different perspectives. In this paper, we chose to use multiple metrics to provide a more comprehensive measurement of the interpreter effectiveness. Deletion and insertion metrics [29] gauge the faithfulness of an explanation heatmap in representing a model's inferences.\nFirst, the deletion metric measures how rapidly the model's prediction probability decreases when pixels are deleted according to their heatmap significance.\nConversely, the insertion metric evaluates how quickly the model's prediction probability escalates when pixels are inserted based on their heatmap significance. The performance of these metrics is quantified using the area under the curve (AU C), with the horizontal axis indicating the percentage of pixels deleted or inserted and the vertical axis representing the output probability of the model [7].\nAlthough useful, we argue these metrics do not fundamentally align with the causality definition of explanation as per Eq. ( 3). Also, their numbers are not so intuitive and most often than not it is difficult to link their values to any visible property of the heatmap." }, { "figure_ref": [ "fig_3" ], "heading": "Minimal Size Metric", "publication_ref": [], "table_ref": [], "text": "Sec. 3.2.1 defines an explanation as the smallest set of pixels that still results in the same model output. Now, given an explanation heatmap Ẽ f |x , we can try to generate an explanation from it. If the heatmap is correct, the explanation must use the minimal number of pixels, and we can use this number as a viable metric of the proximity between the explanation heatmap and the model's output cause.\ns min Ẽ f |x = |S| |x| , with f (S) ≈ f (x) ,(4)\nwhere f (S) ≈ f (x) replaces the ideal equality f (S) = f (x) in Eq. ( 3) to make the metric less rigid while also improving numerical stability.\nWe stress that our metric is class-agnostic, which allows us to directly use deep feature vectors f (S) ∈ R k and f (x) ∈ R k , while the deletion and insertion metrics are exclusively based on the change of the scalar class probability change measured with AUC.\nIn practical terms, to compute this number we start from an empty set S and sequentially add pixels by order of importance, where pixel importance comes from Ẽ. 
During this process, we must reach a point such that ||f (S) -f (x) || 1 ≤ δ, where || • || 1 ≤ δ is an element-wise comparison within a fixed tolerance δ. Then, the algorithm stops and the ratio |S| |x| is returned. Although this works, the number of steps can be reduced by adding batches of pixels instead of one pixel at a time, as exemplified at Fig. 4. Each batch is given from the contour map of Ẽ, which splits the heatmap into regions by the intensity of each pixel, and determines the number of pixels to be added at each step.\nBeyond that, notice a good interpreter metric should focus on evaluating only the explanation quality independently of model performance. This metric assesses the explanation's precision without being swayed by the model's accuracy while also providing a number that directly reflects the visual characteristics of the explanation. It's a clear and effective way to compare different interpreters' quality. See the supplementary material for more information." }, { "figure_ref": [], "heading": "Overall performance metric", "publication_ref": [ "b43" ], "table_ref": [], "text": "While the Minimal Size metric offers a fresh perspective, it is essential to view it in conjunction with the currently used metrics for a holistic understanding. A more pivotal metric should thus be defined by balancing deletion, insertion and minimal size. We propose an Overall performance metric, building upon the work of [43], which combines insertion, deletion, and minimal size for a comprehensive evaluation. This simplified version decreases the iterations as follows: First, we divide the explanation heatmap into regions with the same importance level according to a contour map. Then, we introduce pixels from each region to the partial image in descending order of importance. We stop when the model's output of this partial image becomes very close to the one of the original image. The fraction of filled pixels in the partial image is the minimal size metric.\nEquation (5) offers a more thorough understanding of interpreter performance. The incorporation of size in the denominator ensures a dimensionless metric, where both the numerator and denominator represent areas. This combined metric offers a balanced and insightful evaluation of the general interpreter performance, making it a more sensible evaluator of the general interpreter performance." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b29" ], "table_ref": [], "text": "In this section, we present a comprehensive evaluation of our proposed methods through comparison with the conventional explanation methods. This includes qualitative comparisons of explanation heatmaps and a quantitative evaluation using deletion, insertion [29] and minimal size metrics." }, { "figure_ref": [], "heading": "Experiment Settings", "publication_ref": [ "b18", "b10", "b23", "b24", "b30", "b8", "b29", "b28", "b31", "b34", "b42", "b0", "b22" ], "table_ref": [], "text": "We employ ResNet-50 [19], ViT-B-14 [11] and Swin-V2 [23,24] as classification models and assess the results on the validation set of ImageNet [30], which comprises 50K images from 1000 classes and is used in explainable AI literature for evaluation [7,9,29]. The images are resized to 256 × 256 pixels and center cropped to 224 × 224 pixels.\nFor our method, we use masks of 64 × 64(=l) pixels in the image as described in Sec. 3.1.4, TrivialAugment [28] as the augmentation routine, with 32(=n a ) augmentations per occlusion. 
The dimensions of the deep feature vector was 786 for all models. For comparison, we perform the evaluations together with Guided Grad-CAM [31], Integrated Gradients [34] and OSA [42], which are frequently employed interpreters of each major family of methods. Given our emphasis on developing model-agnostic methods, we refrained from comparing with non model-agnostic methods, such as [1], [7], or expensive techniques like [25].\nWe used the implementation of these methods provided by the Captum tool [22]. The batch of all experiments performed in this work, including ablations, took approximately one week to run on 8 V100 16Gb GPUS. Table 1. Average Metric scores on ImageNet between ResNet-50, ViT-B and Swin-V2 models. For deletion and minimal size, lower is better (↓). For insertion and overall, higher is better (↑). Bold represents the best metric, while underline is the second best. Occlusion and Ours have the same mask size, but the former uses a sliding window, so it generates much more masks." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b31", "b34", "b42" ], "table_ref": [], "text": "Minimal Size (↓) Deletion (↓) Insertion (↑) Overall (↑)\nGuided Grad-CAM [31] 0.515 0.298 0.289 -0.034 Integrated Gradients [34] 0.518 0.234 0.267 0.123 Occlusion [42] 0.251 0.328 0.549 0.880 OSA-DAS (Ours) 0.231 0.331 0.539 0.901" }, { "figure_ref": [ "fig_4" ], "heading": "Qualitative Results", "publication_ref": [], "table_ref": [], "text": "The visualization results of the explanation heatmaps are showcased in Fig. 3 and Fig. 5. For the class-specific methods, we show the heatmap with respect to the predicted class. These images illustrate how other interpreters tend to generate noisy heatmaps, especially notable for traditional OSA on model misclassifications, which attributes inverted responsibility compared to OSA-DAS.\nOverall, the proposed method precisely captures the general features which the model attends to in a more stable manner, which facilitates model understanding and debugging. This resilience is likely due to its class-agnostic nature combined with the variety of feature comparisons enabled by the augmentation subspaces. These results suggest that OSA-DAS is capable of selecting the most impactful regions for the model, regardless of mispredictions.\nOn the flip side, the increased memory cost restricts the maximum number of masks and augmentations that can be applied, posing a trade-off for achieving more robust and accurate explanations." }, { "figure_ref": [], "heading": "Quantitative Results", "publication_ref": [ "b2", "b29", "b37", "b34", "b31" ], "table_ref": [], "text": "Whereas Sec. 4.2 implies superiority of our proposed method, caution must be taken against sole reliance on visual assessments [3]. The average evaluation results on the whole validation set of ImageNet are presented at Tab. 1 using the metrics of insertion, deletion (Sec. 3.2.2), minimal size (Sec. 3.2.3) and overall (Sec. 3.2.4). Constant tolerance value δ = 10 -2 is used. In deletion, the heatmap that accurately captures important individual pixels is highly valued, while for insertion, a heatmap presenting cohesive regions of importance is better evaluated [29,37]. Minimal size metric measures proximity of the explanation to the actual cause. Overall balances insertion, deletion and minimal size areas evenly. 
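As a reference for how the minimal size metric is computed during this evaluation, the following sketch implements the batched pixel-insertion procedure of Sec. 3.2.3. It is a simplified illustration under stated assumptions: extract_feature stands in for the model's deep feature extractor, equal-size pixel batches are used instead of contour-map regions, and the element-wise tolerance check uses δ = 10^-2 as in the experiments above.

```python
# Sketch of the minimal size metric (Sec. 3.2.3), assuming `extract_feature(img)`
# returns the model's deep feature vector for an HxWxC array and `heatmap` is an
# HxW explanation map; pixels are inserted in equal-size batches over `steps` steps.
import numpy as np

def minimal_size(image, heatmap, extract_feature, steps=32, delta=1e-2):
    f_full = extract_feature(image)
    order = np.argsort(heatmap.ravel())[::-1]        # pixels by descending importance
    n_pixels = heatmap.size
    for i in range(1, steps + 1):
        k = int(np.ceil(i * n_pixels / steps))       # insert the next batch of pixels
        keep = np.zeros(n_pixels, dtype=bool)
        keep[order[:k]] = True
        partial = image * keep.reshape(heatmap.shape)[..., None]
        # element-wise tolerance check between partial and full deep features
        if np.max(np.abs(extract_feature(partial) - f_full)) <= delta:
            return k / n_pixels                      # fraction of pixels needed
    return 1.0
```

A smaller value indicates that the heatmap concentrates on the pixels that actually cause the prediction, independently of whether the prediction itself is correct.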
This experiment uses 32 iterations for insertion, deletion and minimal size.\nIntegrated Gradients [34] and Grad-CAM [31] focus too much on important pixels, but not on important regions, which optimizes deletion in detriment of other metrics. Occlusion shows excellent insertion performance because it exclusively focuses on the regions which impact the pre-dicted class. On the other hand, our method showcases best overall performance, showing good results among all metrics. We argue this demonstrates it can explain the actual prediction cause in a more holistic and class-agnostic way than others. Further details on the performance for each model is shown in the supplementary material.\nIn this context, it's noteworthy that class-specific methods, which consider specific priors for each class, are anticipated to perform better in insertion and deletion metrics compared to class-agnostic ones. This is because these methods priors (prediction probability) align with the same priors used in the evaluation metrics [7]. Regardless of such, our method still showcases comparable performance to OSA in spite of not being able to leverage such priors." }, { "figure_ref": [ "fig_5", "fig_5", "fig_5", "fig_5" ], "heading": "Ablation", "publication_ref": [ "b28", "b9", "b32", "b31", "b0", "b10" ], "table_ref": [], "text": "We conducted an ablation study on our method's three key components: augmentations, masking, and subspace representations. These tests used 2500 ImageNet training images to ensure cost-efficiency and to maintain independence from analyses in Sec. 4.2 and Sec. 4.3. Tests used the ResNet-50 model, evaluating hyperparameters on a log2 scale until resource limits. We reported changes in the overall metric, defined in Sec. 3.2.4. Other metrics showed consistent behaviors and led to the same conclusions.\nWe examined how augmentations impact our method by switching from TrivialAugment [28] to RandAugment [10]. RandAugment allows for adjustable augmentation strength, even though the specific augmentations are random. Using 32 augmentations and 256 masks per image, we found, as seen in Fig. 6a, that our method remains stable up to a certain augmentation strength, beyond which it breaks down. This suggests the model can handle various augmentations as long as the image does not become unrecognizable.\nMoreover, Fig. 6c demonstrates our technique possesses a better convergence rate with respect to the number of masks, with 256 masks already reaching good performance. This can be traced down to the efficient masking mechanism introduced at Sec. 3.1.4. In fact, the gradient is considered the simplest version of a gradient-based interpreter [32], and so, all we are doing is using a quick interpreter to derive a initial probability distribution for an- other interpreter. Thinking from the viewpoint of chaining interpreters, we can likely consider changing the gradient for other simple options, like Grad-CAM [31] for CNNs or Attention Rollout [1] for ViT [11]. Also, Fig. 6c shows that applying OSA-DAS without augmentations reduces its performance significantly, showing the convergence rate is correlated with the augmentations. Finally, we measure how many canonical angles should be used to measure the similarity between the original and occlusion subspaces. By this, we understand the impact of subspace representations to solve this problem. According to Fig. 6b, there is a clear dependency with n c , but also not many angles are required to reach good performance. 
In fact, we can see the the curve starts to saturate after 32 angles (out of 786), which already provide over 2× improvement over using only 1 angle. We argue it is a strong favorable indicator for using subspace representations." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this study, we proposed a model-and class-agnostic approach for interpreting machine learning model behavior based on general augmentations and occlusions, providing robust explanations for the decision-making process of computer vision models. Our contribution lies in the application of augmentations to occluded inputs and the use of subspace representation on deep feature vectors to gauge occlusion impact with improved precision. Moreover, we enhanced the computational efficiency by transitioning the occlusion selection process from random to gradient-based. Experimental results affirm our approach's superiority over traditional methods both quantitatively and qualitatively, providing sensible explanations that effectively demystify model decisions. This work heralds significant advancements in interpretability and trustworthiness of AI systems." }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Theory", "publication_ref": [], "table_ref": [], "text": "In this section, we present details of our method with increased mathematical formality. Some of the actual discussion and contributions may be repeated to complement the main idea." }, { "figure_ref": [], "heading": "Actual causality", "publication_ref": [ "b17", "b7", "b8", "b8", "b8", "b29", "b42", "b8", "b8", "b29" ], "table_ref": [], "text": "Actual causality [18] is a framework to formally explain the way model predictions depend on input variables, what are the output causes, and how certain changes in the inputs can change the predictions. It extends counterfactual reasoning with contingencies, which means that if a Boolean function\nϕ (x 1 , x 2 , • • • , x n ) changes when a variable x i is altered, then ϕ depends on x i .\nMoreover, the Degree of Responsibility r is a quantification of causality [8], which is based on the size k of the smallest contingency required to create a counterfactual dependency [9], i.e., the minimal change to alter the function output. Definition 6.1 (Singleton cause). Let f be a machine learning model and x an input, an entry\nx i1 i2 ••• in (think a pixel) is a cause of f (x) if and only if there is a subset χ ⊂ x such that the following hold [9, p. 3]: 1. x i1 i2 ••• in ̸ ∈ χ 2.\noutput invariance to masking of χ:\nLet χ ′ ⊂ χ, m ∈ R, then χ ′ = m =⇒ ∆f = 0 3. output dependency to masking of x i1 i2 ••• in : Let m ∈ R, then χ = x i1 i2 ••• in = m =⇒ ∆f ̸ = 0\nDefinition 6.2 (Cause witness). If a subset χ ∈ x and entry x i1 i2 ••• in satisfy Definition 6.1, then we say χ is a witness to the fact\nx i1 i2 ••• in is a cause of x [9, p. 3]. Definition 6.3 (Simplified Degree of Responsibility). If a subset χ ∈ x and entry x i1 i2 ••• in satisfy the definition of Singleton Cause from [9], then r x i1 i2 ••• in |x, f = 1 1 + k , (6\n)\nwhere k is the size of the minimal witness, which refers to the smallest subset of input variables that, when changed, can demonstrate that a particular input variable has an effect on the output of a function.\nIn our terminology, interpreters are algorithms which process a model and a single input, and output an explanation. 
In computer vision, a valid explanation could be an attribution heatmap over the original input image. Next, we formally define this concept. Definition 6.4 (Explanation). Let f be a machine learning model and x an input with output f (x), and S = x ⊙ M a masked subset of the input. Then, the explanation E of the model f given input x is the minimal subset which maintains the output [9] \nE f |x = min |S| S : f (S) = f (x) , |S| > 0 (7)\nwhere | • | is the number of items in the set, e.g., the number of non-masked pixels Remark 1 (Triviality). The explanation must not be a null tensor. Any model will output a prediction for the null tensor, however this would be a trivial explanation for all inputs sharing the same prediction. For example, if a model outputs \"dog\" for the null tensor, then all images of dogs would have an empty heatmap as explanation.\nRemark 2 (Non-uniqueness). The explanation may not be unique. Inputs might have symmetries or repetitions, leading to multiple viable subsets of the same size.\nHowever, computing an explanation is NP-complete and real interpreters will output approximate explanations [9]. Definition 6.5 (Approximate Explanation). Let f be a machine learning model and x an input with output f (x). The approximate explanation Ẽ of model f given input x is the probability distribution indicating if the input entry belongs to the explanation x.\nẼi1 i2 ••• in = p x i1 i2 ••• in ∈ x(8)\nRemark 3 (Normalization).\ni1 i2 ••• in Ẽi1 i2 ••• in = 1(9)\nRemark 4 (Approximate Explanation and Degree of responsibility). While explanations can be seen as a binary inclusion mask, i.e., the mask is 1 if the entry belongs to the explanation. However, the degree of responsibility is a measure between 0 and 1, so it can be seen as a proxy for probability.\nRemark 5 (Composition). Given the probabilistic nature of an approximate explanation, the composed approximate explanation can be built when multiple approximate explanations are available (e.g., when multiple interpreters can be used). The composed explanation can be built by simple summation and renormalization following Remark 3. Definition 6.6 (Minimal Size). Let Ẽ be the approximate explanation of model f given input x. The explanation minimal size is defined as Occlusion computes explanation heatmaps by replacing image regions with a given baseline (masking it to 0), and measuring the score difference in the output [29,42].\ns min Ẽ f |x = |S| |x| , with f (S) ≈ f (x)(10\nProposition 1 (Occlusion degree of responsibility). Let f be a model which outputs a probability score p ∈ [0, 1],\nx an input and M as binary mask with the same shape as x, and ⊙ be the Hadamard product. Then, the degree of responsibility of the masked region is\nr = 1 - p (x ⊙ M) p (x)\nProof. Let the degree of responsibility be r = 1 1+k (Definition 6.3), the factor k represents the size of the minimal witness [9], which should be 0 for relevant causes and ∞ for irrelevant ones, i.e., k ∈ [0, +∞).\nFirst, assume that for every image, there is a defined region S where it's minimal witness has size 0, i.e., when we mask all of the image keeping nothing but S, the prediction score p output by the model is unaltered. Moreover, assume that the opposite action is also true: masking S while keeping any other part of the image will lead to a prediction collapse. 
Simply put,\np (x -S) = 0 ⇐⇒ p (S) = p (x)\nConversely, if S ′ is an irrelevant area, masking it should render no change, p x -S ′ = p (x) ⇐⇒ p S ′ = 0\nFrom such assumption, we should expect that the masked region minimal witness [9] size must be proportional to the score p (S) of keeping only the cause S, and inversely proportional to the score p S ′ of keeping only an unimportant region S ′ . Thus, we can say\nk ∝ p (S) p (S ′ ) ≡ p (x -S) |p (x) -p (x -S ′ ) | ≡ p (x ⊙ M) |p -p (x ⊙ M) | ,\nwhich should be 0 when the cause is masked and diverge when masking an unimportant region (score does not change). Finally, we can effectively ignore the modulo assuming p (x) ≥ p (x ⊙ M), and\nr = 1 1 + k = 1 1 + p(x⊙M) p-p(x⊙M) = p (x) -p (x ⊙ M) p (x) = 1 - p (x ⊙ M) p (x)(11)\n■ Proposition 2 (Occlusion approximate explanation). The approximate explanation of an input for a single mask is\nẼM f |x = (1 -M) 1 - p (x ⊙ M) p (x)\nProof. Given M is a binary mask, 1 -M is the inverse mask, i.e., the masked region is set to 1. Then, from Proposition 1 and Remark 4 we say the probability of the cause belonging to the masked region is equal the term 1 -p(x⊙M) p(x) . ■ Lemma 6.1 (Occlusion Sensitivity Analysis (OSA)). The non-normalized general occlusion sensitivity analysis is the combination of individual occlusion explanations.\nẼ f |x = i (1 -M i ) 1 - p (x ⊙ M i ) p (x)(12)\nProof. Direct from Proposition 2 and Remark 5 ■ OSA is a simple example of a perturbation-based method, in which the explanation is a composition of output scores relative variations for each masked input. Proposition 1 defines a way of computing the responsibility of a singular mask, i.e., the probability it belongs to the cause, and Algorithm 3 shows how to compose it into an approximate explanation. OSA pseudocode can be found in the supplementary material.\nHowever, notice the formulation at Lemma 6.1 is different to the more traditional one defined by [29] in equation 6. The difference is mostly due to the different assumptions we took, but they can be shown to be proportionally equivalent, differing only by a constant. However, the formulation at Lemma 6.1 will be important for the methods we will propose next." }, { "figure_ref": [], "heading": "Algorithm 3 Occlusion Sensitivity Analysis", "publication_ref": [], "table_ref": [], "text": "Require: x ← image, f ← model n ← number of masks l ← mask size p ← f (x) H ← 0 for i ← 1 to n do M ← mask (i, x.shape, l) ▷ any mask generator x M ← x ⊙ M p M ← f x M H ← H + (1 -M) 1 -p M p ▷ compose (Remark 5) end for return H H ▷ normalize explanation 6.1." }, { "figure_ref": [], "heading": "Occlusion Sensitivity Analysis with Deep Feature Vectors", "publication_ref": [], "table_ref": [], "text": "Lemma 6.1 is a class-specific algorithm. However, most machine learning models actually output general vectors, also known as deep feature vector, which encode the input through the model. These vectors are then later processed to obtain the probability score of a single feature (class)." }, { "figure_ref": [], "heading": "Proposition 3 (Representation degree of responsibility).", "publication_ref": [], "table_ref": [], "text": "Let f be a model which outputs a vector f\n(x) = v = (v i ) , i ∈ [m].\nThe degree of responsibility of the masked region is\nr = ||v -v (x ⊙ M) || p ||v|| p ,\nwhere ||v|| p stands for the ℓ p -norm\nProof. Let f be a model which outputs a scalar probability score p (x) ∈ [0, 1]. 
Then, from Proposition 1, the degree of responsibility is\nr = 1 - p (x ⊙ M) p (x) = p (x) -p (x ⊙ M) p (x) = f (x) -f (x ⊙ M) f (x)(13)\nNow, notice that Proposition 1 assumes f (x ⊙ M) ≤ f (x) and p (x) ≥ 0. Then, we can extend this concept to a new f which outputs vectors by\nf (x) -f (x ⊙ M) f (x) = |f (x) -f (x ⊙ M) | |f (x) | = ||v -v (x ⊙ M) || p ||v|| p(14)" }, { "figure_ref": [], "heading": "■", "publication_ref": [], "table_ref": [], "text": "Remark 7 (Occlusion sensitivity analysis as a special case). The vector extension in Proposition 3 also shows that Lemma 6.1 is a special case when we wish to analyze one particular feature of f (x), so this can be thought as a generalization of said method. Lemma 6.2 (Representation Occlusion Sensitivity Analysis). The natural extension to dealing with representations reintroduces Lemma 6.1 with the only change in the degree of responsibility calculation.\nẼ f |x = i (1 -M i ) ||v -v (x ⊙ M) || p ||v|| p(15)\nProof. Analogous to Lemma 6.1. ■ Remark 8 (Representation Occlusion Sensitivity Analysis).\nThe natural extension to dealing with representations from Lemma 6.2 reintroduces Lemma 6.1 with the only change in the degree of responsibility calculation." }, { "figure_ref": [], "heading": "Occlusion Sensitivity Analysis with Deep Feature Augmentation Subspaces", "publication_ref": [ "b13", "b14" ], "table_ref": [], "text": "While Lemma 6.2 outlines a general method to determine the degree of responsibility, its sole dependence on occlusion may not capture the nuanced relationships inherent in deep learning models. Recognizing the vital role of data augmentation in training and viewing occlusion as a form of augmentation, we propose a shift from simple vector comparisons to a detailed analysis between two subspaces. A subspace here denotes a segment of the deep feature vector space defined by an occluded image and its augmentations. We strive to assess the similarity between each \"occlusion subspace\" and the \"reference subspace\", which is formed by the original image and its augmentations. Extending Lemma 6.2, we compare the size difference between two subspaces, focusing on their orthogonal degree, and measure the canonical angles between subspaces derived from varied transformations on the original and occluded images.\nProposition 4 (Subspace degree of responsibility). The degree of responsibility between subspaces is the orthogonal degree between them, i.e.,\nr (M) = 1 - nc i σ i V T V M 2(16)\nProof. We extend the idea (and the notation) of difference of vectors to difference of subspaces\nr = ||v -v (x ⊙ M) || 2 ||v|| 2 ≡ |V -V M | |V| = |V -V M | |V -0| = |V -V M | 1 = |V -V M | = 1 -simi (V, V M ) = 1 - nc i σ i V T V M 2 (17)\nwhere |A -B| is a subspace distance, i.e., the orthogonal degree between A and B [14,15]. V and V M ∈ R k×d are the orthonormal basis of the subspaces V and V M respectively. ■ Theorem 6.1 (Occlusion Sensitivity Analysis with Deep Feature Augmentation Subspace). The natural extension to dealing with deep feature augmentation subspaces reintroduces Lemma 6.2 with the only change in the degree of responsibility calculation.\nẼ f |x = nm i (1 -M i )   1 - nc j σ 2 j V T V Mi  (18)\nProof. Analogous to Lemma 6.2. ■" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b31", "b22", "b18", "b10", "b23", "b34", "b22", "b37", "b42", "b22" ], "table_ref": [], "text": "All models used Imagenet-1k weights provided by torchvision. Grad-CAM [31] uses Captum [22] Guided-GradCam implementation. 
Grad-CAM is set to track the last convolutional layer on ResNet-50 [19] and the least BatchNormalization layer on ViT-B [11] and Swin-V2 [23]. Integrated Gradients [34] uses Captum [22] IntegratedGradients implementation. We use a null baseline, computing the integral on 128 steps with Gauss-Legendre quadrature. Occlusion Sensitivity Analysis [37,42] uses Captum [22] Occlusion, which is implemented with a sliding window of binary masks with 32 pixels and stride of 1. This is equivalent to approximately 9216 masks per image. Quantitative results for each interpreter on each model can be found at Tab. 2.\nTable 2. Metric scores on ImageNet for ResNet-50, ViT-B and Swin-V2. For deletion and minimal size, lower is better (↓). For insertion, higher is better (↑). Bold represents the best metric for a given model, while underline is the second best." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b31" ], "table_ref": [], "text": "Model Minimal Size (↓) Deletion (↓) Insertion (↑)\nGrad-CAM [31] ResNet " } ]
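For completeness, a hypothetical sketch of the gradient-guided mask-anchor sampling of Sec. 3.1.4 is given below. None of the identifiers come from a released implementation: grad stands for the channel-reduced absolute input gradient, the Manhattan-distance rejection is only one possible way to filter overlapping anchors, and the 64-pixel mask size mirrors the setting used in the experiments.

```python
# Hypothetical sketch of gradient-guided mask-anchor sampling (Sec. 3.1.4).
# Assumptions: `grad` is an HxW array of absolute input-gradient magnitudes
# (already reduced over channels); anchors closer than `min_dist` are rejected.
import numpy as np

def sample_mask_anchors(grad, n_masks=256, min_dist=16, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    h, w = grad.shape
    probs = grad.ravel() + 1e-12                      # avoid zero probabilities
    probs = probs / probs.sum()                       # multinomial over pixel importance
    candidates = rng.choice(h * w, size=min(4 * n_masks, h * w), replace=False, p=probs)
    anchors = []
    for idx in candidates:
        y, x = divmod(int(idx), w)
        # reject anchors that would overlap heavily with already accepted ones
        if all(abs(y - ay) + abs(x - ax) >= min_dist for ay, ax in anchors):
            anchors.append((y, x))
        if len(anchors) == n_masks:
            break
    return anchors

def square_mask(shape, anchor, l=64):
    # binary mask M with an l x l patch of zeros at the anchor; x ⊙ M occludes the patch
    h, w = shape
    y, x = anchor
    m = np.ones((h, w), dtype=np.float32)
    m[max(0, y - l // 2): y + l // 2, max(0, x - l // 2): x + l // 2] = 0.0
    return m
```

Each accepted anchor defines one binary mask M, and the corresponding occluded image x ⊙ M enters the subspace comparison exactly as any other mask would in Algorithm 1.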
Deep learning with neural networks has gained prominence in multiple life-critical applications such as medical diagnosis and autonomous vehicle accident investigation. However, concerns about model transparency and biases persist. Explainable methods are viewed as the solution to address these challenges. In this study, we introduce Occlusion Sensitivity Analysis with Deep Feature Augmentation Subspace (OSA-DAS), a novel perturbation-based interpretability approach for computer vision. While traditional perturbation methods rely solely on occlusions to explain model predictions, OSA-DAS extends standard occlusion sensitivity analysis by enabling the integration of diverse image augmentations. Distinctly, our method utilizes the output vector of a DNN to build low-dimensional subspaces within the deep feature vector space, offering a more precise explanation of the model prediction. The structural similarity between these subspaces encompasses the influence of diverse augmentations and occlusions. We test extensively on ImageNet-1k, and our class- and model-agnostic approach outperforms commonly used interpreters, setting it apart in the realm of explainable AI.
Occlusion Sensitivity Analysis with Augmentation Subspace Perturbation in Deep Feature Space
[ { "figure_caption": "Figure 11Figure1. OSA-DAS Overview: Subspace V is derived from the augmented input image, while VM originates from its occluded counterpart. Both are derived from the principal component analysis (PCA) of a DNN's deep feature vector. The orthogonal degree[14,15] between V and VM quantifies the occlusion's effect and shapes the explanation heatmap. Multiple occlusion augmentation subspaces are used to capture diverse facets of the input's representation. Their combined relationships offer a holistic view of occlusion impacts, producing a detailed heatmap.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "2 .2Mask anchor point selection via gradient sampling. The image gradient is produced on inference time, which is then used to sample anchor points. Anchor points too close to each other are filtered out. (anchors size is increased for visibility)", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "4 .Figure 3 .43Figure 3. Explanation heatmaps visualizations for ResNet-50. Regions in red indicate the prediction causes. The proposed method generate concise and smooth explanation heatmaps, more in line to the general features the model is attending than other techniques.", "figure_data": "", "figure_id": "fig_2", "figure_label": "43", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Simplified schema for computing the minimal size on a image-heatmap pair of 224 × 224 pixels with tolerance δ = 10 -2 .", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Explanation heatmaps visualizations for ViT-B and Swin-V2. Regions in red indicate the prediction causes.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Dependency of Overall metric with OSA-DAS hyperparameters. The horizontal axes are set to log scale for visibility.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Remark 6 (Minimal Size metric). Definition 6.6 is a viable explanation metric. It can be measured by Algorithm 2. In that sense, an explanation with low minimal size indicates the most substantial region for the model was reached.", "figure_data": "Algorithm 2 Minimal Size Metric ComputationRequire: x ← image, f ← model, H ← heatmap, s ←number of steps, δ ← tolerancefor i ← 1 to |x| in s steps doS ← top i pixels from x based on Hif ||f (x) -f (S) || 1 ≤ δ then return |S|/|x|end ifend for6.1.2 Occlusion Sensitivity Analysis", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" } ]
Pedro H V Valois; Koichiro Niinuma; Kazuhiro Fukui
[ { "formula_coordinates": [ 3, 348.06, 361.08, 71.81, 14.11 ], "formula_id": "formula_0", "formula_text": "m i=1 v i v T i ∈ R k×k" }, { "formula_coordinates": [ 3, 378.98, 520.02, 166.13, 11.72 ], "formula_id": "formula_1", "formula_text": "cos θ i = σ i V T V M ,(1)" }, { "formula_coordinates": [ 3, 351.78, 685.9, 150.41, 30.43 ], "formula_id": "formula_2", "formula_text": "r (M) = 1 - nc i σ i V T V M 2 ." }, { "formula_coordinates": [ 4, 308.86, 102.19, 227.41, 309.59 ], "formula_id": "formula_4", "formula_text": "Require: x ← image, f ←model, τ i ←i-th augmentation n m ← number of masks n a ← number of augmentations n c ← number of canonical angles l ← mask size V ← {} for i ← 1 to n a do x t ← τ i (x) Insert the normalized f (x t ) ∈ R k in V end for V ← P CA (V) H ← 0 for i ← 1 to n m do M ← mask (i, x.shape, l) V M ← {} x M ← x ⊙ M for j ← 1 to n a do x M t ← τ j x M Insert the normalized f x M t ∈ R k in V M end for V M ← P CA (V M ) r ← 1 - nc k σ k V T V M 2 H ← H + (1 -M) r end for return H H" }, { "formula_coordinates": [ 4, 350.71, 540.9, 194.41, 42.27 ], "formula_id": "formula_5", "formula_text": "V M ∈ R k×d of subspace V M from the set of the k- dimensional feature vectors, {f τ i x M } na i=1" }, { "formula_coordinates": [ 5, 322.47, 206.41, 222.65, 15.08 ], "formula_id": "formula_6", "formula_text": "E f |x = min |S| S : f (S) = f (x) , with |S| > 0, (3)" }, { "formula_coordinates": [ 6, 77.16, 191.98, 209.21, 22.31 ], "formula_id": "formula_7", "formula_text": "s min Ẽ f |x = |S| |x| , with f (S) ≈ f (x) ,(4)" }, { "formula_coordinates": [ 10, 308.86, 252.25, 236.25, 21.61 ], "formula_id": "formula_8", "formula_text": "ϕ (x 1 , x 2 , • • • , x n ) changes when a variable x i is altered, then ϕ depends on x i ." }, { "formula_coordinates": [ 10, 308.86, 353.12, 236.25, 68.61 ], "formula_id": "formula_9", "formula_text": "x i1 i2 ••• in (think a pixel) is a cause of f (x) if and only if there is a subset χ ⊂ x such that the following hold [9, p. 3]: 1. x i1 i2 ••• in ̸ ∈ χ 2." }, { "formula_coordinates": [ 10, 316.33, 429.2, 226.93, 52.01 ], "formula_id": "formula_10", "formula_text": "Let χ ′ ⊂ χ, m ∈ R, then χ ′ = m =⇒ ∆f = 0 3. output dependency to masking of x i1 i2 ••• in : Let m ∈ R, then χ = x i1 i2 ••• in = m =⇒ ∆f ̸ = 0" }, { "formula_coordinates": [ 10, 308.86, 517.26, 236.25, 79.52 ], "formula_id": "formula_11", "formula_text": "x i1 i2 ••• in is a cause of x [9, p. 3]. Definition 6.3 (Simplified Degree of Responsibility). 
If a subset χ ∈ x and entry x i1 i2 ••• in satisfy the definition of Singleton Cause from [9], then r x i1 i2 ••• in |x, f = 1 1 + k , (6" }, { "formula_coordinates": [ 10, 541.24, 581.53, 3.87, 8.64 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 11, 82.25, 143.43, 204.11, 15.08 ], "formula_id": "formula_13", "formula_text": "E f |x = min |S| S : f (S) = f (x) , |S| > 0 (7)" }, { "formula_coordinates": [ 11, 106.27, 411.71, 180.09, 12.2 ], "formula_id": "formula_14", "formula_text": "Ẽi1 i2 ••• in = p x i1 i2 ••• in ∈ x(8)" }, { "formula_coordinates": [ 11, 121.96, 453.93, 164.4, 22.46 ], "formula_id": "formula_15", "formula_text": "i1 i2 ••• in Ẽi1 i2 ••• in = 1(9)" }, { "formula_coordinates": [ 11, 71.07, 693.11, 211.14, 22.31 ], "formula_id": "formula_16", "formula_text": "s min Ẽ f |x = |S| |x| , with f (S) ≈ f (x)(10" }, { "formula_coordinates": [ 11, 386.43, 399.87, 79.93, 22.34 ], "formula_id": "formula_17", "formula_text": "r = 1 - p (x ⊙ M) p (x)" }, { "formula_coordinates": [ 11, 356.41, 577.58, 141.15, 8.77 ], "formula_id": "formula_18", "formula_text": "p (x -S) = 0 ⇐⇒ p (S) = p (x)" }, { "formula_coordinates": [ 12, 54.03, 91.35, 228.42, 22.34 ], "formula_id": "formula_19", "formula_text": "k ∝ p (S) p (S ′ ) ≡ p (x -S) |p (x) -p (x -S ′ ) | ≡ p (x ⊙ M) |p -p (x ⊙ M) | ," }, { "formula_coordinates": [ 12, 119.93, 184.15, 166.43, 106.98 ], "formula_id": "formula_20", "formula_text": "r = 1 1 + k = 1 1 + p(x⊙M) p-p(x⊙M) = p (x) -p (x ⊙ M) p (x) = 1 - p (x ⊙ M) p (x)(11)" }, { "formula_coordinates": [ 12, 84.25, 345.63, 160.72, 22.34 ], "formula_id": "formula_21", "formula_text": "ẼM f |x = (1 -M) 1 - p (x ⊙ M) p (x)" }, { "formula_coordinates": [ 12, 69.1, 484.84, 217.27, 26.68 ], "formula_id": "formula_22", "formula_text": "Ẽ f |x = i (1 -M i ) 1 - p (x ⊙ M i ) p (x)(12)" }, { "formula_coordinates": [ 12, 308.86, 90.23, 236.25, 195.79 ], "formula_id": "formula_23", "formula_text": "Require: x ← image, f ← model n ← number of masks l ← mask size p ← f (x) H ← 0 for i ← 1 to n do M ← mask (i, x.shape, l) ▷ any mask generator x M ← x ⊙ M p M ← f x M H ← H + (1 -M) 1 -p M p ▷ compose (Remark 5) end for return H H ▷ normalize explanation 6.1." }, { "formula_coordinates": [ 12, 308.86, 385.07, 236.25, 21.64 ], "formula_id": "formula_24", "formula_text": "(x) = v = (v i ) , i ∈ [m]." 
}, { "formula_coordinates": [ 12, 375.17, 419.61, 103.64, 23.25 ], "formula_id": "formula_25", "formula_text": "r = ||v -v (x ⊙ M) || p ||v|| p ," }, { "formula_coordinates": [ 12, 377.74, 509.57, 167.37, 77.38 ], "formula_id": "formula_26", "formula_text": "r = 1 - p (x ⊙ M) p (x) = p (x) -p (x ⊙ M) p (x) = f (x) -f (x ⊙ M) f (x)(13)" }, { "formula_coordinates": [ 12, 379.11, 637.68, 166, 78.3 ], "formula_id": "formula_27", "formula_text": "f (x) -f (x ⊙ M) f (x) = |f (x) -f (x ⊙ M) | |f (x) | = ||v -v (x ⊙ M) || p ||v|| p(14)" }, { "formula_coordinates": [ 13, 68.21, 238.23, 218.15, 26.68 ], "formula_id": "formula_28", "formula_text": "Ẽ f |x = i (1 -M i ) ||v -v (x ⊙ M) || p ||v|| p(15)" }, { "formula_coordinates": [ 13, 95.24, 647.28, 191.12, 30.44 ], "formula_id": "formula_29", "formula_text": "r (M) = 1 - nc i σ i V T V M 2(16)" }, { "formula_coordinates": [ 13, 364.14, 93.56, 180.98, 167.51 ], "formula_id": "formula_30", "formula_text": "r = ||v -v (x ⊙ M) || 2 ||v|| 2 ≡ |V -V M | |V| = |V -V M | |V -0| = |V -V M | 1 = |V -V M | = 1 -simi (V, V M ) = 1 - nc i σ i V T V M 2 (17)" }, { "formula_coordinates": [ 13, 321.37, 405.67, 223.75, 45.81 ], "formula_id": "formula_31", "formula_text": "Ẽ f |x = nm i (1 -M i )   1 - nc j σ 2 j V T V Mi  (18)" } ]
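The formula block above defines a responsibility score from the canonical angles between two feature subspaces (cf. formula (1), cos θ_i = σ_i(V^T V_M), formula (16), r(M) = 1 − Σ_i σ_i(V^T V_M)^2, and the PCA-based algorithm in the block). A minimal numpy sketch of that computation is given below; the subspace construction, tensor shapes, and toy data are illustrative assumptions, and the 1/n_c normalization used here is one possible reading since the extracted formulas do not settle whether the squared-cosine sum is normalized.

import numpy as np

def principal_subspace(features: np.ndarray, d: int) -> np.ndarray:
    """Orthonormal basis (k x d) of the top-d principal subspace of row-wise
    feature vectors (n x k), mirroring the PCA step in the algorithm above."""
    centered = features - features.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:d].T

def masked_responsibility(V: np.ndarray, V_M: np.ndarray, n_c: int) -> float:
    """Mean of 1 - cos^2(theta_i) over the first n_c canonical angles between the
    subspaces spanned by V and V_M; the sigma_i are the singular values of V^T V_M.
    Drop the 1/n_c averaging to match the unnormalized form of formula (16) literally."""
    sigma = np.linalg.svd(V.T @ V_M, compute_uv=False)
    return 1.0 - float(np.mean(sigma[:n_c] ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats_full = rng.normal(size=(32, 128))                         # toy features of augmented full images
    feats_masked = feats_full + 0.1 * rng.normal(size=(32, 128))    # toy features of masked images
    V = principal_subspace(feats_full, d=4)
    V_M = principal_subspace(feats_masked, d=4)
    print("responsibility r(M):", masked_responsibility(V, V_M, n_c=4))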
2023-11-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b14", "b16", "b15", "b22", "b4", "b8", "b21" ], "table_ref": [], "text": "As the popularity of social media continues to grow, the spread of offensive content in these platforms has increased substantially, motivating companies to invest heavily in content moderation strategies and robust models to detect offensive content. We have observed a growing interest in this topic, evidenced by popular shared tasks at SemEval (Basile et al., 2019) and other venues. Apart from a few notable exceptions (Mandl et al., 2020), most of the work on this topic has not addressed the question of transliteration and code-mixing, two common phenomena in social media.\nCode-mixing is the phenomenon of embedding linguistic units such as phrases, words, or morphemes of one language into another language (Myers-Scotton, 1997;Muysken et al., 2000). Code-mixed texts often feature transliterations where speakers use an alternative script to the language's official or standard script by mapping from one writing system (e.g., Hindi and its original Devanagari script) to another one (e.g., Latin transliteration of Hindi) based on phonetic similarity. Transliterated texts are widely used in social media platforms as transliteration allows users to write in their native language using a script that may not be supported by the platform and/or using Latin-based default keyboards. Furthermore, the use of transliteration also allows users to easily switch between languages with otherwise different scripts (e.g., English and Hindi). As discussed in a recent survey (Winata et al., 2022), however, processing code-mixing datasets is a challenge that hinders performance in a variety of NLP tasks, thus deserving special attention.\nCode-mixing and transliteration are common in various languages, including Bangla (Das and Gambäck, 2015;Jamatia et al., 2015). Related work on Bangla offensive language identification (Wadud et al., 2021), however, has mostly focused on standard Bangla script. As such, the performance of offensive language identification models on codemixing and transliterated Bangla remains largely unexplored. To address this shortcoming, we create TB-OLID, a manually annotated transliterated Bangla offensive language dataset. TB-OLID was annotated following the popular OLID hierarchical taxonomy (Zampieri et al., 2019a), allowing crosslingual experiments. To the best of our knowledge, the dataset is the first of its kind for Bangla, opening exciting new avenues for future research.\nThe main contributions of this paper are as follows:\n1. We introduce TB-OLID, an offensive language identification corpus containing 5,000 Facebook comments.1 2. We provide a comparative analysis of various machine learning models trained or fine-tuned on TB-OLID. 1: Examples from TB-OLID in Bangla along with an English translation. The labels included are C (transliterated code-mixed), T (transliterated Bangla), O (offensive), N (not-offensive), I (offensive posts targeted at an individual), G (offensive posts targeted at a group), and U (untargeted offensive posts)." }, { "figure_ref": [], "heading": "Data", "publication_ref": [ "b11", "b10", "b25", "b18" ], "table_ref": [], "text": "Data Collection We collect data from Facebook, the most popular social media platform in Bangladesh. 
We compile a list of the most popular Facebook pages in Bangladesh using Fanpage Karma2 and scraped comments from each of the top 100 most followed Facebook pages using the publicly available Facebook scraper tool. 3 This results in an initial corpus of over 100,000 comments. We exclude all comments not written with non-Latin script. We search the corpus using keywords for transliterated hate speech and offensive language. We select keywords from the list of 175 offensive Bangla terms by Karim et al. (2021). As the dataset by Karim et al. (2020) contains standard Bangla, we convert keywords into transliterated Bangla using the Indic-transliteration tool. 4 Using these keywords we randomly select a set of 5,000 comments for annotation.\nAnnotation Guidelines We prepare the TB-OLID annotation guidelines containing labels and examples. The first step is to label whether a comment is transliterated Bangla or transliterated codemixed Bangla. If the comment contains at least one English word along with other Bangla transliterated words, we consider it as transliterated code-mixed. Next, we consider the offensive vs. non-offensive distinction and, in the case of offensive posts, its target or lack thereof. Table 1 presents six annotated instances included in TB-OLID.\nWe adopt the guidelines introduced by the popular OLID annotation taxonomy (Zampieri et al., 2019a) used in the OffensEval shared task (Zampieri et al., 2019b) and replicated in multiple other datasets in languages such as Danish (Sigurbergsson and Derczynski, 2020), Greek (Pitenis et al., 2020), Marathi (Gaikwad et al., 2021;Zampieri et al., 2022), Portuguese (Sigurbergsson and Derczynski, 2020), Sinhala (Ranasinghe et al., 2022) and Turkish (Çöltekin, 2020). We choose OLID due to the flexibility provided by its threelevel hierarchical taxonomy that allows us to model different types of offensive and abusive content (e.g., hate speech, cyberbulling, etc.) using a single taxonomy. OLID's taxonomy considers whether an instance is offensive (level A), whether an offensive post is targeted or untargeted (level B), and what is the target of an offensive post (level C). As the second level of the TB-OLID annotation we consider OLID level A as follows.\n• Offensive: Comments that contain any form of non-acceptable language or a targeted offense, including insults, threats, and posts containing profane language • Non-offensive: Comments that do not contain any offensive language Finally, the third level of the TB-OLID annotation merges OLIDs level B and C. We label whether a post is untargeted or, when targeted, whether it is labeled at an individual or a group as follows:\n• Individual: Comments targeting any individ-ual, such as mentioning a person with his/her name, unnamed participants, or famous personality. • Group: Comments targeting any group of people of common characteristics, religion, gender, etc. • Untargeted: Comments containing unacceptably strong language or profanities that are not targeted.\nEnsuring Annotation Quality Three annotators working on this project are tasked to annotate TB-OLID. They are PhD students in Computing aged 22-28 years old, 1 male and 2 female, all native speakers of Bangla and fluent speakers of English. The first step of the annotation process involves a pilot annotation study, where 300 comments are assigned to all three annotators to calculate initial agreement and refine the annotation guidelines according to their feedback. 
After this pilot experiment, we annotate an additional 4,700 Facebook comments totaling 5,000 instances which are subsequently split into 4,000 and 1,000 instances for training and testing, respectively. The instances in TB-OLID are annotated by at least two annotators, with the third one serving as adjudicator. We calculate pairwise inter-annotator agreement on 1,000 instances using Cohen's Kappa, and we report Cohen's Kappa score of 0.77 and 0.72 for levels 1 (code-mixed vs. transliterated) and 2 (offensive vs. non-offensive), which is generally considered substantial agreement. We report Cohen's Kappa score of 0.66 on level 3, considered moderate agreement." }, { "figure_ref": [], "heading": "Dataset Statistics", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We calculated the frequency of each label in the dataset namely code-mixed and transliterated, offensive and non-offensive, targeted and untargeted, and target types in the dataset. The dataset statistics are presented in Table 2. Finally, we run an analysis of the code-mixed data using ad-hoc Python scripts. We observe that English is by far the most common language included in the code-mixed instances mixed with Bangla followed by Hindi. We report that 38.42% of all tokens in the code-mixed (C) class are English." }, { "figure_ref": [], "heading": "Level Label", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Baselines and Models", "publication_ref": [ "b9", "b6" ], "table_ref": [], "text": "Baselines We report the results of three baselines:\n(1) Google's Perspective API 5 , a free API developed to detect offensive comments widely used as a baseline in this task (Kaati et al., 2022;Fortuna et al., 2020); (2) prompting GPT 3.5 turbo providing the model with TB-OLID's annotation guidelines; and (3) a majority class baseline. Due to the API's limitations, Perspective API was used only for offensive language identification and not for for target classification." }, { "figure_ref": [], "heading": "General Models", "publication_ref": [ "b5", "b13", "b12", "b5", "b3", "b1", "b19" ], "table_ref": [], "text": "We experiment with pre-trained language models fine-tuned on TB-OLID. As our dataset is transliterated Bangla and contains English code-mixed, we experiment with BERT (Devlin et al., 2019), roBERTa (Liu et al., 2020) which are trained on English, and Bangla-BERT (Kowsher et al., 2022), which is trained on Bangla. We also use cross-lingual models such as mBERT (Devlin et al., 2019) and xlm-roBERTa (Conneau et al., 2020) which are trained in multiple languages.\nTask-specific Models We also experiment with task-specific fined-tuned models like HateBERT (Caselli et al., 2021), and fBERT (Sarkar et al., 2021). These models were also further fined-tuned on TB-OLID." }, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "We use F1-score to evaluate the performance of all models. The training and test sets are obtained by the aforementioned 80-20 random split on the entire TB-OLID dataset. We further subdivide the test set into transliterated code-mixed (C), transliterated (T), and all instances. We present results for offensive text classification (offensive vs. nonoffensive) in Table 3. We observe that the standard BERT model performs well over the baselines, whereas the Bangla-BERT model performs less well. BERT achieves F1-score of 0.71, whereas Bangla-BERT obtains F1-score of 0.42. 
We believe this is due to the fact that many instances in the dataset are in Latin script, which means that BanglaBERT frequently struggles with out-of-vocabulary tokens. The low performance of Bangla-BERT in this task requires further examination. Models pre-trained specifically on offensive language identification perform very well with fBERT and Hate-BERT coming out on top, both with an F1 score of 0.72. Finally, we observe that the top-5 performing models perform better on the code-mixing data compared to transliterated data. This is likely due to the heavy presence of English words in the code-mixing data where we observe the presence of 38% of English words. Finally, Table 4 Overall, target classification is a more challenging task than offensive language identification due to the presence of three classes instead of two. Therefore, all results are substantially lower for this task. HateBERT performs better than all other mod-els with an F1 score of 0.68. roBERTa achieved more competitive performance for target classification than for offensive language identification whereas Bangla-BERT did not perform well in both tasks. Finally, similar to the previous task, the bestperforming models achieved higher F1 scores on the code-mixed data than on the transliterated data.\nOne key observation is that the transformerbased models do not perform very well, since most of them are not pre-trained on transliterated Bangla. Among the models that we experiment with, only xlm-roBERTa is pre-trained with a comparatively small set of Romanized Bangla. However, the lack of any standard rules for spelling in transliterated Bangla makes TB-OLID very challenging." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [ "b5", "b13", "b12", "b5", "b3", "b1", "b19" ], "table_ref": [], "text": "In this work, we introduced TB-OLID, a transliterated Bangla offensive language dataset containing 5,000 instances retrieved from Facebook. Three native speakers of Bangla have annotated the dataset with respect to the presence of code-mixing, the presence of offensive language, and its target according to the OLID taxonomy. TB-OLID opens exciting new avenues for research on offensive language identification in Bangla.\nWe performed experiments with multiple models such as general monolingual models like BERT (Devlin et al., 2019), roBERTa (Liu et al., 2020) and Bangla-BERT (Kowsher et al., 2022); crosslingual models like mBERT (Devlin et al., 2019) and xlm-roBERTa (Conneau et al., 2020); and models fine-tuned for offensive language identification like HateBERT (Caselli et al., 2021), and fBERT (Sarkar et al., 2021)). The best results were obtained by the task-specific models.\nIn future work, we would like to extend the TB-OLID dataset and annotate the offense type (e.g., religious offense, political offense, etc.). This would help us identify the common targets in various platforms. Furthermore, we would like to pre-train and fine-tune a Bangla transliterated BERT model to see how it performs on TB-OLID. Finally, in future work, we would like to evaluate the performance of other recently released large language models (LLMs) (e.g., GPT 4.0, Llama 2) on TB-OLID. The first baseline results using GPT 3.5 indicate that general-purpose LLMs still struggle with the transliterated and code-mixed content presented in TB-OLID." 
}, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "We thank the anonymous workshop reviewers for their insightful feedback. Antonios Anastasopoulos is generously supported by NSF award IIS-2125466." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "The generation and annotation procedure of TB-OLID adheres to the ACL Ethics Policy and seeks to make a valuable contribution to the realm of online safety. The technology in question possesses the potential to serve as a beneficial instrument for the moderation of online content, thereby facilitating the creation of safer digital environments. However, it is imperative to exercise caution and implement stringent regulations to prevent its potential misuse for purposes such as monitoring or censorship." } ]
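The experiments above fine-tune general and task-specific transformer encoders on TB-OLID and score them with F1. The sketch below shows one minimal way to reproduce the offensive vs. non-offensive setup with Hugging Face Transformers; the checkpoint name, hyper-parameters, and the tbolid_train.csv / tbolid_test.csv files with `text` and `label` columns (0 = not-offensive, 1 = offensive) are illustrative assumptions, not the authors' exact pipeline.

from datasets import load_dataset
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"  # swap in an fBERT/HateBERT checkpoint to match the paper

def macro_f1(eval_pred):
    # eval_pred packs (logits, gold labels) from the Trainer's evaluation loop.
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    return {"macro_f1": f1_score(labels, preds, average="macro")}

def main():
    # Hypothetical CSV export of TB-OLID level-2 annotations.
    data = load_dataset("csv", data_files={"train": "tbolid_train.csv",
                                           "test": "tbolid_test.csv"})
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

    data = data.map(tokenize, batched=True)
    data = data.rename_column("label", "labels")

    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
    args = TrainingArguments(output_dir="tbolid-offensive",
                             num_train_epochs=3,
                             per_device_train_batch_size=16,
                             learning_rate=2e-5)
    trainer = Trainer(model=model, args=args,
                      train_dataset=data["train"], eval_dataset=data["test"],
                      compute_metrics=macro_f1)
    trainer.train()
    print(trainer.evaluate())

if __name__ == "__main__":
    main()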
Identifying offensive content in social media is vital for creating safe online communities. Several recent studies have addressed this problem by creating datasets for various languages. In this paper, we explore offensive language identification in texts with transliterations and code-mixing, linguistic phenomena common in multilingual societies, and a known challenge for NLP systems. We introduce TB-OLID, a transliterated Bangla offensive language dataset containing 5,000 manually annotated comments. We train and fine-tune machine learning models on TB-OLID, and we evaluate their results on this dataset. Our results show that English pre-trained transformer-based models, such as fBERT and HateBERT, achieve the best performance on this dataset.
Offensive Language Identification in Transliterated and Code-Mixed Bangla
[ { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Offensive Language Identification -F1-score of all models trained and/or fine-tuned on TB-OLID. We report results on the transliterated code-mixed (C), transliterated (T), and All test set. Baselines in italics.", "figure_data": "ModelCTAllfBERT0.73 0.70 0.72HateBERT0.74 0.69 0.72BERT0.73 0.68 0.71m-BERT0.70 0.68 0.69GPT 3.50.65 0.64 0.64Majority Class Baseline 0.57 0.57 0.57Perspective API0.53 0.50 0.51Bangla-BERT0.42 0.42 0.42xlm-roBERTa0.40 0.41 0.41roBERTa0.41 0.41 0.41", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "presents the results of target type classification (individual, group, or untargeted).", "figure_data": "ModelCTAllHateBERT0.69 0.66 0.68m-BERT0.72 0.64 0.67BERT0.72 0.64 0.67fBERT0.66 0.64 0.65roBERTa0.73 0.60 0.65GPT 3.50.39 0.46 0.43Majority Class Baseline 0.48 0.63 0.55xlm-roBERTa0.61 0.51 0.55Bangla-BERT0.59 0.47 0.51Table 4: Target Classification -F1-score of all modelstrained and/or fine-tuned on TB-OLID. We report resultson the transliterated code-mixed (C), transliterated (T),and All test sets. Baselines in italics.", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" } ]
Md Nishat Raihan; Umma Hani Tanmoy; Anika Binte Islam; Kai North; Tharindu Ranasinghe; Antonios Anastasopoulos; Marcos Zampieri
[ { "authors": "Cristina Valerio Basile; Elisabetta Bosco; Debora Fersini; Viviana Nozza; Francisco Patti; Manuel Rangel; Paolo Pardo; Manuela Rosso; Sanguinetti", "journal": "", "ref_id": "b0", "title": "Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter", "year": "2019" }, { "authors": "Tommaso Caselli; Valerio Basile; Jelena Mitrović; Michael Granitzer", "journal": "", "ref_id": "b1", "title": "Hatebert: Retraining bert for abusive language detection in english", "year": "2021" }, { "authors": "Çagrı Çöltekin", "journal": "", "ref_id": "b2", "title": "A Corpus of Turkish Offensive Language on Social Media", "year": "2020" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Édouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b3", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Amitava Das; Björn Gambäck", "journal": "Revue TAL -Association pour le Traitement Automatique des Langues (ATALA)", "ref_id": "b4", "title": "Code-mixing in social media text: The last language identification frontier", "year": "2015" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b5", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "Paula Fortuna; Juan Soler; Leo Wanner", "journal": "", "ref_id": "b6", "title": "Toxic, hateful, offensive, or abusive? what are we really classifying? an empirical analysis of hate speech datasets", "year": "2020" }, { "authors": "Sampatrao Saurabh; Tharindu Gaikwad; Marcos Ranasinghe; Christopher Zampieri; Homan", "journal": "", "ref_id": "b7", "title": "Cross-lingual offensive language identification for low resource languages: The case of marathi", "year": "2021" }, { "authors": "Anupam Jamatia; Björn Gambäck; Amitava Das", "journal": "", "ref_id": "b8", "title": "Part-of-speech tagging for code-mixed englishhindi twitter and facebook chat messages", "year": "2015" }, { "authors": "Lisa Kaati; Amendra Shrestha; Nazar Akrami", "journal": "", "ref_id": "b9", "title": "A machine learning approach to identify toxic language in the online space", "year": "2022" }, { "authors": "Md Rezaul Karim; Bharathi Raja Chakravarthi; John P Mccrae; Michael Cochez", "journal": "", "ref_id": "b10", "title": "Classification benchmarks for under-resourced bengali language based on multichannel convolutional-lstm network", "year": "2020" }, { "authors": "Md Rezaul Karim; Sumon Kanti Dey; Tanhim Islam; Sagor Sarker; Mehadi Hasan Menon; Kabir Hossain; Md Azam Hossain; Stefan Decker", "journal": "", "ref_id": "b11", "title": "Deephateexplainer: Explainable hate speech detection in under-resourced bengali language", "year": "2021" }, { "authors": "Abdullah As Kowsher; Sami; Jahan Nusrat; Mohammad Prottasha; Pranab Shamsul Arefin; Takeshi Kumar Dhar; Koshiba", "journal": "IEEE Access", "ref_id": "b12", "title": "Bangla-bert: transformer-based efficient model for transfer learning and language understanding", "year": "2022" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b13", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2020" }, { "authors": "Thomas Mandl; Sandip Modha; Anand Kumar; M ; Bharathi Raja; 
Chakravarthi ", "journal": "", "ref_id": "b14", "title": "Overview of the hasoc track at fire 2020: Hate speech and offensive language identification in tamil, malayalam, hindi, english and german", "year": "2020" }, { "authors": "Pieter Muysken", "journal": "Cambridge University Press", "ref_id": "b15", "title": "Bilingual speech: A typology of code-mixing", "year": "2000" }, { "authors": "Carol Myers-Scotton", "journal": "Oxford University Press", "ref_id": "b16", "title": "Duelling languages: Grammatical structure in codeswitching", "year": "1997" }, { "authors": "Zesis Pitenis; Marcos Zampieri; Tharindu Ranasinghe", "journal": "", "ref_id": "b17", "title": "Offensive language identification in greek", "year": "2020" }, { "authors": "Tharindu Ranasinghe; Isuri Anuradha; Damith Premasiri; Kanishka Silva; Hansi Hettiarachchi; Lasitha Uyangodage; Marcos Zampieri", "journal": "", "ref_id": "b18", "title": "SOLD: Sinhala offensive language dataset", "year": "2022" }, { "authors": "Diptanu Sarkar; Marcos Zampieri; Tharindu Ranasinghe; Alexander Ororbia", "journal": "", "ref_id": "b19", "title": "fbert: A neural transformer for identifying offensive content", "year": "2021" }, { "authors": "Gudbjartur Ingi; Sigurbergsson ; Leon Derczynski", "journal": "", "ref_id": "b20", "title": "Offensive Language and Hate Speech Detection for Danish", "year": "2020" }, { "authors": "Md Anwar; Hussen Wadud; Md ; Abdul Hamid; Muhammad Mostafa Monowar; Atif Alamri", "journal": "Ieee Access", "ref_id": "b21", "title": "Lboost: Identifying offensive texts from social media post in bengali", "year": "2021" }, { "authors": "Genta Winata; Alham Fikri Aji; Zheng Xin Yong; Thamar Solorio", "journal": "", "ref_id": "b22", "title": "The decades progress on codeswitching research in NLP: A systematic survey on trends and challenges", "year": "2022" }, { "authors": "Marcos Zampieri; Shervin Malmasi; Preslav Nakov; Sara Rosenthal; Noura Farra; Ritesh Kumar", "journal": "", "ref_id": "b23", "title": "Predicting the type and target of offensive posts in social media", "year": "2019" }, { "authors": "Marcos Zampieri; Shervin Malmasi; Preslav Nakov; Sara Rosenthal; Noura Farra; Ritesh Kumar", "journal": "", "ref_id": "b24", "title": "SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (Of-fensEval)", "year": "2019" }, { "authors": "Marcos Zampieri; Tharindu Ranasinghe; Mrinal Chaudhari; Saurabh Gaikwad; Prajwal Krishna; Mayuresh Nene; Shrunali Paygude", "journal": "Social Network Analysis and Mining", "ref_id": "b25", "title": "Predicting the type and target of offensive social media posts in Marathi", "year": "2022" } ]
[]
2023-11-25
[ { "figure_ref": [], "heading": "Double-Flow-based Steganography without", "publication_ref": [], "table_ref": [], "text": "Embedding for Image-to-Image Hiding Bingbing Song, Derui Wang, Member, IEEE, Tianwei Zhang, Member, IEEE, Renyang Liu, Yu Lin* and Wei Zhou* Member, IEEE, Abstract-As an emerging concept, steganography without embedding (SWE) hides a secret message without directly embedding it into a cover. Thus, SWE has the unique advantage of being immune to typical steganalysis methods and can better protect the secret message from being exposed. However, existing SWE methods are generally criticized for their poor payload capacity and low fidelity of recovered secret messages. In this paper, we propose a novel steganography-without-embedding technique, named DF-SWE, which addresses the aforementioned drawbacks and produces diverse and natural stego images. Specifically, DF-SWE employs a reversible circulation of double flow to build a reversible bijective transformation between the secret image and the generated stego image. Hence, it provides a way to directly generate stego images from secret images without a cover image. Besides leveraging the invertible property, DF-SWE can invert a secret image from a generated stego image in a nearly lossless manner and increases the fidelity of extracted secret images. To the best of our knowledge, DF-SWE is the first SWE method that can hide large images and multiple images into one image with the same size, significantly enhancing the payload capacity. According to the experimental results, the payload capacity of DF-SWE achieves 24 -72BP P is 8000 ∼ 16000 times compared to its competitors while producing diverse images to minimize the exposure risk. Importantly, DF-SWE can be applied in the steganography of secret images in various domains without requiring training data from the corresponding domains. This domain-agnostic property suggests that DF-SWE can 1) be applied to hiding private data and 2) be deployed in resourcelimited systems.\nIndex Terms-Image steganography, Steganography without embedding, Encryption, Flow-based Model, Security." }, { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b0", "b3", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b10" ], "table_ref": [ "tab_0" ], "text": "D EEP Image steganography aims to concealing secret messages into cover images imperceptibly. The secret messages are only allowed to be recovered by the informed receiver while being invisible to others, which secures its transmission without being noticed [1,2]. Henceforth, image steganography has been applied in various domains, such as information security [3], data communication [1] and copyright protection [4].\nIn the image steganography task, the primary requirements converge to capacity, extraction error, and security. The embedded steganography (ES) generally select an existing image as a cover and then embed secret information into the cover image with a slight modification. However, these traditional ES steganography methods [4,5] have limited payload capacity. To further increase payload capacity, deep learning-based ES steganography methods have been proposed recently to achieve both acceptable imperceptibility and small extraction error of secret message [6]. However, since all these ES methods need to modify the cover image, the modified cover image always contains a subtle pseudo-shadow of the secret message, especially under a high hiding payload. 
This leads to potential risks of exposing the secret message through compromising the cover image using steganalysis tools.\nInstead of directly embedding the secret message into a cover image, steganography without embedding (SWE) is an emerging concept of hiding a secret message without a cover image, which eliminates the modification traces observed in ES methods. Thus, SWE has the unique advantage of reducing the risk of secret messages breach from typical steganalysis [7]. Although current SWE approaches have achieved remarkable results, there still exist some fatal drawbacks. There are two types of SWE techniques. 1) Mapping-based methods transform the secret message into a sequence of image hashes selected from an existing image set [8,9]. These mappingbased methods require the construction of fixed image mapping rules, which do not accommodate the dynamic growth of images. 2) Alternatively, generating-based methods synthesize images by passing the secret message into a deep generator network, e.g., generative adversarial network (GAN) [10,11]. However, due to the instability of the generative network and the irreversibility of the generative process, a critical weakness is that the payload capacity is extremely limited, especially for hiding large secret images. As shown in Table I, the maximum hiding capacity of the existing works without embedding is 4, and the hiding type can only be bit. In order to realize image-to-image steganography without embedding, the hiding capacity must be at least 24 BPP. If for multi-image hiding, it needs a higher hiding capacity. Moreover, it is difficult to minimize the message extraction error while keeping the visual quality of the generated stego images [11].\nIn this paper, we propose a novel double-flow-based steganography without embedding (DF-SWE) approach to tackle the above issues of current SWE methods. DF-SWE builds a reversible bijective transformation between the secret images and the generated stego images via the invertibility of the flow model and the reversible circulation of double flow. Our approach significantly enhances the payload capacity and can hide large images without cover images. Furthermore, " }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "Most existing steganographic approaches are embedded steganography (ES), which embeds the secret information imperceptibly into a cover image by slightly modifying its content. However, the modification traces of the embedded steganography will cause some distortion in the stego image, especially when embedding color image data that usually contain thousands of bits, making them easily detected by steganalysis. Steganography without embedding is proposed to improve security, which doesn't need to modify the cover image." }, { "figure_ref": [], "heading": "A. Embedded steganography", "publication_ref": [ "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b5", "b24", "b25", "b26", "b27", "b28", "b4", "b29", "b30" ], "table_ref": [], "text": "Traditional ES methods: The Least Significant Bits (LSB) [16] only modified the information of the last few bits, so it would not cause visible changes in the pixel values of the picture. In addition, LSB also had many variations [17,18]. For example, an information hiding technique [19] has been proposed by utilizing the least significant bits (LSBs) of each pixel of grayscale image adopting XOR features of the host image pixels. 
Besides, HUGO [20] was proposed, and the main design principle was to minimize the properly defined distortion through an efficient coding algorithm. There are steganographic algorithms not only in the spatial domain, but also in the frequency domain, such as J-UNIWARD [21], UED [22], I-UED [23], UERD [24].\nDeep learning-based ES methods: Baluja [6] proposed an autoencoder architecture placing a full-size image in another image of the same size. After this, Wu et al. [25] proposed an encoder-decoder architecture, where the cover image and the secret image were concatenated using Separable Convolution (SCR) with residual block. Besides, Zhang et al. [26] combined the method of adversarial examples for steganography. Replacing encoder-decoder architecture. CycleGAN-based methods [27,28] had proposed for image steganography. Furthermore, Zhang et al. [29] proposed IS-GAN, which improved the invisibility by hiding the secret image only in the Y channel of the cover image. Wang et al. [5] designed a multi-level feature fusion procedure based on GAN to capture texture information and semantic features. Recently, Invertible Network was proposed for image hiding. Due to the reversible nature of Invertible Network, HiNet [30] significantly improves the restored quality of secret image. Based on this, DeepMIH [31] was proposed to hide Multiple Image and achieved excellent performance compared with ES methods." }, { "figure_ref": [], "heading": "B. Steganography without embedding (SWE)", "publication_ref": [ "b7", "b8", "b31", "b32", "b33", "b34", "b9", "b11", "b35", "b36", "b10", "b37", "b38", "b14" ], "table_ref": [], "text": "Mapping-based SWE methods: In 2016, a bag-of-words (BOW) model was proposed to construct the mapping relationship between the dictionary and the words [8]. Furthermore, Zheng et al. [9] proposed robust image hashing, which calculated the scale-invariant feature transform (SIFT) points in 9 sub-images respectively. Cao et al. [32] divided the pixel values from 0 to 255 into 16 intervals, and built a mapping relationship with the bit string of length 4. After this, Qiu et al. [33] first hashed the local binary pattern (LBP) features of the cover image and the secret image, and then the hashes were matched to create the hidden image. Besides, a CIS algorithm based on DenseNet feature mapping was proposed [34], which introduced deep learning to extract highdimensional CNN features mapped into hash sequences. Based on GAN, a Star Generative Adversarial Network (StarGAN) was proposed to construct a high-quality stego image with the mapping relationship [35].\nGenerating-based SWE methods: Stego-ACGAN was proposed to generate new meaning normal images for hiding and extracting information [10]. In 2018, Hu et al. [12] mapped secret information into noise vectors and used DCGAN to generate stego image. After this, Zhu et al. [36] proposed a coverless image steganography method based on the orthogonal generative adversarial network, adding constraints to the objective function to make the model training more stable. For improving the steganography capacity and image quality, A GAN steganography without embedding that combines adversarial training techniques was proposed [37]. And then, the attention-GAN model was proposed for steganography without embedding [11]. Besides, Liu et al. [38] proposed IDEAS based on GAN, which disentangled an image into two representations for structure and texture and utilized structure representation to improve secret message extraction. 
Different from GAN-based approaches, Generative Steganographic Flow (GSF) [39] built a reversible bijective mapping between the input secret data and the generated stego images and treated stego image generation and secret data recovery as a single invertible transformation. After this, Zhou et al. [15] proposed a secret-to-image reversible transformation (S2IRT), where a large number of elements of a given secret message are arranged into the corresponding positions to construct a high-dimensional vector. This vector is then mapped to a generated image." }, { "figure_ref": [], "heading": "C. Comparison with DF-SWE", "publication_ref": [], "table_ref": [], "text": "Unlike those SWE methods, we propose DF-SWE to hide secret images rather than a limited number of binary bits, bringing higher hiding capacity without losing the naturalness of the stego images. Meanwhile, we build a reversible bijective transformation between the secret images and the generated stego images, reducing the extraction error of the secret images." }, { "figure_ref": [], "heading": "III. BACKBONE NETWORK", "publication_ref": [ "b39" ], "table_ref": [], "text": "In this paper, we propose a double-flow-based model to build a reversible bijective transformation between secret images and the generated stego image. Our flow-based backbone network relies on Glow [40]. Flow-based models are commonly used in image generation tasks because they learn a bijective mapping between a latent space with a simple distribution and the image space with a complex distribution.
In flow-based generative models, the generative process is defined as:
z \sim p_{\theta}(z), \quad (1)
x \sim g_{\theta}(z), \quad (2)
where z is the latent variable and p_{\theta}(z) is usually a multivariate Gaussian distribution N(z; 0, I). The function g_{\theta}(\cdot) is invertible, such that given a datapoint x, latent-variable inference is done by z = f_{\theta}(x) = g_{\theta}^{-1}(x). For brevity, we will omit the subscript \theta from f_{\theta} and g_{\theta}. The function f is composed of a sequence of transformations:
f = f_1 \circ f_2 \circ \cdots \circ f_K,
such that the relationship between x and z can be written as:
x \xleftrightarrow{f_1} h_1 \xleftrightarrow{f_2} h_2 \cdots \xleftrightarrow{f_K} z, \quad (3)
where f_i is a reversible transformation function and h_i is the output of f_i. Under the change of variables of Equation (2), the log-density of the model for a given datapoint can be written as
\log p_{\theta}(x) = \log p_{\theta}(z) + \log \left| \det\left( dz/dx \right) \right|. \quad (4)
The network architecture of Glow comprises three modules, namely the squeeze module, the flow module, and the split module. The squeeze module is used to downsample the feature maps, and the flow module is used for feature processing. The split module divides the image features into halves along the channel dimension, and half of them are outputted as the latent tensor." }, { "figure_ref": [], "heading": "IV. METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "DF-SWE builds a reversible circulation of double flow to generate stego images and hide secret images. The reversible circulation of double flow relies on three strategies, i.e., prior knowledge sampling, high-dimensional space replacement, and distribution consistency transformation. In the following sections, we first present the problem definition and threat model. Based on this, we explain the reversible circulation of double flow and the hiding and restoring processes of DF-SWE in detail." }, { "figure_ref": [], "heading": "A.
Problem definition and threat model", "publication_ref": [ "b39" ], "table_ref": [], "text": "Given a set I se := {I se } k of k secret images, a SWE encoder f se (•) : I se → z se transforms the secret images into random noises z se , and z se t ←→ z ′ st is a transformation from z se to z ′ st for secret image hiding. In closing, a generator f st (•) : z st → Ĩ se produces a stego image Ĩ se from the noise z ′ st . To maximize the reconstruction performance of the secret images, we propose using an invertible function for both f se (•) and f st (•). That is, after taking the inverse f -1 st (•) : Ĩst → z se and a transformation z st t -1 ←→ z ′ se from z st to z ′ se , the secret images can be revealed through f -1 se (•) : z st → I se . In our threat model, the attacker has access to a public training dataset for training the steganography model. During the attacking phase, an attacker gathers the secret images and generates the stego image by a composition f (•) := f se •f st (•) of f se and f st . Once the stego image is delivered to the recipient, the recipient recovers the secret images by the inverse of the same stego model f -1 (•). Moreover, the trained f (•) can be reused for various secret images, even those coming from different domains.\nInspired by Glow [40], DF-SWE uses the double-flowbased model to build a reversible bijective transformation between secret images and generated stego images. The DF-SWE network takes a secret image as its input to generate a realistic stego image. Later on, it can directly recover the hidden secret image from the stego image via the reversible transformation. \n′ st = z ′ 1 , z ′ 2 , ..., z ′ L .\nL is the depth of architecture. We use two Glow models to learn multivariate Gaussian distributions of the secret images I se and the stego image I st , separately. Given functions\nf se := f 1 • f 2 • • • • • f K and f st := f ′ 1 • f ′ 2 • • • • • f ′ n , we have I se f1 ←→ h 1 . . . h k-1 f k ←→ z se , I st f ′ 1 ←→ h ′ 1 . . . h ′ n-1 f ′ n ←→ z ′ st .\nThe existing flow model (Glow) implements a mapping relationship between the distribution of z and that of the generated image. In contrast, large image steganography without embedding is a generative task from an image to another. Hence, the core task of image-to-image steganography without embedding is to construct a mapping between the secret image I se and the stego image I st while ensuring the mapping is reversible to enhance the extraction quality of I se . This task can be formulated as follows:\nI se f1 ←→ h 1 . . . f k ←→ z se t ←→ z ′ st f ′ 1 ←→ . . . h ′ n-1 f ′ n ←→ I st . (5) z se t ←→ z ′\nst is a transformation from a multivariate Gaussian distributions z se to another multivariate Gaussian distributions z ′ se . Consequently, the core task is the transformation t to construct a reversible circulation in the double flow model, for hiding the secret image in the generated stego image and keeping it reversible." }, { "figure_ref": [ "fig_1" ], "heading": "B. Reversible circulation of double flow", "publication_ref": [], "table_ref": [], "text": "For transmitting z se to z ′ st and keeping it reversible, we divide the task of z se t ←→ z ′ st into three tasks that need to be solved.\n• How to initialize z ′ st ? • How to transmit z se to z ′ st ? • How to reduce the distortions on generated stego images? In order to solve the above issues, we propose three techniques named prior knowledge sampling, high-dimensional space replacement, and distribution consistency transformation. 
We use the latent variable z and its variants (e.g., z'_{st}, \hat{z}_{st}) to describe the circulation of the two flows at different stages after different operations.
1) Prior knowledge sampling (PKS):
For initializing z'_{st}, we utilize the prior knowledge of the Glow generator. First, z is sampled from N(0, I) and the generated image I_{ge} is produced by a Glow model Glow_{gst}(z). The process can be formulated as:
I_{ge} = Glow_{gst}(z), \quad z \sim N(0, I). \quad (6)
During the generation of I_{ge}, Glow_{gst} utilizes the prior knowledge stored in the Glow parameters to generate the image, and this generation is irreversible. Next, we obtain the initialized z'_{st} by a sequence of invertible transformations, which can be formulated as:
I_{ge} \xleftrightarrow{f'_1} h'_1 \cdots h'_{n-1} \xleftrightarrow{f'_n} z'_{st}. \quad (7)
2) High-dimensional space replacement (HDSR):
For transmitting z_{se} to z'_{st} while reducing the distortion of the generated stego image, we propose high-dimensional space replacement.
In the backbone network (Glow), each of the L layers of feature maps in Model_{se} is divided into halves along the channel dimension. Half of the sets are outputted as the latent tensor \{z_i\}_{i=1}^{L}, and the other half are cycled into the squeeze module. Hence, z_{se} contains different levels of information about the image. As shown in Figure 1, z_{se} = \{z_1, \ldots, z_{L-1}, z_L\} and z'_{st} = \{z'_1, \ldots, z'_{L-1}, z'_L\}.
In particular, we find that the latent tensors from shallow layers of Model_{se} have a greater effect on the reversibility of the image. If z'_{st} is replaced with z_{se} directly, it will cause distortion of the stego image due to the distribution differences between z'_{st} and z_{se}. Since different latent tensors z_i have different effects on the reconstruction of the image, we propose high-dimensional space replacement, which replaces the high-dimensional distribution of the generated image with the low-dimensional distribution of the secret image. Our technique follows the principle of minimum information loss. As shown in Figure 2, z'_L is replaced with the concatenated \{z_1, \ldots, z_{L-1}\}. For brevity, we abbreviate this process as replacing \hat{z}_{st} with \hat{z}_{se}.
In this way, z_{se} of the secret image is circulated into z'_{st} of the stego image, reducing the impact of the secret image on stego image generation. During the secret image extraction phase, \hat{z}_{se} is replaced with \hat{z}_{st}.
[Figure 2: × and + denote the matrix operations of multiplication and addition; DCT is the distribution consistency transformation and HDSR is the high-dimensional space replacement, applied between the latent tensors z_{se} = \{z_1, \ldots, z_L\} and z'_{st} = \{z'_1, \ldots, z'_L\} using the Std and Mean statistics.]" }, { "figure_ref": [ "fig_1" ], "heading": "3) Distribution consistency transformation (DCT):", "publication_ref": [ "b9" ], "table_ref": [], "text": "High-dimensional space replacement circulates z_{se} of the secret image into z'_{st} of the stego image and reduces the distortion of the generated stego image. To further improve the quality of image generation and reduce this distortion, we propose the distribution consistency transformation, which decreases the distribution discrepancy between \hat{z}_{se} and \hat{z}_{st}.
As shown in Figure 2, the distribution consistency transformation is implemented inside the high-dimensional space replacement. Because flow-based generative models learn a reversible bijective transformation between images and a multivariate Gaussian, \hat{z}_{se} and \hat{z}_{st} obey Gaussian distributions.
Hence, the key statistics to match between the two distributions are their means and variances.
Based on this, the proposed distribution consistency transformation keeps the mean and variance of the two distributions consistent. The distribution consistency transformation is defined as follows:
\mathrm{Std} = \frac{\sum_{i=1}^{n} \hat{z}_{st}^{\,i} - \frac{\sum_{i=1}^{n} \hat{z}_{st}^{\,i}}{n}}{\sum_{i=1}^{n} \hat{z}_{se}^{\,i} - \frac{\sum_{i=1}^{n} \hat{z}_{se}^{\,i}}{n}}, \quad (8)
\mathrm{Mean} = \frac{\sum_{i=1}^{n} \left( \mathrm{Std} \times \hat{z}_{st}^{\,i} - \hat{z}_{se}^{\,i} \right)}{n}, \quad (9)
\hat{z}_{se} = \hat{z}_{se} \times \mathrm{Std} + \mathrm{Mean}. \quad (10)
Equations (8), (9) and (10) reduce the distribution discrepancy between \hat{z}_{se} and \hat{z}_{st}. During the secret image extraction phase, the inverse of the distribution consistency transformation is expressed as Equation (11):
\hat{z}_{st} = \frac{\hat{z}_{st} - \mathrm{Mean}}{\mathrm{Std}}. \quad (11)" }, { "figure_ref": [], "heading": "C. Hiding and restoring processes", "publication_ref": [], "table_ref": [], "text": "In this section, we describe the secret image hiding and restoring processes in detail. As shown in Figure 3, DF-SWE comprises two stages, a secret image hiding phase and an extracting phase.
1) Hiding process: " }, { "figure_ref": [], "heading": "V. EXPERIMENTAL RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Experimental setup", "publication_ref": [ "b11", "b10", "b12", "b40", "b1", "b41", "b0", "b42", "b43", "b44" ], "table_ref": [], "text": "To demonstrate the superiority of DF-SWE, we compare it with four state-of-the-art SWE methods, namely DCGAN-Steg [12], SAGAN-Steg [11], SSteGAN [13] and WGAN-Steg [41]. To verify the extraction quality of the secret images, we compare DF-SWE with ES methods including 4 bit-LSB, Baluja [2], Weng et al. [42] and HiDDeN [1], since the existing SWE methods cannot be applied to image data.
Our DF-SWE and the baseline models are trained on the datasets of Bedroom (a subset of LSUN, with 3,033,042 color images) [43], LFW [44] (13,234 color images), and CelebA [45] (202,599 color images). We train DF-SWE with the hyper-parameter L = 4, where L is the depth of the model. The greater the depth of the model, the better the quality of the generated images, but the model parameters and computational cost increase. Therefore, the hyper-parameter L can be set according to the actual requirements. Besides, the steganography process is completed in less than a second on an RTX 3090 GPU with L = 4, so the proposed method is well suited to real-time applications. " }, { "figure_ref": [], "heading": "", "publication_ref": [ "b45" ], "table_ref": [], "text": "[Figure 3: the hiding and extraction pipeline runs through Steps 1-7, from the secret image I_se to the generated stego image and back to the recovered I_se.]
We evaluate the hiding capacity of DF-SWE by comparing the bits per pixel (BPP), BPP = Len(secret) / (H × W), which is the number of message bits hidden per pixel of the encoded image, where H and W are the height and width of the stego images. Meanwhile, we evaluate the detection error P_e = (P_{FA} + P_{MD}) / 2, where P_{FA} and P_{MD} represent the probabilities of false alarm and missed detection, respectively. P_e ranges in [0, 1], and its optimal value is 0.5. As a proxy for secrecy, we can also measure the secret image extraction performance using the peak signal-to-noise ratio (PSNR), the root mean square error (RMSE) and the structural similarity index measure (SSIM). Larger values of PSNR and SSIM and a smaller value of RMSE indicate higher image quality. These metrics are formulated as follows:
• RMSE: Root Mean Square Error (RMSE) measures the difference between two images.
Given two images X and Y with width W and height H, RMSE is formulated as follows:
\mathrm{RMSE} = \sqrt{\mathrm{MSE}}, \quad (12)
\mathrm{MSE} = \frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} \left( X_{i,j} - Y_{i,j} \right)^2, \quad (13)
where X_{i,j} and Y_{i,j} indicate the pixels at position (i, j) of images X and Y, respectively.
• PSNR: Peak signal-to-noise ratio (PSNR) is a widely used metric to measure the quality of an image. PSNR is defined as follows:
\mathrm{PSNR} = 10 \log_{10} \frac{R^2}{\mathrm{MSE}}, \quad (14)
where R represents the maximum pixel value of the images, which is usually set to 255.
• SSIM: The Structural Similarity Index Measure (SSIM) is another commonly used image quality assessment based on the degradation of structural information [46]. SSIM is computed from the means \mu_X and \mu_Y, the variances \sigma_X^2 and \sigma_Y^2, and the covariance \sigma_{(X,Y)}, as follows:
\mathrm{SSIM} = \frac{(2\mu_X\mu_Y + C_1)(2\sigma_{(X,Y)} + C_2)}{(\mu_X^2 + \mu_Y^2 + C_1)(\sigma_X^2 + \sigma_Y^2 + C_2)}, \quad (15)
where C_1 = (k_1 L)^2, C_2 = (k_2 L)^2, and L is the dynamic range of the pixel values. The default configuration of k_1 is 0.01 and k_2 is 0.03." }, { "figure_ref": [ "fig_5" ], "heading": "B. Evaluation by image hiding quality", "publication_ref": [ "b3" ], "table_ref": [], "text": "Figure 4 compares our DF-SWE with SWE methods on the Bedroom dataset (a subset of LSUN). Since SWE methods hide secret messages without embedding modifications and are immune to typical steganalysis tools, visual quality is crucial. From Figure 4, we can see that the images generated by DF-SWE have a higher capacity and are more realistic in terms of FID (Fréchet Inception Distance) than those of the competitors. FID is a metric for image generation, and a lower FID score means that the generated images are more realistic. There are noticeable distortions in the stego images generated by these SWE methods. More importantly, instead of hiding images, these rivals only support secret messages in binary bits. The BPP of these SWE methods is around 1.5e-3, while our BPP is 24, which is 8000-16000 times more than that of the competitors. Hence, our DF-SWE can hide secret images with the same size as the stego images.
Examples of stego images generated by DF-SWE are given in Figure 5 and Figure 6, which show the hiding quality for images of sizes 64×64×3 and 128×128×3, respectively. It can be observed that the stego images leak no information about the secret images. Only with Model_se, Model_st and the reversible circulation of double flow can the secret image be extracted from the stego image. Model_se and Model_st have hundreds of millions of parameters and different network structures, which makes decrypting the secret images difficult. Once trained, DF-SWE can be generalized to hiding images from various domains. Figure 5 and Figure 6 show secret images and generated stego images in different domains. For example, LFW-CelebA signifies that the secret image is randomly selected from the LFW dataset and the generated stego image follows the style of the CelebA dataset. From Figure 4, we can see that the images generated by DF-SWE are more realistic and that the extracted secret images have nearly lossless extraction quality." }, { "figure_ref": [], "heading": "C. The extraction quality compared with prevalent methods.", "publication_ref": [], "table_ref": [], "text": "Table II lists the information extraction accuracy of different steganographic approaches, i.e., DCGAN-Steg, SAGAN-Steg, SSteGAN, WGAN-Steg, IDEAS, and S2IRT, with increasing hiding payloads.
From this table, it is clear that DF-SWE achieve much higher information extracted accuracy than SWE approaches under different hiding payloads. The extracted accuracy rates of DF-SWE keep at a very highlevel when the hiding payload ranges from 1 BPP to 4 BPP. Besides, the proposed generative steganographic approach can achieve high hiding capacity (up to 12 BPP) and accurate extraction of secret message (almost 100% accuracy rate), simultaneously. Even when hiding images (BPP = 24), the extracted accuracy achieve 0.5124 and the pixel errors of the extracted images are mostly ranged in ±1. That is because DF-SWE built image-to-image reversible bijective mapping " }, { "figure_ref": [], "heading": "D. Security evaluation by steganalysis", "publication_ref": [ "b46" ], "table_ref": [ "tab_4" ], "text": "We compare our DF-SWE with ES and SWE schemes as shown in Table IV. The steganalysis performance is measured by PE metrics. The optimal value of detection error (Pe) is 0.5. At this time, the steganalyzer (Ye-net [47]) cannot distinguish the source of images and can only perform random guess. Most ES methods have poor steganographic security, while our proposed SWE achieves better security performance with higher Pe values. Compared to SWE schemes, DF-SWE has shown significant improvements in several aspects. The payload is more than 8000 times higher than that of others. Meanwhile, we have achieved Pe values better than most of the other works." }, { "figure_ref": [ "fig_8", "fig_9" ], "heading": "E. Multiple image hiding", "publication_ref": [], "table_ref": [], "text": "Most image hiding work can only hide a secret image to a cover image. However, it is not applicable to hide multiple secret images to an image, when specific integrated or sequentially related multiple images are not separable. Especially in image steganography without embedding, there is no work to do multiple image hiding and our method is proposed firstly to realize multi-image hiding without embedding.\nIn this section, we demonstrate the experimental results of DF-SWE for hiding multiple images in sizes of 64×64×3 and 128×128×3 in Figure 7 and Figure 8 respectively. Secreti is the i-th secret image and Extracti is the i-th extracted image from the Stego image (Stego) with respect to Secreti. It can be observed that, even there are three images (i.e., BP P = 72) hidden into the same stego image, the generated stego images remain natural. Moreover, the recovered secret images are nearly lossless." }, { "figure_ref": [ "fig_10" ], "heading": "F. Domain generalization", "publication_ref": [], "table_ref": [], "text": "Current image steganography usually requires that the secret images to be hidden are from the same domain of the samples used to train the steganography model. However, it is expensive to train individual steganography models for images from new domains. Furthermore, collecting training data from particular domains could be difficult due to data privacy or other concerns. Therefore, existing methods cannot achieve image steganography when accessing images from the same domain of the secret images is prohibited. However, our method circumvents this limitation by its capability of domain generalization. As shown in Figure 9, images in the first three columns on the left side are from the Stanford-dog dataset and the other images are randomly selected from the Internet. All the images have totally different distributions with that of images used to train DF-SWE. According to Figure. 
9, DF-SWE can successfully hide and recover these images with a satisfactory visual quality. This property greatly boosts the capability of DF-SWE and makes it the first domain-agnostic steganography method." }, { "figure_ref": [ "fig_11" ], "heading": "G. Ablation experiment", "publication_ref": [], "table_ref": [], "text": "Figure 10 performs an ablation analysis of 3 tactics employed by DF-SWE, which are prior knowledge sampling, high-dimensional space replacement and distribution consistency transformation. The first three, middle three and last three columns of images are the effect of different tactics on the LFW, CelebA and LSUN datasets, respectively.\nIn the first row, the generated stego image is abnormal, particularly in the first three columns. The main change of direct replacement tactic is that z is replaced with z se directly without utilizing prior knowledge of M odel st . The highdimensional space replacement is our proposed tactic shown in second row, which uses low-dimensional space of ẑse to replace high-dimensional space of ẑst . We can see that highdimensional space replacement effectively generates realistic images, but only this technique is not adequate from the abnormal images of the first three columns. The prior knowledge sampling is our proposed method. In the third row, z ′ st is replaced with z se , which utilizes prior knowledge M odel st and a multivariate Gaussian latent-variable z. The distribution consistency transformation is proposed to reduce the distortion from the difference between the two distributions. The fourth row is that z is replaced with z se , but z se is changed by the distribution consistency transformation. In the last three columns, the generated stego image is more normal than the first row.\nThe fifth row combines our proposed prior knowledge sampling and distribution consistency transformation. z ′ st is replaced with z se which is modified by the distribution consistency transformation. In the last three columns, the quality of the generated image is a significant improvement compared with the first row. In the sixth row, the first three columns indicate that only high-dimensional space replacement and distribution consistency transformation cannot generate a realistic image. Compared with the second and seventh rows, the first three columns clearly show that prior knowledge sampling effectively improves the quality of the generated stego images.\nIn summary, the ablation experiments verify the effectiveness of our proposed method to circulate two latent flows and guarantee reversibility meanwhile." }, { "figure_ref": [], "heading": "VI. DISCUSSION AND FUTURE WORK", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel double-flow-based steganography without embedding (DF-SWE) method for hiding large images. Specifically, we propose the reversible circulation of double flow to build a reversible bijective transformation between secret images and generated stego images. The reversible circulation ensures the small extraction error of the secret images and the high-quality of generated stego images. Importantly, DF-SWE is the first SWE method that enables hiding images, or even multiple large images, into one stego image. Specifically, the payload capacity of DF-SWE achieves 24 -72BP P and is 8000 -16000 times more than that of the other SWE methods. In this way, DF-SWE provides a way to directly generate stego images without a cover image, which greatly improves the security of the secret images. 
According to the experimental results, the proposed DF-SWE shows better hiding/recovering performance. Intriguingly, DF-SWE can be generalized to hiding secret images from different domains with that of the training dataset. This nice property indicates that DF-SWE can be deployed to privacy-critical scenarios in which the secret images are hidden from the provider of DF-SWE. Although our method achieves excellent performance for secret image recovery, the method is not completely lossless.\nIn the future, it is interesting to further explore the potential of SWE in lossless secret image recovery and multi-modal data hiding. The main challenge for multi-modal data hiding is how to map multi-modal data to the similarity multivariate Gaussian distribution. All these will be interesting future works to be explored. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements: This work is supported in part by the National Natural Science Foundation of China under" } ]
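To make the latent manipulation at the core of DF-SWE concrete, the following is a minimal NumPy sketch of the high-dimensional space replacement together with the distribution consistency transformation of Equations (8)-(11). It is an illustration only, not the authors' implementation: the toy arrays stand in for the Glow latents produced by Model_se and Model_st, an absolute-deviation ratio is used as a stand-in for the spread statistic of Equation (8), and the Std/Mean values are simply carried over between the hiding and extraction functions for clarity.

```python
# Minimal sketch (illustrative only) of the hiding-side latent manipulation:
# high-dimensional space replacement + distribution consistency transformation.
import numpy as np

def distribution_consistency_transform(z_se, z_st):
    """Rescale the secret latent z_se so that its mean and spread match z_st (Eqs. 8-10)."""
    # Spread ratio between the two latents; an absolute-deviation ratio is used here
    # as a stand-in for Eq. (8), with a small epsilon to avoid division by zero.
    std = (np.abs(z_st - z_st.mean()).sum() + 1e-8) / (np.abs(z_se - z_se.mean()).sum() + 1e-8)
    # Offset aligning the means after rescaling (Eq. 9).
    mean = (std * z_st - z_se).mean()
    z_se_hat = z_se * std + mean           # forward transform (Eq. 10)
    return z_se_hat, std, mean

def inverse_transform(z_hat, std, mean):
    """Inverse used at extraction time (Eq. 11)."""
    return (z_hat - mean) / std

# Toy example: a multi-scale stego latent whose last (high-dimensional) block is
# replaced by the transformed secret latent; extraction reads it back and inverts.
rng = np.random.default_rng(0)
z_st_blocks = [rng.normal(size=s) for s in (256, 512, 1024)]   # stands in for Model_st's latent
z_se = rng.normal(size=1024)                                   # stands in for Model_se's latent
z_se_hat, std, mean = distribution_consistency_transform(z_se, z_st_blocks[-1])
z_st_blocks[-1] = z_se_hat                                     # high-dimensional space replacement

recovered = inverse_transform(z_st_blocks[-1], std, mean)
assert np.allclose(recovered, z_se)                            # near-lossless recovery
```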
[ { "figure_caption": "Fig. 1 .1Fig. 1. The network architecture of double-flow-based steganography without embedding for image-to-image hiding.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. High-dimensional space replacement and Distribution consistency transformation. The Std and Mean are the variance and mean of z ′ L . The × ⃝ and +⃝ are the matrix operations of multiplication and addition respectively. The DCT is the distribution consistency transform and the HDSR is the highdimensional space replacement.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure. 33(a) describes the hiding phase, which can hide large images without embedding. M odel se and M odel st are two different Glow models. Firstly, as shown in Step 1, M odel st randomly samples a Gaussian distribution z to generate an image I ge utilizing prior knowledge of M odel st . Based on the generated image I ge , we use the reversible operation of M odel st to obtain an initialized distribution z ′ st in order to better carry the secret flow. Secondly, as shown in Step 2, M odel se encodes the secret image as z se by the reversible operation of M odel se . Specially, Step 1 and 2 can run in parallel or exchange their sequences. Through the operation of the high-dimensional space replacement and distribution consistency transformation on Step 3, z se can be passed to z ′ st to generate a stego image. Meanwhile, the hiding phase maintains reversibility for extracted secret images. Finally, I st will be generated by z ′ st utilizing the M odel st in step 4. 2) Restoring process: As shown in Figure. 3 (b), the extracting phase is the inverse process of hiding phase. Hence, DF-SWE can extract the secret image with high quality because we construct an invertible mapping of the secret and stego images. Firstly, the stego image is decoded as z ′ st by utilizing the reversible operation of M odel st in Step 5. And then, through the reverse operation of high-dimensional space replacement and distribution consistency transformation of Step 6, z ′ st can be passed to z se to extract the secret image. The reverse operation of high-dimensional space replacement and distribution consistency transformation are described in detail in subsection IV-B2 and IV-B3. Finally, M odel se extracts the secret image I ′ se with high quality in Step 7.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Hiding and restoring processes. (a) is the hiding phase, including Step 1-4. (b) is the extracting phase, including Step 5-7.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Hiding evaluation with steganography without embedding.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. The quality of generated images with the size 64 × 64 using DF-SWE.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. The quality of generated images with the size 128 × 128 using DF-SWE.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. 
Multiple image hiding.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. Multiple image hiding with 128 × 128 × 3.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. Data domain generalization.", "figure_data": "", "figure_id": "fig_10", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 .10Fig. 10. Ablation experiment of 3 tactics.", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "STATISTICS OF HIDDEN CAPACITY. SWE reduces the extraction error, which is attributed to the reversibility of the hiding and restoring processes by the invertible bijective mapping. Furthermore, DF-SWE guarantees the quality of the generated stego images to enhance the imperceptibility of the secret images.In sum, our novel DF-SWE method achieves state-of-the-art steganographic performance in the payload capacity, extraction error, and stealthiness of hiding large images. Intriguingly, DF-SWE shows a capability of domain generalization, which makes it applicable to privacy-critical, resource-limited scenarios. The detailed contributions are as follows:", "figure_data": "MethodsYearHiding TypeMax payloads (BPP)DCGAN-Steg [12]2018bit9.1e-3SSteGAN [13]2018bit2.9e-1SAGAN-Steg [11]2021bit4e-1IDEAS [14]2022bit2.3e-2S2IRT [15]2022bit4DF-SWE can hide multiple secret images at one time, whichgreatly extends the capability of SWE-based methods. Tothe best of our knowledge, this is the first SWE methodworking towards image data rather than small binary bits.In addition, DF-hiding per-formance, providing diverse and realistic images to mini-mize the exposure risk compared to the prior steganogra-phy works. Meanwhile, our proposed SWE also achievesbetter security performance against steganalysis detec-tions.• Domain generalization: Our experiments show that,once trained, DF-SWE can be applied in the steganog-raphy of secret images from different domains withoutfurther model training or fine tuning. This property makesDF-SWE the first domain-agnostic steganography methodwhich can be applied to unseen private data and beexecuted on resource-limited systems.This paper is organized as follows. Section II introducesthe related work. Section III briefly describes the Glow modelas a backbone network. Section IV elaborates the proposedDF-SWE method. Section V presents and discusses the ex-perimental results. Discussion and future work are drawn inSection VI.", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "INFORMATION EXTRACTION ACCURACY OF THOSE METHODS WITH DIFFERENT HIDDEN PAYLOADS.", "figure_data": "MethodsType12Hiding payloads (BPP) 4 61224DCGAN-Steg [12]bit0.71340.7120.7122---SAGAN-Steg [11]bit0.72450.72320.723---SSteGAN [13]bit0.71390.71260.7124---WGAN-Steg [41]bit0.71220.71140.7113---IDEAS [14]bit0.75520.7550.7546---S2IRT [15]bit110.9942---Oursimage1110.99210.98360.5124reducing the extraction error. In contrast, the extracted ac-curacy of other SWE methods decrease with the increaseof hiding payload. At high hiding capacity, existing SWEmethods cannot hide secret messages or generate stego imagesare twisted and distorted. 
Thus, existing SWE methods cannotachieve accurate information extraction under high hidingpayloads.The extraction metrics of the different ES methods are givenin", "figure_id": "tab_1", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "which describes the extraction quality of secret images by PSNR, SSIM and RMSE. The columns of LSUN, CelebA and LFW represent the experimental results under different datasets, respectively. Unlike our DF-SWE to built image-to-image reversible bijective mapping, existing SWE methods directly write the secret message into a latent space and generate the image directly by the latent space. It is difficult to balance the hiding capacity and generation quality. Since the existing SWE methods face the problem of low hidden capacity and incapability of hiding secret images with the same size, we compared DF-SWE with ES methods to verify the extraction quality of the secret images. In particular, ES methods usually have a better extraction performance than SWE methods, because ES methods have cover images to hide the secret image and do not consider the generated quality. On the contrary, SWE methods require plausible visual quality of both the generated stego image and the recovered secret image. From TableIII, it is evident that DF-SWE outperforms all other methods, providing better secret image extraction quality.", "figure_data": "", "figure_id": "tab_2", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "METRICS OF SECRET IMAGES COMPARED WITH PREVALENT METHODS.", "figure_data": "MethodsTypePSNR↑LSUN SSIM↑RMSE↓PSNR↑CelebA SSIM↑RMSE↓PSNR↑LFW SSIM↑RMSE↓4bit-LSBES23.060.763818.0523.130.751817.8623.130.766817.92Baluja[2]ES32.410.92426.3132.940.93256.1132.610.93926.03Weng et al. [42]ES33.740.95185.2534.510.95824.9834.230.96574.98HiDDeN[1]ES34.490.95364.3236.340.96294.0736.240.96823.96OursSWE34.510.95424.2437.850.96752.5138.130.96973.21", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "EVALUATION BY STEGANALYSIS.", "figure_data": "TypeMethodsHiding typePayload (BPP) ↑Pe → 0.5ESBaluja[2]image240.04ESWeng et al. [42]image240.04ESHiDDeN[1]image240.03SWEDCGAN-Steg [12]Binary1.5e -30.48SWESAGAN-Steg [11]Binary3.2e -30.47SWESSteGAN [13]Binary1.5e -30.52SWEWGAN-Steg [41]Binary1.5e -30.52SWEOursimage240.51", "figure_id": "tab_4", "figure_label": "IV", "figure_type": "table" } ]
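For reference, the quality and capacity metrics reported in Tables II-IV can be computed directly from their definitions in Equations (12)-(15). The sketch below is illustrative only: it implements the single-window, global form of SSIM exactly as written in Equation (15) rather than the windowed variant of [46], and it assumes 8-bit images stored as NumPy arrays of identical shape.

```python
# Illustrative reference implementations of the evaluation metrics (Eqs. 12-15 and BPP).
import numpy as np

def mse(x, y):
    x, y = x.astype(np.float64), y.astype(np.float64)
    return np.mean((x - y) ** 2)                               # Eq. (13)

def rmse(x, y):
    return np.sqrt(mse(x, y))                                  # Eq. (12)

def psnr(x, y, data_range=255.0):
    return 10.0 * np.log10(data_range ** 2 / mse(x, y))        # Eq. (14)

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    # Global (single-window) SSIM as written in Eq. (15).
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def bpp(num_secret_bits, height, width):
    # Payload in bits hidden per pixel of the stego image.
    return num_secret_bits / (height * width)

# Example: hiding a full 64x64x3 8-bit image inside a 64x64 stego image gives
# bpp(64 * 64 * 3 * 8, 64, 64) == 24.0, matching the 24 BPP reported for DF-SWE.
```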
Bingbing Song; Renyang Liu; Derui Wang; Tianwei Zhang; Yu Lin; Wei Zhou
[ { "authors": "J Zhu; R Kaplan; J Johnson; L Fei-Fei", "journal": "", "ref_id": "b0", "title": "Hidden: Hiding data with deep networks", "year": "2018" }, { "authors": "S Baluja", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b1", "title": "Hiding images within images", "year": "2020" }, { "authors": "M Liu; M Zhang; J Liu; Y Zhang; Y Ke", "journal": "CoRR", "ref_id": "b2", "title": "Coverless information hiding based on generative adversarial networks", "year": "2017" }, { "authors": "R Van Schyndel; A Z Tirkel; C F Osborne", "journal": "", "ref_id": "b3", "title": "A digital watermark", "year": "1994" }, { "authors": "Z Wang; Z Zhang; J Jiang", "journal": "ISSRE", "ref_id": "b4", "title": "Multi-feature fusion based image steganography using GAN", "year": "2021" }, { "authors": "S Baluja", "journal": "", "ref_id": "b5", "title": "Hiding images in plain sight: Deep steganography", "year": "2017" }, { "authors": "Z Zhou; H Sun; R Harit; X Chen; X Sun", "journal": "", "ref_id": "b6", "title": "Coverless image steganography without embedding", "year": "2015" }, { "authors": "S X Zhouzhili; Cao Yi", "journal": "Journal of Applied Sciences", "ref_id": "b7", "title": "Coverless information hiding based on bag-of-words model of image", "year": "2016" }, { "authors": "S Zheng; L Wang; B Ling; D Hu", "journal": "", "ref_id": "b8", "title": "Coverless information hiding based on robust image hashing", "year": "2017" }, { "authors": "Guangyuanfu Zhuozhang; Jialiu ; Wenyufu ", "journal": "Springer, Cham", "ref_id": "b9", "title": "Generative information hiding method based on adversarial networks", "year": "2018" }, { "authors": "C Yu; D Hu; S Zheng; W Jiang; M Li; Z Zhao", "journal": "Peer-to-Peer Netw. Appl", "ref_id": "b10", "title": "An improved steganography without embedding based on attention GAN", "year": "2021" }, { "authors": "D Hu; L Wang; W Jiang; S Zheng; B Li", "journal": "IEEE Access", "ref_id": "b11", "title": "A novel image steganography method via deep convolutional generative adversarial networks", "year": "2018" }, { "authors": "Z Wang; N Gao; X Wang; X Qu; L Li", "journal": "", "ref_id": "b12", "title": "Sstegan: Self-learning steganography based on generative adversarial networks", "year": "2018" }, { "authors": "X Liu; Z Ma; J Ma; J Zhang; G Schaefer; H Fang", "journal": "IEEE", "ref_id": "b13", "title": "Image disentanglement autoencoder for steganography without embedding", "year": "2022" }, { "authors": "Z Zhou; Y Su; Q M J Wu; Z Fu; Y Shi", "journal": "CoRR", "ref_id": "b14", "title": "Secret-to-image reversible transformation for generative steganography", "year": "2022" }, { "authors": "C Chan; L Cheng", "journal": "Pattern Recognit", "ref_id": "b15", "title": "Hiding data in images by simple LSB substitution", "year": "2004" }, { "authors": "J Mielikäinen", "journal": "IEEE Signal Process. Lett", "ref_id": "b16", "title": "LSB matching revisited", "year": "2006" }, { "authors": "O Elharrouss; N Almaadeed; S Al-Máadeed", "journal": "ICIoT", "ref_id": "b17", "title": "An image steganography approach based on k-least significant bits (k-lsb)", "year": "2020" }, { "authors": "A K Sahu; A Gutub", "journal": "Multim. 
Tools Appl", "ref_id": "b18", "title": "Improving grayscale steganography to protect personal information disclosure within hotel services", "year": "2022" }, { "authors": "T Pevný; T Filler; P Bas", "journal": "", "ref_id": "b19", "title": "Using high-dimensional image models to perform highly undetectable steganography", "year": "2010" }, { "authors": "V Holub; J J Fridrich; T Denemark", "journal": "EURASIP J. Inf. Secur", "ref_id": "b20", "title": "Universal distortion function for steganography in an arbitrary domain", "year": "2014" }, { "authors": "L Guo; J Ni; Y Shi", "journal": "IEEE Trans. Inf. Forensics Secur", "ref_id": "b21", "title": "Uniform embedding for efficient JPEG steganography", "year": "2014" }, { "authors": "Y Pan; J Ni; W Su", "journal": "", "ref_id": "b22", "title": "Improved uniform embedding for efficient JPEG steganography", "year": "2016" }, { "authors": "L Guo; J Ni; W Su; C Tang; Y Shi", "journal": "IEEE Trans. Inf. Forensics Secur", "ref_id": "b23", "title": "Using statistical image model for JPEG steganography: Uniform embedding revisited", "year": "2015" }, { "authors": "P Wu; Y Yang; X Li", "journal": "", "ref_id": "b24", "title": "Image-into-image steganography using deep convolutional network", "year": "2018" }, { "authors": "Y Zhang; W Zhang; K Chen; J Liu; Y Liu; N Yu", "journal": "", "ref_id": "b25", "title": "Adversarial examples against deep neural network based steganalysis", "year": "2018" }, { "authors": "P G Kuppusamy; K C Ramya; S S Rani; M Sivaram; D Vigneswaran", "journal": "Scalable Comput. Pract. Exp", "ref_id": "b26", "title": "A novel approach based on modified cycle generative adversarial networks for image steganography", "year": "2020" }, { "authors": "H Porav; V Musat; P Newman", "journal": "CVPR", "ref_id": "b27", "title": "Reducing steganography in cycle-consistency gans", "year": "2019" }, { "authors": "R Zhang; S Dong; J Liu", "journal": "Multim. Tools Appl", "ref_id": "b28", "title": "Invisible steganography via generative adversarial networks", "year": "2019" }, { "authors": "J Jing; X Deng; M Xu; J Wang; Z Guan", "journal": "IEEE", "ref_id": "b29", "title": "Hinet: Deep image hiding by invertible network", "year": "2021" }, { "authors": "Z Guan; J Jing; X Deng; M Xu; L Jiang; Z Zhang; Y Li", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b30", "title": "Deepmih: Deep invertible network for multiple image hiding", "year": "2023" }, { "authors": "S X Yi; Zhou Zhili", "journal": "", "ref_id": "b31", "title": "Coverless information hiding based on the molecular structure images of material", "year": "2018" }, { "authors": "A Qiu; X Chen; X Sun; S Wang; G Wei", "journal": "Journal of Information Hiding Privacy Protection", "ref_id": "b32", "title": "Coverless image steganography method based on feature selection", "year": "2019" }, { "authors": "Q Liu; X Xiang; J Qin; Y Tan; Y Qiu", "journal": "EURASIP J. Image Video Process", "ref_id": "b33", "title": "Coverless image steganography based on densenet feature mapping", "year": "2020" }, { "authors": "X Chen; Z Zhang; A Qiu; Z Xia; N N Xiong", "journal": "IEEE Trans. Netw. Sci. 
Eng", "ref_id": "b34", "title": "Novel coverless steganography method based on image selection and stargan", "year": "2022" }, { "authors": "H H Zhu Yiming; Chen Fan", "journal": "Journal of Applied Sciences", "ref_id": "b35", "title": "Orthogonal gan information hiding model based on secret information driven", "year": "2019" }, { "authors": "W Jiang; D Hu; C Yu; M Li; Z Zhao", "journal": "", "ref_id": "b36", "title": "A new steganography without embedding based on adversarial training", "year": "2020" }, { "authors": "X Liu; Z Ma; J Ma; J Zhang; G Schaefer; H Fang", "journal": "", "ref_id": "b37", "title": "Image disentanglement autoencoder for steganography without embedding", "year": "2022" }, { "authors": "P Wei; G Luo; Q Song; X Zhang; Z Qian; S Li", "journal": "", "ref_id": "b38", "title": "Generative steganographic flow", "year": "2022" }, { "authors": "D P Kingma; P ", "journal": "", "ref_id": "b39", "title": "Glow: Generative flow with invertible 1x1 convolutions", "year": "2018-12-03" }, { "authors": "J Li; K Niu; L Liao; L Wang; J Liu; Y Lei; M Zhang", "journal": "", "ref_id": "b40", "title": "A generative steganography method based on wgan-gp", "year": "2020" }, { "authors": "X Weng; Y Li; L Chi; Y Mu", "journal": "", "ref_id": "b41", "title": "High-capacity convolutional video steganography with temporal residual modeling", "year": "2019" }, { "authors": "F Yu; Y Zhang; S Song; A Seff; J Xiao", "journal": "CoRR", "ref_id": "b42", "title": "LSUN: construction of a large-scale image dataset using deep learning with humans in the loop", "year": "2015" }, { "authors": "G B H E Learned-Miller", "journal": "", "ref_id": "b43", "title": "Labeled faces in the wild: Updates and new reporting procedures", "year": "2014-05" }, { "authors": "Z Liu; P Luo; X Wang; X Tang", "journal": "", "ref_id": "b44", "title": "Deep learning face attributes in the wild", "year": "2015-12" }, { "authors": "Z Wang; A C Bovik; H R Sheikh; E P Simoncelli", "journal": "IEEE Trans. Image Process", "ref_id": "b45", "title": "Image quality assessment: from error visibility to structural similarity", "year": "2004" }, { "authors": "J Ye; J Ni; Y Yi", "journal": "IEEE Trans. Inf. Forensics Secur", "ref_id": "b46", "title": "Deep learning hierarchical representations for image steganalysis", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 153.43, 719.19, 146.6, 10.81 ], "formula_id": "formula_0", "formula_text": "z ∼ p θ (z),(1)" }, { "formula_coordinates": [ 3, 160.9, 739.08, 135.26, 10.81 ], "formula_id": "formula_1", "formula_text": "∼ g θ (z), (2" }, { "formula_coordinates": [ 3, 296.15, 739.4, 3.87, 8.64 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 3, 431.5, 117.41, 92.06, 9.65 ], "formula_id": "formula_3", "formula_text": "f = f 1 • f 2 • • • • • f K ," }, { "formula_coordinates": [ 3, 378.87, 147.47, 184.17, 13.73 ], "formula_id": "formula_4", "formula_text": "x f1 ←→ h 1 f2 ←→ h 2 . . . f k ←→ z,(3)" }, { "formula_coordinates": [ 3, 355.52, 239.7, 207.52, 10.81 ], "formula_id": "formula_5", "formula_text": "log p θ (x) = log p θ (z) + log |det(dz/dx).(4)" }, { "formula_coordinates": [ 4, 71.26, 499.51, 72.8, 12.47 ], "formula_id": "formula_6", "formula_text": "′ st = z ′ 1 , z ′ 2 , ..., z ′ L ." }, { "formula_coordinates": [ 4, 48.96, 536.88, 251.06, 40.74 ], "formula_id": "formula_7", "formula_text": "f se := f 1 • f 2 • • • • • f K and f st := f ′ 1 • f ′ 2 • • • • • f ′ n , we have I se f1 ←→ h 1 . . . h k-1 f k ←→ z se , I st f ′ 1 ←→ h ′ 1 . . . h ′ n-1 f ′ n ←→ z ′ st ." }, { "formula_coordinates": [ 4, 54.63, 688.24, 245.4, 36.58 ], "formula_id": "formula_8", "formula_text": "I se f1 ←→ h 1 . . . f k ←→ z se t ←→ z ′ st f ′ 1 ←→ . . . h ′ n-1 f ′ n ←→ I st . (5) z se t ←→ z ′" }, { "formula_coordinates": [ 4, 400, 353.66, 163.03, 23.9 ], "formula_id": "formula_9", "formula_text": "I ge = Glow gst (z), z ∼ N(0, I).(6)" }, { "formula_coordinates": [ 4, 376.82, 438.06, 186.21, 16.98 ], "formula_id": "formula_10", "formula_text": "I ge f ′ 1 ←→ h ′ 1 . . . h ′ n-1 f ′ n ←→ z ′ st .(7)" }, { "formula_coordinates": [ 4, 311.98, 581.97, 251.06, 12.48 ], "formula_id": "formula_11", "formula_text": "z se = {z 1 , . . . , z L-1 , z L } and z ′ st = z ′ 1 , . . . , z ′ L-1 , z ′ L ." }, { "formula_coordinates": [ 5, 72.57, 121.4, 192.58, 144.07 ], "formula_id": "formula_12", "formula_text": "DCT Z se … … Z L-1 Z L-1 Z L Z L Z 1 Z 1 Z 1 Z L-1 Z L HDSR Z st Z st Std + + × × Mean" }, { "formula_coordinates": [ 5, 115.38, 569.27, 184.64, 33.31 ], "formula_id": "formula_13", "formula_text": "Std = n i=1 ẑst i - n i=1 ẑst i n n i=1 ẑse i - n i=1 ẑse i n ,(8)" }, { "formula_coordinates": [ 5, 107.18, 605.97, 192.84, 25.41 ], "formula_id": "formula_14", "formula_text": "M ean = n i=1 Std × ẑst i -ẑse i n ,(9)" }, { "formula_coordinates": [ 5, 120.5, 645, 179.52, 9.2 ], "formula_id": "formula_15", "formula_text": "ẑse = ẑse × Std + M ean.(10)" }, { "formula_coordinates": [ 5, 133.9, 731.24, 166.13, 22.56 ], "formula_id": "formula_16", "formula_text": "ẑst = ẑst -M ean Std .(11)" }, { "formula_coordinates": [ 6, 143.14, 510.26, 156.88, 17.88 ], "formula_id": "formula_17", "formula_text": "RM SE = √ M SE,(12)" }, { "formula_coordinates": [ 6, 93.77, 550.42, 206.25, 30.32 ], "formula_id": "formula_18", "formula_text": "M SE = 1 W * H W i=1 H j=1 (X i,j -X i,j ) 2 ,(13)" }, { "formula_coordinates": [ 6, 125.83, 660.2, 174.19, 23.89 ], "formula_id": "formula_19", "formula_text": "P SN R = 10 * log 10 R 2 M SE ,(14)" }, { "formula_coordinates": [ 6, 337.95, 349.43, 225.09, 23.91 ], "formula_id": "formula_20", "formula_text": "SSIM = (2µ X µ Y + C 1 )(σ (X,Y ) + C 2 ) (µ X 2 + µ Y 2 + C 1 )(σ X 2 + σ Y 2 + C 2 )(15)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b7", "b8", "b2", "b19", "b16" ], "table_ref": [], "text": "In an era dominated by social media platforms such as Facebook, Instagram, and TikTok, billions of individuals have found themselves connected like never before, enabling them to swiftly share their thoughts and viewpoints. The growth of social networks provides people all over the world with unprecedented levels of connectedness and enriched communication. However, social media posts often abound with comments containing varying degrees of violence, whether expressed overtly or covertly (Kumar et al., 2018(Kumar et al., , 2020)). To combat this worrisome trend, social media platforms established community guidelines and standards that users are expected to adhere to. 1,2 .Violations of these rules may result in the removal of offensive content or even the suspension of user accounts. Given the vast amount of user-generated content on these platforms, manually scrutinizing and filtering potential violence is a very challenging task. This moderation approach is limited by moderators' capacity to keep pace, comprehend evolving slang and language nuances, and navigate the complexity of multilingual content (Das et al., 2022). To address this issue, several social media platforms turn to AI and NLP models capable of detecting inappropriate content across a range of categories such as aggression and violence, hate speech, and general offensive language (Zia et al., 2022;Weerasooriya et al., 2023).\nThe shared task on Violence Inciting Text Detection (VITD) (Saha et al., 2023a) aims to categorize and discern various forms of communal violence, aiming to shed light on mitigating this complex phenomenon for the Bangla speakers. For this task, we carry out various experiments presented in this paper. We employ various models and data augmentation techniques for violent text identification in Bangla." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b11", "b2", "b12", "b2", "b17", "b18", "b0", "b9" ], "table_ref": [], "text": "Violence Identification in Bangla Several works have been done on building datasets similar to this task and training models on those data. Such datasets include the works of (Remon et al., 2022;Das et al., 2022), which mostly gather data by social media mining. However, most of the datasets are comparatively small in size. One of the larger datasets is prepared by Romim et al. (2022), which consists of 30,000 user comments from YouTube and Facebook, annotated using crowdsourcing.\nWhile most works focus primarily on the datasets, they also present some experimental analysis. Das et al. (2022) Zampieri et al. (2019Zampieri et al. ( , 2020) ) organized OffensEval, a series of shared tasks identifying and categorizing offensive language in tweets organized at SemEval 2019 and 2020. At OffensEval, participants trained a variety of models ranging from machine learning to deep learning approaches. While BERT and other transformed dominated the leaderboard in 2020, systems' performance in 2019 was more varied with traditional ML classifiers and ensemblebased approaches achieving competition performance along with deep learning approaches. Another shared task, MEX-A3T track at IberLEF 2019 (Aragon et al., 2019), focused on author profiling and aggressiveness detection in Mexican Spanish tweets. Additionally, Modha et al. 
(2021) presents an overview of the HASOC track at FIRE 2021 for hate speech and offensive content detection in English, Hindi, and Marathi, where the highest accuracy is achieved on the Marathi dataset." }, { "figure_ref": [], "heading": "Related Shared Tasks", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b12", "b15" ], "table_ref": [ "tab_1" ], "text": "The VITD shared task (Saha et al., 2023b) provides the participants with a Bangla dataset including 2700 instances for training and 1330 instances for development. The blind test set contains 2016 instances. The dataset (Saha et al., 2023a) has been annotated using three labels: Non-Violence, Direct-Violence, and Passive-Violence. This threeclass annotated dataset differs from similar datasets where a binary annotation is used (Romim et al., 2022;Wadud et al., 2021). The data distribution per label is shown in Table 1." }, { "figure_ref": [], "heading": "Label", "publication_ref": [ "b6", "b3", "b1", "b5" ], "table_ref": [], "text": "Train Dev Test Non-Violence 51% 54% 54% Passive-Violence 34% 31% 36% Direct-Violence 15% 15% 10% Transformers We test multiple transformer models pre-trained on Bangla. Our initial experiments include Bangla-BERT (Kowsher et al., 2022) which is only pre-trained on Bangla corpus. We finetune the model on the train set and evaluate it on the dev set with empirical hyperparameter tuning.\nWe then use multilingual transformer models like multilingual-BERT (Devlin et al., 2019) and xlm-roBERTa (Conneau et al., 2020), which are pretrained on 104 and 100 different languages respectively, including Bangla. We also do the same hyperparameter tuning with both models. Lastly, we use MuRIL (Khanuja et al., 2021) " }, { "figure_ref": [], "heading": "Data Augmentation", "publication_ref": [], "table_ref": [], "text": "Given the relatively small size of the VITD dataset, we implement a few data augmentation strategies to expand its size. First, we use Google's Translator API (Google, 2021) to translate the train and dev set to 3 other languages that are very similar to Bangla (Hindi, Urdu, and Tamil). Bangla, Hindi, Urdu belong to Indo-Aryan language branch and Tamil from Dravidian language brach, though, all of these languages have cultural interaction in south-east asian region. The native speakers of these languages live in closer geographic proximity. Moreover, these languages have similar morphosyntactic features. So, translating Bangla text to those languages do not hamper structural and grammatical integrity of the sentences. Therefore, we combine these new synthetic datasets with the original train dataset and finetune the multilingual transformer models on them.\nThe second approach to augment the dataset is back translation. We again use the Translator API to translate the original train and dev set to a few and Azarbijani as the intermediary language for back translation, in order to add more context. Zulu is from Niger-Congo, Pashto is Indo-Iranian and Azabijani is from Turkic language family. As these languages does not have any cultural interaction with Bangla, back translating from these languages will make three additional version of same sentences with versatility. Then we combine these data with the original dataset. We observe that xlm-roBERTa produces a better macro F1 than the first approach, but still the same as it was on the original data, 0.73." 
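A minimal sketch of these two augmentation steps is shown below. It is illustrative only: it uses the open-source deep-translator package as a stand-in for the Google Translator API mentioned above, and the DataFrame column names are assumptions.

```python
# Illustrative sketch of the two augmentation strategies above (not the exact scripts used):
# (1) translating the Bangla training texts into related languages (Hindi, Urdu, Tamil), and
# (2) back-translating them through distant pivot languages (Zulu, Pashto, Azerbaijani).
# The deep-translator package stands in for the Google Translator API; the column
# names ("text", "label") are illustrative.
import pandas as pd
from deep_translator import GoogleTranslator

RELATED_LANGS = ["hi", "ur", "ta"]      # Hindi, Urdu, Tamil
PIVOT_LANGS = ["zu", "ps", "az"]        # Zulu, Pashto, Azerbaijani

def translate_augment(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for lang in RELATED_LANGS:
        translator = GoogleTranslator(source="bn", target=lang)
        for _, row in df.iterrows():
            rows.append({"text": translator.translate(row["text"]), "label": row["label"]})
    return pd.DataFrame(rows)

def back_translate_augment(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for pivot in PIVOT_LANGS:
        forward = GoogleTranslator(source="bn", target=pivot)
        backward = GoogleTranslator(source=pivot, target="bn")
        for _, row in df.iterrows():
            rows.append({"text": backward.translate(forward.translate(row["text"])),
                         "label": row["label"]})
    return pd.DataFrame(rows)

# Combining the original data with both augmented variants gives roughly a 7x training set,
# as used for the two-step classification described next:
# train_aug = pd.concat([train_df, translate_augment(train_df), back_translate_augment(train_df)])
```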
}, { "figure_ref": [ "fig_0" ], "heading": "Two-step Classification with Data Augmentation", "publication_ref": [], "table_ref": [], "text": "Finally, we combine the two dataset augmentation techniques discussed previously. After combining the synthetic data with the original train set, we have a New Dataset that is 7 times the size of the original train set. We generate two different datasets using this New Dataset. For the First Dataset, we convert all the labels in the New Dataset to either Violent (1) or non-Violent (0). And for the Second Dataset, we only keep the violent data (both Direct and Passive) from the New Dataset.\nWe finetune mBERT, MuRIL and xlm-roBERTa on both binary labeled First Datatset and Second Dataset and save their model weights. xlm-roBERTa outperforms the other two when finetuned the First Dataset and MuRIL outperforms the other two when fine-tuned on the Second Dataset. For the test set, we first use the finetuned xlm-roBERTa to label the whole dataset as either violent or non-violent data. We then separate all the data from the test set that are labeled as 'violent' by the finetuned xlm-roBERTa model and use the finetuned MuRIL model to predict the 'active violence' and 'passive violence' labels. Finally, we merge this with all the 'non-violent' labeled datasets from the first step. Thus, we get all the predicted labels for the test set using 2-step classification by two fine-tuned models. The whole procedure is demonstrated in Figure 1." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "At the start of the shared task, three baseline macro F1 scores have been provided by the organizers. For BanglaBERT, XLM-R and mBERT, the provided baselines are 0.79, 0.72, and 0.68 respectively. The results of our experiments are shown in Table 2. Among the statistical machine learning models, we use logistic regression and support vector machine. For logistic regression, we achieve a macro F1 score of 0.56 and for the support vector machine the F1 is 0.63. For transformer-based models, we use BanglaBERT, mBERT, MuRIL and XLM-R where we get the best F1 score of 0.73 by XLM-R. Task fine-tuned model BanglaHateBERT scores 0.60 macro F1. A few shot learning procedure is used by using GPT3.5 Turbo. We give a few instances of each label as prompt and got 0.43 F1 which is significantly lower than our other attempted approaches. This is because GPT3.5 is still not enough efficient for any downstream classification problem in Bangla like this shared task." }, { "figure_ref": [], "heading": "Models", "publication_ref": [], "table_ref": [], "text": "We also perform some customization in our approach instead of directly using the existing models. We use transfer learning. Instead of using the basic idea of transfer learning by fine-tuning a model with a larger dataset of the same label, we translate the train set to English with Google Translator API and used XLM-R on that data. Then we use that finetune model and perform the same procedure over the actual Bangla train set. We refer this procedure as self-transfer learning and the F1 score from this procedure is 0.72.\nIntroducing multilinguality to many downstream tasks proves to be effective. 
So we also opt for this procedure by translating the train data using Google Translator API to Hindi, Urdu, and Tamil as they are grammatically less diverse and vocabulary is close in contact among the native speakers of these languages. That is how we make the size of our train set three times higher than the original one and got a 0.72 F1 score.\nOn the other hand, we use Zulu, Azerbaijan, and Pashto -3 very diverse languages from Bangla for back translation. So, we also get the size of our train set three times higher than the original Bangla one with significantly different translations for each instance. And we get a 0.73 F1 score for that.\nMoreover, we use a two-step classification with the data achieved by multilinguality and back translation. Along with these data, we also merge our original Bangla train set. Then, we perform two separate streams of classification. At first, instead of direct and passive violence, we convert them as violence and finetune by XLM-R, mBERT, and MuRIL to classify violence and non-violence where XLM-R performs the best. Then we use the same procedure with the same models to classify direct and passive violence from the merged labels of violence where MuRIL performs the best. Following this procedure, we achieve our best macro F1 score of 0.74 for this shared task." }, { "figure_ref": [ "fig_1" ], "heading": "Analysis", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "In terms of text length, the model attains a perfect macro F1 score of 1.000 for texts of 10 words or fewer but struggles with longer texts, evidenced by a macro F1 of only 0.329 for texts of 500-1000 words (Figure 2, Table 3). Though, it maintains respectable F1 scores for text lengths commonly encountered in the dataset, future work should focus on enhancing F1 score for texts with direct violence content. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper we described the nlpBDpatriots approach to the VITD shared task. We evaluated various models on the data provided by the shared task organizers, namely statistical machine learning models, transformer-based models, few shot prompting, and some customization with transformer-based models with multilinguality, back translation, and two-step classification.\nWe show that the two-step classification procedure with multilinguality and back translation is the most successful approach achieving a macro F1 score of 0.74. Our two-step approach towards solving the problem presented for this shared task shows promising results. However, the relatively small size of the dataset made it difficult for the other pre-trained models to learn informative features that would help them perform classification. Also, the dataset contains three imbalanced labels making it easy for the models to overfit. Our approach with data augmentation and two-step classification generates good results, but it is still below one of the three baseline results announced by the organizers prior to the start of the competition." }, { "figure_ref": [], "heading": "Acknowledgment", "publication_ref": [], "table_ref": [], "text": "We would like to thank the VITD shared task organizing for proposing this interesting shared task. We further thank the anonymous reviewers for their valuable feedback." } ]
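To complement the description of the two-step approach above, a minimal inference-time sketch is given below. It is a simplified illustration, not the exact submission pipeline: the checkpoint directories ("xlmr-binary", "muril-violence-type") and the label-id mappings are placeholders for the fine-tuned XLM-R and MuRIL models described in the Two-step Classification subsection.

```python
# Simplified sketch of two-step inference (illustrative; checkpoint paths and label
# mappings are placeholders for the fine-tuned XLM-R and MuRIL models described above).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def predict(texts, model_dir, id2label):
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSequenceClassification.from_pretrained(model_dir).eval()
    enc = tokenizer(texts, padding=True, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        pred_ids = model(**enc).logits.argmax(dim=-1).tolist()
    return [id2label[i] for i in pred_ids]

def two_step_predict(texts):
    # Step 1: binary violence detection with the fine-tuned XLM-R checkpoint.
    step1 = predict(texts, "xlmr-binary", {0: "Non-Violence", 1: "Violent"})
    labels = list(step1)
    # Step 2: only the texts flagged as violent are passed to the fine-tuned MuRIL
    # checkpoint, which separates Passive-Violence from Direct-Violence.
    violent_idx = [i for i, lab in enumerate(step1) if lab == "Violent"]
    if violent_idx:
        step2 = predict([texts[i] for i in violent_idx], "muril-violence-type",
                        {0: "Passive-Violence", 1: "Direct-Violence"})
        for i, lab in zip(violent_idx, step2):
            labels[i] = lab
    return labels
```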
In this paper, we discuss the nlpBDpatriots entry to the shared task on Violence Inciting Text Detection (VITD) organized as part of the first workshop on Bangla Language Processing (BLP) co-located with EMNLP. The aim of this task is to identify and classify violent threats that provoke further unlawful violent acts. Our best-performing approach for the task is a two-step classification using back translation and multilinguality, which ranked 6th out of 27 teams with a macro F1 score of 0.74.
nlpBDpatriots at BLP-2023 Task 1: A Two-Step Classification for Violence Inciting Text Detection in Bangla
[ { "figure_caption": "Figure 1 :1Figure 1: Two-step Classification with Data Augmentation", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Performance analysis based on text length.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Confusion Matrix", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Label-wise data distribution across training, development, and test datasets.", "figure_data": "4 Methodologies", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Dev and test macro F-1 score for all evaluated models and procedures.", "figure_data": "Dev TestLogistic Regression0.55 0.56Support Vector Machine0.61 0.63BanglaBERT0.66 0.67mBERT0.71 0.67MuRIL0.81 0.72XLM-R0.79 0.73BanglaHateBERT0.59 0.60GPT 3.5 Turbo0.46 0.43XLM-R (Self-transfer Learning) 0.79 0.72XLM-R (Multilinguality)0.78 0.72XLM-R (Back Translation)0.77 0.73XLM-R, MuRIL (Two-step)0.84 0.74", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance analysis based on text length.", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" } ]
Nishat Raihan; Dhiman Goswami; Sadiya Sayara Chowdhury Puspo; Marcos Zampieri
[ { "authors": "Mario Aragon; Miguel Angel Carmona; Manuel Montes; Hugo Jair Escalante; Luis Villaseñor-Pineda; Daniela Moctezuma", "journal": "", "ref_id": "b0", "title": "Overview of mex-a3t at iberlef 2019: Authorship and aggressiveness analysis in mexican spanish tweets", "year": "2019" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Édouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b1", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Mithun Das; Somnath Banerjee; Punyajoy Saha; Animesh Mukherjee", "journal": "", "ref_id": "b2", "title": "Hate speech and offensive language detection in bengali", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b3", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "Md Saroar Jahan; Mainul Haque; Nabil Arhab; Mourad Oussalah", "journal": "", "ref_id": "b4", "title": "BanglaHateBERT: BERT for abusive language detection in Bengali", "year": "2022" }, { "authors": "Simran Khanuja; Diksha Bansal; Sarvesh Mehtani; Savya Khosla; Atreyee Dey; Balaji Gopalan; Dilip Kumar Margam; Pooja Aggarwal; Rajiv Teja Nagipogu; Shachi Dave", "journal": "", "ref_id": "b5", "title": "Muril: Multilingual representations for indian languages", "year": "2021" }, { "authors": "Abdullah As Kowsher; Sami; Jahan Nusrat; Mohammad Prottasha; Pranab Shamsul Arefin; Takeshi Kumar Dhar; Koshiba", "journal": "IEEE Access", "ref_id": "b6", "title": "Bangla-bert: transformer-based efficient model for transfer learning and language understanding", "year": "2022" }, { "authors": "Ritesh Kumar; Atul Kr Ojha; Shervin Malmasi; Marcos Zampieri", "journal": "", "ref_id": "b7", "title": "Benchmarking Aggression Identification in Social Media", "year": "2018" }, { "authors": "Ritesh Kumar; Atul Kr; Shervin Ojha; Marcos Malmasi; Zampieri", "journal": "", "ref_id": "b8", "title": "Evaluating aggression identification in social media", "year": "2020" }, { "authors": "Sandip Modha; Thomas Mandl; Kishore Gautam; Hiren Shahi; Shrey Madhu; Tharindu Satapara; Marcos Ranasinghe; Zampieri", "journal": "", "ref_id": "b9", "title": "Overview of the hasoc subtrack at fire 2021: Hate speech and offensive content identification in english and indo-aryan languages and conversational hate speech", "year": "2021" }, { "authors": " Openai", "journal": "", "ref_id": "b10", "title": "Gpt-3.5 turbo fine-tuning and api updates", "year": "2023-08-28" }, { "authors": "Nafisa Nasif Istiak Remon; Ranit Hasan Tuli; Akash Debnath", "journal": "", "ref_id": "b11", "title": "Bengali hate speech detection in public facebook pages", "year": "2022" }, { "authors": "Nauros Romim; Mosahed Ahmed; Md Saiful Islam; Arnab ; Sen Sharma; Hriteshwar Talukder; Mohammad Ruhul Amin", "journal": "", "ref_id": "b12", "title": "Bd-shs: A benchmark dataset for learning to detect online bangla hate speech in different social contexts", "year": "2022" }, { "authors": "Sourav Saha; Jahedul Alam Junaed; Maryam Saleki; Mohamed Rahouti; Nabeel Mohammed; Mohammad Ruhul Amin", "journal": "", "ref_id": "b13", "title": "a. 
Blp-2023 task 1: Violence inciting text detection (vitd)", "year": "2023" }, { "authors": "Sourav Saha; Jahedul Alam Junaed; Maryam Saleki; Sen Arnab; Mohammad Sharma; Mohamed Rashidujjaman Rifat; Rahout; Ishtiaque Syed; Nabeel Ahmed; Mohammad Ruhul Mohammad; Amin", "journal": "", "ref_id": "b14", "title": "Vio-lens: A novel dataset of annotated social network posts leading to different forms of communal violence and its evaluation", "year": "2023" }, { "authors": "Md Anwar; Hussen Wadud; Md ; Abdul Hamid; Muhammad Mostafa Monowar; Atif Alamri", "journal": "Ieee Access", "ref_id": "b15", "title": "Lboost: Identifying offensive texts from social media post in bengali", "year": "2021" }, { "authors": "Cyril Tharindu; Sujan Weerasooriya; Tharindu Dutta; Marcos Ranasinghe; Christopher M Zampieri; Ashiqur R Homan; Khudabukhsh", "journal": "", "ref_id": "b16", "title": "Vicarious offense and noise audit of offensive speech classifiers: Unifying human and machine disagreement on what is offensive", "year": "2023" }, { "authors": "Marcos Zampieri; Shervin Malmasi; Preslav Nakov; Sara Rosenthal; Noura Farra; Ritesh Kumar", "journal": "", "ref_id": "b17", "title": "SemEval-2019 task 6: Identifying and categorizing offensive language in social media (Offen-sEval)", "year": "2019" }, { "authors": "Marcos Zampieri; Preslav Nakov; Sara Rosenthal; Pepa Atanasova; Georgi Karadzhov; Hamdy Mubarak; Leon Derczynski; Zeses Pitenis; Çagrı Çöltekin", "journal": "", "ref_id": "b18", "title": "SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (Offen-sEval 2020)", "year": "2020" }, { "authors": "Zia Haris Bin; Ignacio Castro; Arkaitz Zubiaga; Gareth Tyson", "journal": "", "ref_id": "b19", "title": "Improving zero-shot crosslingual hate speech detection with pseudo-label finetuning of transformer language models", "year": "2022" } ]
[]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b21", "b17" ], "table_ref": [], "text": "NLP has become a major domain of modern computational research, offering a lot of applications from machine translation to chatbots. However, much of this research has been concentrated on English and other high-resource languages like French, German, and Spanish.\nBangla, despite being the seventh most spoken language in the world with approximately 273 million speakers (Ethnologue, 2023), has not received similar attention from the NLP community. This gulf is not just an academic oversight; it has realworld implications. Bangla is a language of significant cultural heritage and economic activity. The development of NLP technologies for Bangla is both a scientific necessity and a practical imperative. The limited availability of Bangla NLP resources has led to a reliance on traditional machine learning techniques like SVMs and Naive Bayes classifiers for classification tasks such as sentiment analysis. The advent of deep learning models has opened new avenues. Models like BERT (Devlin *These three authors contributed equally to this work. WARNING: This paper contains examples that are offensive in nature. et al., 2019) have shown promising results in languages other than English and has been recently trained to support Bangla (Kowsher et al., 2022).\nSentiment analysis is increasingly becoming a vital tool for understanding public opinion and people's behavior (Rosenthal et al., 2017). It has found applications in various sectors, including finance, where it helps investors to leverage social media data for better investment decisions (Mishev et al., 2020). In the context of Bangla, the utility of sentiment analysis extends beyond mere academic interest. It can serve as a powerful tool for businesses to gauge customer satisfaction, for policymakers to understand public sentiment, and even for social scientists studying behavioral trends.\nIn this paper, we evaluate several models and implement transfer learning for the shared task on Sentiment Analysis of Bangla Social Media Posts organized at the first workshop on Bangla Language Processing (BLP-2023) (Hasan et al., 2023a). Moreover, an ensemble model consisting of three transformer-based models generates a superior performance over the other approaches." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b24", "b22", "b18", "b1", "b6", "b11", "b23", "b2", "b14" ], "table_ref": [], "text": "Initiating Sentiment Analysis in Bangla Sentiment analysis, which was mainly focused on English (e.g. Yadav andVishwakarma 2020, Saberi andSaad 2017), is now becoming popular in other low resource languages like Urdu (e.g. Noor et al. 2019, Muhammad andBurney 2023), Pashto (e.g. Iqbal et al. 2022, Kamal et al., Kamal et al.), Bangla (e.g. Islam et al. 2020, Akter et al. 2021). Researchers are actively working to improve how people analyze and modify Bangla online comments using different methods and datasets. They are doing a variety of tasks, from classifying documents to mining opinions and analyzing sentiment, all while adapting their techniques to the specifics of the Bangla language. For example, for document classification, Rahman et al. ( 2020) presented an approach using the transformer-based models BERT and ELECTRA with transfer learning. The models were fine-tuned on three Bangla datasets. Similarly, Rahman et al. 
( 2020) explored characterlevel deep learning models for Bangla text classification, testing Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) models. On the other hand, for opinion mining, Haque et al. (2019) analyzed Bangla and Phonetic Bangla restaurant reviews using machine learning on a dataset of 1500 reviews. SVM achieved the highest accuracy of 75.58%, outperforming prior models. Islam et al. (2020) presented two new Bangla sentiment analysis datasets which achieved state-of-theart results with multi-lingual BERT (71% accuracy for 2-class, 60% for 3-class), and notes sentiment differences in newspaper comments. Tuhin et al. (2019) proposed two Bangla sentiment analysis methods: Naive Bayes and a topical approach, aiming at six emotions, which achieved over 90% accuracy for sentence-level emotion classification, outperforming Naive Bayes. Similarly, Al Kaiser et al. (2021) discussed research focused on sentiment analysis and hate speech detection in Bangla language Facebook comments; compiling a dataset of over 11,000 comments, categorized by polarity (positive, negative, neutral) and various sentiment types, including gender-based hate speech. Furthermore, there are researches conducted on sentiment analysis in the field of online Bangla reviews. For example, Khan et al. (2020) detected depression in Bangla social media using sentiment analysis. They preprocessed a small dataset and employed machine learning classifiers, but faced limitations due to the dataset's size and basic classifiers." }, { "figure_ref": [], "heading": "Advancements of Sentiment Analysis in Bangla", "publication_ref": [ "b3", "b8" ], "table_ref": [], "text": "Akter et al. ( 2021) used machine learning for Bangla e-commerce review sentiment analysis, with KNN achieving 96.25% accuracy, outperforming other classifiers. This highlighted machine learning's potential in analyzing Bangla ecommerce reviews. Whereas, Banik and Rahman (2018) introduced a Bangla movie review sentiment analysis system using 800 annotated social media reviews. (Hasan et al., 2023b) introduced a significant dataset of 33,605 manually annotated Bangla social media posts and examined how different language models perform in zero-and few-shot learning situations. Thus, the research of sentiment analysis is continuously growing, and it's helping us better understand sentiment in Bangla online content." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b12" ], "table_ref": [ "tab_0" ], "text": "The dataset provided for the shared task (Hasan et al., 2023a), consists of a training set, a development set, and a blind test set. For each set, the texts have been annotated using three labels -'Positive', 'Neutral', or 'Negative' (Islam et al., 2021). The label distribution for each set is provided in Table 1." }, { "figure_ref": [], "heading": "Label", "publication_ref": [], "table_ref": [], "text": "Train Dev Test Positive 35% 35% 31% Neutral 20% 20% 19% Negative 45% 45% 50% The dataset is imbalanced across the labels, hence it is challenging for the models to learn well." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We conduct a wide range of experiments with several models and data augmentation strategies. Our experiments include statistical models, transformer-based models; data augmentation strategies like back-translation, multilinguality and also prompting proprietary LLMs." 
}, { "figure_ref": [ "fig_0" ], "heading": "Statistical ML Classifiers", "publication_ref": [ "b16", "b5", "b4", "b15", "b9" ], "table_ref": [ "tab_2", "tab_1" ], "text": "In our experiments, we use statistical machine learning models like Logistic Regression and Support Vector Machine using TF-IDF vectors. We implement both models and some hyperparameter tuning. While SVM performs better with a 0.55 F1 score (Micro) the overall results do not improve much.\nTransformers We also test several transformerbased models which are pre-trained on Bangla data. Our initial experiments include Bangla-BERT (Kowsher et al., 2022) which is only pre-trained on bangla corpus. We finetune the model on the train set and evaluate it on the dev set with empirical hyperparameter tuning. We get 0.64 as the best micro F1 using Bangla-BERT. We then use multilingual transformer models like multilingual-BERT (Devlin et al., 2019) and xlm-roBERTa (Conneau et al., 2020), which are pre-trained on 104 and 100 different languages respectively, including Bangla. We also do the same hyperparameter tuning with both models. While mBERT gets a 0.60 Micro F1 score, xlm-roBERTa does better with 0.71 on the dev set and 0.70 on the test set. Lastly, we use MuRIL (Khanuja et al., 2021), another transformer pre-trained in 17 Indian languages including Bangla. It has a test micro F1 score of 0.67. While experimenting with these models, we observe the losses while fine-tuning to make sure the models do not overfit.\nPrompting Next, we try prompting with gpt-3.5turbo model (OpenAI, 2023) from OpenAI for this classification task. We use the API to prompt the model, while providing a few examples for each label and ask the model to label the dev and test set. The model does not do well with a micro F1 of 0.57 on the dev and 0.51 on the test set.\nTransfer Learning on Augmented Data Finally, we augment the data of the Bangla YouTube Sentiment and Emotion dataset by Hoq et al. (2021). The dataset has highly positive (2), positive (1), neutral (0), negative (-1) and highly negative (-2) labels. We merge the highly positive and positive labels to Positive, negative and highly negative labels to Negative and keep the neutral label unchanged. This is how we get three labels out of five and merge it with our train data. Following this procedure, we get 0.71 micro F1 score for test dataset.\nEnsemble After finding the results of transformer-based models, we perform an ensemble approach on BanglaBERT, MuRIL, and XLM-R. We then find the weighted average confidence of these three models. For Negative, the confidence interval is fixed 0.0 -0.33, for Neutral between 0.33 to 0.66 exclusive and for Positive 0.66 -1.0. The weights are their corresponding test F1 scores found in Table 3. With that confidence interval, we predict the test labels. We get a 0.72 micro F1 score by this approach. However this result is not reported to the shared task test phase as we get this result by additional experiments. The detailed label prediction procedure is given in Table 2 and the workflow of the whole ensemble method is given in Figure 1. For the first instance, the example is indeed Neutral but BanglaBERT predicts it borderline Negative and XLM-R predicts it Positive. But the power of ensemble approach bring it to the confidence interval of Neutral and thus predicts the label correctly. Similarly, for the second one, a corrected Neutral label is predicted from a Negative, Neutral and borderline Positive confidence. 
For the last two cases, Negative and Positive labels are determined correctly even in the presence of two Neutral confidence scores." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [ "b9" ], "table_ref": [ "tab_2", "tab_2" ], "text": "At the start of the shared task competition, three baseline micro F1 scores are provided by the organizers: 0.34 for random selection, 0.50 for majority selection, and 0.55 for an n-gram baseline. The results of different models are given in Table 3.\nAmongst the statistical machine learning models, we use logistic regression and a support vector machine. Logistic regression achieves a micro F1 score of 0.45, and the support vector machine achieves 0.55.\nFor transformer-based models, we use mBERT, BanglaBERT, MuRIL, and XLM-R, where we obtain the best F1 score of 0.70 with XLM-R.\nA few-shot learning procedure is applied using GPT-3.5 Turbo. We give a few instances of each label in the prompt and obtain an F1 of 0.51, which is significantly lower than our other attempted approaches except logistic regression. This suggests that GPT-3.5 is not yet effective enough for downstream Bangla classification problems such as this shared task.\nMoreover, we augment our data with the Bangla YouTube Sentiment and Emotion dataset by Hoq et al. (2021). The dataset has highly positive and positive labels, which we treat as Positive, and negative and highly negative labels, which we treat as Negative. We keep the neutral label unchanged. In this way we obtain three labels out of five and merge the result with our training data. Following this procedure, we achieve a micro F1 score of 0.71, which we submit to this shared task's leaderboard.\nAdditionally, we perform an ensemble over BanglaBERT, MuRIL, and XLM-R. Instead of majority voting on the predicted test labels, we compute a weighted average of the confidence scores of the three transformer-based models for each test instance, using as weights their test micro F1 scores shown in Table 3 " }, { "figure_ref": [], "heading": "Error Analysis", "publication_ref": [], "table_ref": [], "text": "The classification report provides a comprehensive understanding of our model's performance across the three classes. The overall accuracy of the model is 0.71. The 'Positive' class has the highest F1-score of 0.78, driven by a precision of 0.75 and a recall of 0.80. The 'Neutral' class, on the other hand, shows a relatively weaker performance with an F1-score of 0.42, a result of its lower precision and recall, 0.51 and 0.37 respectively. The 'Negative' class offers a competitive performance with an F1-score of 0.74, a precision of 0.72, and a recall of 0.76. On a macro level, the averages indicate a precision of 0.66, a recall of 0.64, and an F1-score of 0.65. When weighted by support, the averages show a slightly better picture, with precision at 0.69, recall identical to the overall accuracy at 0.71, and an F1-score of 0.70. Further dissecting the errors by text length offers more insights. Texts with lengths in the range of 50 to 100 characters contribute the most to the dataset, constituting 43.73% of the samples, and have an F1-score of 0.74. The second largest group, texts ranging from 20 to 50 characters, contributes 26.64% of the dataset with a slightly lower F1-score of 0.70. It is also worth noting that performance drops sharply for texts with lengths between 500 and 1000 characters, yielding the lowest F1-score of 0.39, albeit they only make up 0.73% of the samples. 
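To make the weighted-confidence ensemble of BanglaBERT, MuRIL, and XLM-R concrete, the Python sketch below uses the test micro F1 scores from Table 3 as weights and the fixed intervals Negative [0, 0.33), Neutral [0.33, 0.66), Positive [0.66, 1.0]. How each model's three-way output is collapsed into a single confidence value is not fully specified above, so the expectation over anchor scores used here (Negative = 0, Neutral = 0.5, Positive = 1) is an assumption for illustration only.

import numpy as np

# Per-model test micro-F1 scores from Table 3, used as ensemble weights.
weights = {"banglabert": 0.64, "muril": 0.67, "xlmr": 0.70}

# Assumed anchor points that collapse a 3-way softmax into one scalar in [0, 1].
anchors = np.array([0.0, 0.5, 1.0])  # Negative, Neutral, Positive

def scalar_confidence(probs):
    # probs: softmax over (Negative, Neutral, Positive) from one model.
    return float(np.dot(probs, anchors))

def ensemble_label(model_probs):
    # model_probs: dict model_name -> softmax vector for one test instance.
    total_w = sum(weights[m] for m in model_probs)
    score = sum(weights[m] * scalar_confidence(p)
                for m, p in model_probs.items()) / total_w
    if score < 0.33:
        return "Negative"
    elif score < 0.66:
        return "Neutral"
    return "Positive"

# Example: BanglaBERT leans Negative, XLM-R leans Positive, MuRIL is unsure;
# the weighted score lands in the Neutral band, as in the first row of Table 2.
example = {
    "banglabert": np.array([0.60, 0.25, 0.15]),
    "muril":      np.array([0.20, 0.55, 0.25]),
    "xlmr":       np.array([0.15, 0.25, 0.60]),
}
print(ensemble_label(example))  # -> Neutral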
" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this shared task, we use statistical machine learning models, transformer-based models, a few shot prompting, some customization with transformer- based models with transfer learning, data augmentation, and an ensemble-based approach. The transfer learning and data augmentation procedure is reported as the most successful approach in terms of a micro F1 score of 0.71. But additional experiments by doing an ensemble over three transformer-based models provide a 0.72 F1 score. Overall, this paper can be treated as a holistic experimental outcome for this shared task.\nOur transfer learning approach towards solving the problem presented for this shared task shows promising results. However, in most cases, our models keep overfitting. We use dropouts and weight decaying to handle the issue. Even though we perform a lot of hyper-parameter tuning with all the models, it might still be the case that we are not able to find the optimal set of parameters for a few models in our experiments." }, { "figure_ref": [], "heading": "Acknowledgment", "publication_ref": [], "table_ref": [], "text": "We would like to thank the shared task organizing for proposing this interesting shared task. We further thank the anonymous workshop reviewers for their valuable feedback." } ]
In this paper, we discuss the nlpBDpatriots entry to the shared task on Sentiment Analysis of Bangla Social Media Posts organized at the first workshop on Bangla Language Processing (BLP) co-located with EMNLP. The main objective of this task is to identify the polarity of social media content using a Bangla dataset annotated with positive, neutral, and negative labels provided by the shared task organizers. Our best system for this task is a transfer learning approach with data augmentation, which achieved a micro F1 score of 0.71. Our best system ranked 12th among the 30 teams that participated in the competition.
nlpBDpatriots at BLP-2023 Task 2: A Transfer Learning Approach to Bangla Sentiment Analysis
[ { "figure_caption": "Figure 1 :1Figure 1: Workflow of the Ensemble Model", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Models test Micro-F1 score in percentage.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Performance analysis based on text length.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "FigureFigure 4: Confusion Matrix", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Distribution of instances and labels across training, development, and test sets.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ensemble with Three Transformer Based Models based on Confidence Score", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": ". With that confidence interval, test labels are predicted with 0.72 F1 score which is the best among all our experiments. A comparison bar chart for different models' performance is shown Dev and test micro F-1 score for different models and procedures.", "figure_data": "ModelsDev TestLogistic Regression0.47 0.45Support Vector Machine0.56 0.55mBERT0.60 0.60BanglaBERT0.66 0.64MuRIL0.70 0.67XLM-R0.71 0.70GPT 3.5 Turbo0.57 0.51XLM-R (Transfer Learningon Augmented data)0.71 0.71Ensemble-0.72", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Performance analysis based on text length.", "figure_data": "Text_Length Micro_F1 Count%(0, 10]0.67691.03(10, 20]0.642503.73(20, 50]0.701787 26.64(50, 100]0.742933 43.73(100, 200]0.691288 19.20(200, 300]0.642023.01(300, 500]0.591191.77(500, 1000]0.39490.73(1000, 5000]0.80100.15", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" } ]
Dhiman Goswami; Nishat Raihan; Sadiya Sayara; Chowdhury Puspo; Marcos Zampieri
[ { "authors": "", "journal": "Mst", "ref_id": "b0", "title": "", "year": "" }, { "authors": "Tuhin Akter; Manoara Begum; Rashed Mustafa", "journal": "IEEE", "ref_id": "b1", "title": "Bengali sentiment analysis of e-commerce product reviews using k-nearest neighbors", "year": "2021" }, { "authors": "Shad Al Kaiser; Sudipta Mandal; Ashraful Kalam Abid; Ekhfa Hossain; Ferdous Bin Ali; Intisar Tahmid Naheen", "journal": "IEEE", "ref_id": "b2", "title": "Social media opinion mining based on bangla public post of facebook", "year": "2021" }, { "authors": "Nayan Banik; Md Hasan; Hafizur Rahman", "journal": "IEEE", "ref_id": "b3", "title": "Evaluation of naïve bayes and support vector machines on bangla textual movie reviews", "year": "2018" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Édouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b4", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b5", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "Fabliha Haque; Md Motaleb Hossen; Manik; Hashem", "journal": "IEEE", "ref_id": "b6", "title": "Opinion mining from bangla and phonetic bangla reviews using vectorization methods", "year": "2019" }, { "authors": " Arid Md; Firoj Hasan; Anika Alam; Shudipta Anjum; Afiyat Das; Anjum", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Blp-2023 task 2: Sentiment analysis", "year": "2023" }, { "authors": " Arid Md; Shudipta Hasan; Afiyat Das; Firoj Anjum; Anika Alam; Avijit Anjum; Sheak Sarker; Haider Rashed; Noori", "journal": "", "ref_id": "b8", "title": "Zero-and few-shot prompting with llms: A comparative study with finetuned models for bangla sentiment analysis", "year": "2023" }, { "authors": "Muntasir Hoq; Promila Haque; Mohammed Nazim Uddin", "journal": "Springer", "ref_id": "b9", "title": "Sentiment analysis of bangla language using deep learning approaches", "year": "2021" }, { "authors": "Saqib Iqbal; Farhad Khan; Hikmat Ullah Khan; Tassawar Iqbal; Jamal Hussain Shah", "journal": "Journal of Internet Technology", "ref_id": "b10", "title": "Sentiment analysis of social media content in pashto language using deep learning algorithms", "year": "2022" }, { "authors": "Ittehadul Khondoker; Md Islam; Md Ruhul Saiful Islam; Amin", "journal": "IEEE", "ref_id": "b11", "title": "Sentiment analysis in bengali via transfer learning using multi-lingual bert", "year": "2020" }, { "authors": "Ittehadul Khondoker; Sudipta Islam; Md Kar; Mohammad Ruhul Saiful Islam; Amin", "journal": "", "ref_id": "b12", "title": "SentNoB: A dataset for analysing sentiment on noisy Bangla texts", "year": "2021" }, { "authors": "Uzair Kamal; Imran Siddiqi; Hammad Afzal; Arif Ur Rahman", "journal": "", "ref_id": "b13", "title": "Pashto sentiment analysis using lexical features", "year": "2016" }, { "authors": "Md Rafidul Hasan Khan; Umme Sunzida Afroz; Abu Kaisar; Mohammad Masum; Sheikh Abujar; Syed Akhter Hossain", "journal": "IEEE", "ref_id": "b14", "title": "Sentiment analysis from bengali depression dataset using machine learning", "year": "2020" }, { "authors": "Simran Khanuja; Diksha Bansal; Sarvesh Mehtani; Savya Khosla; Atreyee Dey; Balaji Gopalan; Dilip Kumar Margam; Pooja Aggarwal; Rajiv Teja Nagipogu; Shachi Dave", 
"journal": "", "ref_id": "b15", "title": "Muril: Multilingual representations for indian languages", "year": "2021" }, { "authors": "Abdullah As Kowsher; Sami; Jahan Nusrat; Mohammad Prottasha; Pranab Shamsul Arefin; Takeshi Kumar Dhar; Koshiba", "journal": "IEEE Access", "ref_id": "b16", "title": "Bangla-bert: transformer-based efficient model for transfer learning and language understanding", "year": "2022" }, { "authors": "Kostadin Mishev; Ana Gjorgjevikj; Irena Vodenska; T Lubomir; Dimitar Chitkushev; Trajanov", "journal": "IEEE access", "ref_id": "b17", "title": "Evaluation of sentiment analysis in finance: from lexicons to transformers", "year": "2020" }, { "authors": "Khalid Bin; Muhammad ; Sm Aqil Burney", "journal": "Symmetry", "ref_id": "b18", "title": "Innovations in urdu sentiment analysis using machine and deep learning techniques for two-class classification of symmetric datasets", "year": "2023" }, { "authors": "Faiza Noor; Maheen Bakhtyar; Junaid Baber", "journal": "Springer. OpenAI", "ref_id": "b19", "title": "Sentiment analysis in e-commerce using svm on roman urdu text", "year": "2019-08-19" }, { "authors": "Md Md Mahbubur Rahman; Rifat Aktaruzzaman Pramanik; Monikrishna Sadik; Partha Roy; Chakraborty", "journal": "IEEE", "ref_id": "b20", "title": "Bangla documents classification using transformer based deep learning models", "year": "2020" }, { "authors": "Sara Rosenthal; Noura Farra; Preslav Nakov", "journal": "", "ref_id": "b21", "title": "Semeval-2017 task 4: Sentiment analysis in twitter", "year": "2017" }, { "authors": "Bilal Saberi; Saidah Saad", "journal": "Int. J. Adv. Sci. Eng. Inf. Technol", "ref_id": "b22", "title": "Sentiment analysis or opinion mining: A review", "year": "2017" }, { "authors": "Rashedul Amin Tuhin; Bechitra Kumar Paul; Faria Nawrine; Mahbuba Akter; Amit Kumar Das", "journal": "IEEE", "ref_id": "b23", "title": "An automated system of sentiment analysis from bangla text using supervised learning techniques", "year": "2019" }, { "authors": "Ashima Yadav; Dinesh Kumar; Vishwakarma ", "journal": "Artificial Intelligence Review", "ref_id": "b24", "title": "Sentiment analysis using deep learning architectures: a review", "year": "2020" } ]
[]
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b0", "b5", "b6", "b8" ], "table_ref": [], "text": "Synchrotron radiation micro-computed tomography (micro-CT) is an imaging technique that produces high-resolution three-dimensional (3D) images non-destructively. These images are composed of two-dimensional (2D) trans-axial projections of an object [1,2]. This enables the examination of intricate biological and synthetic materials with submicron resolution [3]. At the KIT Imaging Cluster, the experiments of the NOVA project employ hard X-rays to examine arthropods and deposit them in an extensive storage system with datasets of several gigabytes in magnitude [4,5]. However, using traditional tools to navigate the vast data collection is laborious. Although a single dataset may not meet the big data criteria, the cumulative effect of thousands of datasets is overwhelming scientists' capacity to browse and interact with them in real time. Given the unprecedented rate at which synchrotron radiation facilities produce data, the need for an efficient visual exploration system has become more pertinent and pivotal than ever before.\nA tomographic workflow starts with the X-ray intensity projections of the samples being recorded continuously at various angles and hence produces a sequence of images in the sinogram domain. These images are then reconstructed into volumetric data by using algorithms such as filtered back projections [1,6]. In the post data acquisition stage, the final volumetric data will be used by scientists to perform further analysis. Particularly, the analysis of volumetric biological imaging data often requires isolating individual structures from the volumetric data by segmentation [7]- [9]. This research represents the first comprehensive study to produce lowlatency arthropod visual previews during data acquisition.\nBy generating visual previews of micro-CT datasets during data acquisition, scientists can quickly identify the type of arthropods visually without manual dataset labeling, thus facilitating the process of data identification. Therefore, an effective data browsing platform must resolve two primary challenges: perceptual scalability and data responsiveness. Perceptual scalability alludes to scientists' ability to recognize the sample, post executing data reduction techniques. More precisely, the smaller datasets must conserve the geometric structure of the arthropods. To ensure data responsiveness, it" }, { "figure_ref": [], "heading": "List Preview", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Data Preview", "publication_ref": [ "b9", "b10" ], "table_ref": [], "text": "Interactive Preview \n→ → → Data Preview → → → Interactive Preview → → →\nis necessary that the datasets are available to users in realtime, regardless of their hardware requirements. Hence, we need to achieve a minimal latency between the server and the client. Two methods are available: first, the dataset can be rendered on the server-side, and then the resulting image can be transmitted to the client. Second, the server-side dataset can be reduced and then transmitted as a reduced volumetric dataset to the client [10,11]. The first method requires high hardware capability on the server-side, whereas the second approach would delegate the volume rendering responsibility to the client. 
This paper will detail the implementation of data processing techniques that allow for a reduction in data size while retaining the geometrical structure of arthropods. The primary contributions we offer are:\n(1.) Given the vast array of visual representations proposed in the literature, we first analyze the characteristics of visual outputs in a traditional data exploration system, leading us to identify three distinct visual outputs (Section 2).\n(2.) We present data reduction methods that are based on the three visual outputs that can reduce the size from gigabytes to megabytes. A significant emphasis was placed on maintaining the geometry of the arthropods in the presented methods (Section 3). Notably, the ITS method, the optimal image approach and the histogram filtering introduce innovative approaches that are unique within this research domain.\n(3.) Our assessment of the different methods mentioned encompasses both visual and analytical approaches (Section 4)." }, { "figure_ref": [ "fig_0" ], "heading": "II. DESIGN CONSIDERATIONS OF VISUAL PREVIEWS", "publication_ref": [ "b11", "b9" ], "table_ref": [], "text": "Using visual previews enables domain experts to narrow down and identify relevant data they are interested in. The preview terminology denotes a reduced version of the initial data that keeps its geometrical information [12]. To search a specific dataset, skilled data seekers were reported to follow the Visual Information Seeking Mantra, which recommends starting with an overview, then zooming in and filtering, and finally requesting details only when necessary. This data searching pattern is deemed to be the sure path to discovery.\nOur adoption of this concept results in three distinct perspectives, with the list view showing an overview of all datasets, the data view detailing the chosen dataset, and the interactive view providing a comprehensive visualization of the dataset (refer to Figure 1). The final visual outputs are denoted by the red boxes. Below, we outline each design consideration for every preview.\nList Preview. The resultant visual output is a thumbnail image representing the dataset. Considering the arthropod's three-dimensional nature, what is the most optimal method for generating a two-dimensional image that accurately represents the arthropod? What is the optimal viewing angle for creating an image snapshot? What is the metric that could automate the process of image generation? Through the answering of these questions, we shall produce a diminutive two-dimensional image that portrays the contour of the arthropods.\nData Preview. The data preview offers a thorough representation of the designated data, featuring an enlarged visual depiction. This preview should have more information than the list preview while having a rather small data size. One idea that could be interesting is to retain the outer geometry information while discarding the internal volume, resulting in a hollow dataset. Potentially, we could include object movement to augment object recognition.\nInteractive Preview. The entire volumetric data will be loaded when using the interactive preview. In order to attain a visualization response in real-time, a preliminary dataset featuring a crude resolution is initially loaded, which is then followed by the primary dataset [10]. The characteristic enables users to choose a region of interest for further examination." }, { "figure_ref": [], "heading": "III. 
METHODS", "publication_ref": [ "b9", "b12" ], "table_ref": [ "tab_0" ], "text": "Within this section, we explore approaches to reduce the primary dataset while complying with the design considerations stated in Section II. An overview of the data processing methods employed to generate the final visual output of the previews is presented in Table I. The reconstructed micro-CT dataset constitutes the initial state, where 2D images are stacked together to create a volumetric series. Henceforth, we will refer to them as slices. Prior to starting the data processing pipeline for every visual preview, the slices are converted into a 3D object representation that will be subjected to data reduction operations. Within our context, this transformation is performed by converting the slices into either slicemaps [10,13] or a 3D file format, such as the OBJ format. Subsequently, the data will undergo data reduction processes before being transferred to the final visual outputs." }, { "figure_ref": [], "heading": "A. Thresholding", "publication_ref": [ "b13", "b14" ], "table_ref": [], "text": "The goal is to isolate specific regions of interest within the 3D volume, such as different tissue types, voids, or materials. In our context, we want to extract the arthropods from the surroundings. Thresholding helps to distinguish between these regions by applying a binary classification to each voxel (3D pixel) based on its X-ray attenuation value. Tan Jerome et al. demonstrated the use of a real-time local noise filter and Otsu thresholding to reduce over-thresholding [14]. This filter is integrated into the GPU shader and will be used throughout our work. Therefore, we will use the Otsu thresholding technique to determine the optimal threshold value. Additionally, a novel algorithm for threshold selection, named iterative threshold selection (ITS), has been formulated using a greedy algorithm approach. Below, we describe each of these algorithms.\nOtsu thresholding. The Otsu thresholding technique executes a conceptual sweep line to determine the most suitable threshold based on a criterion function that measures the statistical separation between the foreground and background classes. The criterion function minimizes the weighted within-class variance (Equation 1), which is equivalent to maximizing the between-class variance.\nσ_ω²(T) = ω_0(T) σ_0²(T) + ω_1(T) σ_1²(T), (1)\nwhere ω_0 and ω_1 are the weights representing the probabilities of the two classes separated by the threshold T, and σ_0² and σ_1² are the variances of the two classes. The Otsu threshold is used to establish the lower limit of the intensity range (Algorithm 1). Starting from the initial grey value, T_otsu = 0, the histogram is partitioned into two regions, and the threshold that optimizes the between-class variance is chosen.\nAlgorithm 1: Otsu thresholding. T_otsu ← 0, σ_otsu ← 0; for T_sweep from 0...255 do: R_a ← histogram[0:T_sweep]; R_b ← histogram[T_sweep:255]; W_a ← density(R_a), W_b ← density(R_b); U_a ← Mean(R_a), U_b ← Mean(R_b); σ ← W_a × W_b × (U_a − U_b)²; if σ > σ_otsu then σ_otsu ← σ, T_otsu ← T_sweep; end; end.\nIterative threshold selection (ITS). The iterative threshold selection (ITS) method bears resemblance to greedy algorithms [15]. It aims to achieve a global minimum by computing the average intensities of the foreground and background clusters. 
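Before turning to the details of ITS, the Otsu sweep of Algorithm 1 can be sketched in a few lines of Python over a 256-bin histogram; this is a minimal NumPy version for illustration, not the GPU-shader implementation used in the pipeline.

import numpy as np

def otsu_threshold(histogram):
    # Sweep all 8-bit grey values and keep the threshold that maximises the
    # between-class variance W_a * W_b * (U_a - U_b)^2, as in Algorithm 1.
    hist = histogram.astype(np.float64)
    total = hist.sum()
    best_t, best_sigma = 0, 0.0
    for t in range(1, 256):
        w_a = hist[:t].sum() / total            # class densities
        w_b = hist[t:].sum() / total
        if w_a == 0 or w_b == 0:
            continue
        u_a = np.dot(np.arange(t), hist[:t]) / hist[:t].sum()        # class means
        u_b = np.dot(np.arange(t, 256), hist[t:]) / hist[t:].sum()
        sigma = w_a * w_b * (u_a - u_b) ** 2    # between-class variance
        if sigma > best_sigma:
            best_sigma, best_t = sigma, t
    return best_t

# Usage on one 8-bit slice (values in 0..255, 256 histogram bins assumed):
# hist, _ = np.histogram(slice_2d, bins=256, range=(0, 256))
# foreground = slice_2d >= otsu_threshold(hist)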
The ITS method comprises five sequential stages, in which the second till fourth steps are reiterated until a threshold value converges (refer to Algorithm 2 for details). In the first step, the algorithm selects a starting threshold, represented by T it from the interval of [0,255]. As the histogram distribution is concentrated in the middle of the dynamic range, the middle threshold can be considered a suitable initial point. Next, the histogram is partitioned into two regions, namely R 1 and R 2 , determined by the chosen threshold T it (Step 2). In the third step, the mean intensity values for each region, denoted as µ 1 and µ 2 , are computed. The fourth step requires an update to the threshold, which is determined by calculating the average of both mean intensities as T it = (µ 1 + µ 2 )/2. Finally, Steps 2 through 4 are reiterated until the mean values µ 1 and µ 2 remain constant in consecutive iterations (Step 5).\nT it ← T start ; r ← T start ; // T it converges if r = 0 while r ̸ = 0 do R a ← histogram[0:T it ]; R b ← histogram[T it :255]; µ 1 ← MeanIntensity(R a ); µ 2 ← MeanIntensity(R b ); T tmp ← (µ 1 + µ 2 )/2; r ← |r -T tmp |; T it ← T tmp ; end Algorithm 2: Iterative threshold selection (ITS)" }, { "figure_ref": [], "heading": "B. Container Removal", "publication_ref": [ "b15" ], "table_ref": [], "text": "At the data acquisition stage, a cylindrical container made of 3D printing is used to house the biological samples. Therefore, the geometry of the container is scanned along with the collected data. An approach that can be feasible is to define the geometry of the container holding the sample and eliminate it from the data. Despite the uncomplicated cylindrical geometry of the sample container, the main obstacle lies in accurately determining the position and radius of said geometry. As a response, we make use of the Hough Circle Transform [16] to recognize a circle from the top slice image. As the samples are at the base of the container, we assume that the aforementioned images delineate the geometry information. Lastly, we discard the information of the image that are outside of the determined circle, and apply this operation to every image slice." }, { "figure_ref": [], "heading": "C. Server-side Rendering (Optimal Image Snapshot)", "publication_ref": [ "b16" ], "table_ref": [], "text": "Within this section, our proposed method will be presented to identify the most optimal viewpoint of a 3D dataset. This will be achieved by projecting the final 3D view space into a 2D image. The term \"optimal\" pertains to the perspective that encompasses the highest amount of information in a two-dimensional image. The method is advantageous because of the high-quality of the rendered image, which is directly derived from the raw 3D data.\nIn order to determine the optimal 2D image, we employ the Shannon entropy criterion [17] as our comparison metric, given that the Shannon entropy indicates the image with the most information. The Shannon entropy, denoted as H, provides a statistical measure of randomness that is used to characterize the texture of the rendered image formally.\nH = - m-1 i=0 p i log 2 (p i ) ,(2)\nwhere p i contains the normalised histogram counts of the image. Greater entropy value implies increased information content in the generated image, enabling domain experts to identify the data more accurately." }, { "figure_ref": [ "fig_1" ], "heading": "D. 
Histogram Filtering", "publication_ref": [], "table_ref": [], "text": "In order to streamline the data processing pipeline for the data preview, it is possible to decrease the dataset size by discarding the internal volume and retaining solely the external geometry data. The methodology postulates that the grey values of the surrounding artefacts, which may be air or the sample container, are represented by the top 3 slices of the volume. The validity of the assumption can be asserted since the object occupies only the lower portion of the sample container and does not fill up the entire space within it. From this point forward, we dispose the histogram bins containing undesirable grey values. The procedure of removing unwanted bins from the original data histogram is illustrated in Figure 2. Consequently, only the sample characteristics were retained, albeit at the expense of finer details. The outcome indicated a surface that lacked intricate details and appeared rough. " }, { "figure_ref": [], "heading": "IV. EVALUATIONS AND DISCUSSIONS", "publication_ref": [], "table_ref": [], "text": "It can be observed from the three distinct visual previews that each data processing pipeline has its own design considerations, leading to differing approaches. Our attention is directed towards the two foremost challenges, namely perceptual scalability and data responsiveness. To address these challenges, we must decrease the dataset size to maintain data responsiveness while still achieving arthropod recognition. In this section, we will examine the visual outputs post the application of the data reduction method." }, { "figure_ref": [ "fig_2" ], "heading": "A. Comparing Otsu and ITS Thresholding Approaches", "publication_ref": [], "table_ref": [ "tab_1", "tab_1", "tab_1", "tab_1", "tab_1", "tab_1", "tab_1" ], "text": "The fundamental variation between the two approaches resides in the number of iterations. Regarding Otsu thresholding, the algorithm consistently carries out a comprehensive scan throughout the entire dynamic range, from (0 to 255 range) for an 8-bit data. On the contrary, the ITS method ceases its operation after discovering the global minimum, which frequently entails a significantly reduced number of iterations. The comparison of the two thresholding approaches has been conducted using the datasets described in Table II.\nThe resulting visual outcomes are presented in Figure 3. The Otsu and ITS methods are providing identical threshold values for the first three datasets (Table II: A, B, and C), thus emphasizing the shape of the samples. Despite this, the final two datasets (Table II II. and ITS methods were able to extract sample shapes more effectively after omitting sample container details from the histogram. However, the Pseudoscorpion dataset (Table II: D) remains a difficult task for the ITS method, while the Otsu approach exhibits robustness, even when confronted with a narrow histogram distribution. Considering the current dynamic range of the data, which is limited to 8-bit image, the Otsu thresholding method is deemed more reliable and thus, a superior thresholding method that should be implemented across all datasets. If datasets with higher dynamic range are available, the ITS method may be a more suitable alternative.\nIn order to attain a more comprehensive comparison of both algorithms on distinct datasets, we must initially convert the raw slices into slicemaps based on the 512 3 scheme. The performance of each method is assessed by computing the Time, s Fig. 6. 
The performances of the finding the optimal 3D view point along Z-axis rotation using datasets described in Table II.\naverage of ten computation runs on a MacbookPro having a 64 bit Quad-Core Intel(R) Core i7 CPU at 2.60 GHz and 16 GB of DDR3 memory. Figure 5 illustrates the average computation time for each dataset. The Otsu threshold exhibits a significantly prolonged computation time owing to the comprehensive scanning of the dynamic range of the image.\nThe ITS approach relied exclusively on the convergence of the iterations and, using these datasets, the threshold converges after roughly ten iterations, enabling the ITS approach to execute 20 times faster than the Otsu method.\nTo evaluate the performance of the image snapshot approach, we chose five 3D datasets and performed Z-axis rotations to ascertain the highest entropy value in order to evaluate the system's effectiveness (Table II). It should be noted that the datasets are in 3D file format, which have been directly converted from the raw slices. The aggregate time taken for every process is documented in Figure 6. Empirical evidence suggests that, on average, the system requires 3.21 s to identify the most informative 3D image via Z-axis rotation. The software for server-side rendering can be expanded to serve as a component for image streaming to distributed clients." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "The unprecedented rate at which synchrotron radiation facilities are producing micro-CT datasets has resulted in an overwhelming amount of gigabyte-sized data that scientists struggle to browse and interact with in real-time. Within this work, we have established three distinct previews that conform to the best practices of data exploration. In the experiments of the NOVA project, arthropods are scanned into micro-CT resulting in thousands of datasets, with each in the gigabyte range in size. Our demonstration proved that reducing the dataset size to the megabyte range is achievable without compromising the arthropod's geometry information. In order to optimize data responsiveness, we have developed individual data processing pipelines for each aforementioned visual preview. The techniques we have presented are suitable for our particular use case and are modular, enabling customization for other experiments with comparable needs. Concerning future work, the methods displayed could be incorporated into a software framework to be helpful for the community. This has the potential to draw in more users and to subject the methods to further testing with additional micro-CT datasets." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENT", "publication_ref": [], "table_ref": [], "text": "Data and/or analytical tools used in this study were provided by the projects ASTOR and NOVA (Michael Heethoff, TU Darmstadt; Vincent Heuveline, Heidelberg University; Jürgen Becker, Karlsruhe Institute of Technology), funded by the German Federal Ministry of Education and Research (BMBF; 05K2013, 05K2016)." } ]
The unprecedented rate at which synchrotron radiation facilities are producing micro-computed (micro-CT) datasets has resulted in an overwhelming amount of data that scientists struggle to browse and interact with in real-time. Thousands of arthropods are scanned into micro-CT within the NOVA project, producing a large collection of gigabyte-sized datasets. In this work, we present methods to reduce the size of this data, scaling it from gigabytes to megabytes, enabling the micro-CT dataset to be delivered in real-time. In addition, arthropods can be identified by scientists even after implementing data reduction methodologies. Our initial step is to devise three distinct visual previews that comply with the best practices of data exploration. Subsequently, each visual preview warrants its own design consideration, thereby necessitating an individual data processing pipeline for each. We aim to present data reduction algorithms applied across the data processing pipelines. Particularly, we reduce size by using the multi-resolution slicemaps, the serverside rendering, and the histogram filtering approaches. In the evaluation, we examine the disparities of each method to identify the most favorable arrangement for our operation, which can then be adjusted for other experiments that have comparable necessities. Our demonstration proved that reducing the dataset size to the megabyte range is achievable without compromising the arthropod's geometry information.
Low-latency Visual Previews of Large Synchrotron Micro-CT Datasets
[ { "figure_caption": "Fig. 1 .1Fig.1. Classification of visual outputs based on the data exploration process which narrows down the data using top-down methodology, integrated within the NOVA data portal. The visual outputs in the three previews, namely list preview, data preview and interactive preview, are represented by the red boxes. Usually, users start by clicking on the dataset in the list preview. Subsequently, they are directed to the data preview page. If they wish to examine the dataset in a 3D view, they can simply click on the image within the data preview, which will then lead them to the interactive preview page.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. The data in its original form is presented in the left column, with the histogram distribution depicted in blue. Row A displays the surface rendering of the oribatid mite dataset (Table II: A), while row B shows the volumetric dataset's histogram. Row C, on the other hand, provides a zoomed-in view of the respective histogram distribution. The histogram distribution of the first three cross-section images (indicated in orange) that are artefacts is displayed in the middle column. The right column shows the resulting surface rendering of the filtered data (histogram distribution is shown in purple colour), where the unwanted bins are extracted from the original data.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. The visual results of datasets when applying threshold values calculated by the Otsu and iterative threshold selection (ITS) methods. The red colour shows the values that are under-or over-thresholded. The green colour shows the optimal threshold for each particular dataset.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .Fig. 5 .45Fig. 4. The visual results of cropped datasets when applying threshold values calculated by the Otsu and iterative threshold selection (ITS) methods. The red colour shows the values that are under-or over-thresholded. The green colour shows the optimal threshold for each particular dataset.", "figure_data": "", "figure_id": "fig_3", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "OF THE DATA PROCESSING PIPELINE FOR THE VISUAL PREVIEWS. THE GREY BOX REPRESENT THE ACTIVE PROCESS WITHIN EACH PREVIEW.", "figure_data": "VisualRaw Data3DSlicemapsThresholdingServer-sideHistogramPreviews(Slices)ConversionConversion+ Container RemovalRenderingFilteringList Preview→", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "INFORMATION OF DATASETS USED IN THE EVALUATIONS.", "figure_data": "Label TypeImage Resolution Total Slices SizeAOribatid mite1536 × 153611522.89 GBBBox mite1536 × 153611522.72 GBCGammasid mite 2016 × 201620168.19 GBDPseudoscorpion2016 × 201616926.88 GBETachinid fly1968 × 196814562.21 GB", "figure_id": "tab_1", "figure_label": "II", "figure_type": "table" } ]
Nicholas Tan Jerome; Suren Chilingaryan; Thomas Van De Kamp; Andreas Kopmann
[ { "authors": "S.-Y Chung; J.-S Kim; D Stephan; T.-S Han", "journal": "Construction and Building Materials", "ref_id": "b0", "title": "Overview of the use of micro-computed tomography (micro-ct) to investigate the relation between the material characteristics and properties of cement-based materials", "year": "2019" }, { "authors": "B D Arhatari; A W Stevenson; D Thompson; A Walsh; T Fiala; G Ruben; N Afshar; S Ozbilgen; T Feng; S Mudie", "journal": "Applied Sciences", "ref_id": "b1", "title": "Microcomputed tomography beamline of the australian synchrotron: Micronsize spatial resolution x-ray imaging", "year": "2023" }, { "authors": "T Bicer; D Gürsoy; R Kettimuthu; F De Carlo; I T Foster", "journal": "Journal of synchrotron radiation", "ref_id": "b2", "title": "Optimization of tomographic reconstruction workflows on geographically distributed resources", "year": "2016" }, { "authors": "T Van De Kamp; A Ershov; T Santos Rolo; A Riedel; T Baumbach", "journal": "Entomologie heute", "ref_id": "b3", "title": "Insect imaging at the ANKA synchrotron radiation facility", "year": "2013" }, { "authors": "S Schmelzle; M Heethoff; V Heuveline; P Lösel; J Becker; F Beckmann; F Schluenzen; J U Hammel; A Kopmann; W Mexner; M Vogelgesang; N Tan Jerome; O Betz; R Beutel; B Wipfler; A Blanke; S Harzsch; M Hörnig; T Baumbach; T Van De Kamp", "journal": "International Society for Optics and Photonics", "ref_id": "b4", "title": "The NOVA project: maximizing beam time efficiency through synergistic analyses of SRµCT data", "year": "2017" }, { "authors": "F Brun; L Massimi; M Fratini; D Dreossi; F Billé; A Accardo; R Pugliese; A Cedola", "journal": "Advanced structural and chemical imaging", "ref_id": "b5", "title": "Syrmep tomo project: a graphical user interface for customizing ct reconstruction workflows", "year": "2017" }, { "authors": "T Van De Kamp; P Vagovič; T Baumbach; A Riedel", "journal": "Science", "ref_id": "b6", "title": "A biological screw in a beetle's leg", "year": "2011" }, { "authors": "T Van De Kamp; A H Schwermann; T Dos Santos Rolo; P D Lösel; T Engler; W Etter; T Faragó; J Göttlicher; V Heuveline; A Kopmann", "journal": "Nature Communications", "ref_id": "b7", "title": "Parasitoid biology preserved in mineralized fossils", "year": "2018" }, { "authors": "P D Lösel; T Van De Kamp; A Jayme; A Ershov; T Faragó; O Pichler; N Tan Jerome; N Aadepu; S Bremer; S A Chilingaryan; M Heethoff; A Kopmann; J Odar; S Schmelzle; M Zuber; J Wittbrodt; T Baumbach; V Heuveline", "journal": "Nature communications", "ref_id": "b8", "title": "Introducing biomedisa as an opensource online platform for biomedical image segmentation", "year": "2020" }, { "authors": "N Tan Jerome; S Chilingaryan; A Shkarin; A Kopmann; M Zapf; A Lizin; T Bergmann", "journal": "", "ref_id": "b9", "title": "WAVE: A 3D online previewing framework for big data archives", "year": "2017" }, { "authors": "N ; Tan Jerome", "journal": "KIT Scientific Publishing", "ref_id": "b10", "title": "Low-latency big data visualisation", "year": "2019" }, { "authors": "N ; Tan Jerome; A Kopmann", "journal": "", "ref_id": "b11", "title": "Digital visual exploration library", "year": "2018" }, { "authors": "J M Noguera; J.-R Jiménez", "journal": "Václav Skala-UNION Agency", "ref_id": "b12", "title": "Visualization of very large 3D volumes on mobile devices and WebGL", "year": "2012" }, { "authors": "N Tan Jerome; Z Ateyev; S Schmelzle; S Chilingaryan; A Kopmann", "journal": "IEEE Transactions on Nuclear Science", "ref_id": "b13", "title": "Real-time 
local noise filter in 3-d visualization of ct data", "year": "2019" }, { "authors": "J Edmonds", "journal": "Mathematical programming", "ref_id": "b14", "title": "Matroids and the greedy algorithm", "year": "1971" }, { "authors": "D Ioannou; W Huda; A F Laine", "journal": "Image and vision computing", "ref_id": "b15", "title": "Circle recognition through a 2d hough transform and radius histogramming", "year": "1999" }, { "authors": "D Applebaum", "journal": "Cambridge University Press", "ref_id": "b16", "title": "Probability and information: An integrated approach", "year": "1996" } ]
[ { "formula_coordinates": [ 2, 105.73, 312.24, 385.22, 25.9 ], "formula_id": "formula_0", "formula_text": "→ → → Data Preview → → → Interactive Preview → → →" }, { "formula_coordinates": [ 3, 96.07, 677.55, 203.95, 12.69 ], "formula_id": "formula_1", "formula_text": "σ 2 ω (T ) = ω 0 (T )σ 2 0 (T ) + ω 1 (T )σ 2 1 (T ),(1)" }, { "formula_coordinates": [ 3, 321.94, 51.51, 180.39, 140.39 ], "formula_id": "formula_2", "formula_text": "T otsu ← 0, σ otsu ← 0; for T sweep from 0...255 do R a ← histogram[0:T sweep ]; R b ← histogram[T sweep :255]; W a ← density(R a ), W b ← density(R b ); U a ← Mean(R a ), U b ← Mean(R b ); σ ← W a × W b × (U a -U b ) 2 ; if σ > σ otsu then σ otsu ← σ; T otsu ← T sweep ; end end" }, { "formula_coordinates": [ 3, 321.94, 523.63, 226.15, 138.96 ], "formula_id": "formula_3", "formula_text": "T it ← T start ; r ← T start ; // T it converges if r = 0 while r ̸ = 0 do R a ← histogram[0:T it ]; R b ← histogram[T it :255]; µ 1 ← MeanIntensity(R a ); µ 2 ← MeanIntensity(R b ); T tmp ← (µ 1 + µ 2 )/2; r ← |r -T tmp |; T it ← T tmp ; end Algorithm 2: Iterative threshold selection (ITS)" }, { "formula_coordinates": [ 4, 124.39, 414.73, 175.63, 30.32 ], "formula_id": "formula_4", "formula_text": "H = - m-1 i=0 p i log 2 (p i ) ,(2)" } ]
2024-02-02
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b42", "b34", "b36", "b37", "b10", "b9", "b39", "b51", "b2", "b44", "b14" ], "table_ref": [], "text": "Recently, the advent of DDPMs [17,43] has ushered in a new era of image generation. Existing text-to-image generation methods [4, 35,37,38] can generate images in wellknown styles by incorporating style descriptions in the textual prompt. However, the textual prompt is ambiguous which makes it hard to express style precisely [11]. Recently, personalized generation [10,40,52] has been introduced to generate novel concepts, which can be utilized for stylized generation by viewing the reference style as a new concept. These approaches typically necessitate multiple images as references. However, the subtle variations in their styles present a challenge for the model to accurately learn and replicate the intended style. For example, although The Yellow House ( Van Gogh 1888) and Sunflowers ( Van Gogh 1888) are all Van Gogh's artworks, the difference in terms of color and texture [2] pose a significant challenge for a model to learn and replicate the exact style. Some recent approaches [3,45,57] focus on prompt learning to more accurately capture the target style but may ignore the detailed, fine-grained information that reference images provide.\nIn this paper, we propose INSTASTYLE (Inversion Noise of a Stylized Image is Secretly a Style Adviser) based on the observation that the inversion noise from a stylized reference image inherently contains the style signal. Specifically, our approach only requires a single reference image, which is transformed into a noise via DDIM inversion. The inversion noise, which preserves the style signal, is then used to generate stylized images by utilizing the generation ability of diffusion models. To shed light on this phenomenon, we demonstrate that the inversion noise maintains a non-zero signal-to-noise ratio in Sec. 3.2, indicating that it retains essential information (including the style signal) from the reference image. Furthermore, previous works [15,30] have shown that the images are not corrupted up to complete white noise during the training of diffusion models. This underscores the models' innate proficiency in denoising from inversion noise that retains such \"leaked\" signals during inference, enabling the generation of new stylized images.\nNevertheless, the typical method of using human-written prompts to describe styles for stylized image generation often encounters challenges due to the inherent ambiguity of natural language. For instance, specific style descriptors like \"ink painting\" don't always work effectively with various objects. As shown in Fig. 2, the style \"ink painting\" might suit an object like a \"chair\" but not as well with a \"boy\". This inconsistency can arise because \"ink painting\" is typically associated with landscapes and might not yield optimal results for human subjects. Conversely, when using vague descriptors like \"specific style\", they may fail to provide enough information, leading to unpredictable generation quality (see the second row in Fig. 2). To address this, we propose Prompt Refinement to learn a style token. Specifically, we select high-quality images generated by our approach to constitute the dataset for prompt refinement. Then we optimize the embedding of the style token and the key and value projection matrices in the cross-attention layers of the diffusion model. As illustrated in the last row of Fig. 
2, the learned style token precisely encapsulates the style, enhancing the style details of the generated images.\nAs shown in Fig. 1 (a), our approach can effectively retain the fine-grained style in the reference image as well as generate new objects in high fidelity. Furthermore, our method supports the creative generation of style combination (Fig. 1 (b) and (c)) and allows adjusting the degree of two styles dynamically (Fig. 1 (c)) with mixed inversion noise. To sum up, our contributions are as follows:\n• We find that the inversion noise of a reference image via DDIM inversion potentially retains style information, which is evidenced by its non-zero signal-to-noise ratio. " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Text-to-image Synthesis", "publication_ref": [ "b49", "b28", "b45", "b59", "b32", "b34", "b36", "b9", "b9", "b39", "b40", "b9", "b0", "b51", "b18", "b39", "b40", "b39", "b43" ], "table_ref": [], "text": "With the advance of pretrained vison-language models [36,50] and diffusion models [6, 29,46,60], text-to-image generation [7, 33,35,37] has been widely studied and shown remarkable generalization ability. Recently, Lin et al. [30] have pointed out that the noise of the last step during noising in the training process has a non-zero signal-to-noise ratio (SNR), i.e., there exists a signal leak from the original image. This results in a misalignment between the training process and the inference process. Therefore, some works [30,42] propose to train diffusion models by enforcing the SNR to zero to avoid the signal leak in the training process. On the contrary, our approach leverages the leaked signal, which potentially includes the style details, from the reference image for the stylized generation. As these methods ignore the concepts that do not appear in the training set [10], some works [10,40,41] study personalized text-to-image generation which aims to adapt text-to-image models to new concepts given several reference images. For example, Textual Inversion [10] introduces and optimizes a word vector for each new concept. Subsequent works [1,52,61] further enhance the flexibility and adaptiveness of the learning strategy. Some methods [19,40,41] propose to finetune the original diffusion model, showing more satisfactory results. For example, DreamBooth [40] finetunes all the parameters in the pretrained model. However, maintaining fine-grained style while simultaneously generating new objects remains a challenge for current personalized generation methods [44]." }, { "figure_ref": [], "heading": "Stylized Image Generation", "publication_ref": [ "b12", "b4", "b30", "b48", "b13", "b19", "b22", "b25", "b50", "b52", "b54", "b55", "b17", "b21", "b46", "b53", "b58", "b61", "b44", "b43" ], "table_ref": [], "text": "Stylized image generation is a new paradigm for image generation which aims to generate content in a specific style given a few reference images. Although it is similar to the neural style transfer task [12,13,21,25,31,49] which generates stylized images as well, they are fundamentally different. Style transfer solves an image translation task, focusing on translating a content image to target style [14,20,23,26,51,53,55,56,58]. For example, some works explore the global and local information to preserve the content of the source image [5, 18,22,24,47,54,59,62]. 
On the contrary, stylized image generation is geared towards generating a new image in specific style conditioned on a text. For example, ZDAIS [45] views stylized image generation as a domain adaptive task, where each style belongs to a domain. It learns disentangled prompts to adapt the model to new domains. Some approaches [8,44] fine-tune the model to capture style properties. Our work differs from these previous works. Firstly, we reveal that the inversion noise of the reference image can act as a style adviser to provide fine-grained style information. Secondly, we avoid the ambiguity and bias caused by the natural language prompt by learning a style token. Moreover, benefiting from the inversion \"style\" noise and the learnable style token, our method supports the dynamic combination of multiple styles." }, { "figure_ref": [ "fig_0" ], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "As shown in Fig. 3, our proposed method involves two main stages. In the first stage, we employ DDIM inversion to transform the reference image into noise. Notably, the in-version noise exhibits a non-zero signal-to-noise ratio, suggesting the presence of style signals from the reference image. Subsequently, we generate M stylized images from the inversion noise conditioned on the given textual prompts. Due to the inherent ambiguity of the textual prompts that describe the style, it is challenging to precisely convey the desired style. Addressing this, the second stage involves the incorporation of human feedback to select N high-quality generated images from the first stage. The selected images are then used to learn a style token via prompt refinement.\nNext, we briefly present the preliminaries in Sec. 3.1, followed by a detailed introduction of our approach in Secs. 3.2 to 3.4." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b37", "b16", "b42", "b45", "b33", "b45" ], "table_ref": [], "text": "We begin by reviewing the fundamental diffusion model. Subsequently, we introduce Stable Diffusion [38], a key component of our framework. Finally, we discuss both DDIM and DDIM inversion. In our method, DDIM serves the purpose of denoising a noise to generate new images. Concurrently, DDIM inversion is utilized to transform the reference image into a corresponding \"style\" noise.\nDiffusion models. Diffusion models [17,43] contain a forward process and a backward process. The forward process adds noise to the data according to a predetermined, nonlearned variance schedule β 1 , ..., β T , which is defined as a Gaussian transition: q(z t |z t-1 ) := N (z t ; 1 -β t z t-1 , β t I).\n(\n)1\nAn important property of the forward process is that we can obtain the latent variable z t directly based on z 0 :\nz t = √ α t z 0 + √ 1 -α t ε,(2)\nwhere ε ∼ N (0, I), α t := t i=1 (1 -β i ). Diffusion models restore the information by learning the backward process: where C = ψ(P) is the embedding of text condition P.\np θ (z t-1 |z t ) := N (z t-1 ; µ θ (z t , t), Σ θ (z t , t)).(3)\nDDIM. In inference time, given a noise vector z T , the noise is gradually removed by sequentially predicting the added noise for T steps. DDIM [46] is one of the denoising approaches with a deterministic process:\nz t-1 = αt-1 αt z t + 1 αt-1 -1 - 1 αt -1 • εθ , (5\n)\nwhere εθ is the estimated noise. DDIM inversion [34,46] is proposed to transform an image to a noise conditioned on a prompt. 
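To make the forward process of Eqs. (1)-(2) concrete, the following PyTorch sketch precomputes the cumulative products α_t and draws z_t from z_0 in closed form; the variance schedule and the latent shape are illustrative placeholders rather than the exact Stable Diffusion settings discussed later.

import torch

T = 1000
# Illustrative variance schedule beta_1..beta_T; any monotone schedule works here.
betas = torch.linspace(1e-4, 2e-2, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # alpha_t = prod_i (1 - beta_i), Eq. (2) notation

def q_sample(z0, t, noise=None):
    # Draw z_t ~ q(z_t | z_0) in closed form, Eq. (2):
    # z_t = sqrt(alpha_t) * z_0 + sqrt(1 - alpha_t) * eps.
    if noise is None:
        noise = torch.randn_like(z0)
    a = alphas_bar[t]
    return a.sqrt() * z0 + (1.0 - a).sqrt() * noise

z0 = torch.randn(1, 4, 64, 64)   # an illustrative latent shape
zt = q_sample(z0, t=500)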
The diffusion process is performed in the reverse direction, i.e., $z_0 \rightarrow z_T$:
$$z_{t+1} = \sqrt{\tfrac{\alpha_{t+1}}{\alpha_t}}\, z_t + \Big(\sqrt{\tfrac{1}{\alpha_{t+1}} - 1} - \sqrt{\tfrac{1}{\alpha_t} - 1}\Big) \cdot \hat{\varepsilon}_\theta. \tag{6}$$" }, { "figure_ref": [ "fig_0" ], "heading": "Initial Stylized Image Generation", "publication_ref": [ "b37", "b37" ], "table_ref": [], "text": "As shown in Fig. 3 (a), the first stage of our INSTASTYLE involves obtaining the inversion noise via DDIM inversion and sampling images via DDIM.
In DDIM inversion, the added noise for each step is calculated conditioned on a prompt and gradually incorporated into the reference image following Eq. (6). We demonstrate that the inversion noise from a stylized reference image inherently carries the style signal, from the perspective of the signal-to-noise ratio of the inversion noise. As the estimated noise $\varepsilon_\theta(z_t, t, C)$ is trained to approximate the artificial noise $\varepsilon \sim \mathcal{N}(0, \mathbf{I})$, we assume that $\varepsilon_\theta(z_t, t, C) \sim \mathcal{N}(0, \mathbf{I})$. According to Eq. (6), $z_{t+1}$ can be approximately obtained via DDIM inversion in a closed form:
$$z_{t+1} = \sqrt{\tfrac{\alpha_{t+1}}{\alpha_0}}\, z_0 + \sqrt{\sum_{i=0}^{t} \tfrac{\alpha_{t+1}}{\alpha_{i+1}} \Big(\sqrt{\tfrac{1}{\alpha_{i+1}} - 1} - \sqrt{\tfrac{1}{\alpha_i} - 1}\Big)^{2}}\; \cdot \tilde{\varepsilon}_0, \tag{7}$$
where $\tilde{\varepsilon}_0 \sim \mathcal{N}(0, \mathbf{I})$.
Signal-to-noise ratio. The signal-to-noise ratio (SNR) measures how much signal from the original image is preserved in the noise [30,38]. The SNR of the inversion noise can be calculated as:
$$\mathrm{SNR}(t) := \frac{1}{\sum_{i=0}^{t} \tfrac{\alpha_0}{\alpha_{i+1}} \Big(\sqrt{\tfrac{1}{\alpha_{i+1}} - 1} - \sqrt{\tfrac{1}{\alpha_i} - 1}\Big)^{2}}. \tag{8}$$
We present more derivations in Sec. 7 in the Supplementary Material. In Stable Diffusion [38], the predetermined variance schedule is $\beta_t = \big(\sqrt{0.00085}\cdot(1-j) + \sqrt{0.012}\cdot j\big)^{2}$, where $j = \tfrac{t-1}{T-1}$. At the terminal timestep $T = 1000$, $\mathrm{SNR}(T) = 0.015144$, i.e., $z_T$ has a non-zero signal-to-noise ratio. This suggests that the noise obtained via DDIM inversion still retains information from the reference image to some extent and deviates from white noise. Besides, our qualitative experiments show that style information is consistently preserved at the terminal timestep (refer to Sec. 8.3 in the Supplementary Material). Therefore, we can generate stylized images by sampling from the inversion noise conditioned on target prompts via DDIM based on Eq. (5)." }, { "figure_ref": [ "fig_0" ], "heading": "Prompt Refinement", "publication_ref": [ "b47", "b18" ], "table_ref": [], "text": "As natural language prompts may not precisely convey the style for different objects (Fig. 2), we propose Prompt Refinement to learn the style tokens as shown in Fig. 3 (b).
Specifically, we introduce a new token, i.e., "<style1>", to represent the style descriptor and learn its embedding. In practice, we initialize the new token with the embeddings of the textual style descriptor of the reference image. The text condition is input to the model via two projection matrices in the cross-attention block of the diffusion model $\varepsilon_\theta$, i.e., $W_k \in \mathbb{R}^{d \times d'}$ and $W_v \in \mathbb{R}^{d \times d'}$. Therefore, we also fine-tune these two projection matrices. The text feature $c \in \mathbb{R}^{s \times d}$ is projected to the key $K = cW_k$ and the value $V = cW_v$. The query matrix $W_q \in \mathbb{R}^{l \times d'}$ projects the latent image feature $f \in \mathbb{R}^{(h \times w) \times l}$ into the query feature $Q = fW_q$. Then the cross-attention [48] is calculated as:
$$\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\Big(\frac{QK^{T}}{\sqrt{d'}}\Big)\, V. \tag{9}$$
We utilize the stylized images generated in the first stage to constitute the training data for prompt refinement. While these generated images may not always be precise, we observe that most of them successfully retain the style information alongside appropriate content. 
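Before describing how these candidate images are filtered, the first stage can be made concrete with a short numerical sketch. The snippet below is only an illustration, not the released implementation: it builds the Stable Diffusion variance schedule stated above, implements single steps of Eqs. (5) and (6) given a noise estimate supplied by the caller (in practice the text-conditioned denoiser), and evaluates Eq. (8). The helper names and the NumPy setting are our own choices.

```python
import numpy as np

T = 1000
j = (np.arange(1, T + 1) - 1) / (T - 1)                      # j = (t-1)/(T-1)
beta = (np.sqrt(0.00085) * (1 - j) + np.sqrt(0.012) * j) ** 2
alpha = np.concatenate(([1.0], np.cumprod(1.0 - beta)))      # alpha[0]=1, alpha[t]=prod_{i<=t}(1-beta_i)

def ddim_step(z_t, eps_hat, t):
    """One deterministic denoising step, Eq. (5): z_t -> z_{t-1}."""
    return (np.sqrt(alpha[t - 1] / alpha[t]) * z_t
            + (np.sqrt(1.0 / alpha[t - 1] - 1.0) - np.sqrt(1.0 / alpha[t] - 1.0)) * eps_hat)

def ddim_inversion_step(z_t, eps_hat, t):
    """One inversion step, Eq. (6): z_t -> z_{t+1}."""
    return (np.sqrt(alpha[t + 1] / alpha[t]) * z_t
            + (np.sqrt(1.0 / alpha[t + 1] - 1.0) - np.sqrt(1.0 / alpha[t] - 1.0)) * eps_hat)

def snr(t):
    """SNR of the inversion noise after t inversion steps, Eq. (8)."""
    i = np.arange(0, t)
    inc = (np.sqrt(1.0 / alpha[i + 1] - 1.0) - np.sqrt(1.0 / alpha[i] - 1.0)) ** 2
    return 1.0 / np.sum((alpha[0] / alpha[i + 1]) * inc)

print(f"SNR at the terminal timestep: {snr(T):.6f}")          # expected to be small but non-zero (~0.015)
```

In practice, the per-step estimate $\hat{\varepsilon}_\theta$ comes from the pretrained text-conditioned denoiser, so a single inversion pass over the reference image yields a reusable "style" noise $z_T$ from which all target prompts are sampled.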
To ensure the quality of the training data, we adopt several strategies for selecting high-quality images: (1) Random selection: we choose several images randomly. (2) Score-based selection: as the style loss and the text similarity are on different scales, we convert them into a style rank ($\mathrm{Rank}_s$) and a content rank ($\mathrm{Rank}_c$) by sorting the generated images according to style loss and text similarity, respectively; the overall rank of an image is then calculated as $\mathrm{Rank} = \max(\mathrm{Rank}_s, \mathrm{Rank}_c)$. (3) Human selection: we manually select images that distinctly embody the reference style and the target object.
We utilize LoRA [19] for model fine-tuning. Specifically, for a projection matrix $W \in \mathbb{R}^{d \times d'}$ (i.e., $W = W_k$ or $W = W_v$) to be fine-tuned, we update a low-rank residual rather than directly fine-tuning $W$. Formally, we denote the fine-tuned projection matrix as $W' = W + BA$, where $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times d'}$, and the rank $r \ll \min(d, d')$. During training, only $A$ and $B$ are learnable. For $y = Wx$, the modified forward pass is $y = Wx + BAx$. Putting the two stages together, our full algorithm is shown in Algorithm 1, whose prompt-refinement loop repeats the following steps until convergence: sample $(x_0, c) \sim q(x_0, c)$; compute $z_0 = E(x_0)$; draw $t \sim \mathrm{Uniform}(\{1, \ldots, T\})$ and $\varepsilon \sim \mathcal{N}(0, \mathbf{I})$; form $z_t = \sqrt{\alpha_t}\, z_0 + \sqrt{1-\alpha_t}\, \varepsilon$; and take a gradient step on $\nabla_{\theta, v} \lVert \varepsilon - \varepsilon_\theta(z_t, t, c) \rVert_2^2$. The optimized style token embedding $v^*$ and diffusion model $\varepsilon_\theta^*$ are returned." }, { "figure_ref": [], "heading": "Inference", "publication_ref": [ "b15" ], "table_ref": [], "text": "Stylized image generation. For stylized image generation, we first learn the style token "<style1>" to describe the style in the reference image, as introduced above. During inference, the diffusion model $\varepsilon_\theta$ is utilized as the backbone of both DDIM inversion and DDIM. Specifically, the reference image is first inverted to a noise $z_T$ via DDIM inversion conditioned on a prompt containing the learned style token. Then we sample from the inversion noise via DDIM, conditioned on a target prompt which contains the target content and the learned style token. The estimated noise is calculated based on classifier-free guidance [16]:
$$\hat{\varepsilon}_\theta(z_t, t, C, \varnothing) = \varepsilon_\theta(z_t, t, \varnothing) + w \cdot \big(\varepsilon_\theta(z_t, t, C) - \varepsilon_\theta(z_t, t, \varnothing)\big), \tag{10}$$
where $\varnothing = \psi("")$ is the embedding of a null text, $\varepsilon_\theta(z_t, t, C)$ represents the conditional prediction, and $w$ is the guidance scale parameter.
Combination of two styles. To combine two styles, we learn a style token for each style, i.e., "<style1>" and "<style2>". Specifically, we use the selected images of both styles to constitute the training set, and then jointly optimize the embeddings of the style tokens while fine-tuning the key and value projections in the cross-attention block. During inference, we individually transform the two reference images into their corresponding inversion noises $z_{t1} \in \mathbb{R}^{H \times W \times C}$ and $z_{t2} \in \mathbb{R}^{H \times W \times C}$ via DDIM inversion conditioned on a prompt containing their learned style token. As the style can be described by both the "style" noise and the style token, we combine the styles from these two perspectives. For the "style" noise, the combined inversion noise $z_t$ is obtained based on a masking strategy:
$$z_t = (\mathbf{1} - \mathbf{M}) \odot z_{t1} + \mathbf{M} \odot z_{t2}, \tag{11}$$
where $\odot$ is element-wise multiplication and $\mathbf{1}$ is a binary mask filled with ones. $\mathbf{M} \in \{0, 1\}^{H \times W}$ denotes a random binary mask indicating where to drop out and fill in from the two inversion noises. The noise mix ratio between the two inversion noises is $\alpha$, representing the fraction of entries in $\mathbf{M}$ that are set to 1 (the rest are set to 0). 
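As a concrete illustration of this masking strategy, the sketch below implements Eq. (11). It is only schematic: the channel-first tensor layout and the helper name are our own assumptions rather than the released code.

```python
import torch

def mix_inversion_noise(z_t1: torch.Tensor, z_t2: torch.Tensor, alpha: float) -> torch.Tensor:
    """Eq. (11): combine two 'style' noises with a random binary spatial mask.

    z_t1, z_t2: DDIM-inversion noises of the two reference images, shape (C, H, W).
    alpha: noise mix ratio, i.e. the fraction of spatial positions taken from z_t2.
    """
    _, H, W = z_t1.shape
    M = (torch.rand(H, W, device=z_t1.device) < alpha).to(z_t1.dtype)  # binary mask with ~alpha ones
    M = M.unsqueeze(0)                                                  # broadcast over channels
    return (1.0 - M) * z_t1 + M * z_t2
```

The prompt side of the combination, handled through the two learned style tokens, is described next.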
For the style token, we utilize a composed guidance mechanism [32] to estimate the noise:\nεθ (z t , t, C, ∅) = ε θ (z t , t, ∅) + w • (1 -β) • (ε θ (z t , t, C 1 ) -ε θ (z t , t, ∅)) + w • β • (ε θ (z t , t, C 2 ) -ε θ (z t , t, ∅)) , (12\n)\nwhere β is a prompt mix ratio. C 1 and C 2 are the embeddings of the target prompts for the two styles, respectively. Denote the target object as <obj>, they can be formulated as C 1 = ψ(\"A <obj> in the style of <style1>\") and C 2 = ψ(\"A <obj> in the style of <style2>\"). " }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setting", "publication_ref": [], "table_ref": [], "text": "We collect 55 images as the reference style dataset. The complete information on these images is listed in Tab. 4 in the Supplementary Material. Our method is implemented using PyTorch and executed on a single NVIDIA GeForce RTX 3090 GPU. The training procedure consists of 500 iterations, employing the Adam optimizer with a learning rate of 1 × 10 -5 . During sampling, we set the guidance scale w = 2.5. The number of generated images M in the first stage is 15 whose details are shown in Sec. 8.1 in the Supplementary Material. The number of selected images N in the second stage is set to 5." }, { "figure_ref": [ "fig_1" ], "heading": "Stylized Image Synthesis", "publication_ref": [ "b43", "b39", "b9" ], "table_ref": [], "text": "We conduct a comparative analysis of our INSTASTYLE against four recent methods, i.e., StyleDrop [44], Dream-Booth [40], Custom Diffusion [28], and Textual Inversion [10]. We implement these comparison methods following the open-sourced code 1,2 . In contrast, benefiting from the style signal in the inversion noise, our INSTASTYLE can generate stylized images with fine-grained style details and higher fidelity. Take the bicycle in Fig. 4 as an example, our method can better preserve the color, textures, and brushstroke. Besides, the generated bicycle is more complete and accurate. More qualitative comparison results are shown in Figs. 12 to 15 in the Supplementary Material.\nQuantitative results. We also report quantitative results. We utilize 100 objects in CIFAR100 [27] as the target objects. The generated images are evaluated from two aspects. For style preservation, we calculate the style loss [12,21] between the generated image and the reference style image. For text alignment, CLIP similarity [36] is calculated between the generated image and the prompt.\nAs shown in Tab. 1, INSTASTYLE achieves the lowest style loss and highest text similarity, indicating that the images generated by INSTASTYLE exhibit a consistent style with the reference image while retaining its generation capability. For the comparison methods, Textual Inversion and Custom Diffusion have a higher style loss, falling short of INSTASTYLE in style preservation. As for DreamBooth and StyleDrop, our approach surpasses them in generating target objects with a higher text similarity. Table 1. Quantitative comparison regarding the style loss (lower is better) and text similarity (higher is better). Our approach exhibits the best style preservation and content generation ability." 
}, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b9", "b39", "b43" ], "table_ref": [], "text": "Style loss (↓) Text similarity (↑)\nTextual Inversion [10] 2.369 0.278 Custom Diffusion [28] 1.809 0.280 DreamBooth [40] 1.167 0.244 StyleDrop [44] 1 We report the performance of our approach with different selection principles in prompt refinement in Tab. 1, including random selection (Random), score-based selection (Score), and human selection (Human). We can find that the random selection principle already outperforms existing approaches. The score-based selection achieves better results by considering the text score and style score simultaneously, which is effective in practice. Our human selection shows the best performance as the selected images to refine the style token are more consistent with human preference.\nUser study. Given the highly subjective nature of stylized generation and the inherent biases in text similarity and style loss, we conduct a user study involving 30 participants. Specifically, we randomly chose 30 sets of results, each comprising outputs from five methods. The participants are tasked with scoring the five methods in terms of style alignment, text alignment, and overall stylized generation performance, using a scale where 5 indicated the best and 1 indicated the poorest. The average alignment scores " }, { "figure_ref": [ "fig_3" ], "heading": "Combination of Two Styles", "publication_ref": [ "b15" ], "table_ref": [], "text": "We present style combination results in Fig. 5, where we set α = 0.5 and β = 0.5 to illustrate a more obvious effect of style combination. Specifically, in each case, we also show the stylized generation results of style 1 (the top left) and style 2 (bottom left), respectively. The biggest image on the right is our combination results, showing the powerful style combination ability of our approach. More visualizations are shown in. Figs. 16 and 17 in Supplementary Material.\nAs introduced in Sec. 3.4, both the noise mix ratio α and the prompt mix ratio β can affect the style combination. Fig. 6 illustrates the impact of the two parameters, where each row shows a different noise mix ratio (α = 0.1, 0.3, 0.5, 0.7, 0.9) and each column shows a different prompt mix ratio (β = 0.1, 0.3, 0.5, 0.7, 0.9). As we can see, the style is mainly influenced by the noise and the prompt further improves the style details, showing that our approach is flexible and can generate diverse results." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "Our approach achieves stylized generation from two perspectives. One is the \"style\" noise obtained via DDIM inversion, which is utilized as the initial noise during inference to provide fine-grained style information. The other is the style token, which is learned via prompt refinement tuning to describe the style precisely. In this section, we ablate them to show their necessity. The visualization results and quantitative results are shown in Fig. 7 and Fig. 8, respectively. Specifically, the quantitative results present the average style loss and text similarity of 100 generated images for each reference image." }, { "figure_ref": [], "heading": "Text (↑) Style (↓)", "publication_ref": [], "table_ref": [], "text": "Figure 8. Ablation study. Removing inversion noise will result in poor style and no prompt refinement will harm the content. 
Our method lies further along the bottom right corner, showing better style preservation and content generation capability.\nImpact of inversion noise. A prominent advantage of our INSTASTYLE is that we utilize the inversion noise as the initial image during inference time to preserve the fine-grained style information. Thereby, we first conduct an ablation study by replacing the inversion \"style\" noise with random noise. As shown in Fig. 7 (second row), without the inversion noise, the stylized performance is seriously degraded. The quantitative results in Fig. 8 further illustrate that ablating the \"style\" noise (blue dots) will result in a higher style loss, harming the style of the generated image.\nImpact of prompt refinement tuning. Then, we ablate the prompt refinement stage to show the necessity of learning a style token to avoid ambiguity and bias in human-writing textual style tokens. Fig. 7 (third row) reveals that our learnable style token can better describe the style without harming the generation ability. The quantitative results in Fig. 8 show that ablating the prompt refinement (green dots) will result in a lower text similarity, further proving the importance of the learnable style token." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Our research solves the task of stylized generation by exploring the fine-grained style information in the reference image. Through extensive analysis and empirical investigation, we find that the initial noise with a non-zero signal-to-noise ratio in the diffusion model leans toward a particular style. Therefore, DDIM inversion is introduced to obtain an initial noise containing fine-grained style information. Furthermore, we highlight the challenge caused by the internal bias in textual style tokens and propose to learn a style token adaptively to better describe the style without bias. Our approach produces satisfactory stylized generation results and supports the combination of two styles. Extensive qualitative and quantitative comparisons demonstrate the superiority of our method. Our approach supports adjusting the degree of two styles during combination and can generate various target objects, demonstrating the flexibility and universality of our approach. The noise mix ratio and the prompt mix ratio are set to be equal, which are 0, 0.1, 0.3, 0.5, 0.7, 0.9, and 1 from left to right. Objects for synthesis are Apple, Cat, Elephant, Grape, Horse, Rabbit, and Taxi. " }, { "figure_ref": [], "heading": "Additional Preliminaries", "publication_ref": [ "b16", "b45", "b16", "b37" ], "table_ref": [], "text": "Diffusion Denoising Probabilistic Model [17] is a generative model that aim to approximate the data distribution q(z 0 ) with a distribution p θ (z 0 ). Generating a new image is equivalent to sampling from the data distribution q(z 0 ). In practice, we utilize a backward process q(z t-1 |z t ) to iteratively denoise from a Gaussian noise z T . Since the backward process q(z t-1 |z t ) depends the unknown data distribution q(z 0 ), a parameterized Gaussian transition network p θ (z t-1 |z t ) := N (z t-1 ; µ θ (z t , t), Σ θ (z t , t)) is introduced to approximate q(z t-1 |z t ). The µ θ (z t , t) can be reparameterized as:\nµ θ (z t , t) = 1 √ α t z t - β t √ 1 -α t ε θ (z t , t) ,(13)\nwhere β t is predetermined variance schedule, α t := t i=1 (1 -β i ). z t is obtained by adding an artificial noise ε ∼ N (0, I) to z 0 , i.e., z t = √ α t z 0 + √ 1 -α t ε. 
ε θ (z t , t) is the learnable network that predicts the artificial noise. Once we have trained ε θ (z t , t), we can iteratively denoise from a Gaussian noise z T as follows:\nz t-1 = µ θ (z t , t) + σ t z, z ∼ N (0, I). (14\n)\nThe σ t of each sample stage has different settings in different denoising approaches. For example, in DDIMs [46], the denoising process is made to be deterministic by setting σ t = 0.\nThe training process is to learn ε θ (z t , t) which predicts the artificial noise added to the current image:\nmin θ E z0,ε∼N (0,I),t∼Uniform(1,T ) ∥ε -ε θ (z t , t)∥ 2 2 . (15\n)\nIt is worth noting that the fundamental diffusion model [17] is an unconditional model, while the Stable Diffusion [38] utilized in our framework is a conditional model. Therefore, compared to the optimization target in Eq. (4), Eq. ( 15) does not have the conditional term C." }, { "figure_ref": [ "fig_11" ], "heading": "SNR of Inversion Noise", "publication_ref": [ "b37", "b43", "b39", "b9", "b8" ], "table_ref": [], "text": "As ε θ (z t , t, C) is trained to approximate the artificial noise ε ∼ N (0, I), we can assume that ε θ (z t , t, C) ∼ N (0, I). For simplicity, we denote ε θ (z t , t, C) as ε t . According to Eq. ( 6), z t+1 can be approximately obtained in closed form:\nz t+1 = α t+1 α t z t + 1 α t+1 -1 - 1 α t -1 • ε t = α t+1 α t α t α t-1 z t-1 + 1 α t -1 - 1 α t-1 -1 • ε t-1 + 1 α t+1 -1 - 1 α t -1 • ε t = α t α t-1 z t-1 + α t+1 α t 1 α t -1 - 1 α t-1 -1 • ε t-1 + 1 α t+1 -1 - 1 α t -1 • ε t = α t α t-1 z t-1 + α t+1 α t 1 α t -1 - 1 α t-1 -1 2 + 1 α t+1 -1 - 1 α t -1 2 • εt-1 = α t α t-1 z t-1 + α t+1 α t 1 α t -1 - 1 α t-1 -1 2 + α t+1 α t+1 1 α t+1 -1 - 1 α t -1 2 • εt-1 = ... = α t α 0 z 0 + t i=0 α t+1 α i+1 1 α i+1 -1 - 1 α i -1 2 • ε0 ,(16)\nwhere ε t , ε t-1 , • • • ε 1 ∼ N (0, I). εt-1 merges two Gaussions ε t and ε t-1 . εt-1 , εt-2 , ε0 ∼ N (0, I). Following [30,38], the signal-to-noise ratio (SNR) can be calculated as: Datasets. The reference image sources for experiments are presented in Tab. 4. We also label the name of the style and object in the reference image, which is utilized in the first stage to transform the reference image into a \"style\" noise. Besides, the name of the style is also used to initialize the learnable style token.\nSNR(t) : = αt α0 2 t i=0 αt+1 αi+1 1 αi+1 -1 - 1 αi -1 2 2 = 1 t i=0 α0 αi+1 1 αi+1 -1 - 1 αi -1 2 . (17\n)\nObjects for generation. In the first stage, we generate 15 objects for each reference image which is utilized to fine-tune the learnable style token in the second stage. Specifically, these objects are cat, lighthouse, volcano, goldfish, table lamp, tram, palace, tower, cup, desk, chair, pot, laptop, door, and car. For quantitative comparisons, we utilize object classes in CIFAR100 [27], i.e., 100 classes, as the target objects. The details of the objects are presented in Tab. 3, where the 100 classes are categorized into 20 superclasses for better visualization.\nDetails on user study. In the user study, we randomly select 30 sets of results. Each set of results contains five generated images obtained by our INSTASTYLE, StyleDrop [44], DreamBooth (DB) [40], Custom Diffusion (CD) [28], and Textual Inversion (TI) [10]. We conduct the blind evaluation where participants will not be informed about the methods utilized for the images. Fig. 18 presents the interface of the user study. 
For each generated image, we ask the user three questions to score the image in terms of style alignment, text alignment, and overall stylized generation performance. Besides, to evaluate the level of agreement among the responses provided by all participants, we calculate the inter-rater agreement score in terms of Fleiss' kappa [9]. The average inter-rater agreement score of our user study is 0.27, which indicates a fair agreement." }, { "figure_ref": [], "heading": "Instructions", "publication_ref": [], "table_ref": [], "text": "Task: Given a reference image and a target prompt, there are five generated images based on five different methods, score their performance based on the question. Please note that the relative scores of each method should be consistent with your assessment. The definition of style: style (from Merriam-Webster): A particular manner or technique by which something is done, created, or performed. " }, { "figure_ref": [], "heading": "Questions", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1", "fig_9", "fig_10", "fig_9", "fig_10", "fig_10", "fig_10", "fig_10", "fig_10" ], "heading": "More Qualitative Results", "publication_ref": [], "table_ref": [], "text": "Stylized generation on various styles. We provide additional visualization results in Figs. 9 to 11. Specifically, Figs. 9 and 10 show qualitative results on various reference styles. Fig. 11 shows qualitative results on various target objects. On the one hand, our method can capture fine-grained style details as well as generate high-quality objects, demonstrating the effectiveness of our approach. On the other hand, our method can be utilized to generate various stylized objects, indicating the universality of our proposed method.\nComparison results. In addition to Fig. 4 in the main paper, we provide more comparison results with other methods to further show the superiority of our method in Figs. 12 to 15. As present in these results, our INSTASTYLE exhibits better performance in style preservation and content generation. Take Fig. 12 as an example, our INSTASTYLE achieves better image quality than StyleDrop. Although DreamBooth can preserve the style of the reference image, generating target objects is challenging for DreamBooth. Custom Diffusion and Textual Inversion can generate objects in high fidelity. However, the generated images are in the style of a superclass style as the reference image, rather than the fine-grained style of the reference image.\nCombination of multiple styles. In addition to Figs. 5 and 6 in the main paper, we provide additional style combination visualization results in Figs. 16 and17. We set the noise mix ratio α and the prompt mix ratio β to be equal, which are 0, 0.1, 0.3, 0.5, 0.7, 0.9, and 1 from left to right. As shown in Fig. 16, our approach supports adjusting the degree of two styles during combination and can generate various target objects, demonstrating the flexibility and universality of our approach. To better illustrate the creative ability of our method, we present the style combination results of a fixed style with different other styles in Fig. 17. As shown in Fig. 17, when the mix ratio is small, e.g., 0.1, the generation results among different style combinations look similar. This can be attributed to the information from the second style is too little, which is not enough to affect the style of the generated image. 
When the mix ratio is medium, e.g., 0.5, the results present better style combination effects and there is a significant difference among different style combinations. For example, the results in Fig. 17 (a) contain green color and textures from the second style. The results in Fig. 17 (b) have some white space just as the reference watercolor dog image. The results in Fig. 17 (c) have some characteristics of watercolor painting which is similar to the reference watercolor mountain. When the mix ratio is large, the resulting images tend towards the second style." }, { "figure_ref": [ "fig_12" ], "heading": "More Analysis", "publication_ref": [], "table_ref": [], "text": "Analysis of the timestep of DDIM inversion. We analyze the effect of the terminal timestep of DDIM inversion in inference time. As shown in Fig. 19, with the increase of timestep, the style information of the picture can always be preserved, while the redundant structural information, e.g., the object in the reference image, is gradually eliminated. On the one hand, this further demonstrates our finding that the inversion noise from a stylized reference image inherently carries the style signal. On the other hand, the avoiding of redundant structural information makes our approach flexible to generate new content. In practice, we set the timestep as 1000 for a trade-off between preserving the style information and avoiding the negative effects of redundant information.\nAnalysis of guidance scale. In Fig. 20, we provide visualization results of INSTASTYLE with varying levels of guidance scales. When the guidance scale is small, the generated image can preserve the style information in the reference image but may have difficulty in generating the target object. As the guidance scale increases, the model synthesizes more precise and refined objects at the price of losing the style information. A medium guidance scale can make a trade-off between the style and content and we set the guidance scale to 2.5 in all experiments. " }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "In this supplementary material, we first present additional preliminaries for the diffusion model in Sec. 6. Subsequently, the derivation of the signal-to-noise ratio (SNR) of the inversion noise is shown in Sec. 7. Finally, we show more experiment details and experiment results in Sec. 8. Specifically, implementation details including the datasets, experiment settings, and user study settings are provided in Sec. 8.1. We present more qualitative results and the corresponding analysis of stylized generation (Figs. " } ]
Figure 1. Visualization of INSTASTYLE. (a) Stylized image generation with a single reference image. Each column belongs to the same style category, i.e., "Chinese ink painting", "Watercolor painting", "Aurora", and "Van Gogh painting". Our method excels at capturing style details including colors, textures, and brush strokes. (b) Combination of two styles. The first column and third column show stylized image generation given style 1 and style 2 as references, respectively. The middle column shows the generated images in the combined style. (c) Smooth style combination between two styles: our method supports adjusting the degree of two styles during combination, dynamically ranging from one style to another.
INSTASTYLE: Inversion Noise of a Stylized Image is Secretly a Style Adviser
[ { "figure_caption": "Figure 3 .3Figure 3. The training process of INSTASTYLE. (a) The first stage is initial stylized image generation. The reference image is inverted to noise conditioned on a prompt via DDIM Inversion. Then the inversion noise is utilized to generate initial stylized images. (b) The second stage is prompt refinement which leverages the initial stylized images in high quality to learn a style token.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Qualitative comparison of stylized image generation on various styles. Objects for synthesis are Bicycle, Sunflowers and Chair. Our method excels at capturing fine-grained style information, such as color, textures, and brushstrokes.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Qualitative results. The qualitative comparison results are shown in Fig.4. Textual Inversion and Custom Diffusion often yield an unsatisfactory style. Taking the well-known Van Gogh painting style (e.g., row 3 and row 4) as an example, they fail to capture fine-grained style information in 1 https://github.com/aim-uofa/StyleDrop-PyTorch 2 https://github.com/huggingface/diffusers the reference image. The fine-tuning-based methods, i.e., DreamBooth and StyleDrop, result in distorted content. For example, they generate bicycles with leaked content from the reference image in the last row.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Visualization of the combination of two styles. In each case, we show our combination results (the biggest image). For better comparison, we also present the stylized generation results of style1 and style2 in the top left (blue box) and bottom left (red box), respectively. Objects for synthesis are Boat, Truck, Horse, Elephant, Pear, Lion, Plate, Pot, Helicopter, Bicycle, Palace, and Motorcycle.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .Figure 7 .67Figure6. Visualization of the combination of two styles. The style can be controlled via the noise mix ratio and the prompt mix ratio. Our approach enables continuous style combinations, demonstrating the flexibility and diversity of our approach.", "figure_data": "", "figure_id": "fig_4", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Figure 11 .Figure 12 .1112Figure 11. Qualitative results on various objects. The image on the top left is the reference style image. Objects for synthesis are Apple, Aqariumfish, Baby, Bear, Beetle, Bicycle, Boy, Bus, Butterfly, Castle, Cattle, Dolphin, Elephant, Fox, Girl, Lion, Lizard, Man, Motorcycle, Mountain, Mushrooms, Orchids, Otter, Pears,Pine, Poppies, Rabbit, Raccoon, Road, Roses, Squirrel, Sunflowers, Sweetpeppers, Tank, Tiger, Tractor, Tulips, Turtle, and Wolf .", "figure_data": "", "figure_id": "fig_5", "figure_label": "1112", "figure_type": "figure" }, { "figure_caption": "Figure 13 .13Figure 13. More comparison results (b). Objects for synthesis are Butterfly, Castle, Girl, Motorcycle, Orchids, and Wolf. Our IN-STASTYLE exhibits better performance in both style preservation and content generation.", "figure_data": "", "figure_id": "fig_6", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 .14Figure 14. More comparison results (c). 
Objects for synthesis are Butterfly, Castle, Girl, Motorcycle, Orchids, and Wolf. Our IN-STASTYLE exhibits better performance in both style preservation and content generation.", "figure_data": "", "figure_id": "fig_7", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 .15Figure 15. More comparison results (d). Objects for synthesis are Butterfly, Castle, Girl, Motorcycle, Orchids, and Wolf. Our IN-STASTYLE exhibits better performance in both style preservation and content generation.", "figure_data": "", "figure_id": "fig_8", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16 .16Figure16. More style combination results. Our approach supports adjusting the degree of two styles during combination and can generate various target objects, demonstrating the flexibility and universality of our approach. The noise mix ratio and the prompt mix ratio are set to be equal, which are 0, 0.1, 0.3, 0.5, 0.7, 0.9, and 1 from left to right. Objects for synthesis are Apple, Cat, Elephant, Grape, Horse, Rabbit, and Taxi.", "figure_data": "", "figure_id": "fig_9", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 17 .17Figure 17. More style combination results. We present style combination results of a Van Gogh painting with different reference style images, illustrating the creative ability of our method,. The noise mix ratio and the prompt mix ratio are set to be equal, which are 0, 0.1, 0.3, 0.5, 0.7, 0.9, and 1 from left to right. The reference images are shown in the bottom right corner of the left most image and the right most image, respectively. Objects for synthesis are Boat, Cabin, and Helicopter.", "figure_data": "", "figure_id": "fig_10", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Style alignment:Figure 18 .18Figure 18. User study interface.", "figure_data": "", "figure_id": "fig_11", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Figure 19 .19Figure 19. Visualization of various timestep.With the increase of timestep, the style information of the picture can always be preserved, while the redundant structural information, e.g., the object in the reference image, is gradually eliminated. The timestep of adding noise in DDIM inversion is set to 200, 400, 600, 800, and 1000, respectively. Objects for synthesis are Sunflowers and Lion.", "figure_data": "", "figure_id": "fig_12", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Optimized style token embedding v * and diffusion model ε * θ . // Initial Stylized Image Generation 3 Compute the inversion noise z T using DDIM inversion over I conditioned on C based on Eq. (6);", "figure_data": "Algorithm 1: INSTASTYLE4 Generate M images conditioned on a predefinedprompt set based on Eq. (5);// Prompt Refinement5 Select N images from M images with humanfeedback as the training dataset q(x 0 , c);6 repeat7", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "User study. Style and Text denote user satisfaction with the style and content, respectively. Overall denotes user satisfaction when considering both style and content simultaneously.", "figure_data": "MethodStyle (↑) Text (↑) Overall (↑)Textual Inversion [10]1.924.271.88Custom Diffusion [28]2.184.102.09DreamBooth [40]3.163.012.57StyleDrop [44]3.363.742.74Ours3.734.623.61are detailed in Tab. 2. 
Our INSTASTYLE achieves the highest alignment score on all three metrics, illustrating that our method surpasses its competitors in both style preservation and content generation.", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Objects for generation in the quantitative experiment.", "figure_data": "Superclass | Objects | Superclass | Objects
aquatic mammals | beaver, dolphin, otter, seal, whale | large natural outdoor scenes | cloud, forest, mountain, plain, sea
fish | aquarium fish, flatfish, ray, shark, trout | large omnivores and herbivores | camel, cattle, chimpanzee, elephant, kangaroo
flowers | orchids, poppies, roses, sunflowers, tulips | medium-sized mammals | fox, porcupine, possum, raccoon, skunk
food containers | bottles, bowls, cans, cups, plates | non-insect invertebrates | crab, lobster, snail, spider, worm
fruit and vegetables | apples, mushrooms, oranges, pears, sweet peppers | people | baby, boy, girl, man, woman
household electrical devices | clock, computer keyboard, lamp, telephone, television | reptiles | crocodile, dinosaur, lizard, snake, turtle
household furniture | bed, chair, couch, table, wardrobe | small mammals | hamster, mouse, rabbit, shrew, squirrel
insects | bee, beetle, butterfly, caterpillar, cockroach | trees | maple, oak, palm, pine, willow
large carnivores | bear, leopard, lion, tiger, wolf | vehicles 1 | bicycle, bus, motorcycle, pickup truck, train
large man-made outdoor things | bridge, castle, house, road, skyscraper | vehicles 2 | lawn-mower, rocket, streetcar, tank, tractor", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Image sources for experiments.", "figure_data": "Style | Object | Link", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" } ]
Xing Cui; Zekun Li; Peipei Li; Huaibo Huang; Zhaofeng He
[ { "authors": "Yuval Alaluf; Elad Richardson; Gal Metzer; Daniel Cohen-Or", "journal": "", "ref_id": "b0", "title": "A neural space-time representation for text-toimage personalization", "year": "2023" }, { "authors": "Haibo Chen; Lei Zhao; Zhizhong Wang; Huiming Zhang; Zhiwen Zuo; Ailin Li; Wei Xing; Dongming Lu", "journal": "", "ref_id": "b1", "title": "Dualast: Dual style-learning networks for artistic style transfer", "year": "2021" }, { "authors": "Junhyeong Cho; Gilhyun Nam; Sungyeon Kim; Hunmin Yang; Suha Kwak", "journal": "", "ref_id": "b2", "title": "Promptstyler: Prompt-driven style generation for source-free domain generalization", "year": "2023" }, { "authors": "Xing Cui; Zekun Li; Peipei Li; Yibo Hu; Hailin Shi; Zhaofeng He", "journal": "", "ref_id": "b3", "title": "I2edit: Towards multi-turn interactive image editing via dialogue", "year": "2023" }, { "authors": "Yingying Deng; Fan Tang; Weiming Dong; Chongyang Ma; Xingjia Pan; Lei Wang; Changsheng Xu", "journal": "", "ref_id": "b4", "title": "Stytr2: Image style transfer with transformers", "year": "2022" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "", "ref_id": "b5", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Patrick Esser; Robin Rombach; Bjorn Ommer", "journal": "", "ref_id": "b6", "title": "Taming transformers for high-resolution image synthesis", "year": "2021" }, { "authors": "Nicolas Martin; Marco Everaert; Sami Bocchio; Sabine Arpa; Radhakrishna Süsstrunk; Achanta", "journal": "", "ref_id": "b7", "title": "Diffusion in style", "year": "2023" }, { "authors": "L Joseph; Jacob Fleiss; Cohen", "journal": "EPM", "ref_id": "b8", "title": "The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability", "year": "1973" }, { "authors": "Rinon Gal; Yuval Alaluf; Yuval Atzmon; Or Patashnik; Amit Haim Bermano; Gal Chechik; Daniel Cohen-Or", "journal": "ICLR", "ref_id": "b9", "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion", "year": "2022" }, { "authors": "Rinon Gal; Or Patashnik; Haggai Maron; H Amit; Gal Bermano; Daniel Chechik; Cohen-Or", "journal": "ACM TOG", "ref_id": "b10", "title": "Stylegan-nada: Clipguided domain adaptation of image generators", "year": "2022" }, { "authors": "Leon A Gatys; Alexander S Ecker; Matthias Bethge", "journal": "", "ref_id": "b11", "title": "Image style transfer using convolutional neural networks", "year": "2016" }, { "authors": "Leon A Gatys; Alexander S Ecker; Matthias Bethge; Aaron Hertzmann; Eli Shechtman", "journal": "", "ref_id": "b12", "title": "Controlling perceptual factors in neural style transfer", "year": "2017" }, { "authors": "Bohai Gu; Fan Heng; Libo Zhang", "journal": "", "ref_id": "b13", "title": "Two birds, one stone: A unified framework for joint learning of image and video style transfers", "year": "2023" }, { "authors": "Nicholas Guttenberg", "journal": "", "ref_id": "b14", "title": "Diffusion with offset noise", "year": "2023" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "NeurIPSW", "ref_id": "b15", "title": "Classifier-free diffusion guidance", "year": "2021" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "NeurIPS", "ref_id": "b16", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Kibeom Hong; Seogkyu Jeon; Junsoo Lee; Namhyuk Ahn; Kunhee Kim; Pilhyeon Lee; Daesik Kim; Youngjung Uh; Hyeran Byun", "journal": "", "ref_id": "b17", "title": 
"Aespa-net: Aesthetic pattern-aware style transfer networks", "year": "2023" }, { "authors": "J Edward; Phillip Hu; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "ICLR", "ref_id": "b18", "title": "Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Siyu Huang; Jie An; Donglai Wei; Jiebo Luo; Hanspeter Pfister", "journal": "", "ref_id": "b19", "title": "Quantart: Quantizing image style transfer towards high visual fidelity", "year": "2023" }, { "authors": "Xun Huang; Serge Belongie", "journal": "", "ref_id": "b20", "title": "Arbitrary style transfer in real-time with adaptive instance normalization", "year": "2017" }, { "authors": "Yongcheng Jing; Yang Liu; Yezhou Yang; Zunlei Feng; Yizhou Yu; Dacheng Tao; Mingli Song", "journal": "", "ref_id": "b21", "title": "Stroke controllable fast style transfer with adaptive receptive fields", "year": "2018" }, { "authors": "Yongcheng Jing; Xiao Liu; Yukang Ding; Xinchao Wang; Errui Ding; Mingli Song; Shilei Wen", "journal": "", "ref_id": "b22", "title": "Dynamic instance normalization for arbitrary style transfer", "year": "2020" }, { "authors": "Yongcheng Jing; Yining Mao; Yiding Yang; Yibing Zhan; Mingli Song; Xinchao Wang; Dacheng Tao", "journal": "", "ref_id": "b23", "title": "Learning graph neural networks for image style transfer", "year": "2022" }, { "authors": "Tero Karras; Samuli Laine; Timo Aila", "journal": "", "ref_id": "b24", "title": "A style-based generator architecture for generative adversarial networks", "year": "2019" }, { "authors": "Zhanghan Ke; Yuhao Liu; Lei Zhu; Nanxuan Zhao; Rynson W H Lau", "journal": "", "ref_id": "b25", "title": "Neural preset for color style transfer", "year": "2023" }, { "authors": "Alex Krizhevsky", "journal": "", "ref_id": "b26", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Nupur Kumari; Bingliang Zhang; Richard Zhang; Eli Shechtman; Jun-Yan Zhu", "journal": "", "ref_id": "b27", "title": "Multi-concept customization of text-to-image diffusion", "year": "2023" }, { "authors": "Peipei Li; Rui Wang; Huaibo Huang; Ran He; Zhaofeng He", "journal": "", "ref_id": "b28", "title": "Pluralistic aging diffusion autoencoder", "year": "2023" }, { "authors": "Shanchuan Lin; Bingchen Liu; Jiashi Li; Xiao Yang", "journal": "", "ref_id": "b29", "title": "Common diffusion noise schedules and sample steps are flawed", "year": "2023" }, { "authors": "Tianwei Lin; Zhuoqi Ma; Fu Li; Dongliang He; Xin Li; Errui Ding; Nannan Wang; Jie Li; Xinbo Gao", "journal": "", "ref_id": "b30", "title": "Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer", "year": "2021" }, { "authors": "Nan Liu; Shuang Li; Yilun Du; Antonio Torralba; Joshua B Tenenbaum", "journal": "", "ref_id": "b31", "title": "Compositional visual generation with composable diffusion models", "year": "2022" }, { "authors": "Chenlin Meng; Yutong He; Yang Song; Jiaming Song; Jiajun Wu; Jun-Yan Zhu; Stefano Ermon", "journal": "ICLR", "ref_id": "b32", "title": "Sdedit: Guided image synthesis and editing with stochastic differential equations", "year": "2021" }, { "authors": "Ron Mokady; Amir Hertz; Kfir Aberman; Yael Pritch; Daniel Cohen-Or", "journal": "", "ref_id": "b33", "title": "Null-text inversion for editing real images using guided diffusion models", "year": "2023" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark 
Chen", "journal": "ICLR", "ref_id": "b34", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2022" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b35", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Goh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "", "ref_id": "b36", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b37", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b38", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b39", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Wei Wei; Tingbo Hou; Yael Pritch; Neal Wadhwa; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b40", "title": "Hyperdreambooth: Hypernetworks for fast personalization of text-to-image models", "year": "2023" }, { "authors": "Tim Salimans; Jonathan Ho", "journal": "ICLR", "ref_id": "b41", "title": "Progressive distillation for fast sampling of diffusion models", "year": "2021" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "", "ref_id": "b42", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Kihyuk Sohn; Nataniel Ruiz; Kimin Lee; Daniel Castro Chin; Irina Blok; Huiwen Chang; Jarred Barber; Lu Jiang; Glenn Entis; Yuanzhen Li", "journal": "", "ref_id": "b43", "title": "Styledrop: Text-to-image generation in any style", "year": "2023" }, { "authors": "Kihyuk Sohn; Albert Shaw; Yuan Hao; Han Zhang; Luisa Polania; Huiwen Chang; Lu Jiang; Irfan Essa", "journal": "", "ref_id": "b44", "title": "Learning disentangled prompts for compositional image synthesis", "year": "2023" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "ICLR", "ref_id": "b45", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": "Hao Tang; Songhua Liu; Tianwei Lin; Shaoli Huang; Fu Li; Dongliang He; Xinchao Wang", "journal": "", "ref_id": "b46", "title": "Master: Meta style transformer for controllable zero-shot and few-shot artistic style transfer", "year": "2023" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "NeurIPS", "ref_id": "b47", "title": "Attention is all you need", "year": "2017" }, { "authors": "Huan Wang; Yijun Li; Yuehai Wang; Haoji Hu; Ming-Hsuan Yang", "journal": "", "ref_id": "b48", "title": "Collaborative distillation for ultra-resolution universal style transfer", "year": "2020" }, { "authors": "Rui Wang; Peipei Li; Huaibo Huang; Chunshui Cao; Ran He; Zhaofeng He", "journal": "NeurIPS", "ref_id": "b49", "title": "Learning-to-rank meets language: Boosting 
language-driven ordering alignment for ordinal classification", "year": "2023" }, { "authors": "Zhizhong Wang; Lei Zhao; Wei Xing", "journal": "", "ref_id": "b50", "title": "Stylediffusion: Controllable disentangled style transfer via diffusion models", "year": "2023" }, { "authors": "Yuxiang Wei; Yabo Zhang; Zhilong Ji; Jinfeng Bai; Lei Zhang; Wangmeng Zuo", "journal": "", "ref_id": "b51", "title": "Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation", "year": "2023" }, { "authors": "Linfeng Wen; Chengying Gao; Changqing Zou", "journal": "", "ref_id": "b52", "title": "Capvstnet: Content affinity preserved versatile style transfer", "year": "2023" }, { "authors": "Xiaolei Wu; Zhihao Hu; Lu Sheng; Dong Xu", "journal": "", "ref_id": "b53", "title": "Styleformer: Real-time arbitrary style transfer via parametric style composition", "year": "2021" }, { "authors": "Xin Xie; Yi Li; Huaibo Huang; Haiyan Fu; Wanwan Wang; Yanqing Guo", "journal": "", "ref_id": "b54", "title": "Artistic style discovery with independent components", "year": "2022" }, { "authors": "Wenju Xu; Chengjiang Long; Yongwei Nie", "journal": "", "ref_id": "b55", "title": "Learning dynamic style kernels for artistic style transfer", "year": "2023" }, { "authors": "Zipeng Xu; Enver Sangineto; Nicu Sebe", "journal": "", "ref_id": "b56", "title": "Stylerdalle: Language-guided style transfer using a vector-quantized tokenizer of a large-scale generative model", "year": "2023" }, { "authors": "Serin Yang; Hyunmin Hwang; Jong Chul; Ye ", "journal": "", "ref_id": "b57", "title": "Zero-shot contrastive loss for text-guided diffusion image style transfer", "year": "2023" }, { "authors": "Yuxin Zhang; Nisha Huang; Fan Tang; Haibin Huang; Chongyang Ma; Weiming Dong; Changsheng Xu", "journal": "", "ref_id": "b58", "title": "Inversion-based style transfer with diffusion models", "year": "2023" }, { "authors": "Zicheng Zhang; Bonan Li; Xuecheng Nie; Congying Han; Tiande Guo; Luoqi Liu", "journal": "", "ref_id": "b59", "title": "Towards consistent video editing with text-to-image diffusion models", "year": "2023" }, { "authors": "Yufan Zhou; Ruiyi Zhang; Tong Sun; Jinhui Xu", "journal": "", "ref_id": "b60", "title": "Enhancing detail preservation for customized text-to-image generation: A regularization-free approach", "year": "2023" }, { "authors": "Mingrui Zhu; Xiao He; Nannan Wang; Xiaoyu Wang; Xinbo Gao", "journal": "", "ref_id": "b61", "title": "All-to-key attention for arbitrary style transfer", "year": "2023" }, { "authors": "Van Gogh; House", "journal": "", "ref_id": "b62", "title": "", "year": null }, { "authors": "", "journal": "", "ref_id": "b63", "title": "Visualization of various guidance scales. A medium guidance scale can make a trade-off between the style and content. The guidance scale in inference is set to 1", "year": null } ]
[ { "formula_coordinates": [ 3, 537.37, 420.83, 7.74, 8.64 ], "formula_id": "formula_0", "formula_text": ")1" }, { "formula_coordinates": [ 3, 375.17, 466.4, 169.95, 17.63 ], "formula_id": "formula_1", "formula_text": "z t = √ α t z 0 + √ 1 -α t ε,(2)" }, { "formula_coordinates": [ 3, 337.01, 528.25, 208.11, 9.65 ], "formula_id": "formula_2", "formula_text": "p θ (z t-1 |z t ) := N (z t-1 ; µ θ (z t , t), Σ θ (z t , t)).(3)" }, { "formula_coordinates": [ 4, 58.47, 327.68, 224.02, 14.37 ], "formula_id": "formula_3", "formula_text": "z t-1 = αt-1 αt z t + 1 αt-1 -1 - 1 αt -1 • εθ , (5" }, { "formula_coordinates": [ 4, 282.49, 330.81, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 4, 58.48, 418.87, 227.89, 14.38 ], "formula_id": "formula_5", "formula_text": "z t+1 = αt+1 αt z t + 1 αt+1 -1 - 1 αt -1 • εθ . (6)" }, { "formula_coordinates": [ 4, 57.93, 636.09, 228.43, 14.6 ], "formula_id": "formula_6", "formula_text": "z t+1 = αt+1 α0 z 0 + t i=0 αt+1 αi+1 1 αi+1 -1 - 1 αi -1 2 • ε0 ,(7)" }, { "formula_coordinates": [ 4, 315.12, 270.44, 226.12, 31.07 ], "formula_id": "formula_7", "formula_text": "SNR(t) := 1 t i=0 α0 αi+1 1 αi+1 -1 - 1 αi -1 2 . (8" }, { "formula_coordinates": [ 4, 541.24, 277.5, 3.87, 8.64 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 4, 308.86, 328.88, 236.25, 29.15 ], "formula_id": "formula_9", "formula_text": "β t = ( √ 0.00085 • (1 -j) + √ 0.012 • j) 2 , where j = t-1" }, { "formula_coordinates": [ 4, 340.71, 691.6, 204.41, 25.24 ], "formula_id": "formula_10", "formula_text": "Attention(Q, K, V ) = Softmax QK T √ d ′ V.(9)" }, { "formula_coordinates": [ 5, 50.11, 312.69, 236.25, 22.82 ], "formula_id": "formula_11", "formula_text": "W ′ = W + BA, where B ∈ R d×r , A ∈ R r×d ′" }, { "formula_coordinates": [ 5, 64.12, 549.48, 222.24, 24.84 ], "formula_id": "formula_12", "formula_text": "εθ (z t , t, C, ∅) = ε θ (z t , t, ∅) + w • (ε θ (z t , t, C) -ε θ (z t , t, ∅)),(10)" }, { "formula_coordinates": [ 5, 309.26, 277.65, 208.18, 61.1 ], "formula_id": "formula_13", "formula_text": "8 z 0 = E(x 0 ) 9 t ∼Uniform({1, . . . , T }) 10 ε ∼ N (0, I) 11 z t = √ α t z 0 + √ 1 -α t ε 12 Take gradient step on ∇ θ,v ∥ε -ε θ (z t , t, c)∥ 2 2" }, { "formula_coordinates": [ 5, 360.81, 489.12, 184.3, 9.68 ], "formula_id": "formula_14", "formula_text": "z t = (1 -M) ⊙ z t1 + M ⊙ z t2 ,(11)" }, { "formula_coordinates": [ 5, 317.52, 607.85, 223.44, 34.92 ], "formula_id": "formula_15", "formula_text": "εθ (z t , t, C, ∅) = ε θ (z t , t, ∅) + w • (1 -β) • (ε θ (z t , t, C 1 ) -ε θ (z t , t, ∅)) + w • β • (ε θ (z t , t, C 2 ) -ε θ (z t , t, ∅)) , (12" }, { "formula_coordinates": [ 5, 540.96, 620.66, 4.15, 8.64 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 17, 207.53, 159.24, 337.58, 23.59 ], "formula_id": "formula_17", "formula_text": "µ θ (z t , t) = 1 √ α t z t - β t √ 1 -α t ε θ (z t , t) ,(13)" }, { "formula_coordinates": [ 17, 222.88, 234.39, 318.08, 9.65 ], "formula_id": "formula_18", "formula_text": "z t-1 = µ θ (z t , t) + σ t z, z ∼ N (0, I). (14" }, { "formula_coordinates": [ 17, 540.96, 234.7, 4.15, 8.64 ], "formula_id": "formula_19", "formula_text": ")" }, { "formula_coordinates": [ 17, 198.18, 292.55, 342.78, 17.63 ], "formula_id": "formula_20", "formula_text": "min θ E z0,ε∼N (0,I),t∼Uniform(1,T ) ∥ε -ε θ (z t , t)∥ 2 2 . 
(15" }, { "formula_coordinates": [ 17, 540.96, 295.84, 4.15, 8.64 ], "formula_id": "formula_21", "formula_text": ")" }, { "formula_coordinates": [ 17, 128.25, 417.58, 416.86, 181.63 ], "formula_id": "formula_22", "formula_text": "z t+1 = α t+1 α t z t + 1 α t+1 -1 - 1 α t -1 • ε t = α t+1 α t α t α t-1 z t-1 + 1 α t -1 - 1 α t-1 -1 • ε t-1 + 1 α t+1 -1 - 1 α t -1 • ε t = α t α t-1 z t-1 + α t+1 α t 1 α t -1 - 1 α t-1 -1 • ε t-1 + 1 α t+1 -1 - 1 α t -1 • ε t = α t α t-1 z t-1 + α t+1 α t 1 α t -1 - 1 α t-1 -1 2 + 1 α t+1 -1 - 1 α t -1 2 • εt-1 = α t α t-1 z t-1 + α t+1 α t 1 α t -1 - 1 α t-1 -1 2 + α t+1 α t+1 1 α t+1 -1 - 1 α t -1 2 • εt-1 = ... = α t α 0 z 0 + t i=0 α t+1 α i+1 1 α i+1 -1 - 1 α i -1 2 • ε0 ,(16)" }, { "formula_coordinates": [ 17, 200.56, 632.49, 340.4, 70.68 ], "formula_id": "formula_23", "formula_text": "SNR(t) : = αt α0 2 t i=0 αt+1 αi+1 1 αi+1 -1 - 1 αi -1 2 2 = 1 t i=0 α0 αi+1 1 αi+1 -1 - 1 αi -1 2 . (17" }, { "formula_coordinates": [ 17, 540.96, 664.25, 4.15, 8.64 ], "formula_id": "formula_24", "formula_text": ")" } ]
2023-11-25
[ { "figure_ref": [ "fig_1", "fig_2", "fig_2", "fig_1", "fig_2", "fig_2", "fig_1" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b11", "b12", "b5", "b6", "b13", "b14", "b8", "b10", "b6", "b13", "b16", "b9", "b10", "b12", "b6", "b13", "b10", "b9", "b17", "b18", "b19", "b20", "b3", "b21", "b22", "b17", "b19", "b20", "b0", "b0", "b2", "b17", "b18", "b23" ], "table_ref": [], "text": "I N the past three years, the computer vision community has video learning [12], [13]. The central challenge lies in the modality disparity between images and videos. Specifically, videos inherently contain unique temporal information, and video-text data is generally more complex and noisy when compared to image-text data. Consequently, our investigation, built upon existing temporal modeling methods and various video-language datasets, has revealed two often overlooked points. As depicted in Fig. 1, we have found that current efforts in temporal modeling are predominantly confined to either video-language tasks [6], [7], [14], [15] or videospecific tasks [9]- [11], resulting in reduced efficiency when applied to a different category of video task. Meanwhile, our observation indicates that video-text paired training samples typically suffer from partial misalignment in both pretraining and downstream datasets.\nTo gain a deep insight into the first issue, we further dive into the structures of existing CLIP-based temporal modules. We find current efforts can be roughly categorized into posterior structure based methods and intermediate structure based methods as shown in Fig. 2. Posterior structure based methods [7], [14]- [17] adopt a late modeling strategy, utilizing CLIP as a feature extractor and applying temporal modeling to embeddings independently extracted from different frames. Built upon the highly semantic embeddings, this structure, while beneficial for preserving well-aligned visual-language representations, falls short in capturing the low-level spatialtemporal visual patterns among frames, which are essential for video understanding. As a result, methods based on posterior structures tend to exhibit marginal performance improvements, a trend that becomes particularly pronounced in action recognition tasks where low-level spatial-temporal visual patterns are crucial. Unlike posterior structure based methods, intermediate structure based methods [10], [11], [13] equip CLIP with temporal modeling capability by integrating temporal modeling modules between CLIP layers, which sees significant improvements in the video recognition task. Nevertheless, we have observed that incorporating additional modules inside CLIP would impact the pretrained high-level semantic knowledge in the model, leading to trivial or even negative impacts on the text-video retrieval task. These statistical patterns are more pronounced in Fig. 3, where both the posterior structure and intermediate structure excel only in their respective tasks.\nIn contrast to the extensive research on temporal modeling, another critical issue has received limited attention: video-text paired training samples generally exhibit partial misalignment. Partial misalignment refers to the situation in which the aligned information between a video and its corresponding text is distributed only across specific frames Fig. 1. The two issues in image-to-video transfer for vision-language models. (a) Generalizability: We illustrate CLIP-based temporal modules struggle to generalize across different video tasks. 
We present the performance of various models concerning the baseline, which is based on CLIP with mean pooling.\nThe models include the text-video retrieval models CLIP4clip-seqTrans [7] and CLIP2video-TDB [14], as well as video recognition models STadapter [11] and XCLIP [10]. Evaluation is based on Recall@1 for MSRVTT [18] and Top-1 accuracy for Kinetics-400 [19]. (b) Partial Misalignment: Above, we showcase a misaligned training sample in MSRVTT, where only \"people on a beach\" and the 1 st , 5 th and 6 th frames are aligned to each other. Below, we quantitatively assess the extent of partial misalignment in video-text datasets, including MSRVTT, DiDeMo [20], and WebVid2.5M [21]. The degree of alignment progressively deteriorates from \"up\" to \"bottom\". and phrases, while other components of the video/text are noisy which hinders precise vision-language alignment and strong image-to-video adaptation. Fig. 1(b) shows a case of partial misalignment, where only the phrase \"people on a beach\" and the red-marked frames are semantically aligned. Due to the complexity and redundancy of video content, such cases occur much more frequently in video-text than in imagetext data. Moreover, the situation is even more severe in video pretraining datasets, which are constructed using instructional videos and noisy narrations [4], [22], [23]. To quantitatively assess the partial misalignment present in video datasets, we have selected and analyzed two downstream datasets (MSR-VTT [18] and DiDeMo [20]) and one pretraining dataset (WebVid2.5m [21]). Specifically, we employ CLIP-ViT-L/14 [1] to measure misalignment, utilizing dot-product similarity followed by sigmoid to compute the correlation between text and each frame. A frame is considered aligned with the text if the probability exceeds 0.5. Then, we categorize the video-text alignment degree into three levels: (1)up when more than 2/3 frames are aligned with the text. (2)bottom when less than 1/3 frames are aligned with the text. (3)middle in between the two. As revealed in Fig. 1(b), in all three datasets, more than half of the video-text pairs suffer from partial misalignment (middle and bottom), even if these datasets are widely recognized for their high quality in video-text tasks.\nPartial misalignment, together with the temporal modeling, has raised a subsequent challenge: post-pretraining 1 imagelanguage models on large-scale video-language datasets shows very limited gains. As depicted in Fig. 3(b), we can observe that CLIP, after being post-pretrained on either WebVid10M or HowTo100M, does not significantly outperform the baseline without post-pretraining.\nFrom the aforementioned analysis, we conclude two key factors for extending image-language pretrained models to the video domain: (1) Effective temporal modeling while taking advantage of knowledge in different levels of representation. (2) Suppressing the partial misalignment during training on video-text data. To this end, we propose Spatial-Temporal Auxiliary Network with Mutual-guided alignment module (Mug-STAN) -a plug-and-use framework adapting image-language models to general video tasks, where STAN introduces effective temporal modeling and Mug mitigates partial misalignment during training. In Fig. 2 and3(a), it is noticeable that temporal modeling structure in STAN exhibits strong performance in both retrieval tasks and recognition tasks. In Fig. 
3(b), we can see that STAN and Mug contribute significantly to the effectiveness of post-pretraining respectively, where Mug excels particularly well on the noisy HowTo100M dataset.\nSpecifically, rather than posterior or intermediate structure, our proposed STAN introduces a distinctive branch structure located outside the visual backbone , featuring multiple levels 1 Further pretraining on relatively large scale video-text corpora based on pretrained image models for downstream video tasks is termed as postpretraining. Finetuning means directly tuning for adapting image-text models on downstream video datasets. Evaluation is based on Recall@1 for MSRVTT [18] and Top-1 accuracy for Kinetics-400 [19]. The methods are clustered into posterior structure, intermediate structure, and our branch structure. (b) Performance comparison of post-pretraining on different models. We report the finetuned result of Recall@1 on DiDemo text-video retrieval. Based on CLIP, effective temporal modeling (STAN) and partial-misalignment suppression (Mug) respectively bring noticeable improvements.\nof input, as shown in Fig. 2. This novel structure enables STAN to enrich the features of video frames with spatial-temporal contexts, leveraging different output levels of image-text model, while preserving the forward-propagation of source model. Thereby, it can effectively utilizes both high-level and low-level knowledge from the pretrained model simultaneously, making it adaptable to various downstream video tasks. STAN comprises multiple layers with a spatial-temporal separated design. Each layer conducts spatial-temporal modeling by alternately stacking two distinct modules: an intra-frame module and a cross-frame module. This approach allows the layer to enhance model performance by reusing pretrained parameters from image-text pretrained models to initialize the intra-frame spatial modules. Meanwhile, Mug is constructed using a parameter-free, token-wise interaction modeling mechanism with negligible computational cost, which can be easily plugged into existing state-of-the-arts. Given a video-text pair, we can get its frame-wise feature sequence and text feature sequence, respectively. To realize the mutual-guided alignment, we first perform the frame-token interaction to obtain the frame-specific text embedding for each frame and token-specific video embedding for each token. Then, for each modality, we attain its final global embedding through guidance from the other modality. At last, the pair of mutually guided representations are employed in contrastive learning during post-pretraining or finetuning. In this way, we can capture and align the relevant parts of video and text, freeing the adaptation of image-text pretrained models from the videotext partial misalignment problem.\nThrough extensive experiments, we have demonstrated the impressive performance of our proposed Mug-STAN. Specifically, we have implemented Mug-STAN on two well-known image-language models, CLIP and CoCa. Furthermore, we have adopted a fresh perspective on post-training by evaluating our model on datasets with varying levels of noise, such as WebVid10M and HowTo100M. The comprehensive results highlight the efficacy of Mug-STAN not only in the finetuning but also in post-pretraining. Remarkably, we achieve state-ofthe-art results in both zero-shot and finetuning settings across a diverse range of video tasks, including text-video retrieval, video action recognition, and video detection. 
Moreover, given the current popularity of multimodal dialogue systems, we have also plugged the pretrained Mug-STAN on LLaVa [24], achieving the capability of zero-shot video chatting without any instruction tuning.\nThe main contributions of this paper are:\n• We present an in-depth analysis of the factors that impede the adaptation of image-language models to video domains. By revisiting the temporal modeling on CLIP in current research and carefully examining video-text datasets, we identify non-generalizable temporal modeling and partially misaligned video-text data as the primary culprits affecting the performance. " }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Image-Language PreTraining", "publication_ref": [ "b22", "b24", "b26", "b0", "b2", "b27", "b28", "b0", "b29", "b30", "b31", "b32", "b23" ], "table_ref": [], "text": "Image-Language pre-training has been drawing increasing attention from researchers in the computer vision community [23], [25]- [27]. Recently, contrastive language-image pretraining on web-scale data [1]- [3], [28], [29] has experienced significant success, primarily due to its outstanding performance when applied to various downstream tasks. One of the most renowned works is CLIP [1], which has demonstrated surprising capabilities in zero-shot recognition and domain generalization [30], [31]. The wealth of knowledge contained within these image-language pretrained models holds a promising future for their adaptation to video tasks. Thankfully, our Mug-STAN can be implemented on these image-language models in a plug-and-play manner, leading to substantial performance improvements in various video tasks. It's worth noting that recent advancements in multimodal understanding have been largely propelled by the fusion of image-based vision models with LLMs, such as Flamingo [32], BLIP-2 [33], and LLaVA [24], fortunately, these multimodal dialogue models generally employ CLIP-L/14 as the visual encoder. Consequently, our Mug-STAN can be seamlessly implemented on these models to achieve zero-shot video chatting." }, { "figure_ref": [ "fig_1" ], "heading": "B. Video-Language Pretraining", "publication_ref": [ "b33", "b34", "b20", "b35", "b4", "b12", "b36", "b38", "b5", "b39", "b41", "b39", "b41", "b41" ], "table_ref": [], "text": "As a subset of vision-language pretraining, video-language pretraining has also been the subject of numerous explorations in recent years, such as Violet [34], clipBert [35], Frozen [21], BridgeFormer [36], and Clover [5]. In video-language pretraining, models typically initialize the video encoder and text encoder with separately pre-trained weights [13], [37]- [39], and then use multiple pretraining targets to achieve crossmodal alignment and multimodal learning, such as contrastive learning, masked language modeling, and video-text matching. However, video-language pretrained models face difficulties in simultaneously handling temporal modeling and modality alignment due to the challenges posed by unaligned initialization. In contrast, image-text pretrained models inherently possess extensive knowledge as a result of the vast diversity and scale of image-text data they are trained on. 
As a result, when finetuned on downstream video-language datasets, we have observed significant advantages of image-text pretrained models over video-language pretrained models, even if the former have not been pretrained on video datasets.\nSimilar to our research, CLIP-ViP [6] is among the few studies that delve into the realm of video post-pretraining. However, CLIP-ViP relies on large-scale data and the annotation from an additional captioner for its post-pretraining process. In contrast, our work demonstrates that with an appropriate method, post-pretraining can yield superior results on both smaller datasets (Webvid10M) and noisy datasets (HowTo100M) without requiring extra frame-wise annotation. In addition, several studies have also ventured into the domain of pretraining under noisy and misaligned video-text data [40]- [42]. Miech et al. [40] and Han et al. [42] introduced the MIL-NCE loss and the Temporal Alignment Network, respectively, for noisy video-narration pretraining. Compared to these works, our paper differs in three aspects: (1)Setting. The previous works primarily focus on the datasets filled with completely misaligned video-text pairs and ASR captions (e.g., Howto100M), while our focus lies on the issue of partial misalignment, which is a more general problem and can even occur in relatively high-quality datasets, as depicted in Fig. 2(b). (2) Method. [42] employs the black-box network to learn the similarity between video and text, while we propose a parameter-free video-text mutual-guided module to identify and filter out the unrelated parts from video and text. (3) Results. In experiments, we convey much better results than those works under the same setting." }, { "figure_ref": [], "heading": "C. Image-Language Pretrained Models For Video Tasks", "publication_ref": [ "b5", "b6", "b8", "b10", "b13", "b14", "b23", "b42", "b43", "b6", "b13", "b14", "b9", "b10", "b9", "b44", "b14", "b15", "b45", "b46" ], "table_ref": [], "text": "In contrast to further post-pretraining, the majority of current studies primarily concentrate on the direct fine-tuning of image-text models for video tasks. An intuitive direction is temporal modeling [6], [7], [9]- [11], [14], [15], [24], [43], [44], as the image model cannot capture temporal information. In video-language tasks, such as text-video retrieval, most adaptation models tend to utilize posterior-based structures to handle temporal aspects , e.g.,, the sequential transformer in [7], the temporal difference block in [14], and token selection module in [15]. Despite the advancements achieved by these methods, the temporal modeling they provide is restricted to high-level embeddings and lacks effectiveness, as illustrated in Fig. 1(a). In video-only tasks such as action recognition, the mainstream expansion of CLIP for temporal modeling is to utilize the intermediate structure. For instance, Ni et al [10] developed a message token mechanism to pass messages among different frames. Pan et al [11] inserted the 3D convolution adapter inside the transformer to activate temporal modeling. Besides temporal modeling, there are also other efforts focused on adapting image-language models for video tasks from different perspectives. For example, [10], [45] explored the prompt modeling, while [15], [16], [46], [47] improved the ways of cross-modal interaction. However, most of the aforementioned methods tend to perform worse when transferred to another video task, whereas our model performs well across various video tasks." 
}, { "figure_ref": [], "heading": "III. METHOD", "publication_ref": [], "table_ref": [], "text": "In this section, we will elaborate on our proposed strong and flexible Mug-STAN for adapting image-language models to general video tasks." }, { "figure_ref": [ "fig_3" ], "heading": "A. Motivation", "publication_ref": [ "b47", "b48", "b0", "b1", "b28" ], "table_ref": [], "text": "Large-scale image-language models, such as CLIP and CoCa, which undergo pretraining on hundreds of millions to billions of image-text pairs, typically comprise two encoders as fundamental components. Each encoder is responsible for encoding one modality to facilitate cross-modal alignment. As we ascend through the layers of the visual transformer [48], the model gradually learns visual patterns at different levels of abstraction [49]. Eventually, the visual encoder produces highlevel visual embeddings that are semantically aligned with the corresponding embeddings in the text modality. Formally, as illustrated in Fig. 4(left), given a video clip with T frames and a text description with K tokens, we feed them into a standard image-text pretrained visual encoder and text encoder, treating each frame as an individual image. This process generates frame-wise video representations denoted as V , and tokenwise text representations denoted as C:\nV = {v i } T i=1 ∈ R T ×D , C = {c j } K j=1 ∈ R K×D (1)\nwhere D is the feature dimension. Note that v i can be obtained from either the CLS token [1], [2] or the average of all patch tokens [29] of each frame. Then, frame-wise video representations {v i } T i=1 are averaged as the global video embedding v and the CLS token embedding is chosen from C as the global text representation c, where v and c are employed for cross-modal alignment. However, in the above process, two important issues are dismissed: temporal modeling and videotext partial misalignment.\nFirstly, each frame is encoded independently as it passes through the visual encoder, which neglects the interactions between frames and hinders temporal understanding. To address this problem, existing research often introduces additional modules as either a posterior or intermediate structure for the visual encoder to explicitly incorporate temporal modeling for various downstream video tasks. For high-level semantic knowledge dominated tasks, i.e., video-language task, the posterior structure fully leverages the pretrained visual-language alignment knowledge by applying temporal modeling to the visual encoder output {v i } T i=1 . Nevertheless, the highly semantic nature of v i T i=1 makes it challenging to capture low-level spatial-temporal patterns, leading to less effective temporal modeling. As for visual pattern dominated tasks, i.e., videoonly task, the intermediate structure integrated within the visual encoder fully leverages the pretrained low-level visual patterns. This empowers the encoder with the capability of learning spatial-temporal patterns from the video. However, the plug-in modules disrupt the original model's structure and internal feature flow, resulting in the inability to inherit the high-level semantic information alignment capability from the pretrained models.\nSecondly, the simple strategy in cross-modal interaction overlooks the prevalent issue of partial misalignment within video-text pairs. This misalignment results in aligned information being distributed selectively across specific frames and phrases, while other contextual elements may lack relevance to each other. 
The irrelevant parts are a kind of noise to video-language alignment. Therefore, simply representing the video and text with averaged representation or CLS embedding would introduce the noise hindering the learning of crossmodal alignment.\nIn response to the issue of existing models not being able to simultaneously inherit the pretrained high-level and lowlevel knowledge, we introduce Spatial-Temporal Auxiliary Network (STAN), a novel temporal modeling mechanism for image-language pretrained models. As shown in Fig. " }, { "figure_ref": [ "fig_3" ], "heading": "B. Spatial-Temporal Auxiliary Network", "publication_ref": [ "b4", "b5", "b12" ], "table_ref": [], "text": "Again, in the case of a video with T frames, the frames are fed into the pretrained visual backbone, which generates intermediate outputs at the last K + 1 levels of visual layers. We denote the outputs of the kth selected visual layer as:\nV k = {f k i,l ∈ R D |i ∈ [1, T ], l ∈ [0, L]},(2)\nwhich is a visual embedding sequence of the video where T , L and D represents the frame number, per-frame patch number and embedding dimension, respectively. In V k , f k i,0 refers to the embedding of the [CLS] token in the i-th frame of the video, while f k i,l>0 represents the visual embedding of the lth patch within that frame. Then, we take each intermediate output V k and pass it through the corresponding level of layer in STAN to model the spatial-temporal correspondence between video frames. At last, frame-wise outputs of the last pretrained visual layer are fuesed with the output of STAN to obtain the frame-wise video representation contextualized with temporal information, denoted as {v i } T i=1 in Eq. 1. STAN is composed of a stack of K spatial-temporal layers, with the input for each layer constructed upon the output of a pretrained vision layer and the last STAN layer. For the kth layer in STAN, its input is an embedding sequence of the whole video denoted as:\nV ′k = {f ′k 0,0 , f ′k 1,1 , .., f ′k 1,L , .., f ′k T,1 , .., f ′k T,L },(3)\nwhere f ′k 0,0 is the embedding representing the whole video while others denote the embedding of image patches in different frames. The output of the STAN layer is also an embedding sequence maintaining the same size as its input, which is denoted as:\nV k = { f k 0,0 , f k 1,1 , .., f k 1,L , .., f k T,1 , .., f k T,L }.(4)\nAt the first STAN layer, to construct its input from output of any pretrained visual layer V m , we first average the embedding of [CLS] tokens in each frame as a new embedding f ′1 0,0 = 1 T i∈T f m i,0 , and then update patch embeddings in V k with both spatial and temporal position embeddings as:\nf ′1 i,l = Dropout(f m i,l + Pos t (t) + Pos s (l)),(5)\nwhere l > 0 and Pos t and Pos s are the learnable embeddings for the temporal and spatial positions of each patch. For the other layers in STAN, the input V ′k is built based on the output from the previous STAN layer V k-1 and pretrained visual layer output V m+k-1 as follows:\nf ′k 0,0 = f k-1 0,0 + W k proj 1 T i∈T f m+k-1 i,0 ,(6)\nf ′k i,l = f k-1 i,l + W k proj f m+k-1 i,l ,(7)\nwhere i ∈ [1, T ], l ∈ [1, L], and W k proj ∈ R D×D is a projection layer. When compared to posterior structure based methods, STAN conducts spatial-temporal modeling on multilevel pretrained visual representations, enabling it to effectively capture visual dynamics information in the video. 
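\nTo make this construction concrete before contrasting it with intermediate structures, the following is a minimal PyTorch-style sketch of how the STAN inputs of Eqs. 5-7 could be assembled. The class and attribute names (STANInput, pos_t, pos_s, proj) are illustrative assumptions rather than the released implementation, and backbone details are omitted.
```python
import torch
import torch.nn as nn

class STANInput(nn.Module):
    # Builds the input of the k-th STAN layer from intermediate CLIP features
    # (Eqs. 5-7). clip_feats: [T, L+1, D] with the per-frame [CLS] at index 0;
    # prev_stan: [1 + T*L, D] output of the previous STAN layer, or None.
    def __init__(self, dim, num_frames, num_patches, dropout=0.1):
        super().__init__()
        self.pos_t = nn.Parameter(torch.zeros(num_frames, dim))   # temporal positions
        self.pos_s = nn.Parameter(torch.zeros(num_patches, dim))  # spatial positions
        self.proj = nn.Linear(dim, dim)                           # W_proj in Eqs. 6-7
        self.dropout = nn.Dropout(dropout)

    def forward(self, clip_feats, prev_stan=None):
        T, Lp1, D = clip_feats.shape
        cls, patches = clip_feats[:, :1], clip_feats[:, 1:]       # [T,1,D], [T,L,D]
        if prev_stan is None:
            # First STAN layer (Eq. 5): average the frame [CLS] tokens and add
            # temporal + spatial position embeddings to the patch tokens.
            video_cls = cls.mean(dim=0)                           # [1, D]
            patches = self.dropout(
                patches + self.pos_t[:T, None, :] + self.pos_s[None, :Lp1 - 1, :])
        else:
            # Later layers (Eqs. 6-7): previous STAN output plus the projected
            # output of the corresponding pretrained visual layer.
            prev_cls = prev_stan[:1]
            prev_patches = prev_stan[1:].view(T, Lp1 - 1, D)
            video_cls = prev_cls + self.proj(cls.mean(dim=0))
            patches = prev_patches + self.proj(patches)
        return torch.cat([video_cls, patches.reshape(T * (Lp1 - 1), D)], dim=0)
```
Note that the CLIP features enter the branch read-only, so the forward propagation of the source model itself is left untouched.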
Meanwhile, unlike previous intermediate structure based methods that insert modules into pretrained visual encoder, STAN's branch structure protects the pretrained knowledge without disrupting the inherent encoder structure.\nGiven the input embedding sequence of a video, the STAN layer learns spatiotemporal information between video frames. As depicted in Fig. 4(right), it performs temporal modeling through the alternating stacking of two independent modules -the intra-frame module and the inter-frame module. Thanks to this separated design, we can reuse the structure of the pretrained visual encoder layer as our intra-frame spatial module and initialize it with the pre-trained parameter. This approach significantly reduces the optimization search space and improves the performance of downstream tasks. Same as most image-text pretrained models like CLIP, the intraframe module is also a self-attention block designed for spatial modeling. To simplify notation, we omit the superscript of embedding and denote the embedding representation of the i-th frame as X i ∈ R (L+1)×D . Here, the embedding of the [CLS] token in the video is duplicated and concatenated with the patch embeddings. Within each frame, the spatial module updates the embeddings using self-attention:\nXi = softmax(X i W Q (X i W K ) T / √ D)(X i W V ) + X i ,(8)\nwhere W Q /W K /W V denote the linear projections for the query, key and value in self-attention layer of the spatial module. Afterward, the duplicated [CLS] embeddings in each frame are averaged to form the video [CLS] embedding. The cross-frame module is dedicated to temporal modeling. To simplify notation, we omit the superscript of the embedding and represent the collection of l-th patch embeddings in different frames as Y l ∈ R T ×D . At each spatial position, the patch embeddings are updated using the function T emp(), which denotes the message passing strategy across temporal dimensions. In experiments, we will show that this strategy can be instantiated in various ways to facilitate temporal information exchange among frames. Here, we detail the instantiation of temporal self-attention, which possesses a natural advantage in sequence modeling. At each specific spatial position, the patch embeddings from different frames can be updated as:\nŶl = W proj (softmax(Y l W Q (Y l W K ) T / √ D)(Y l W V ) + Y l ),(9)\nwhere W Q /W K /W V denote the linear projections for the query, key, and value in the self-attention layer of the crossframe module, and W proj is the extra temporal linear projection initialized as zero. By employing temporal attention, each patch in the video is contextualized with temporal information from the same locations, while the zero projection helps maintain training stability during the early stages. At the final stage, with the output of the last pretrained visual layer V -1 and the output of the last STAN layer V K , we can simply combine them through addition to form the ultimate output of the video encoder:\nV = W v proj (LN(V -1 ⊕ V K )), (10\n)\nwhere LN is the final layer normalization in pretrained visual encoder and W v proj is the linear weight projecting the visual embedding into joint visual-text feature space. Furthermore, ⊕ means the global [CLS] token of STAN is duplicated T times and added to the [CLS] of each frame in V -1 , while the patch tokens are combined through simple addition. Finally, the same as the image encoder, we only have L + 1 tokens for the video encoding. 
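\nThe layer itself and the fusion of Eq. 10 can be sketched in a similarly compact form. For brevity only the attention sub-blocks are shown (in practice the intra-frame block mirrors a full CLIP encoder layer so that its pretrained weights can be copied), and all names below are again assumptions rather than official code.
```python
import torch
import torch.nn as nn

class STANLayer(nn.Module):
    # Intra-frame attention (Eq. 8) followed by cross-frame attention with a
    # zero-initialized projection (Eq. 9); MLP and LayerNorm sub-blocks omitted.
    def __init__(self, dim, num_heads):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_proj = nn.Linear(dim, dim)
        nn.init.zeros_(self.temporal_proj.weight)   # zero init keeps early training stable
        nn.init.zeros_(self.temporal_proj.bias)

    def forward(self, video_cls, patches):
        # video_cls: [1, D]; patches: [T, L, D]
        T, L, D = patches.shape
        # Eq. 8: duplicate the video [CLS] into every frame and attend within frames.
        x = torch.cat([video_cls.unsqueeze(0).expand(T, 1, D), patches], dim=1)
        x = self.spatial_attn(x, x, x, need_weights=False)[0] + x
        video_cls, patches = x[:, 0].mean(dim=0, keepdim=True), x[:, 1:]
        # Eq. 9: attend across frames at each spatial location.
        y = patches.transpose(0, 1)                                  # [L, T, D]
        y = self.temporal_proj(self.temporal_attn(y, y, y, need_weights=False)[0] + y)
        return video_cls, y.transpose(0, 1)

def fuse(last_clip_feats, stan_cls, stan_patches, proj, norm):
    # Eq. 10: add the STAN output to the last CLIP layer, then apply the final
    # LayerNorm and project into the joint video-text space.
    fused_cls = last_clip_feats[:, :1] + stan_cls[None]              # video [CLS] per frame
    fused_patches = last_clip_feats[:, 1:] + stan_patches
    return proj(norm(torch.cat([fused_cls, fused_patches], dim=1)))
```
The frame-wise features {v_i} of Eq. 1 are then read off the fused per-frame tokens (their [CLS] or averaged patches), while the video as a whole is still summarized by the same L + 1 token slots a single image would produce.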
This property significantly reduces the computational burden if we need to further feed these tokens into multimodal encoders or LLMs, in comparison to the joint space-time video encoder [5], [6], [13]." }, { "figure_ref": [ "fig_5", "fig_1" ], "heading": "C. Mutual-Guided Cross-Modal Alignment", "publication_ref": [], "table_ref": [], "text": "In the previous section, we have acquired the token-wise text embeddings C and frame-wise video embeddings V . In this section, we will further delve into how to filter out misaligned information using Mug, as depicted in Fig. 5. Mug first establishes token-frame-wise correspondences by calculating the dot-product similarity between C and V . With the similarity matrix, we then introduce how to provide mutual guidance for feature aggregation from the perspective of each modality, respectively.\nFrom the perspective of video modality, we first filter out the most relevant information in the text for each video frame. This is achieved by calculating the frame-to-token attention distribution, which assigns a score to each text token based on its relevance to the current video frame. Specifically, the attention score of the i th video frame with respect to the j th text token is given by:\ns i,j = exp(τ c j • v i ) K j=1 exp(τ c j • v i ) ,(11)\nwhere\nK j=1 s i,j = 1,\n• represents dot-product operation and τ controls the sharpness of attention distribution. For example, in Fig. 2(a), the 1 st video frame is expected to have dominant attention on the tokens corresponding to the action of \"people on a beach\" across all the text tokens.\nThen, we aggregate the text embeddings based on the attention distribution and get the frame-specific text embedding for each frame:\nc i = K j=1 s i,j c j , where c i ∈ R D . (12\n)\nThe set of frame-specific text embedding {c i } T i=1 represents updated text embeddings specified for each frame, where irrelevant information in the original text that is not aligned with the frame is suppressed and information that is relevant to the frame is strengthened. We use these updated text embeddings to evaluate the correspondence of each frame with respect to the text. This evaluation is done using dot-product similarity scores as the metric:\ns i = exp(τ c i • v i ) T n=1 exp(τ c n • v n ) ,(13)\nwhere s i represents the attention weight of each frame towards the text. Through s i , we can further aggregate frame-wise embedding to the global video-level representation with the guidance of text. Formally, we define this global text-guided video embedding as:\nv = T i=1 s i v i , where v ∈ R D . (14\n)\nAnalogously, from the perspective of text modality, we follow the same procedure to get improved text embedding under the guidance of video. Specifically, we first calculate token-to-frame attention distribution, which assigns a score to each frame embedding based on its relevance to the current text token:\ns ′ i,j = exp(τ c j • v i ) T i=1 exp(τ c j • v i ) , (15\n)\nwhere i and j indicate the index of text token and frame. Then, we get token-specific video embedding for each text token to assess the token-to-video correspondence:\nv j = T i=1 s ′ i,j v i , where v i ∈ R D(16)\ns ′ j = exp(τ c j • v j ) K n=1 exp(τ c n • v n ) ,(17)\nwhere s ′ j represents the attention weight of each text token towards the video. 
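\nBecause Mug is parameter-free, the entire mutual guidance, including the video-guided text embedding completed in Eq. 18 just below, reduces to a few tensor operations. The sketch assumes already normalized frame embeddings V (shape [T, D]), token embeddings C (shape [K, D]), and a sharpness scalar tau; it mirrors the equations rather than any particular released code.
```python
import torch

def mug_alignment(V, C, tau):
    # Mutual-guided aggregation (Eqs. 11-18).
    # V: [T, D] frame embeddings, C: [K, D] token embeddings, both L2-normalized.
    sim = tau * C @ V.t()                               # [K, T] token-frame similarities

    # Video side: frame-specific text embeddings (Eqs. 11-12) ...
    frame2tok = torch.softmax(sim.t(), dim=1)           # [T, K], rows sum to 1
    C_per_frame = frame2tok @ C                         # [T, D]
    # ... then text-guided frame weights and the global video embedding (Eqs. 13-14).
    w_frame = torch.softmax(tau * (C_per_frame * V).sum(-1), dim=0)   # [T]
    v_bar = w_frame @ V                                 # [D]

    # Text side: token-specific video embeddings (Eqs. 15-16) ...
    tok2frame = torch.softmax(sim, dim=1)               # [K, T]
    V_per_token = tok2frame @ V                         # [K, D]
    # ... then video-guided token weights and the global text embedding (Eqs. 17-18).
    w_tok = torch.softmax(tau * (V_per_token * C).sum(-1), dim=0)     # [K]
    c_bar = w_tok @ C                                   # [D]
    return v_bar, c_bar
```
The resulting pair of guided embeddings then simply replaces the mean-pooled video embedding and the [CLS] text embedding in the contrastive objective, adding no learnable parameters and negligible compute.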
We obtain the global video-guided text embedding c by aggregating the text token embeddings according to { s ′ j } K j=1 :\nc = K j=1 s ′ j c j , where c ∈ R D ,(18)\nIn our proposed Mug, we default to using the frame-to-token interaction. However, our method can be readily adapted to various granularities of video-text interaction, such as videoto-token, frame-to-text, and token-to-token interaction. This flexibility allows for a trade-off between computation and interaction granularity, catering to different requirements based on the specific application. We will further explore and discuss this in our experiments." }, { "figure_ref": [], "heading": "D. Training", "publication_ref": [ "b52", "b3", "b0", "b9", "b49", "b50", "b51" ], "table_ref": [], "text": "Post-pretraining & text-video retrieval. Both postpretraining and retrieval tasks utilize video-text pairs as training sources, resulting in the same training pipeline. Specifically, given text-guided video embedding v and video-guided text embedding c, we calculate the dot-product similarity between the two embeddings, which serves as the similarity metric for the video and text in contrastive learning in a B-batch by:\nL t2v = - 1 B B m=1 log exp(τ c mn • v nm ) B n=1 exp(τ c mn • v nm ) , L v2t = - 1 B B n=1 log exp(τ v nm • c mn ) B m=1 exp(τ v nm • c mn )\n,\nL co = L t2v + L v2t ,(19)\nwhere c mn and v nm denotes the mutual-guided text/video embedding of the m th text and n th video in the batch, and L co denotes the final contrastive loss. It is worth noting that the video and text embeddings have been normalized before computing Mug, thereby the normalization is not included in calculating the similarity. Method MSR-VTT DiDeMo LSMDC HMDB-51 UCF-101 Kinetics400 R@1 R@5 R@10 MdR R@1 R@5 R@10 MdR R@1 R@5 R@10 MdR Acc@1 Acc@1 Acc@1 Non-CLIP models VideoCLIP [53] 10. 4 Video action recognition. Different from video-language tasks, action recognition tasks have fixed textual labels. Hence, we freeze the text encoder and only train the video encoder during finetuning. Besides, we do not employ any additional prompt templates like \"a video of action { }\" [1], [10] to wrap the tags. Then, we compute the loss with v and c as follows:\nL cr = N n=1 y n log exp(τ v • c n ) N i=1 exp(τ v • c i ) , (20\n)\nwhere N is the class number, y n is the one-hot label for class n, c n is the value of class n in global text embedding, and L cr denotes the final cross-entropy loss. Video action detection. Following the action detection pipeline in Slowfast [50] and VideoMAE [51], we add ROIAlign [52] with MaxPooling to generate the regions of interest in the last layer, following a cross-entropy with sigmoid loss for multi-label prediction." }, { "figure_ref": [], "heading": "IV. EXPERIMENTS A. Datasets", "publication_ref": [ "b17", "b19", "b59", "b18", "b60", "b61", "b20", "b3", "b62", "b6", "b13", "b63", "b34" ], "table_ref": [], "text": "We evaluate our Mug-STAN on both video-language tasks, i.e.,, video-text retrieval, and video-only tasks, i.e.,, video recognition and video detection, which trials our methods from the two different perspectives. For video-text retrieval, we use MSR-VTT [18], DiDemo [20] and LSMDC [60]; for video recognition, we use Kinetics-400 [19] and Something-Something-v2 [61]; for video detection, we adopt Atomic Visual Action V2.2 [62]. 
Besides, we conduct the videotext post-pretraining on datasets with different levels of noise, including WebVid10M [21] and HowTo100M [4].\nVideo-Language Datasets: MSR-VTT is the most widely used benchmark for video-text retrieval. It consists of 10,000 YouTube videos, each associated with 20 captions. We report our results on the 1K-A split [63], which contains 9000 videos for training and 1000 for testing. DiDemo includes 10,611 videos sourced from Flicker, accompanied by 40,000 sentences. Notably, this dataset features longer video durations compared to other retrieval datasets. Following previous works [7], [14], we concatenate all captions of a video into a single query. LSMDC is a large-scale video-text retrieval benchmark comprising 118,081 videos sourced from 202 movies. This dataset offers a higher level of diversity in terms of concepts and video durations compared to other datasets.\nVideo-only Datasets: Kinetics-400 (K-400) is the most popular video recognition benchmark. Comprising over 300,000 video clips, Kinetics-400 covers 400 human action classes with average 300 frames. Something-Something-v2 (SSv2) is a video action recognition benchmark specifically designed for temporal modeling capabilities. It consists of 220,485 videos, each associated with 174 action classes. In contrast, K-400 has a bias towards action categories with static scene context, as noted in [64]. However, in SSv2, the action classes are less influenced by static scene context and instead focus more on dynamic information within the videos. Atomic Visual Action (AVA) v2.2 is designed for spatial-temporal action detection. It provides dense annotation for 80 atomic visual actions across 430 15-minute movie clips, resulting in 1.62M action labels with multiple labels per human occurring frequently.\nVideo Pretraining Datasets: WebVid10M is a large-scale video-text pretraining dataset of short videos with textual Method MSR-VTT DiDeMo LSMDC R@1 ↑ R@5 ↑ R@10 ↑ MdR ↓ R@1 ↑ R@5 ↑ R@10 ↑ MdR ↓ R@1 ↑ R@5 ↑ R@10 ↑ MdR ↓ Non-CLIP models CLIPBert [35] 22 " }, { "figure_ref": [], "heading": "B. Experiment Settings", "publication_ref": [ "b34", "b72" ], "table_ref": [], "text": "Model Setting. In most experiments, we adopt CLIP as the baseline image-language pretrained models for a fair comparison with previous works. For STAN, the number of STAN layers is set as 4 for all datasets except on SSv2 when it is set to 6. The STAN layers and CLIP layers are one-toone corresponded from top to bottom. For Mug, we employ frame-to-token interaction by default. The temperature scalar τ in Mug is set to the same unlearnable value as the logit scale in CLIP because Mug does not change the scale of CLIP features during feature transformation. To further evaluate the generalizability of Mug-STAN, we also implement Mug-STAN upon CoCa using the same configuration as CLIP.\nPost-pretraining. On both datasets, we employ a sparse sampling strategy [35] to sample 12 frames with each frame resized to 224*224 for each video clip, and for text, the token length is set to 64. We use AdamW [73] optimizer with a weight decay of 0.001, and set the initial learning rate as 4e-6 and 4e-5 for CLIP layers and STAN layers with a cosine annealing decay schedule. We train our model using only normalized contrastive loss and do not include other targets like masked language modeling or video-text matching. We train models with a batch size of 1024 for 3 epochs. 
It takes 1.6k GPU hours with 32 A100 GPUs for post-pretraining on HowTo100M, while the consumption is 0.8k GPU hours on WebVid10M. To evaluate the efficacy of post-pretraining, we compare the performance of post-pretrained models through both zero-shot and fine-tuning settings on downstream tasks.\nFinetuning. For all datasets, the batch size is set to 128, and we adopt AdamW as our optimizer with a weight decay of " }, { "figure_ref": [], "heading": "Methods", "publication_ref": [ "b12" ], "table_ref": [], "text": "Frames Testing Views GFLOPs K400 Acc@1 K400 Acc@5 SSv2 Acc@1 SSv2 Acc@5 Non-CLIP models TimeSformer-L [13] 96 " }, { "figure_ref": [], "heading": "C. Comparison With State-of-the-Art Methods", "publication_ref": [ "b58", "b15", "b5", "b6", "b58", "b5", "b77", "b69", "b58", "b36", "b37", "b0" ], "table_ref": [ "tab_1", "tab_3", "tab_5" ], "text": "Zero-Shot Results. The zero-shot results of WebVid10M post-pretraining are posted in Table . I. We evaluate Mug-STAN on three text-video retrieval datasets and three video recognition datasets. We report our results under different model capacities, including on CLIP-B/32, CLIP-B/16, and CLIP-L/14. As evident from the presentation, numerous approaches that introduce new structures onto CLIP tend to compromise its zero-shot capabilities, despite achieving improved fine-tuning outcomes, such as ActionCLIP, CLIP-ViP, and XCLIP. In contrast, Mug-STAN demonstrates clear zeroshot advantages over CLIP following post-pretraining. Note that our comparison with CLIP is conducted fairly, considering the little improvement achieved through CLIP post-pretraining detailed in Table I. Moreover, in comparison to the previous SOTA methods in the zero-shot setting, our approach demonstrates significant advantages across all datasets, even when the comparisons are conducted unfairly for us. For instance, InternVideo [59] utilizes dual visual encoders, and generative self-supervised techniques, and involves 50 times more GPU days compared to our approach. Nevertheless, our method outperforms InternVideo by significant margins, achieving improvements of 1.7%, 8.1%, 3.1%, and 0.8% on the MSRVTT, DiDeMo, LSMDC, and Kinetics400 datasets, respectively. The results demonstrate our post-pretraining on Mug-STAN does not damage the rich knowledge in the CLIP while providing a stronger zero-shot capacity for video tasks.\nVideo-Language Tasks. We report the finetuning results of text-to-video retrieval in Table II. We compare our Mug-STAN with current SOTAs with various setting, including directly finetuning, finetuning after post-pretraining and using extra tricks during inference. As demonstrated in the results, when directly fine-tuning for video-text retrieval tasks, Mug-STAN brings about obvious advantage over CLIP, outperforming CLIP4clip by 4.7% at R@1 on average across the three datasets with CLIP-B/32 as backbone. Compared to another state-of-the-art method DRL [16], which also leverages frametoken wise interaction to boost performance, Mug-STAN outperforms it by 1.1% at R@1 on average across the three datasets. When it comes to post-pretraining, it is worth noting that only a few methods [6], [7], [59] have explored this area, with CLIP-ViP [6] being the strongest competitor. Compared to CLIP-ViP , which introduces an external strong captioner [78] to augment pre-training datasets with additional captions, our method is free from such complex data augmentation and achieves competitive or even better performance across different datasets. 
Moreover, Mug-STAN is able to bring about performance gains by post-pretraining on smaller or noisier datasets, while CLIP-ViP requires larger dataset i.e., HDVilla-100M [70]. Furthermore, compared to large competitors [59], despite the disadvantages in terms of training cost, pretraining method, and model scale, MugSTAN still outperforms Inter- Video across the three datasets.\nVideo-Only Tasks. We report the finetuning results of video recognition and video detection on Kinetics-400, Something-Something-2, and AVA-v2.2 in Table III and IV respectively. In the K400-recognition benchmark, CLIP-based methods demonstrate competitive performance with smaller model scales compared to image-pretrained methods. For instance, our VIT-B/16 based STAN achieves superior results compared to models like ViViT [37] and Video-swin [38], which have more than 10× GFLOPs compared to our method. As for SSv2 and AVA benchmark, we observe that, without temporal modeling, bare CLIP model [1] achieves only 44.0% top-1 accuracy and 25.9 mAP which dramatically under-performs ImageNet-Kinetics pretrained models, though it owns pretrained knowledge obtained from a much larger image-text dataset. The result suggests that the domain gap is significant between SSv2/AVA and CLIP model, and temporal modeling capability is desired for the two datasets. STAN brings about more than 25.5% and 4.4% performance improvement over the CLIP baseline on SSv2 and AVA, which demonstrates that Mug-STAN empowers CLIP with strong temporal modeling capability. It is worth noting that, in comparison to videolanguage tasks, the contrastive video-text pretraining does not demonstrate significant advantages over image-pretraining on video-only tasks. This is particularly evident for selfsupervised reconstruction methods. Nevertheless, Mug-STAN manages to achieve competitive performance even in the face of this challenge when compared to single-modality pretrained methods. Moreover, in comparison to other CLIPbased methods, Mug-STAN consistently exhibits advantages across various datasets." }, { "figure_ref": [], "heading": "D. Ablation Study", "publication_ref": [ "b5", "b1", "b78", "b15", "b16" ], "table_ref": [ "tab_9", "tab_10", "tab_11", "tab_11" ], "text": "Ablations on components of Mug-STAN. To evaluate the contribution of different components in our method, we conduct ablation experiments on both finetuning setting and zeroshot setting as shown in Table . V. First of all, in the first three lines are the overall performance of STAN and Mug, we can conclude that STAN and Mug are compatible with each other while each of them contributes to the adaption of imagelanguage pretraining models, i.e., Mug addresses the issue of partial misalignment in video-text data and STAN focuses on the temporal modeling. Moreover, combining Mug and STAN, the performance is further increased by a considerable margin, which demonstrates that the temporal modeling capability and the addressing of partial misalignment are mutually beneficial Fig. 6. Ablation results on the hyper-parameter setting of STAN. We report the finetuning results without post-pretraining on both and SSv2. We study the number of STAN layers, the relative location of STAN layer respect to CLIP, the interval of STAN layer (i.e., the number of CLIP layers between STAN layer), and the number of STAN networks. to each other. Secondly, lines 4-7 demonstrate the internal structure of STAN. 
Specifically, when we eliminate the branch structure or multi-level feature learning, the performance of STAN experiences a substantial decline across all four benchmarks. This serves as strong evidence of the superiority of our model structure over the posterior structure. Additionally, adopting joint-ST temporal modeling in STAN also brings noticeable improvements, albeit not surpassing the separate approach, which underscores the significance of reusing parameters from the pretrained model.\nAblations on Post-Pretraining. CLIP-ViP [6] points out two factors that potentially hinder the video post-pretraining to further improve the performance on downstream video tasks: dataset scale and domain gap. In this paper, through ablation study on post-pretraining, we figure out that empowering the pretrained model with temporal modeling capability and addressing partial-misalignment problem are also crucial for post-pretraining. We employ HowTo100M and WebVid10M as pretraining dataset and train different models on the two datasets, respectively.\nAs shown in Table VI, for the CLIP baseline, which employs a simple mean pooling strategy for cross-frame modeling, it takes trivial advantages from post-pretraining. As for experiments on STAN, which owns expertise on temporal modeling, we observe that post-pretraining on WebVid10M brings more performance gains than that on CLIP baseline. When it comes to Mug-STAN, the performance gains of post-pretraining on WebVid10M further increase to 2.8% on DiDemo and 2.0% on MSR-VTT. Moreover, even on HowTo100M, which consists of instructional videos with noise narrations and suffers from extremely severe partial-misalignment problem, our method still brings about 2.0% and 1.1% performance gains on DiDemo and MSR-VTT, respectively. The results reveal that temporal modeling capability is beneficial to the post-pretraining while addressing partial-misalignment problem is able to further amplify the performance gains remarkably.\nCan Mug-STAN work on image-language pretrained models beyond CLIP? To verify the generalizability of our method, we further implement Mug-STAN based on another famous image-text pretrained model, i.e., CoCa [2]. We only use the visual and text encoder of CoCa and load the pretrained weights released by OpenCLIP, which is pretrained on LAION2b [79]. As is illustrated in Table VII, compared to the CoCa baseline, which is directly fine-tuned on downstream tasks with mean pooling as its temporal modeling strategy, both STAN and Mug bring significant performance improvement, while the post-pretraining on WebVid10M further boost the finetuning result. The expermental results demonstrate that Mug-STAN has the potential to be migrated to various emergent image-text pretrained models.\nWhat is the best hyper-parameter setting of STAN? STAN functions as a new branch positioned alongside the pretrained visual backbone, which takes the video frame representation at different levels of pretrained visual layers as inputs. To study impact of different setting of STAN, we present extensive ablation study for STAN-CLIP-B/32 in Fig. 6 on both videolanguage tasks and video-only tasks. The first is the number of STAN layers, as is shown, for MSRVTT retrieval, the performance enhancement of STAN reaches its peak at 4 layers, after which the performance begins to decline with further increases of layers; On SSv2, the performance improvement of STAN seems to stabilize after 6 layers. 
Overall, using STAN with 4 to 6 layers is recommended as a suitable choice for various tasks, considering the balance between performance gains and computational efficiency. Second is the location of the STAN layers. We fix the number of STAN layers to 4 and align them with CLIP layers 1-4, 5-8, and 9-12, respectively. The results suggest that the mid-to-high levels of the pretrained CLIP representation hold more significance for downstream tasks. Then, we align the last layer of CLIP and STAN, and vary the interval of selected CLIP layers between the STAN layers, e.g., interval=2 means STAN receives the outputs of the 6th, 8th, 10th, and 12th layers. As shown in Fig. 6, interval=1 is the best choice for both datasets. Finally, we consider the number of whole STAN networks. We find that introducing more STAN networks makes no difference on MSRVTT but brings a slight improvement on SSv2, which is not cost-effective considering the increase in computational complexity.\nFig. 10. The qualitative result of the softmax scores of sentence-guiding frames in Eq. 13 and video-guiding tokens in Eq. 17.\nIs Mug the optimal design for aligning videos and text? To understand the optimal design for the video-text interaction module, we perform a detailed ablation analysis. Initially, we explore the granularity of interaction within Mug. In Table VIII (middle), Frame-Text Interaction indicates substituting the video-guided text embedding in Mug with the conventional [CLS] token embedding, while Video-Token Interaction represents substituting the text-guided video embedding with the conventional averaged frame-wise embedding. The results demonstrate that the text-guided video embedding is more important than the video-guided text embedding, which reveals that the partial-misalignment problem is more severe in the video modality. Then, we investigate different cross-modal interaction strategies. A well-known interaction modeling module is WTI in DRL [16] and its follower hunyuan [17], which learns single-modality based attention scores to determine which token-frame scores are most representative of text-video correspondence. In contrast, Mug utilizes token-frame correspondence scores to introduce cross-modal mutual guidance, where the most relevant parts between the video-text pair, which potentially have higher scores, are highlighted. Table VIII (bottom) shows that Mug outperforms WTI and hunyuan in terms of performance. Besides, the mutual-guided cross-modal embedding aggregation in Mug is akin to a soft key-concept selection process. To explore this idea, we further replace the softmax operations in Eqs. 11, 13, 15, and 17 with a top-k hard selection operation. However, we find that the optimal \"top-k\" value varies across datasets, whereas the module with softmax generalizes more robustly.\n(Figure: example video-chat question-answer pairs used in the qualitative chatting comparison of Sec. IV-F.)\nTo ensure a fair comparison, following [42], we pretrain Mug-STAN on HTM-370K and evaluate its zero-shot performance on the datasets presented in [42]. The results clearly demonstrate that Mug-STAN has a significant advantage over other state-of-the-art methods when operating under the same experimental conditions. In summary, our experimental findings across various dimensions consistently highlight the effectiveness of Mug in addressing misalignment in video-text data.
" }, { "figure_ref": [ "fig_6", "fig_7", "fig_8" ], "heading": "E. Qualitative Results", "publication_ref": [ "b9", "b6" ], "table_ref": [], "text": "In our experiments, we have substantiated Mug-STAN's capacity for effective temporal modeling while harnessing the benefits of pretrained knowledge. Expanding on these quantitative findings, we now present qualitative results that unveil the efficacy of Mug-STAN across these two aspects.\nFirst of all, we showcase the text-to-video retrieval outcomes of the intermediate-structure based method XCLIP [10] and our Mug-STAN. Illustrated in Fig. 7, these instances can be effortlessly resolved if a model can effectively align the emphasized object concepts in queries, like \"salad,\" with videos that contain corresponding visual content. However, XCLIP produces inaccurate outcomes by returning results where the crucial objects are missing from the videos. 
This comparison underscores the limitation of the intermediate structure in effectively transferring high-level visual-text alignment knowledge, the work at which our method excels. Subsequently, we provide comparison results of text-to-video retrieval for CLIP4clip [7] and Mug-STAN in Fig. 8. The figure demonstrates that CLIP4clip, which is based on the posterior structure, produces incorrect outcomes. Although the results encompass accurate static contexts as described in queries (such as \"stroller\" and \"gymnasts\"), they feature erroneous dynamic information that doesn't align with the emphasized concepts in the queries (such as \"folds up\" and \"roll\"). These results emphasize that our approach can more effectively harness spatial-temporal information for enhanced video comprehension. Then, we visualize the attention of Mug-STAN's intra-frame module using VideoCAM, as depicted in Fig. 9. These visualizations demonstrate that our STAN module consistently directs its attention towards critical content within videos, spanning across different moments in time. Finally, to shed more light to the effectiveness of Mug, we present a qualitative result of the cross-modal guidance. In Fig. 10, we present both the textto-frame correspondence scores s i (above) and video-to-token correspondence scores s ′ j (below). The results show that for text-to-frame guidance, most of the attention is focused on the last two frames where the cars are rolling, which contain the most relevant information with the text. For video-to-token guidance, the attention is guided towards the tokens \"car\", \"wreck\", and the ending token (CLS token). It reveals that Mug efficiently enhancing the aligned parts in the video-text pair for cross-modal alignment." }, { "figure_ref": [ "fig_9" ], "heading": "F. Video Chatting", "publication_ref": [ "b23", "b31", "b32", "b79", "b64", "b23" ], "table_ref": [], "text": "The domain of natural language processing has undergone a significant transformation with the introduction of pretrained Large Language Models (LLMs). The achievements of LLMs have also hastened the advancement of AI systems that integrate visual models with LLMs, enabling multimodal reasoning and action [24], [32], [33], [80]. Commonly, these models construct a projection from the output of the pretrained visual encoder (e.g., CLIP) to the input of the LLM. They then engage in visual instruction tuning, a process that facilitates multimodal interactions and conversations. Inspired by these visual-language chatbots, a new wave of methods has emerged that involve video chatting, which engages video backbone with LLMs and performs video instruction tuning [65]. Nonetheless, the training of video-language chatbots encounters similar challenges as video-language pretraining, namely huge computation costs and limited training source.\nFortunately, Mug-STAN offers a potential solution to these challenges. Unlike existing video chatbots, our approach does not involve the resource-intensive instruction tuning. Instead, we harness the power of existing image-language knowledge in a zero-shot manner. Specifically, we first post-pretraining Mug-STAN-CLIP on video-language datasets. Following this, we incorporate the pretrained branch networks into the visual backbone of image-language chatbots. Given that most existing multimodal chatting commonly utilizes a frozen CLIP as the visual backbone, our method can seamlessly empower image-language chatbots with the capacity for video understanding and processing. 
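\nAs a rough illustration of this plug-in use (not the actual LLaVa integration; every name below is hypothetical), the post-pretrained STAN branch can be wrapped around the frozen CLIP vision tower so that the chatbot's multimodal projector receives video tokens exactly where it previously received image tokens.
```python
import torch
import torch.nn as nn

class VideoVisionTower(nn.Module):
    # Hypothetical wrapper: a frozen CLIP visual encoder plus a post-pretrained
    # STAN branch, exposing the token interface an image-language chatbot expects.
    def __init__(self, clip_visual, stan_branch, collect_features):
        super().__init__()
        self.clip_visual = clip_visual.eval()
        for p in self.clip_visual.parameters():
            p.requires_grad_(False)               # keep the chatbot's vision tower frozen
        self.stan = stan_branch                   # pretrained Mug-STAN branch
        self.collect_features = collect_features  # assumed hook-based helper returning the
                                                  # intermediate CLIP outputs STAN consumes

    @torch.no_grad()
    def forward(self, frames):                    # frames: [T, 3, H, W]
        clip_feats = self.collect_features(self.clip_visual, frames)
        video_tokens = self.stan(clip_feats)      # fused tokens with an image-like layout
        return video_tokens                       # passed on to the LLM projector as usual
```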
Lastly, the video tokens can also be seamlessly fed into the LLM for video chatting. This integration is facilitated by the fact that the output of STAN matches the token count of the image encoder. We take LLaVa [24] as the pretrained image-text chatbot and present the qualitative results of STAN-LLaVa in Fig. 11. Compared with LLaVa, our method empowers the chatbot to precisely narrate the events within the video sequence and to accurately recognize temporally extended actions. Notably, these results are achieved without resorting to any instruction tuning, which underscores the significant potential of Mug-STAN in adapting pretrained image-language chatbots to the realm of videos." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we first investigate and identify the key factors in adapting pretrained image-language models to video domains: building generalizable temporal modeling and suppressing video-text partial misalignment. To this end, we propose the Spatial-Temporal Auxiliary Network with Mutual-guided alignment module (Mug-STAN), where STAN utilizes a multi-level branch structure for effective temporal modeling and Mug introduces cross-modal mutual-guided feature aggregation to mitigate misalignment. Finally, we perform comprehensive experiments to demonstrate the superiority of Mug-STAN. Extensive experimental results show that our adaptation method achieves state-of-the-art results on a broad range of video tasks." } ]
Large-scale image-language pretrained models, e.g., CLIP, have demonstrated remarkable proficiency in acquiring general multi-modal knowledge from web-scale image-text data. Despite their impressive performance on various image tasks, how to effectively extend them to general video understanding remains an area of ongoing exploration. In this paper, we investigate image-to-video transfer from the perspectives of both the model and the data, unveiling two key obstacles impeding the adaptation of image-language models: non-generalizable temporal modeling and partially misaligned video-text data. To address these challenges, we propose the Spatial-Temporal Auxiliary Network with Mutual-guided alignment module (Mug-STAN) - a simple yet effective framework extending image-text models to diverse video tasks and video-text data. Specifically, STAN adopts a branch structure with decomposed spatial-temporal modules to enable generalizable temporal modeling, while Mug suppresses misalignment by introducing token-wise feature aggregation of either modality guided by the other. Extensive experimental results verify that Mug-STAN significantly improves the adaptation of language-image pretrained models such as CLIP and CoCa at both the video-text post-pretraining and finetuning stages. With our solution, state-of-the-art zero-shot and finetuning results are achieved on various downstream datasets, including MSR-VTT, DiDeMo, LSMDC, Kinetics-400, Something-Something-2, HMDB-51, UCF-101, and AVA. Moreover, by integrating pretrained Mug-STAN with emerging multimodal dialogue models, we can realize zero-shot video chatting. Code is available at https://github.com/farewellthree/STAN
Mug-STAN: Adapting Image-Language Pretrained Models for General Video Understanding
[ { "figure_caption": "trailer for an upcoming movie with people on a beach", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Different structures of temporal modeling: posterior structure (left), intermediate structure (middle), and our branch structure (right).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig.3. (a) Performance comparison of various methods on both text-tovideo retrieval and video recognition. Evaluation is based on Recall@1 for MSRVTT[18] and Top-1 accuracy for Kinetics-400[19]. The methods are clustered into posterior structure, intermediate structure, and our branch structure. (b) Performance comparison of post-pretraining on different models. We report the finetuned result of Recall@1 on DiDemo text-video retrieval. Based on CLIP, effective temporal modeling (STAN) and partial-misalignment suppression (Mug) respectively bring noticeable improvements.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. (left) The overall architecture of our proposed method, including the text and visual encoders, the temporal modeling module (STAN), and the cross-modal interaction module (Mug). (middle) Schematic diagram of feature forward propagation in and between pretrained visual encoder and STAN. (right) Details of the internal structure of the STAN spatial-temporal module.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "4(middle), STAN functions as a branch structure alongside the pretrained visual encoder. With the sophisticated design, STAN leverages various levels of features while retaining the pretrained knowledge. The operation of STAN will be detailed in Sec. III-B. Additionally, as depicted in Figure 5, to address the problem of partial misalignment, we introduce a novel cross-modal interaction module called Mutual-guided cross-modal alignment (Mug). This module takes frame-wise video representations V and token-wise text representations C as inputs. With guidance from the other modality, Mug efficiently filters out unrelated content and preserves aligned information in each modality, yielding new global video and text representation v and c. Details about Mug will be provided in Section III-C.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. The overview of our proposed Mug. Based on the outputs of the video and text encoder, we first implement the mutual token-frame interaction on frame-wise video features and token-wise text features. Then, we compute the global video embedding and text embedding through guidance from another modality. Finally, we align the text-guided video embedding and video-guided text embedding.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. Qualitative results of text-video retrieval on MSR-VTT. Given a text query, we present the correct matched video returned by Mug-STAN in the first row, and show the false result of XCLIP in the second row. The word highlighted in red indicates the key content missed in the false result.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. 
Qualitative results of text-video retrieval on MSR-VTT. Given a text query, we present the correct matched video returned by Mug-STAN in the first row, and show the false result of CLIP4clip in the second row. The word highlighted in red indicates the key content missed in the false result.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. Visualziation of intra-frame module of STAN on MSR-VTT. Given a text query. The region in red gains more attention from the model. We visualize the attention with VideoCAM.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 11 .11Fig. 11. The qualitative results of video chatting. We showcase the results from LLaVa (above) and STAN-LLaVa (below).", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "ZERO-SHOT RESULTS OF TEXT-TO-VIDEO RETRIEVAL AND VIDEO RECOGNITION ON SIX DOWNSTREAM DATASETS. MODELS EXHIBITING OBVIOUS UNFAIR COMPARISON ARE DE-EMPHASIZED, i.e., INVOLVING EXTRA MODALITY, MUCH LARGER MODELS, OR SELF-SUPERVISED PRETRAINING.", "figure_data": "", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "FINETUNING RESULTS OF TEXT-TO-VIDEO RETRIEVAL ON MSRVTT, DIDEMO, AND LSMDC. MODELS EXHIBITING OBVIOUS UNFAIR COMPARISON ARE DE-EMPHASIZED. FOR CLIP-BASED METHODS, * MEANS EXTRA TRICKS (e.g., DSL [67] AND QB-NORM [68]) ARE UTILIZED DURING INFERENCE; AND † DENOTES POST-PRETRAINING THE MODELS ON VIDEO-TEXT DATASETS BEFORE FINETUNING.", "figure_data": "", "figure_id": "tab_3", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "FINETUNING RESULTS OF VIDEO RECOGNITION ON KINETICS-400 AND SOMETHING-SOMETHING-2. WE PRESENT METHODS OF COMPARABLE SCALE FOR FAIR COMPARISON. WE REPORT THE FLOPS OF ALL VIEWS.", "figure_data": "", "figure_id": "tab_5", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "FINETUNING RESULTS OF VIDEO DETECTION ON AVA 2.2. MODELS UTILIZING SELF-SUPERVISED RECONSTRUCTION ARE DE-EMPHASIZED. *", "figure_data": "MEANS OUR IMPLEMENTATION.MethodPretrainFramesGFLOPsmAPSlowFast [50]K4003213823.8MViTv1-B [74]K4006445527.3MViTv2-B [76]K4003225528.1MVD-B [77]K400818029.8VideoMAE-B [51]K400818031.8CLIP-B/16* [1]K400818024.9XCLIP-B/16* [10]K400818527.6Mug-STAN-B/16K400819729.30.02. For video-text retrieval, we adopt a frame number of 12and a token length of 32 for MSRVTT, LSMDC. On Didemowhere videos have a longer duration, the frame number andtoken number are set to 64 and 64. The learning rates areinitialized to 2e-6 and 2e-5 for parameters in CLIP and STANrespectively. For video-only tasks, we sample 8 frames bydefault. The learning rates are initialized to 8e-6 and 8e-5for CLIP and STAN layers. For action detection, we furtherpretrain Mug-STAN on K400 following previous work, andadopt a frame span of 300, which aligns with the default framenumber of Kinetics videos.", "figure_id": "tab_7", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "RESULTS OF DIFFERENT COMPONENTS IN OUR MODEL ON DIFFERENT SETTINGS. \"FT\" MEANS DIRECT FINETUNING RESULTS WITHOUT PERTAINING; \"ZS\" MEANS THE ZERO-SHOT RESULT AFTER PERTAINING. 
WE REPORT THE RESULT OF RECALL@1.", "figure_data": "ComponentsResultsBranch structureMulti-levelSeparated-STMutual-GuidedFT-MSRVTTFT-DiDemoZS-MSRVTTZS-DiDemo43.143.430.624.7✓✓✓46.946.233.028.1✓46.145.433.129.8✓44.943.531.925.4✓✓44.243.631.825.4✓✓45.544.732.226.9✓✓✓✓48.949.635.933.7", "figure_id": "tab_8", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "RESULTS ON THE POST-PRETRAINING. WE REPORT THE FINETUNING RESULTS AFTER POST-PRETRAINING WITH CLIP-B/32 ON BOTH MSR-VTT AND DIDEMO. WE CONDUCT THE PRETRAINING WITH DIFFERENT METHODS AND PRETRAINING DATASETS.", "figure_data": "ModelPretrain DatasetR@1DiDemo R@5 R@10MdRR@1MSR-VTT R@5 R@10MdRCLIP-43.470.979.2243.169.881.82CLIPHowTo100M43.070.280.3243.469.782.32CLIPWebVid10M43.670.480.5243.970.180.02STAN+CLIP-46.571.580.9246.972.882.82STAN+CLIPHowTo100M47.072.181.6247.172.382.52STAN+CLIPWebVid10M48.276.785.0247.572.882.92Mug+STAN+CLIP-49.675.384.6248.974.584.12Mug+STAN+CLIPHowTo100M51.677.385.2150.075.183.62Mug+STAN+CLIPWebVid10M52.478.185.8150.974.684.11", "figure_id": "tab_9", "figure_label": "VI", "figure_type": "table" }, { "figure_caption": "RESULTS OF MUG-STAN ON COCA[2] ON MSR-VTT AND DIDEMO RETRIEVAL. † DENOTES FINETUNING AFTER POST-PRETRAINING.", "figure_data": "ModelR@1R@5R@10MdRMSR-VTTCoCa (Baseline)42.770.279.22.0STAN-CoCa44.772.981.92.0Mug-STAN-CoCa46.273.482.32.0Mug-STAN-CoCa †48.073.982.42.0DiDemoCoCa (Baseline)39.667.977.02.0STAN-CoCa43.572.982.02.0Mug-STAN-CoCa46.773.982.02.0Mug-STAN-CoCa †48.873.883.71.5", "figure_id": "tab_10", "figure_label": "VII", "figure_type": "table" }, { "figure_caption": "RESULTS ON THE INTERACTION MODULE, INCLUDING INTERACTION GRANULARITY (MIDDLE) AND INTERACTION STRATEGIES (BOTTOM). WE REPORT THE RESULTS ON BOTH MSRVTT AND DIDEMO.", "figure_data": "ModelMSR-VTT R@1 meanDiDemo R@1 meanSTAN (Baseline)46.967.546.566.3Frame-Text Interaction47.568.248.768.3Video-Token Interaction46.868.047.267.1Frame-Token Interaction48.969.249.669.8Mug-STAN (max)47.368.547.968.2Mug-STAN (top3)48.669.148.568.7Mug-STAN (top5)48.168.249.469.3Mug-STAN (top7)47.067.848.468.3WTI-STAN [16]47.568.847.267.9Hunyuan-STAN [17]47.568.847.567.4Mug-STAN (softmax)48.969.249.669.8", "figure_id": "tab_11", "figure_label": "VIII", "figure_type": "table" }, { "figure_caption": "RESULTS ON THE EFFECTIVENESS OF MUG IN MITIGATING PARTIAL MISALIGNMENT. ABOVE, WE REPORT THE R@1 SCORES FOR DATA WITH VARYING DEGREES OF MISALIGNMENT IN MSRVTT AND DIDEMO DATASETS. BELOW, WE COMPARE MUG-STAN WITH OTHER STATE-OF-THE-ART VIDEO DENOISING METHODS.", "figure_data": "level%MSR-VTT STAN+Mug%DiDemo STAN+Mugtop3853.754.04254.354.8middle2347.249.02343.346.9bottom3940.143.03539.245.1all10046.948.910046.549.6ModelR@1 on HTM-AlignR@1 on YouCook2MIL-NCE [40]31.315.1TAN [42]49.420.1Mug-STAN51.629.7", "figure_id": "tab_12", "figure_label": "IX", "figure_type": "table" } ]
Ruyang Liu; Jingjia Huang; Wei Gao; Thomas H Li; Ge Li
[ { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "PMLR", "ref_id": "b0", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "J Yu; Z Wang; V Vasudevan; L Yeung; M Seyedhosseini; Y Wu", "journal": "", "ref_id": "b1", "title": "Coca: Contrastive captioners are image-text foundation models", "year": "2022" }, { "authors": "W Wang; H Bao; L Dong; J Bjorck; Z Peng; Q Liu; K Aggarwal; O K Mohammed; S Singhal; S Som", "journal": "", "ref_id": "b2", "title": "Image as a foreign language: Beit pretraining for vision and vision-language tasks", "year": "2023" }, { "authors": "A Miech; D Zhukov; J.-B Alayrac; M Tapaswi; I Laptev; J Sivic", "journal": "", "ref_id": "b3", "title": "Howto100m: Learning a text-video embedding by watching hundred million narrated video clips", "year": "2019" }, { "authors": "J Huang; Y Li; J Feng; X Wu; X Sun; R Ji", "journal": "", "ref_id": "b4", "title": "Clover: Towards a unified video-language alignment and fusion model", "year": "2023" }, { "authors": "H Xue; Y Sun; B Liu; J Fu; R Song; H Li; J Luo", "journal": "", "ref_id": "b5", "title": "Clip-vip: Adapting pre-trained image-text model to video-language alignment", "year": "2022" }, { "authors": "H Luo; L Ji; M Zhong; Y Chen; W Lei; N Duan; T Li", "journal": "Neurocomputing", "ref_id": "b6", "title": "Clip4clip: An empirical study of clip for end to end video clip retrieval and captioning", "year": "2022" }, { "authors": "R Liu; J Huang; G Li; J Feng; X Wu; T H Li", "journal": "", "ref_id": "b7", "title": "Revisiting temporal modeling for clip-based image-to-video knowledge transferring", "year": "2023" }, { "authors": "S Buch; C Eyzaguirre; A Gaidon; J Wu; L Fei-Fei; J C Niebles", "journal": "", "ref_id": "b8", "title": "Revisiting the\" video\" in video-language understanding", "year": "2022" }, { "authors": "B Ni; H Peng; M Chen; S Zhang; G Meng; J Fu; S Xiang; H Ling", "journal": "Springer", "ref_id": "b9", "title": "Expanding language-image pretrained models for general video recognition", "year": "2022" }, { "authors": "J Pan; Z Lin; X Zhu; J Shao; H Li", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b10", "title": "St-adapter: Parameterefficient image-to-video transfer learning", "year": "2022" }, { "authors": "J Carreira; A Zisserman", "journal": "", "ref_id": "b11", "title": "Quo vadis, action recognition? 
a new model and the kinetics dataset", "year": "2017" }, { "authors": "G Bertasius; H Wang; L Torresani", "journal": "ICML", "ref_id": "b12", "title": "Is space-time attention all you need for video understanding?", "year": "2021" }, { "authors": "H Fang; P Xiong; L Xu; Y Chen", "journal": "", "ref_id": "b13", "title": "Clip2video: Mastering videotext retrieval via image clip", "year": "2021" }, { "authors": "Y Liu; P Xiong; L Xu; S Cao; Q Jin", "journal": "Springer", "ref_id": "b14", "title": "Ts2-net: Token shift and selection transformer for text-video retrieval", "year": "2022" }, { "authors": "Q Wang; Y Zhang; Y Zheng; P Pan; X.-S Hua", "journal": "", "ref_id": "b15", "title": "Disentangled representation learning for text-video retrieval", "year": "2022" }, { "authors": "J Jiang; S Min; W Kong; H Wang; Z Li; W Liu", "journal": "IEEE Access", "ref_id": "b16", "title": "Tencent textvideo retrieval: Hierarchical cross-modal interactions with multi-level representations", "year": "2022" }, { "authors": "J Xu; T Mei; T Yao; Y Rui", "journal": "", "ref_id": "b17", "title": "Msr-vtt: A large video description dataset for bridging video and language", "year": "2016" }, { "authors": "W Kay; J Carreira; K Simonyan; B Zhang; C Hillier; S Vijayanarasimhan; F Viola; T Green; T Back; P Natsev", "journal": "", "ref_id": "b18", "title": "The kinetics human action video dataset", "year": "2017" }, { "authors": "L Hendricks; O Wang; E Shechtman; J Sivic; T Darrell; B Russell", "journal": "", "ref_id": "b19", "title": "Localizing moments in video with natural language", "year": "2017" }, { "authors": "M Bain; A Nagrani; G Varol; A Zisserman", "journal": "", "ref_id": "b20", "title": "Frozen in time: A joint video and image encoder for end-to-end retrieval", "year": "2021" }, { "authors": "H Xue; T Hang; Y Zeng; Y Sun; B Liu; H Yang; J Fu; B Guo", "journal": "", "ref_id": "b21", "title": "Advancing high-resolution video-language representation with largescale video transcriptions", "year": "2022" }, { "authors": "R Zellers; X Lu; J Hessel; Y Yu; J S Park; J Cao; A Farhadi; Y Choi", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Merlot: Multimodal neural script knowledge models", "year": "2021" }, { "authors": "H Liu; C Li; Q Wu; Y J Lee", "journal": "", "ref_id": "b23", "title": "Visual instruction tuning", "year": "2023" }, { "authors": "Z Huang; Z Zeng; Y Huang; B Liu; D Fu; J Fu", "journal": "", "ref_id": "b24", "title": "Seeing out of the box: End-to-end pre-training for vision-language representation learning", "year": "2021" }, { "authors": "Z Huang; Z Zeng; B Liu; D Fu; J Fu", "journal": "", "ref_id": "b25", "title": "Pixel-bert: Aligning image pixels with text by deep multi-modal transformers", "year": "2020" }, { "authors": "H Xue; Y Huang; B Liu; H Peng; J Fu; H Li; J Luo", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b26", "title": "Probing inter-modality: Visual parsing with self-attention for visionand-language pre-training", "year": "2021" }, { "authors": "C Jia; Y Yang; Y Xia; Y.-T Chen; Z Parekh; H Pham; Q Le; Y.-H Sung; Z Li; T Duerig", "journal": "PMLR", "ref_id": "b27", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "L Yuan; D Chen; Y.-L Chen; N Codella; X Dai; J Gao; H Hu; X Huang; B Li; C Li", "journal": "", "ref_id": "b28", "title": "Florence: A new foundation model for computer vision", "year": "2021" }, { "authors": "K 
Zhou; J Yang; C C Loy; Z Liu", "journal": "International Journal of Computer Vision", "ref_id": "b29", "title": "Learning to prompt for visionlanguage models", "year": "2022" }, { "authors": "O Patashnik; Z Wu; E Shechtman; D Cohen-Or; D Lischinski", "journal": "", "ref_id": "b30", "title": "Styleclip: Text-driven manipulation of stylegan imagery", "year": "2021" }, { "authors": "J.-B Alayrac; J Donahue; P Luc; A Miech; I Barr; Y Hasson; K Lenc; A Mensch; K Millican; M Reynolds", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "Flamingo: a visual language model for few-shot learning", "year": "2022" }, { "authors": "J Li; D Li; S Savarese; S Hoi", "journal": "", "ref_id": "b32", "title": "Blip-2: Bootstrapping languageimage pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "T.-J Fu; L Li; Z Gan; K Lin; W Y Wang; L Wang; Z Liu", "journal": "", "ref_id": "b33", "title": "Violet: End-to-end video-language transformers with masked visualtoken modeling", "year": "2021" }, { "authors": "J Lei; L Li; L Zhou; Z Gan; T L Berg; M Bansal; J Liu", "journal": "", "ref_id": "b34", "title": "Less is more: Clipbert for video-and-language learning via sparse sampling", "year": "2021" }, { "authors": "Y Ge; Y Ge; X Liu; D Li; Y Shan; X Qie; P Luo", "journal": "", "ref_id": "b35", "title": "Bridging video-text retrieval with multiple choice questions", "year": "2022" }, { "authors": "A Arnab; M Dehghani; G Heigold; C Sun; M Lučić; C Schmid", "journal": "", "ref_id": "b36", "title": "Vivit: A video vision transformer", "year": "2021" }, { "authors": "Z Liu; J Ning; Y Cao; Y Wei; Z Zhang; S Lin; H Hu", "journal": "", "ref_id": "b37", "title": "Video swin transformer", "year": "2022" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b38", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "A Miech; J.-B Alayrac; L Smaira; I Laptev; J Sivic; A Zisserman", "journal": "", "ref_id": "b39", "title": "End-to-end learning of visual representations from uncurated instructional videos", "year": "2020" }, { "authors": "Z Zeng; Y Ge; X Liu; B Chen; P Luo; S.-T Xia; Y Ge", "journal": "", "ref_id": "b40", "title": "Learning transferable spatiotemporal representations from natural script knowledge", "year": "2023" }, { "authors": "T Han; W Xie; A Zisserman", "journal": "", "ref_id": "b41", "title": "Temporal alignment networks for long-term video", "year": "2022" }, { "authors": "Z Gao; J Liu; W Sun; S Chen; D Chang; L Zhao", "journal": "", "ref_id": "b42", "title": "Clip2tv: Align, match and distill for video-text retrieval", "year": "2021" }, { "authors": "H Zhang; A Sun; W Jing; J T Zhou", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b43", "title": "Temporal sentence grounding in videos: A survey and future directions", "year": "2023" }, { "authors": "C Ju; T Han; K Zheng; Y Zhang; W Xie", "journal": "Springer", "ref_id": "b44", "title": "Prompting visuallanguage models for efficient video understanding", "year": "2022" }, { "authors": "P Hu; Z Huang; D Peng; X Wang; X Peng", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b45", "title": "Cross-modal retrieval with partially mismatched pairs", "year": "2023" }, { "authors": "F Liu; X Wu; C You; S Ge; Y Zou; X Sun", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", 
"ref_id": "b46", "title": "Aligning source visual and target language domains for unpaired video captioning", "year": "2021" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly", "journal": "", "ref_id": "b47", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "L Yuan; Y Chen; T Wang; W Yu; Y Shi; Z.-H Jiang; F E Tay; J Feng; S Yan", "journal": "", "ref_id": "b48", "title": "Tokens-to-token vit: Training vision transformers from scratch on imagenet", "year": "2021" }, { "authors": "C Feichtenhofer; H Fan; J Malik; K He", "journal": "", "ref_id": "b49", "title": "Slowfast networks for video recognition", "year": "2019-10" }, { "authors": "Z Tong; Y Song; J Wang; L Wang", "journal": "Advances in neural information processing systems", "ref_id": "b50", "title": "Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training", "year": "2022" }, { "authors": "K He; G Gkioxari; P Dollár; R Girshick", "journal": "", "ref_id": "b51", "title": "Mask r-cnn", "year": "2017" }, { "authors": "H Xu; G Ghosh; P.-Y Huang; D Okhonko; A Aghajanyan; F Metze; L Zettlemoyer; C Feichtenhofer", "journal": "", "ref_id": "b52", "title": "Videoclip: Contrastive pretraining for zero-shot video-text understanding", "year": "2021" }, { "authors": "D Li; J Li; H Li; J C Niebles; S C Hoi", "journal": "", "ref_id": "b53", "title": "Align and prompt: Video-and-language pre-training with entity prompts", "year": "2022" }, { "authors": "J Wang; D Chen; Z Wu; C Luo; L Zhou; Y Zhao; Y Xie; C Liu; Y.-G Jiang; L Yuan", "journal": "Advances in neural information processing systems", "ref_id": "b54", "title": "Omnivl: One foundation model for image-language and video-language tasks", "year": "2022" }, { "authors": "J A Portillo-Quintero; J C Ortiz-Bayliss; H Terashima-Marín", "journal": "Springer", "ref_id": "b55", "title": "A straightforward framework for video retrieval using clip", "year": "2021" }, { "authors": "M Wang; J Xing; Y Liu", "journal": "", "ref_id": "b56", "title": "Actionclip: A new paradigm for video action recognition", "year": "2021" }, { "authors": "R Girdhar; A El-Nouby; Z Liu; M Singh; K V Alwala; A Joulin; I Misra", "journal": "", "ref_id": "b57", "title": "Imagebind: One embedding space to bind them all", "year": "2023" }, { "authors": "Y Wang; K Li; Y Li; Y He; B Huang; Z Zhao; H Zhang; J Xu; Y Liu; Z Wang", "journal": "", "ref_id": "b58", "title": "Internvideo: General video foundation models via generative and discriminative learning", "year": "2022" }, { "authors": "A Rohrbach; A Torabi; M Rohrbach; N Tandon; C Pal; H Larochelle; A Courville; B Schiele", "journal": "International Journal of Computer Vision", "ref_id": "b59", "title": "Movie description", "year": "2017" }, { "authors": "R Goyal; S Ebrahimi Kahou; V Michalski; J Materzynska; S Westphal; H Kim; V Haenel; I Fruend; P Yianilos; M Mueller-Freitag", "journal": "", "ref_id": "b60", "title": "The\" something something\" video database for learning and evaluating visual common sense", "year": "2017" }, { "authors": "C Gu; C Sun; D A Ross; C Vondrick; C Pantofaru; Y Li; S Vijayanarasimhan; G Toderici; S Ricco; R Sukthankar", "journal": "", "ref_id": "b61", "title": "Ava: A video dataset of spatio-temporally localized atomic visual actions", "year": "2018" }, { "authors": "Y Yu; J Kim; G Kim", "journal": "", "ref_id": "b62", "title": "A joint sequence fusion model 
for video question answering and retrieval", "year": "2018" }, { "authors": "L Sevilla-Lara; S Zha; Z Yan; V Goswami; M Feiszli; L Torresani", "journal": "", "ref_id": "b63", "title": "Only time can tell: Discovering temporal data for temporal modeling", "year": "2021-01" }, { "authors": "K Li; Y He; Y Wang; Y Li; W Wang; P Luo; Y Wang; L Wang; Y Qiao", "journal": "", "ref_id": "b64", "title": "Videochat: Chat-centric video understanding", "year": "2023" }, { "authors": "Z Luo; D Chen; Y Zhang; Y Huang; L Wang; Y Shen; D Zhao; J Zhou; T Tan", "journal": "", "ref_id": "b65", "title": "Videofusion: Decomposed diffusion models for high-quality video generation", "year": "2023" }, { "authors": "X Cheng; H Lin; X Wu; F Yang; D Shen", "journal": "", "ref_id": "b66", "title": "Improving video-text retrieval by multi-stream corpus alignment and dual softmax loss", "year": "2021" }, { "authors": "S.-V Bogolin; I Croitoru; H Jin; Y Liu; S Albanie", "journal": "", "ref_id": "b67", "title": "Cross modal retrieval with querybank normalisation", "year": "2022" }, { "authors": "V Gabeur; C Sun; K Alahari; C Schmid", "journal": "Springer", "ref_id": "b68", "title": "Multi-modal transformer for video retrieval", "year": "2020" }, { "authors": "H Xue; T Hang; Y Zeng; Y Sun; B Liu; H Yang; J Fu; B Guo", "journal": "", "ref_id": "b69", "title": "Advancing high-resolution video-language representation with largescale video transcriptions", "year": "2022" }, { "authors": "J Wang; Y Ge; R Yan; Y Ge; K Q Lin; S Tsutsui; X Lin; G Cai; J Wu; Y Shan", "journal": "", "ref_id": "b70", "title": "All in one: Exploring unified video-language pre-training", "year": "2023" }, { "authors": "S Zhao; L Zhu; X Wang; Y Yang", "journal": "", "ref_id": "b71", "title": "Centerclip: Token clustering for efficient text-video retrieval", "year": "2022" }, { "authors": "I Loshchilov; F Hutter", "journal": "", "ref_id": "b72", "title": "Fixing weight decay regularization in adam", "year": "2018" }, { "authors": "H Fan; B Xiong; K Mangalam; Y Li; Z Yan; J Malik; C Feichtenhofer", "journal": "", "ref_id": "b73", "title": "Multiscale vision transformers", "year": "2021" }, { "authors": "S Yan; X Xiong; A Arnab; Z Lu; M Zhang; C Sun; C Schmid", "journal": "", "ref_id": "b74", "title": "Multiview transformers for video recognition", "year": "2022" }, { "authors": "Y Li; C.-Y Wu; H Fan; K Mangalam; B Xiong; J Malik; C Feichtenhofer", "journal": "", "ref_id": "b75", "title": "Mvitv2: Improved multiscale vision transformers for classification and detection", "year": "2022-06" }, { "authors": "R Wang; D Chen; Z Wu; Y Chen; X Dai; M Liu; L Yuan; Y.-G Jiang", "journal": "", "ref_id": "b76", "title": "Masked video distillation: Rethinking masked feature modeling for self-supervised video representation learning", "year": "2023" }, { "authors": "P Wang; A Yang; R Men; J Lin; S Bai; Z Li; J Ma; C Zhou; J Zhou; H Yang", "journal": "PMLR", "ref_id": "b77", "title": "Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework", "year": "2022" }, { "authors": "C Schuhmann; R Beaumont; R Vencu; C Gordon; R Wightman; M Cherti; T Coombes; A Katta; C Mullis; M Wortsman", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b78", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "D Zhu; J Chen; X Shen; X Li; M Elhoseiny", "journal": "", "ref_id": "b79", "title": "Minigpt-4: Enhancing vision-language understanding 
with advanced large language models", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 337.66, 503.26, 225.38, 12.69 ], "formula_id": "formula_0", "formula_text": "V = {v i } T i=1 ∈ R T ×D , C = {c j } K j=1 ∈ R K×D (1)" }, { "formula_coordinates": [ 5, 356.54, 391.42, 206.49, 12.69 ], "formula_id": "formula_1", "formula_text": "V k = {f k i,l ∈ R D |i ∈ [1, T ], l ∈ [0, L]},(2)" }, { "formula_coordinates": [ 5, 351.01, 620.39, 212.03, 12.69 ], "formula_id": "formula_2", "formula_text": "V ′k = {f ′k 0,0 , f ′k 1,1 , .., f ′k 1,L , .., f ′k T,1 , .., f ′k T,L },(3)" }, { "formula_coordinates": [ 5, 353.68, 705.63, 209.36, 13.25 ], "formula_id": "formula_3", "formula_text": "V k = { f k 0,0 , f k 1,1 , .., f k 1,L , .., f k T,1 , .., f k T,L }.(4)" }, { "formula_coordinates": [ 6, 88.57, 310.43, 211.45, 12.69 ], "formula_id": "formula_4", "formula_text": "f ′1 i,l = Dropout(f m i,l + Pos t (t) + Pos s (l)),(5)" }, { "formula_coordinates": [ 6, 98.05, 397.61, 201.98, 26.8 ], "formula_id": "formula_5", "formula_text": "f ′k 0,0 = f k-1 0,0 + W k proj 1 T i∈T f m+k-1 i,0 ,(6)" }, { "formula_coordinates": [ 6, 112.97, 428.37, 187.05, 13.38 ], "formula_id": "formula_6", "formula_text": "f ′k i,l = f k-1 i,l + W k proj f m+k-1 i,l ,(7)" }, { "formula_coordinates": [ 6, 323.5, 290.13, 239.53, 18.57 ], "formula_id": "formula_7", "formula_text": "Xi = softmax(X i W Q (X i W K ) T / √ D)(X i W V ) + X i ,(8)" }, { "formula_coordinates": [ 6, 317.72, 507.16, 245.32, 29.83 ], "formula_id": "formula_8", "formula_text": "Ŷl = W proj (softmax(Y l W Q (Y l W K ) T / √ D)(Y l W V ) + Y l ),(9)" }, { "formula_coordinates": [ 6, 370.51, 681.86, 188.37, 12.17 ], "formula_id": "formula_9", "formula_text": "V = W v proj (LN(V -1 ⊕ V K )), (10" }, { "formula_coordinates": [ 6, 558.89, 684.7, 4.15, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 7, 118.86, 360.24, 181.16, 26.56 ], "formula_id": "formula_11", "formula_text": "s i,j = exp(τ c j • v i ) K j=1 exp(τ c j • v i ) ,(11)" }, { "formula_coordinates": [ 7, 87.6, 392.04, 51.85, 14.11 ], "formula_id": "formula_12", "formula_text": "K j=1 s i,j = 1," }, { "formula_coordinates": [ 7, 106.13, 491.51, 189.74, 30.32 ], "formula_id": "formula_13", "formula_text": "c i = K j=1 s i,j c j , where c i ∈ R D . (12" }, { "formula_coordinates": [ 7, 295.87, 502.24, 4.15, 8.64 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 7, 119.62, 628.19, 180.4, 26.56 ], "formula_id": "formula_15", "formula_text": "s i = exp(τ c i • v i ) T n=1 exp(τ c n • v n ) ,(13)" }, { "formula_coordinates": [ 7, 111.79, 721.89, 184.09, 30.32 ], "formula_id": "formula_16", "formula_text": "v = T i=1 s i v i , where v ∈ R D . 
(14" }, { "formula_coordinates": [ 7, 295.87, 732.62, 4.15, 8.64 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 7, 382.32, 125.25, 176.57, 26.56 ], "formula_id": "formula_18", "formula_text": "s ′ i,j = exp(τ c j • v i ) T i=1 exp(τ c j • v i ) , (15" }, { "formula_coordinates": [ 7, 558.89, 132.31, 4.15, 8.64 ], "formula_id": "formula_19", "formula_text": ")" }, { "formula_coordinates": [ 7, 368.01, 198.28, 195.03, 30.32 ], "formula_id": "formula_20", "formula_text": "v j = T i=1 s ′ i,j v i , where v i ∈ R D(16)" }, { "formula_coordinates": [ 7, 382.19, 236.15, 180.84, 26.56 ], "formula_id": "formula_21", "formula_text": "s ′ j = exp(τ c j • v j ) K n=1 exp(τ c n • v n ) ,(17)" }, { "formula_coordinates": [ 7, 375.05, 326.94, 187.98, 30.32 ], "formula_id": "formula_22", "formula_text": "c = K j=1 s ′ j c j , where c ∈ R D ,(18)" }, { "formula_coordinates": [ 7, 336.19, 593.03, 186.03, 64.07 ], "formula_id": "formula_23", "formula_text": "L t2v = - 1 B B m=1 log exp(τ c mn • v nm ) B n=1 exp(τ c mn • v nm ) , L v2t = - 1 B B n=1 log exp(τ v nm • c mn ) B m=1 exp(τ v nm • c mn )" }, { "formula_coordinates": [ 7, 336.19, 627.85, 226.84, 43.07 ], "formula_id": "formula_24", "formula_text": "L co = L t2v + L v2t ,(19)" }, { "formula_coordinates": [ 8, 100.91, 464.78, 194.97, 30.24 ], "formula_id": "formula_25", "formula_text": "L cr = N n=1 y n log exp(τ v • c n ) N i=1 exp(τ v • c i ) , (20" }, { "formula_coordinates": [ 8, 295.87, 475.51, 4.15, 8.64 ], "formula_id": "formula_26", "formula_text": ")" } ]
10.1109/IALP.2012.28
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b13", "b2", "b6" ], "table_ref": [], "text": "Over half of the world's population uses at least two languages regularly (Ansaldo et al., 2008). Despite this common occurrence, automatic speech recognition (ASR) models do not work well with speech that includes code-switching: when a speaker alternates between two or more languages or varieties within utterances (Myers-Scotton, 2017). For low-resource languages, we encounter two issues when attempting to address this problem: insufficient data for end-to-end training and insufficient data for language modelling.\nRecently, self-supervised pre-training of speech such as wav2vec 2.0 (Baevski et al., 2020) has proven to give very low error rates for English ASR. Although very costly to pre-train, the English models and cross-lingual (XLSR) representations (Conneau et al., 2020) are available for finetuning to efficiently make speech recognisers for many languages.\nIn this work we ask: Does fine-tuning XLSR improve recognition of code-switched data over traditional training on code-switched data? To test this phenomenon, we look at four African languages (isiZulu, isiXhosa, Sesotho, Setswana) code-switched with English. We also explore three questions about how to go about this fine-tuning process. We first experiment with different types of data to add to the codeswitched dataset in order to improve ASR performance, asking 1. Should we add monolingual data? Many other methods incorporate language identification (language ID) into models, so we ask: 2. Does it help to add language identification in our pipeline (either explicitly or implicitly)? We test this by augmenting utterances to implicitly identify the language and use a multi-task learning setup to learn frame-level language ID and ASR simultaneously. Finally, we ask: 3. Does a simple n-gram language model trained on the code-switched data improve performance despite the tiny amount of data? We use the codeswitched corpus to train bigram and trigram models which we use when decoding the models.\nWe find that finetuning multilingual pretrained models, augmented with a simple trigram language model, works well for recognizing code-switched data in low-resource languages, significantly better than prior methods of training bespoke models (CNN-TDNN-F acoustic model + LSTM language model) from scratch. We find that neither language ID nor adding monolingual data adds further performance gains and, perhaps surprisingly, that adding monolingual data worsened model performance. Our findings suggest that in circumstances with limited training data, finetuning self-supervised representations is likely a better-performing and viable solution." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b16", "b20", "b18", "b2", "b11", "b22", "b0", "b12", "b2" ], "table_ref": [], "text": "In speech processing, work on code-switching can be divided into code-switching detection (Rallabandi et al., 2018;Yılmaz et al., 2016;Wang et al., 2019) using language identification (Choudhury et al., 2017) and end-to-end recognition (Indra Winata et al., 2018). In this work, we look at both methods via finetuning of self-supervised representations, namely wav2vec 2.0 (Baevski et al., 2020). Language identification methods either identify the language before doing the ASR on the speech or have language ID trained in tandem with the acoustic model of representations. 
End-to-end recognition splits into two main approaches: multilingual modelling with cross-lingual representations (Li et al., 2019a;Luo et al., 2018;Zhang et al., 2022) and parallel modelling, which generates multiple transcriptions that are interpolated to yield the single transcription with the highest likelihood (Ahmed and Tan, 2012;Lyu et al., 2006).\nFor low-resource languages, we encounter two issues when attempting to apply these methods: a lack of sufficient data for end-to-end training and a lack of sufficient data for neural language modelling in the low-resource language or the codeswitched language pair. The absence of a language model for the codeswitched pair causes prior, less computationally expensive methods to fail, and the lack of sufficient data prevents models from generalising, resulting in poor performance.\nIn our work, we focus on leveraging a pretrained self-supervised acoustic model, wav2vec 2.0 (Baevski et al., 2020), to finetune an existing multilingual acoustic model for our chosen language pairs. We incorporate language identification to see if this additional signal can improve performance given the small datasets." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Languages", "publication_ref": [], "table_ref": [], "text": "The languages used in this work are four South African languages and English. The South African languages are all Southern Bantu (SB) languages, in the Nguni and Sotho-Tswana branches. The English used in this work is English spoken with a South African accent." }, { "figure_ref": [], "heading": "Data", "publication_ref": [ "b14", "b3" ], "table_ref": [], "text": "We use the South African corpus of multilingual code-switched soap opera speech (Niesler et al., 2018). For additional monolingual data in the languages, we use the isiZulu, isiXhosa, Sesotho, Setswana and English portions of the NCHLT Speech Corpus (Barnard et al., 2014). " }, { "figure_ref": [], "heading": "Baseline Model", "publication_ref": [ "b4", "b15" ], "table_ref": [], "text": "We compare our models to those trained from scratch on this data by Biswas et al. (2022). Their best-performing acoustic model is a Kaldi-based (Povey et al., 2011) CNN-TDNN-F trained on all 5 languages and finetuned for each language pair. For language model decoding, the authors used a bidirectional LSTM architecture with a 256-dimensional embedding and 256-dimensional matrices. The LSTMs are trained on language pairs, resulting in four separate language models. We compare our methods to the best-performing model for each language pair in this work.\n4 Which additional data is helpful?\nGiven the low-resource nature of codeswitched speech datasets, we ask which type of data can best supplement the codeswitched dataset to improve downstream results. To test this, we \"pre-finetune\" the model with additional data other than the Soap Opera Corpus data for each language pair, before finetuning it on the codeswitched language pair. To test whether in-domain data is most useful, we pre-finetune the model with Soap Opera Corpus data from all four language pairs for 42000 steps. 
This model is then further finetuned with the Soap Opera Corpus data for each individual language pair alone for 12000 steps, resulting in the +all 4 pairs models.\nTo test whether adding monolingual data improves performance, we use NCHLT monolingual data from each language in a language pair, plus the data from the corresponding language pair in the Soap Opera Corpus data to pre-finetune models for 42000 steps. We then further finetune these models with Soap Opera Corpus data from that specific language pair, resulting in +monolingual models.\nTo compare the proposed methods with finetuning with solely Soap Opera Corpus data in the desired language pair, we finetune the model for 15000 steps with the Soap Opera Corpus data for that language pair, resulting in the One pair models.\nTable 3 shows the results for these experiments with greedy decoding." }, { "figure_ref": [], "heading": "Lang pair", "publication_ref": [ "b11", "b21" ], "table_ref": [], "text": "Model 3: Effects of additional data used in \"prefinetuning\" on ASR performance. WER is word error rate of models. +all 4 lang pairs is \"pre-finetuned\" with in-domain codeswitched data from the Soap Opera Corpus and +monolingual is \"pre-finetuned\" with monolingual data in each language in the lamguage pair along with the Soap Opera Corpus data for that specific pair.\nWe see that across languages, using codeswitched-data from all four languages (i.e., \"pre-finetuning\" with Soap Opera Corpus data from all 4 languages) gives the best results on each South African language pair. The fact that adding data from three different languages helps on the 4th language is somewhat surprising, and points both to the importance of the similarity of the 4 languages, and to the fact that all data are from a single Soap Opera genre. By contrast, the genre difference from the monolingual read speech data is enough to severely hurt performance. In summary, when finetuning multilingual, self-supervised ASR models on low-resource codeswitched data, we find that matching domain and genre properties (such as the presence of codeswitching) is more important than adding monolingual data from the same language if the genre is a mismatch.\n5 Does adding implicit or explicit language id information help?\nPrior work has shown that for codeswitched ASR, simultaneously learning the language identification (language ID) and ASR improved the ASR performance (Luo et al., 2018;Li et al., 2019b;Zeng et al., 2019). Here we try to add language ID information in two ways: by augmenting the data and by training a classifier. We experiment with augmenting the Soap Opera Corpus utterances to encapsulate the bilingualism in the utterances in lieu of explicit language labels or timestamps. For each language pair, we use two methods: language specific casing and language specific tags. For language specific casing, we double the vocabulary size by giving each language a specific case, e.g., English in uppercase and isiZulu in lowercase. We then finetune wav2vec 2.0 XLSR 300M with this data for 12000 steps resulting in +casingID models for each language pair. For language specific tags, we put opening and closing tags on either side of the text in a specific language. We then finetune wav2vec 2.0 XLSR 300M with this data for 12000 steps resulting in +tagsID models for each language pair." 
}, { "figure_ref": [], "heading": "Casing: WHAT IF etholwa amaphoyisa kuqala", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Tags: <eng> what if </eng> <zul> etholwa amaphoyisa kuqala </zul> Example 1: Demonstration of implicit addition of language information to our models through language-specific casing and language-specific tags.\nTo train a language ID classifier on our data, we add a frame-level classification head to the wav2vec 2.0 XLSR encoder. We use the timestamps in the corpus to label frames with either English or the South African language, and train a model with cross-entropy loss. The results of the language ID models are in Table 4." }, { "figure_ref": [], "heading": "Language Pair", "publication_ref": [ "b21", "b17", "b7", "b4" ], "table_ref": [], "text": "Lang ID Accuracy English-isiZulu 97% English-isiXhosa 98% English-Sesotho 96% English-Setswana 97% The frame-level language ID models work well, so we try a multi-task setting in hopes of improving the model performance. We learn language ID and ASR at the same time, summing the weighted loss of the two tasks. The loss calculation is summarised in Equation 1. As ASR is the priority, we always keep the CTC weight higher than the language ID weight. The resulting models are the +multitaskID models, with each language pair finetuned for 12 00 steps. The model architecture is visualised in Figure 1 .\nLoss CT C+LID = λ CT C L CT C +(1-λ CT C )L LID(1)\nFigure 1: Our multi-task learning setup for combining frame-level language ID with CTC by a weighted sum of the losses. 5: Effects of incorporating language ID on ASR performance. WER is word error rate of models. +tagsID uses language specific tags around utterances in the dataset and +casingID uses one case per language (e.g. uppercase for English and lowercase for isiZulu). Models trained to learn both language ID and ASR at the same time during finetuning are referred to as +multitaskID models. The +multitaskID models work better that +tagsID and +casingID. But none of the language ID models work as well as the baseline of not using Language ID at all (the \"One pair\" row).\nThe results of our experiments are in Table 5. For the multi-task setup, the results with the best language ID and CTC weights are reported.\nThe multi-task learning setup improves performance downstream over language specific casing and tags, but not over further fine-tuning, possibly due to the model being hindered rather than helped trying to learn two tasks at once. Language specific casing does not improve model performance, it actually worsens the models compared to the baselines. This is likely due to the unnecessary doubling of the vocabulary.\nLanguage ID tags work better than the casing across languages, however they do not outperform finetuning without tags. This is likely due to the fact that the tags do not correspond to any speech, so the introduction of them creates initial confusion.\nIn summary, adding language identification information does not improve ASR performance on our code-switched dataset. This could be due due to the lack of data available for training, the fact that the character sets for our 5 languages are all overlapping, or the fact that our experiments consist of finetuning and not end-to-end pretraining. 
Other work that uses multitask learning for codeswitched speech recognition (Li et al., 2019b;Zeng et al., 2019;Song et al., 2022;Winata et al., 2018) has shown success with a language pair with a non-overlapping character set: English and Mandarin Chinese. Those English/Chinese models are also trained from scratch end-to-end, so it is possible that incorporation of language ID is more useful during training and less useful at later stages such as finetuning.\n6 Does a language model improve performance?\nFor our experiments thus far, we do greedy decoding from the wav2vec 2.0 model finetuned with a CTC head. Could adding language model information improve performance? The baseline system with which we are comparing used an LSTM language model, suggesting that this information might be useful.\nIn this section, we study whether using the transcripts from the Soap Opera Corpus as training data for a small n-gram language model could improve accuracy. We train separate bigram and trigram (word) language models using KenLM (Heafield, 2011) from each of the 4 language-pair datasets, and then use this language model in decoding.\nThe language model results for the best finetuned models per language pair are presented in Table 6. Table 6: Effect of language modelling on ASR performance (measured in WER). The numbers in the baseline row are taken from (Biswas et al., 2022); their system (which includes an LSTM language model) is compared to wav2vec 2.0 finetuned on the Soap Opera Corpus data, using greedy decoding (no LM) as well as with bigram and trigram n-gram models trained on the Soap Opera Corpus data. Without n-gram language models, the baseline model outperforms finetuning wav2vec 2.0. However, training an n-gram language model with the ASR data improves over the baseline.\nAlthough greedy decoding does not work better than the baseline (CNN-TDNN-F acoustic model plus a bidirectional LSTM model) since the baseline has a language model, we find that the finetuned models equipped with a simple n-gram language model consistently beat the baseline models. These results suggest that fine-tuning large pretrained models with only very simple language model support can be a better solution in low-resource scenarios." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we have finetuned wav2vec 2.0 XLSR with codeswitched data of South African languages and English. We found that this system, augmented with a simple bigram or trigram language model, beats baseline models trained with LSTM language models. We also found that it helps to add data from other languages, albeit from closely related languages and in exactly the same genre/domain.\nWe were not able to improve the model with various kinds of language ID information; these methods may see more success for languages with character sets that overlap less, or when there is enough data to train an end-to-end model from scratch.\nThis work demonstrates a method to train ASR models on codeswitching data with relatively minimal computation and a very basic n-gram language model, suggesting a direction for addressing an important task in the low-resource settings that characterise many of the world's languages." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank the reviewers for their comments and suggestions. 
This research was funded in part by NSF award number IIS-2128145 and in part by a Stanford School of Engineering Fellowship to TO. CM is a fellow in the CIFAR Learning in Machines and Brains program." } ]
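As a concrete illustration of the implicit language-ID augmentation of Section 5 (Example 1 above), the following sketch derives the +casingID and +tagsID transcripts from word-level language labels. The per-word labels and the "eng"/"zul" codes are assumptions about the corpus annotations, not the authors' exact preprocessing.

```python
def casing_augment(words, langs):
    """Language-specific casing: English words upper-cased, the other language lower-cased."""
    return " ".join(w.upper() if lang == "eng" else w.lower()
                    for w, lang in zip(words, langs))

def tag_augment(words, langs):
    """Language-specific tags: wrap each maximal same-language span in opening/closing tags."""
    out, prev = [], None
    for w, lang in zip(words, langs):
        if lang != prev:
            if prev is not None:
                out.append(f"</{prev}>")
            out.append(f"<{lang}>")
            prev = lang
        out.append(w)
    if prev is not None:
        out.append(f"</{prev}>")
    return " ".join(out)

words = ["what", "if", "etholwa", "amaphoyisa", "kuqala"]
langs = ["eng", "eng", "zul", "zul", "zul"]
print(casing_augment(words, langs))  # WHAT IF etholwa amaphoyisa kuqala
print(tag_augment(words, langs))     # <eng> what if </eng> <zul> etholwa amaphoyisa kuqala </zul>
```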
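The multi-task objective of Equation 1, a weighted sum of the CTC loss and a frame-level language-ID cross-entropy, can be sketched in PyTorch as below. The encoder interface, the 0.7 weight, the blank index, and the assumption of unpadded batches are illustrative choices rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class CTCWithFrameLID(nn.Module):
    """Minimal sketch of joint CTC + frame-level language-ID training (Equation 1).

    `encoder` is assumed to map raw audio to frame-level hidden states of shape
    (batch, frames, hidden_dim), e.g. a wav2vec 2.0 XLSR encoder.
    """

    def __init__(self, encoder, hidden_dim, vocab_size, num_langs=2, ctc_weight=0.7):
        super().__init__()
        self.encoder = encoder
        self.ctc_head = nn.Linear(hidden_dim, vocab_size)   # character vocabulary head
        self.lid_head = nn.Linear(hidden_dim, num_langs)    # frame-level language classifier
        self.ctc_weight = ctc_weight                        # kept above 0.5 so ASR dominates
        self.ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
        self.ce_loss = nn.CrossEntropyLoss()

    def forward(self, audio, targets, target_lens, frame_langs):
        h = self.encoder(audio)                                        # (B, T, H)
        log_probs = self.ctc_head(h).log_softmax(-1).transpose(0, 1)   # (T, B, vocab) for CTCLoss
        input_lens = torch.full((h.size(0),), h.size(1), dtype=torch.long)  # unpadded batches assumed
        l_ctc = self.ctc_loss(log_probs, targets, input_lens, target_lens)
        l_lid = self.ce_loss(self.lid_head(h).flatten(0, 1), frame_langs.flatten())
        # Equation 1: Loss = lambda_CTC * L_CTC + (1 - lambda_CTC) * L_LID
        return self.ctc_weight * l_ctc + (1.0 - self.ctc_weight) * l_lid
```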
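Finally, the n-gram decoding of Section 6 can be reproduced with any CTC beam-search decoder that accepts a KenLM model. The paper does not name a decoding library, so the pyctcdecode call below, the character vocabulary, the ARPA file path, and the alpha/beta weights are all illustrative assumptions.

```python
import numpy as np
from pyctcdecode import build_ctcdecoder

# A word n-gram LM can be trained on the code-switched transcripts alone with KenLM, e.g.:
#   lmplz -o 3 --discount_fallback < train_transcripts.txt > zul_eng_3gram.arpa
# (--discount_fallback is often needed for such small corpora.)

# Character labels in the same order as the acoustic model's output vocabulary;
# the actual vocabulary is dataset-specific and this list is only illustrative.
labels = ["", " ", "a", "b", "c", "d", "e", "h", "i", "k", "l", "m",
          "n", "o", "p", "q", "s", "t", "u", "w", "y"]

decoder = build_ctcdecoder(
    labels,
    kenlm_model_path="zul_eng_3gram.arpa",  # hypothetical path to the trigram LM
    alpha=0.5,   # LM weight (values would normally be tuned on a dev set)
    beta=1.0,    # word-insertion bonus
)

# `log_probs` stands in for the frame-level log-probabilities produced by the
# finetuned CTC model for one utterance, with shape (time_steps, len(labels)).
log_probs = np.log(np.full((200, len(labels)), 1.0 / len(labels)))
print(decoder.decode(log_probs))
```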
While many speakers of low-resource languages regularly code-switch between their languages and other regional languages or English, datasets of codeswitched speech are too small to train bespoke acoustic models from scratch or do language model rescoring. Here we propose finetuning self-supervised speech representations such as wav2vec 2.0 XLSR to recognize code-switched data. We find that finetuning self-supervised multilingual representations and augmenting them with n-gram language models trained from transcripts reduces absolute word error rates by up to 20% compared to baselines of hybrid models trained from scratch on code-switched data. Our findings suggest that in circumstances with limited training data, finetuning self-supervised representations is a better-performing and viable solution.
Multilingual self-supervised speech representations improve the speech recognition of low-resource African languages with codeswitching
[ { "figure_caption": ". It is a corpus of speech collected from 626 South African soap opera episodes, with utterances from four South African languages: isiZulu, isiXhosa, Sesotho and Setswana codeswitched with English. An overview of the languages used in this work. The South African languages are in the Nguni and Sotho-Tswana branches of the Southern Bantu (SB) language family and English is in the Western Germanic branch of the Indo-European (IE) language family.", "figure_data": "LanguageNo. speakers (millions)Language FamilyisiXhosa11.6SB: NguniisiZulu8.2SB: NguniSesotho4.0SB: Sotho-TswanaTswana3.8SB: Sotho-TswanaEnglish380IE: Western Germanic", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "to add as monoingual supplementary finetuning data. We use the NCHLT-clean partition of the dataset. The datasets used in this work are summarised in Table2.", "figure_data": "Lang(s) No. utts Duration (hrs)Soap Opera CorpusEng-Zul Eng-Xho Eng-Sot Eng-Tsn9347 7941 6303 65635.45 3.14 2.86 2.83isiZulu4467356.2NCHLT CorpusisiXhosa Sesotho Setswana 58414 46651 5753956.3 56.3 56.3English7741256.4", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "typeWEROne pair72.2xho-eng+all 4 pairs59.0+monolingual 77.5One pair60.8zul-eng+all 4 pairs50.8+monolingual 67.6One pair59.4sot-eng+all 4 pairs50.2+monolingual 63.3One pair51.4tsn-eng+all 4 pairs42.7+monolingual 60.4", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Results from frame-level language identification of the four South African languages and English", "figure_data": "", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Lang pairModel typeWEROne pair72.2xho-eng+tagsID +casingID83.4 87.9+multitaskID 75.2One pair60.8zul-eng+tagsID +casingID80.8 80.9+multitaskID 64.2One pair59.4sot-eng+tagsID +casingID76.3 89.4+multitaskID 65.6One pair51.4tsn-eng+tagsID +casingID72.6 86.6+multitaskID 64.5", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" } ]
Tolúlopé Ògúnrèmí; Christopher D Manning; Dan Jurafsky
[ { "authors": "H A Basem; Tien-Ping Ahmed; Tan", "journal": "", "ref_id": "b0", "title": "Automatic speech recognition of code switching speech using 1-best rescoring", "year": "2012" }, { "authors": "Ana Inés Ansaldo; Karine Marcotte; Lilian Scherer; Gaelle Raboyeau", "journal": "", "ref_id": "b1", "title": "Language therapy and bilingual aphasia: Clinical implications of psycholinguistic and neuroimaging research", "year": "2008" }, { "authors": "Alexei Baevski; Yuhao Zhou; Abdelrahman Mohamed; Michael Auli", "journal": "", "ref_id": "b2", "title": "wav2vec 2.0: A framework for self-supervised learning of speech representations", "year": "2020" }, { "authors": "Etienne Barnard; H Marelie; Charl Davel; Febe De Van Heerden; Jaco Wet; Badenhorst", "journal": "", "ref_id": "b3", "title": "The NCHLT speech corpus of the South African languages", "year": "2014" }, { "authors": "Astik Biswas; Emre Yılmaz; Ewald Van Der Westhuizen; Febe De Wet; Thomas Niesler", "journal": "Computer Speech & Language", "ref_id": "b4", "title": "Codeswitched automatic speech recognition in five south african languages", "year": "2022" }, { "authors": "Monojit Choudhury; Kalika Bali; Sunayana Sitaram; Ashutosh Baheti", "journal": "", "ref_id": "b5", "title": "Curriculum design for code-switching: Experiments with language identification and language modeling with deep neural networks", "year": "2017" }, { "authors": "Alexis Conneau; Alexei Baevski; Ronan Collobert; Abdelrahman Mohamed; Michael Auli", "journal": "", "ref_id": "b6", "title": "Unsupervised cross-lingual representation learning for speech recognition", "year": "2020" }, { "authors": "Kenneth Heafield", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "KenLM: Faster and smaller language model queries", "year": "2011" }, { "authors": "Genta Indra Winata; Andrea Madotto; Chien-Sheng Wu; Pascale Fung", "journal": "", "ref_id": "b8", "title": "Towards end-to-end automatic code-switching speech recognition", "year": "2018" }, { "authors": "Ke Li; Jinyu Li; Guoli Ye; Rui Zhao; Yifan Gong; ; ", "journal": "", "ref_id": "b9", "title": "Towards code-switching asr for end-to-end ctc models", "year": "2019" }, { "authors": "Ke Li; Jinyu Li; Guoli Ye; Rui Zhao; Yifan Gong", "journal": "", "ref_id": "b10", "title": "Towards code-switching asr for end-to-end ctc models", "year": "2019" }, { "authors": "Ne Luo; Dongwei Jiang; Shuaijiang Zhao; Caixia Gong; Wei Zou; Xiangang Li", "journal": "", "ref_id": "b11", "title": "Towards endto-end code-switching speech recognition", "year": "2018" }, { "authors": "Daucheng Lyu; Yuang Ren Yuan Lyu; Chun Nan Chin Chiang; Hsu", "journal": "", "ref_id": "b12", "title": "Speech recognition on codeswitching among the chinese dialects", "year": "2006" }, { "authors": "Carol Myers-Scotton", "journal": "", "ref_id": "b13", "title": "Code-switching. 
The handbook of sociolinguistics", "year": "2017" }, { "authors": "Thomas Niesler", "journal": "", "ref_id": "b14", "title": "A first south african corpus of multilingual code-switched soap opera speech", "year": "2018" }, { "authors": "Daniel Povey; Arnab Ghoshal; Gilles Boulianne; Lukas Burget; Ondrej Glembek; Nagendra Goel; Mirko Hannemann; Petr Motlicek; Yanmin Qian; Petr Schwarz", "journal": "IEEE Signal Processing Society", "ref_id": "b15", "title": "The kaldi speech recognition toolkit", "year": "2011" }, { "authors": "Saikrishna Rallabandi; Sunayana Sitaram; Alan W Black", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Automatic detection of code-switching style from acoustics", "year": "2018" }, { "authors": "Tongtong Song; Qiang Xu; Meng Ge; Longbiao Wang; Hao Shi; Yongjie Lv; Yuqin Lin; Jianwu Dang", "journal": "", "ref_id": "b17", "title": "Language-specific characteristic assistance for code-switching speech recognition", "year": "2022" }, { "authors": "Qinyi Wang; Emre Yılmaz; Adem Derinel; Haizhou Li", "journal": "", "ref_id": "b18", "title": "Code-switching detection using asrgenerated language posteriors", "year": "2019" }, { "authors": "Genta Indra Winata; Andrea Madotto; Chien-Sheng Wu; Pascale Fung", "journal": "", "ref_id": "b19", "title": "Towards end-to-end automatic code-switching speech recognition", "year": "2018" }, { "authors": "Emre Yılmaz; Henk Van Den; David Heuvel; Van Leeuwen", "journal": "IEEE", "ref_id": "b20", "title": "Code-switching detection using multilingual dnns", "year": "2016" }, { "authors": "Zhiping Zeng; Yerbolat Khassanov; Haihua Xu Van Tung; Eng Pham; Haizhou Siong Chng; Li", "journal": "", "ref_id": "b21", "title": "On the end-to-end solution to mandarin-english codeswitching speech recognition", "year": "2019" }, { "authors": "Shuai Zhang; Jiangyan Yi; Zhengkun Tian; Jianhua Tao; Yu Ting Yeung; Liqun Deng", "journal": "", "ref_id": "b22", "title": "Reducing multilingual context confusion for end-to-end codeswitching automatic speech recognition", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 70.87, 497.5, 219, 23.36 ], "formula_id": "formula_0", "formula_text": "Loss CT C+LID = λ CT C L CT C +(1-λ CT C )L LID(1)" } ]
2023-11-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b3", "b10", "b17", "b14", "b18", "b12", "b16", "b9", "b5", "b0", "b8", "b6", "b2", "b4", "b11", "b13", "b15" ], "table_ref": [], "text": "Image-guided navigation plays a crucial role in modern surgical procedures. In the field of orthopedics, many surgical procedures such as total hip arthroplasty, total knee arthroplasty, and pedicle screw injections utilize intraoperative fluoroscopy for surgical navigation [2,4,11]. Due to overlapping anatomical structures in X-Ray images, it is often difficult to correctly identify and reason the 3D structure from solely the image. Therefore, registration of an intraoperatively acquired X-Ray image to the preoperatively acquired CT scan is crucial in performing such procedures [18,15,19,13]. The standard procedure for acquiring highly accurate registration involves embedding fiducial markers into the patient and acquiring a preoperative CT scan [17,10,6]. Intraoperative registration is performed using the explicitly identified 2D-3D correspondences. Inserting fiducial markers onto the body involves extra surgical costs and might not be a viable option for minimally invasive surgeries. To circumvent such issues with the feature-based method, an intensity-based optimization scheme for registration has been extensively researched in the past [1,9]. Since the objective function is highly nonlinear for optimizing pose parameters, a good initialization is necessary for the method to converge in a global minimum. Therefore, it is usually accompanied by initial coarse registration using manual alignment of the 3D model to the image, which interrupts the surgical flow. On the other hand, learning-based methods have proved to be efficient in solving the registration task. Existing learning-based methods can be categorized broadly into two subdomains: landmark estimation and direct pose regression. Landmark estimation methods aim to solve for pose using correspondences between 3D landmark annotations and its estimated 2D projection points [7,3,5], while methods based on pose regression estimate the global camera pose in a single inference [12]. Pose regressors are known to overfit training data and generalize poorly to unseen images [14]. This makes the landmark estimation methods stand out in terms of registration quality as well as generalization. However, there exist two main issues with landmark estimation methods: 1) Annotation cost of a sufficiently large number of landmarks in the CT image. 2) Failure to solve for the pose in extreme views where projected landmarks are not visible or the number of visible landmarks is small. In this paper, these issues are addressed by introducing scene coordinates [16] to establish dense 2D-3D correspondences. Specifically, the proposed method regresses the scene coordinates of the CT-scan model from corresponding X-Ray images. A rigid transformation that aligns the CT-scan model to the image is then calculated by solving the PnP problem with the RANSAC algorithm." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "The problem of 2D-3D registration can be formulated as finding the rigid transformation that transforms the 3D model defined in the anatomical or world coordinate system to the camera coordinate system. 
Specifically, given a CTscan volume V CT (x w ) where x w is defined in the world coordinate system, the registration problem is concerned with finding T c w = [R|t] such that the following holds.\nI = ℜ{V CT (T c -1 w x c ); K}(1)\nWhere ℜ{•} is the X-Ray transform that can be applied to volumes in the camera coordinate system given an intrinsic matrix K and I, the target X-Ray image.\nFig. 1. An overview of the proposed method. Scene coordinates are regressed using a U-Net architecture given an X-Ray image. With the obtained dense correspondences, PnP with RANSAC is run to get the transformation matrix that aligns the projection of the 3D model with the X-Ray image in the camera coordinate system." }, { "figure_ref": [ "fig_0" ], "heading": "Registration", "publication_ref": [], "table_ref": [], "text": "The overview of the proposed registration pipeline is shown in Fig. 1. The proposed method has four parts. First, the scene coordinates are regressed given a single-view X-Ray image as the input to a U-Net model. Second, the PnP + RANSAC algorithm is used to solve for the pose of the captured X-Ray system. Third, the CT-scan volume is segmented to obtain a 3D model of the bone regions. And fourth, the computed rigid transformation from world coordinates to camera coordinates is used to generate projection overlay images.\nScene Coordinates. Scene coordinates are defined as the points of intersection between a camera's back-projected rays and the 3D model in a world coordinate system (i.e., only the first intersection and the last intersection are considered). The same concept is adapted for X-Ray images and its underlying 3D model obtained from CT-scan. Specifically, given an arbitrary point x ij in the image plane, the scene coordinates X ij satisfy the following conditions.\nX ij = [R T | -t](dK T x ij )(2)\nwhere R and t are the rotation matrix and translation vector that maps points in the world coordinate system to the camera coordinate system, K is the intrinsic matrix, d is the depth, as seen from the camera, of the point X on the 3D model.\nUncertainty Estimation. The task of scene coordinate regression is to estimate these X ij for every pixel ij, given an X-Ray image I. However, the existence of X ij is not guaranteed for all pixels since the back-projected rays may not intersect the 3D model. One of many ways to address such a case would be to prepare a mask (i.e., 1 if bone area, 0 otherwise) in advance so that only the pixels that lie inside the mask are estimated. Since this approach would require an explicit way of estimating the mask image, an alternative approach has been taken in this work. Instead of estimating a single X ij , the mean and variance of the scene coordinate is estimated. The non-intersecting scene coordinates are identified by applying thresholding to the estimated variance (i.e., points with high variance are considered non-existent scene coordinates and thus filtered out). This approach assumes that the observed scene coordinates are corrupted with a zero mean, non-zero and non-constant variance, and isotropic Gaussian noise.\nX ij ∼ N (u(I, x ij ), σ(I, x ij ))(3)\nWhere u(I, x ij ) and σ(I, x ij ) are the functions producing the mean and standard deviation of the scene coordinate respectively. This work represents these functions using a fully convolutional neural network.\nLoss Function. A U-Net architecture is used for estimating the mean and standard deviation of scene coordinates at every pixel in a given image. 
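To make the scene-coordinate definition above concrete, the following is a minimal sketch of how ground-truth scene coordinates can be generated from a rendered depth map and a known camera pose, in the spirit of Eq. (2). It uses the standard pinhole back-projection (K^-1 applied to homogeneous pixel coordinates, followed by the inverse of the world-to-camera rigid transform); the function name and the depth-map input are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def scene_coordinates_from_depth(depth, K, R, t):
    """Back-project a depth map into world-space scene coordinates.

    depth : (H, W) ray depths in the camera frame; non-finite or zero entries mark
            pixels whose back-projected ray never intersects the 3D model.
    K     : (3, 3) intrinsic matrix.
    R, t  : rotation (3, 3) and translation (3,) mapping world -> camera coordinates.
    Returns an (H, W, 3) array of world coordinates and an (H, W) validity mask.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW homogeneous pixels

    rays = np.linalg.inv(K) @ pix                  # back-projected ray directions (camera frame)
    cam_pts = rays * depth.reshape(1, -1)          # scale each ray by its depth
    world_pts = R.T @ (cam_pts - t.reshape(3, 1))  # invert the world -> camera rigid transform

    valid = np.isfinite(depth) & (depth > 0)       # rays that actually hit the model
    return world_pts.T.reshape(h, w, 3), valid
```

Pixels whose ray misses the bone model carry no valid scene coordinate, which is exactly the case the predicted variance is later used to flag.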
The loss function for intersecting scene coordinates is derived from the maximum likelihood estimates using the likelihood X ij . It can be written as follows:\nLoss intersecting = ( (X ij -u(I, x ij )) σ(I, x ij ) ) 2 + 2 log(σ(I, x ij ))(4)\nSince it is desired to have a high variance for non-existent scene coordinates, the loss function for non-existent coordinates is designed as follows:\nLoss non-existent = 1 σ(I, x ij )(5)\n2D-3D Registration. Iterative PnP implementation from OpenCV is run using RANSAC with maximum iteration of 1000 and reprojection error of 10px and 20px for simulated and real X-Ray images respectively. An example of a successful registration is shown in the left part of Fig. 2 below." }, { "figure_ref": [], "heading": "Experiments and Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Dataset", "publication_ref": [ "b6", "b2", "b2" ], "table_ref": [], "text": "To properly evaluate the proposed method, a dataset that contains 6 annotated CT scans each with several registered real X-Ray images from [7] was used.\nThe annotations include 14 landmarks and 7 segmentation labels. The CT scans are pelvic bones of cadaveric specimens. Since there are only a few real X-Ray images, simulated X-Ray images are generated from each CT-scans for training and testing the model. Specifically, DeepDRR [3] was used for simulating a Siemens Cios Fusion Mobile C-arm imaging device. Similar to [3], LAO/RAO of [-45, 45] degrees respectively were samples at 1-degree intervals. Random offset was added in each direction. The offset vector was sampled from a normal distribution of zero means and 90 mm standard deviation in the lateral direction, and 30 mm standard deviations each in the other two directions. Through this, images with partially visible structures were intentionally simulated. Some randomly picked samples are shown in the right part of Fig. 2. For each image, the ground truth scene coordinates were obtained from the 3D model of the CT scans. For each specimen, 8100 simulated X-Rays were generated, of which, 5184 images were randomly assigned as the training set, 1296 for the validation set, and the remaining 1620 for the test set." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "The U-Net was implemented in Pytorch 1.13.0. An image size of 512×512 is used for input as well as output scene coordinates. The output channel size is 8 (i.e., 3 for scene coordinates and 1 for standard deviation, multiplied by 2 for entry and exit points). The model was trained with each dataset individually (i.e., patient-specific models were obtained for each specimen) using Adam with a constant learning rate of 0.0001 and batch size of 16. Online data augmentation with a probability of 0.5 was applied for domain randomization. It included random invert, color jitter with brightness and contrast parameters each set to 1, and random erasing. The scene coordinates were filtered using a log variance threshold of 0 for simulated images and -2 for real X-Ray images." }, { "figure_ref": [], "heading": "Baselines and Evaluation Metrics", "publication_ref": [ "b7", "b6" ], "table_ref": [], "text": "The proposed method was compared against two other baseline methods: PoseNet [8] and DFLNet [7]. PoseNet was implemented using ResNet-50 as the backbone for the feature extractor and trained using geometric loss. 
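A compact PyTorch sketch of the heteroscedastic objective in Eqs. (4)-(5) may help clarify how the two terms are combined per pixel. The log-sigma parameterization and the tensor layout (three coordinate channels plus one deviation channel, matching the 8-channel entry/exit output described above) are assumptions consistent with the implementation details, not a verbatim reproduction of the authors' training code.

```python
import torch

def scene_coordinate_loss(pred_mean, pred_log_sigma, target, intersect_mask):
    """Heteroscedastic scene-coordinate loss in the spirit of Eqs. (4)-(5).

    pred_mean      : (B, 3, H, W) predicted means u(I, x_ij).
    pred_log_sigma : (B, 1, H, W) predicted log standard deviations (log-space keeps sigma positive).
    target         : (B, 3, H, W) ground-truth scene coordinates.
    intersect_mask : (B, 1, H, W) bool, True where the back-projected ray hits the model.
    """
    sigma = torch.exp(pred_log_sigma)

    # Eq. (4): Gaussian negative log-likelihood for intersecting pixels.
    sq_err = ((target - pred_mean) / sigma).pow(2).sum(dim=1, keepdim=True)
    nll = sq_err + 2.0 * pred_log_sigma

    # Eq. (5): penalize small sigma where no scene coordinate exists, pushing the variance up.
    non_existent = 1.0 / sigma

    mask = intersect_mask.float()
    return (mask * nll + (1.0 - mask) * non_existent).mean()
```

The same loss would presumably be applied to both the entry-point and exit-point channel groups of the 8-channel output.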
DFLNet uses the same architecture as the proposed method however the last layer regresses 14 heatmaps of the landmarks instead of scene coordinates. Note that the segmentation layer along with the gradient-based optimization phase in the original paper has been left out for architectural comparison. Each baseline was trained in a patient-specific manner following the proposed method. The mean target registration error (mTRE) and Gross Failure Rate (GFR), were used as the evaluation metric to compare with the baselines. mTRE is defined in 6, where X k is the position of the ground truth landmark Xk after applying the predicted transformation. GFR is the ratio of failed cases where the failed cases are defined as the registration results with mTRE greater than 10mm. Since we could only get the projection of ground truth landmarks and not the ground truth transformation matrix for the real X-Ray images, projected mTRE (proj. mTRE) was used for evaluation. It is similar to mTRE except the X k and Xk represent the projected coordinates of the landmarks, in the detector plane (i.e., the pixel coordinates are scaled according to the detector size to match the units).\nmTRE = 1 N k=N k=1 ∥X k -Xk ∥ 2 (6)" }, { "figure_ref": [ "fig_1" ], "heading": "Registration Results", "publication_ref": [ "b13" ], "table_ref": [ "tab_0", "tab_1" ], "text": "Simulated X-Ray Images. Table 1 shows the mTRE in the 25 th , 50 th , and 95 th percentile of the total test sample size and the GFR. For most of the specimens, the proposed method could retain the GFR below 20% whereas PoseNet and DFLNet fail to register with more than 20% GFR in most cases. For PoseNet, this is because the network cannot reason about the spatial structure and its local relation to the image patches. For DFLNet, this is inevitable due to the visibility issue of landmark points that were mostly located in the pubic region of the pelvis. Comparing the mTRE of each specimen with each method, the proposed method achieved an mTRE of 7.98 mm even in the 95 th percentile of specimen 2. DFLNet achieved the lowest mTRE of 0.98 mm in the 25 th percentile of specimen 4. This illustrates the highly accurate registration that landmark estimation methods are capable of. However, with extreme or partial views such as the one shown in Fig. 3, the method cannot estimate the correct pose parameter due to incorrect landmark localization or an insufficient number of visible landmarks. Please refer to the supplemental material for registration overlay results of different specimens using the proposed method.\nReal X-Ray Images. Table 2 shows the mTRE calculated on projected image points (abbreviated as proj. mTRE) for PoseNet and the proposed method. DFLNet could not adapt to real X-Ray images, therefore, was left out of the table. Since our dataset consisted mostly of images with partially visible hips, only a few landmarks are visible per image. This causes the DFLNet to overfit to the partially visible landmark distribution while our proposed model mitigates this issue by learning the general structure (i.e., every surface point that is visible). The proposed method estimated good transformations (i.e., proj. mTRE approximately around 10 mm in the 50 th percentile). In contrast, the proj. mTRE for PoseNet is significantly higher. This suggests that PoseNet overfits the training data despite the application of domain randomization. This result agrees with previous reports [14] that address this issue. 
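For completeness, the two evaluation metrics can be computed as below. This is one straightforward reading of Eq. (6), in which the annotated landmarks are mapped by both the ground-truth and the predicted pose before measuring distances; the 10 mm failure threshold follows the GFR definition above.

```python
import numpy as np

def mtre(landmarks_world, R_gt, t_gt, R_pred, t_pred):
    """Mean target registration error (Eq. (6)) for one registration result."""
    gt = landmarks_world @ R_gt.T + t_gt        # (N, 3) landmarks under the ground-truth pose
    pred = landmarks_world @ R_pred.T + t_pred  # (N, 3) landmarks under the predicted pose
    return np.linalg.norm(gt - pred, axis=1).mean()

def gross_failure_rate(mtre_values, threshold_mm=10.0):
    """GFR: fraction of registrations whose mTRE exceeds the failure threshold."""
    return float((np.asarray(mtre_values) > threshold_mm).mean())
```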
A visualization of the overlays is presented in the supplemental material." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "As the proposed method is designed for producing initial estimates of the pose parameters, an extra refinement step using an intensity-based optimization method is required for obtaining clinically relevant registration accuracy. Although the proposed method can provide a good initial estimate, the average runtime for the whole pipeline was 1.75 seconds which is around two orders of magnitude greater than PoseNet, which had an average runtime of 0.06 seconds. This is because RANSAC has to find a good pose from a dense set of correspondences. This issue may be addressed by heuristically selecting a good variance threshold per image that filters out bad correspondences. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper presented a scene coordinate regression-based approach for the X-Ray to CT-scan model registration problem. The proposed method does not require labeling of anatomical landmarks and is effective in extreme view angles. Experiments with simulated X-Ray images, as well as real X-Ray images, showed that the proposed method could perform well even under partially visible structures and extreme view angles, compared to direct pose regression and landmark estimation methods. Testing the model trained solely on simulated X-Ray images, on real X-Ray images did not result in catastrophic failure, instead, the results were positive for instantiating further refinement steps." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement This work was partially supported by a grant from JSPS KAKENHI Grant Number JP23K08618. This work (in part) used computational resources of Cygnus provided by Multidisciplinary Cooperative Research Program in Center for Computational Sciences, University of Tsukuba." } ]
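The 2D-3D registration step described in the method runs OpenCV's iterative PnP inside RANSAC with a maximum of 1000 iterations and a 10 px (simulated) or 20 px (real) reprojection threshold. Below is a minimal sketch of that call, assuming the dense correspondences have already been filtered with the variance threshold; the wrapper function itself is illustrative.

```python
import cv2
import numpy as np

def register_pnp_ransac(scene_coords, pixel_coords, K, reproj_err_px=10.0):
    """Solve for the world-to-camera pose from filtered dense 2D-3D correspondences.

    scene_coords : (N, 3) predicted scene coordinates in the world (CT) frame.
    pixel_coords : (N, 2) corresponding pixel locations in the X-Ray image.
    K            : (3, 3) intrinsic matrix of the simulated C-arm.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        scene_coords.astype(np.float32),
        pixel_coords.astype(np.float32),
        K.astype(np.float32),
        distCoeffs=None,
        iterationsCount=1000,
        reprojectionError=reproj_err_px,
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        raise RuntimeError("PnP + RANSAC did not converge to a pose")
    R, _ = cv2.Rodrigues(rvec)      # axis-angle rotation to 3x3 matrix
    return R, tvec.reshape(3), inliers
```

The recovered (R, t) is then used to project the segmented bone model back onto the image to produce the overlay visualizations.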
Intraoperative fluoroscopy serves as a frequently employed modality in minimally invasive orthopedic surgeries. Aligning the intraoperatively acquired X-Ray image with the preoperatively acquired 3D model of a computed tomography (CT) scan reduces the mental burden on surgeons induced by the overlapping anatomical structures in the acquired images. This paper proposes a fully automatic registration method that is robust to extreme viewpoints and does not require manual annotation of landmark points during training. It is based on a fully convolutional neural network (CNN) that regresses scene coordinates for a given X-Ray image. Scene coordinates are defined as the intersection of the back-projected ray from a pixel toward the 3D model. Training data for a patient-specific model is generated through a realistic simulation of a C-arm device using preoperative CT scans, while intraoperative registration is achieved by solving the perspective-n-point (PnP) problem with the random sample consensus (RANSAC) algorithm. Experiments were conducted using a pelvis CT dataset including several real fluoroscopic (X-Ray) images with ground-truth annotations. The proposed method achieved an average mean target registration error (mTRE) of 3.79 +/- 1.67 mm at the 50th percentile of the simulated test dataset and a projected mTRE of 9.65 +/- 4.07 mm at the 50th percentile of real fluoroscopic images for pelvis registration.
X-Ray to CT Rigid Registration Using Scene Coordinate Regression
[ { "figure_caption": "Fig. 2 .2Fig. 2. An example of successful registration with the proposed method (left two images) and Randomly picked data samples in the test set (right). The X-Ray image and model's gradient projection overlay (middle) and the pose of the model in the camera coordinates system (left). The origin of the view frustum is the X-Ray source position and the simulated X-Ray images are placed in the detector plane for visualization (right).", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. An example case illustrating an extreme partial viewpoint. The proposed method successfully registers the image with 1.70 mm mTRE, while PoseNet struggles with 16.95 mm mTRE. Since there is an insufficient number (less than 4) of visible landmarks, the DFLNet hallucinates landmarks providing incorrect 2D-3D correspondences which leads to large mTRE.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "The mean target registration errors each in 25 th , 50 th , and 95 th percentile of the simulated test dataset. All models are trained individually on the 6 specimens shown below. The proposed method outperforms other methods in terms of 50 th percentile mTRE and GFR in most of the specimens.", "figure_data": "PoseNetDFLNetOursmTRE[mm]↓mTRE[mm]↓mTRE[mm]↓Specimen 25 th50 th95 thGFR[%]↓ 25 th50 th95 thGFR[%]↓ 25 th50 th95 thGFR[%]↓#15.758.3724.8138.533.36212.59 680.5862.271.372.509.804.87#26.9710.2325.9551.631.987.04656.4146.151.152.147.982.54#35.427.8623.3435.351.032.51583.6328.651.673.0512.258.56#44.676.4616.9118.770.982.30558.2023.591.763.3819.1912.54#54.816.5218.9822.431.514.28767.5637.473.095.3017.3218.85#64.065.8518.4222.692.26139.96 15321.19 58.723.806.3718.2523.18mean5.287.5521.4031.571.8561.45 3094.60 42.812.143.7914.1311.76std1.021.623.7712.580.9091.88 5990.24 15.761.061.674.758.05", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The projective mean target registration error evaluated on real X-Ray dataset. The proposed method achieved significantly low registration errors compared to PoseNet, implying that it generalizes well to unseen data and domain.", "figure_data": "PoseNetOursNumberproj. mTRE [mm]↓proj. mTRE [mm]↓ofSpecimenImages25 th50 th95 th25 th50 th95 th#111143.6449.1164.235.458.0255.87#22419.4227.1843.682.743.326.48#310431.1838.9766.067.6011.85162.83#42435.5238.3757.0711.1215.5292.34#54838.9746.6069.066.099.0721.91#65534.5137.1347.727.1810.1420.78mean33.8739.5657.976.709.6560.04std8.257.7710.372.774.0759.15", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Pragyan Shrestha; Chun Xie; Hidehiko Shishido; Yuichi Yoshii; Itaru Kitahara
[ { "authors": "S Aouadi; L Sarry", "journal": "Comput. Vis. Image Underst", "ref_id": "b0", "title": "Accurate and precise 2D-3D registration based on x-ray intensity", "year": "2008-04" }, { "authors": "P Belei; A Skwara; M De La Fuente; E Schkommodau; S Fuchs; D C Wirtz; C Kämper; K Radermacher", "journal": "Comput. Aided Surg", "ref_id": "b1", "title": "Fluoroscopic navigation system for hip surface replacement", "year": "2007-05" }, { "authors": "B Bier; M Unberath; J N Zaech; J Fotouhi; M Armand; G Osgood; N Navab; A Maier", "journal": "", "ref_id": "b2", "title": "X-ray-transform invariant anatomical landmark detection for pelvic trauma surgery", "year": "2018-03" }, { "authors": "M P Bradley; J R Benson; J M Muir", "journal": "Cureus", "ref_id": "b3", "title": "Accuracy of acetabular component positioning using computer-assisted navigation in direct anterior total hip arthroplasty", "year": "2019-04" }, { "authors": "J Esteban; M Grimm; M Unberath; G Zahnd; N Navab", "journal": "Springer International Publishing", "ref_id": "b4", "title": "Towards fully automatic X-Ray to CT registration", "year": "2019" }, { "authors": "A K George; M Sonmez; R J Lederman; A Z Faranesh", "journal": "Med. Phys", "ref_id": "b5", "title": "Robust automatic rigid registration of MRI and x-ray using external fiducial markers for XFM-guided interventional procedures", "year": "2011-01" }, { "authors": "R B Grupp; M Unberath; C Gao; R A Hegeman; R J Murphy; C P Alexander; Y Otake; B A Mcarthur; M Armand; R H Taylor", "journal": "Int. J. Comput. Assist. Radiol. Surg", "ref_id": "b6", "title": "Automatic annotation of hip anatomy in fluoroscopy for robust and efficient 2D/3D registration", "year": "2020-05" }, { "authors": "A Kendall; M Grimes; R Cipolla", "journal": "", "ref_id": "b7", "title": "PoseNet: A convolutional network for Real-Time 6-DOF camera relocalization", "year": "2015-12" }, { "authors": "H Livyatan; Z Yaniv; L Joskowicz", "journal": "IEEE Trans. Med. Imaging", "ref_id": "b8", "title": "Gradient-based 2-D/3-D rigid registration of fluoroscopic x-ray to CT", "year": "2003-11" }, { "authors": "Jr Maurer; C R Fitzpatrick; J M Wang; M Y Galloway; R L Jr; R J Maciunas; G S Allen", "journal": "IEEE Trans. Med. Imaging", "ref_id": "b9", "title": "Registration of head volume images using implantable fiducial markers", "year": "1997-08" }, { "authors": "P Merloz; J Troccaz; H Vouaillat; C Vasile; J Tonetti; A Eid; S Plaweski", "journal": "Proc. Inst. Mech. Eng. H", "ref_id": "b10", "title": "Fluoroscopy-based navigation system in spine surgery", "year": "2007-10" }, { "authors": "S Miao; Jane Wang; Z Liao; R ", "journal": "", "ref_id": "b11", "title": "Real-time 2D/3D registration via CNN regression", "year": "2015-07" }, { "authors": "J C Reichert; A Hofer; G Matziolis; G I Wassilew", "journal": "J. Clin. Med. Res", "ref_id": "b12", "title": "Intraoperative fluoroscopy allows the reliable assessment of deformity correction during periacetabular osteotomy", "year": "2022-08" }, { "authors": "T Sattler; Q Zhou; M Pollefeys; L Leal-Taixé", "journal": "", "ref_id": "b13", "title": "Understanding the limitations of CNN-Based absolute camera pose regression", "year": "2019-06" }, { "authors": "C A Selles; M S H Beerekamp; P A Leenhouts; M J M Segers; J C Goslings; N W L Schep", "journal": "J. Hand Surg. 
Am", "ref_id": "b14", "title": "EF3X Study Group: The value of intraoperative 3-dimensional fluoroscopy in the treatment of distal radius fractures: A randomized clinical trial", "year": "2020-03" }, { "authors": "J Shotton; B Glocker; C Zach; S Izadi; A Criminisi; A Fitzgibbon", "journal": "", "ref_id": "b15", "title": "Scene coordinate regression forests for camera relocalization in RGB-D images", "year": "2013-06" }, { "authors": "T S Y Tang; R E Ellis; G Fichtinger", "journal": "Springer", "ref_id": "b16", "title": "Fiducial registration from a single X-Ray image: A new technique for fluoroscopic guidance and radiotherapy", "year": "2000" }, { "authors": "M Woerner; E Sendtner; R Springorum; B Craiovan; M Worlicek; T Renkawitz; J Grifka; M Weber", "journal": "Acta Orthop", "ref_id": "b17", "title": "Visual intraoperative estimation of cup and stem position is not reliable in minimally invasive hip arthroplasty", "year": "2016-06" }, { "authors": "J D Wylie; J A Ross; J A Erickson; M B Anderson; C L Peters", "journal": "Clin. Orthop. Relat. Res", "ref_id": "b18", "title": "Operative fluoroscopic correction is reliable and correlates with postoperative radiographic correction in periacetabular osteotomy", "year": "2017-04" } ]
[ { "formula_coordinates": [ 2, 254.91, 612.72, 225.69, 14.34 ], "formula_id": "formula_0", "formula_text": "I = ℜ{V CT (T c -1 w x c ); K}(1)" }, { "formula_coordinates": [ 3, 253.86, 542.02, 226.73, 11.72 ], "formula_id": "formula_1", "formula_text": "X ij = [R T | -t](dK T x ij )(2)" }, { "formula_coordinates": [ 4, 246.84, 239.29, 233.75, 9.65 ], "formula_id": "formula_2", "formula_text": "X ij ∼ N (u(I, x ij ), σ(I, x ij ))(3)" }, { "formula_coordinates": [ 4, 187.55, 370.67, 293.04, 23.22 ], "formula_id": "formula_3", "formula_text": "Loss intersecting = ( (X ij -u(I, x ij )) σ(I, x ij ) ) 2 + 2 log(σ(I, x ij ))(4)" }, { "formula_coordinates": [ 4, 246.26, 437.49, 234.33, 23.22 ], "formula_id": "formula_4", "formula_text": "Loss non-existent = 1 σ(I, x ij )(5)" }, { "formula_coordinates": [ 6, 243.58, 296.95, 237.01, 30.55 ], "formula_id": "formula_5", "formula_text": "mTRE = 1 N k=N k=1 ∥X k -Xk ∥ 2 (6)" } ]
2023-11-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0" ], "table_ref": [], "text": "Despite the remarkable success achieved by deep reinforcement learning (RL), its practical application often encounters significant hurdles, primarily stemming from its vulnerability in real-world settings. The performance of many RL methods tends to falter when confronted with disparities between training and testing scenarios, giving rise to critical safety and security concerns. At the heart of the challenge lies the imperative to develop policies robust enough to withstand disturbances and environmental changes, particularly in addressing the pervasive 'reality gap' problem. This issue encapsulates the formidable task of designing systems capable of effectively navigating the transition from controlled simulated training environments to the intricate and unpredictable conditions of the real world.\nOne notable limitation in current RL systems is the random selection of initial states for each training episode. This method overlooks the essential characteristics of the environment in the short term, resulting in policies that adeptly handle common scenarios but neglect fundamental yet rare situations and catastrophic phenomena [1]. Consider a self-driving car navigating through a city devoid of cars and pedestrians. Random placement in the city often allows the agent, maintaining a fixed speed and untouched steering, to achieve a high reward. However, crucial actions such as turning and changing lanes, while less frequent, remain vital. If the agent encounters difficulties in changing lanes but excels at turning, it should be exposed to more lane-changing scenarios than turning scenarios during training. This targeted exposure guides the agent toward mastering more sophisticated maneuvers, minimizing the risk of catastrophic events and enhancing overall learning. This paper introduces a novel approach, named Where2Start, aimed at training an agent with fewer episodes by strategically selecting initial states where the agent is prone to suboptimal actions. By doing so, the agent is compelled to learn from more challenging scenarios. Additionally, by excluding less informative states, the number of sampled trajectories significantly decreases. Our approach is versatile and can be seamlessly integrated with a wide array of 2 Related Work" }, { "figure_ref": [], "heading": "Robust RL", "publication_ref": [ "b1", "b4", "b5", "b7" ], "table_ref": [], "text": "The task of formulating robust strategies that can secure high rewards amidst adversarial environmental interferences, a notion commonly known as robust RL, has been examined in scholarly research. This involves creating policies that not only perform well under normal conditions but also maintain their performance when faced with unexpected or adverse changes in the environment [2][3] [4].\nSeveral studies have attempted to leverage adversarial attacks on neural networks to create disturbances in environmental dynamics. For instance, these works utilize the gradient of the critic network to generate disturbances that have meaningful long-term effects. This approach allows for the exploration of how these disturbances impact the performance and robustness of reinforcement learning algorithms in various environments [5][6] [7]. However, this line of research does not encompass scenarios or environments where the features of the environment are not directly correlated with observations. 
This leaves a gap in understanding how reinforcement learning algorithms perform when environmental features and observations are decoupled.\nSeveral studies are exploring ways to improve the robustness of systems by introducing an adversary into the environment, turning the problem into a zero-sum game. This setup is seen as a duel between two players: the protagonist, who is learning to perform a task, and the adversary, who is learning to disrupt the environment to make the protagonist fail. The adversary can cause various disruptions, such as creating disturbances that alter the environment's natural progression and behavior, or manipulating physical obstacles to change the protagonist's navigation routes or strategies. The ultimate goal is to train the protagonist to become more robust against the adversary's disruptions. [8][9][10] [11][12].This methodology does present certain challenges. For instance, there isn't a clear correlation between the states of the adversary and the protagonist in these environments. This is akin to the feature engineering of the adversary's states, which can introduce complexity and potential inaccuracies into the system. It's crucial to address these issues to ensure the effectiveness and reliability of the approach.\nIn our research, we explore a unique scenario where the agent is confronted with complexities in its observations, making it difficult to manage the random noise inherent in these observations. This environment presents unique challenges as the agent is expected to perform efficiently despite the unpredictability and variations introduced by the noise. This situation is particularly relevant as it closely mirrors real-world conditions. It's not uncommon for an agent's sensors to experience issues or malfunctions, which can further complicate the agent's tasks. These complications can arise from various factors such as environmental conditions, hardware limitations, or even software glitches. The agent, therefore, needs to be robust and adaptable, capable of making accurate decisions even when the input data is noisy or incomplete." }, { "figure_ref": [], "heading": "Sample Efficieny", "publication_ref": [ "b13", "b14", "b15", "b17", "b18", "b19", "b21" ], "table_ref": [], "text": "The upcoming topic will focus on the issue of how efficiently samples are used in reinforcement learning. The aim in reinforcement learning that is efficient in terms of data usage, especially when dealing with a Markov decision process without any prior knowledge, is to find a policy that achieves a specific goal with the fewest interactions. For a agent, it's a significant challenge to learn an effective policy with the least number of interactions, especially when there's no prior knowledge. Data-efficient methods have been proposed to focus on policy evaluation and enhancement stages. These methods usually use random techniques or fixed strategies to sample initial states. It's important to note that having a more informative initial states can help in learning a more accurate controller, which can lead to fewer interactions.\nIn many real world scenarios, each interaction with the environment comes at a cost, and it is desirable for deep reinforcement learning (RL) algorithms to learn with a minimal amount of samples [13]. Operating real-world physical systems, such as robots, can be expensive, which makes it crucial to learn with the fewest possible number of real-world experiments. 
This is where the concept of data-efficient reinforcement learning comes into play. It aims to develop a policy that can achieve the desired outcome with minimal interactions, thereby reducing the cost and time associated with numerous trials. The challenge lies in the fact that these systems often lack prior knowledge, making it difficult to establish an effective policy from the outset. However, by utilizing data-efficient methods and having a more informative initial states, we can expedite the learning of a precise controller, ultimately leading to a decrease in the number of required real-world interactions.\nThe issue of the impact of initial states dates back to the early years of reinforcement learning. In [14] a reset distribution has been suggested, which determines the next state based on a specific distribution. Extensive research has been carried out to identify the most critical states for effective training. Some approaches limit the initial state to a specific set of states. This can be seen, for example, in [15]. They utilized a collection of demonstration states, initiating each episode by resetting to a state from a demonstration. Interestingly, their agent does not exactly replicate the demonstrated behavior. Instead, it is capable of discovering new and innovative solutions that the human demonstrator might not have considered. This leads to achieving a higher score on Montezuma's Revenge than what was obtained using previously published methods.\nOther research has adopted an approach where a memory of the most significant previous states is retained, such as [16][17]. As an illustration, [17] presents a technique where the agent's past trajectories are recorded. They then revisit those intersections that seem to hold the potential for revealing new insights, and restart their investigation from those locations.\nMoreover, there exists a body of research focused on goal-oriented problems, which employs a \"reverse\" training approach. This method progressively learns to achieve the goal from a variety of starting points that are incrementally distanced from the goal [18].\nOur research stands out from others, especially regarding sample efficiency, due to our unique methodology. We assign a score to each state before initiating each episode, reflecting its level of uncertainty or sensitivity. This strategy is somewhat akin to randomly selecting the initial state, but it yields significantly better convergence than random sampling. We commence exploration from the state with the highest uncertainty or sensitivity, enabling us to develop a policy that concentrates on the most informative or sensitive regions. In addition, we offer a comprehensive framework that accommodates both off-policy and on-policy algorithms. This is a contrast to some previous works that only cater to specific scenarios such as tabular MDPs [19][20] [21]." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Relative Conditional Number", "publication_ref": [], "table_ref": [], "text": "The relative condition number is a metric that quantifies the potential variation in the output of a function due to a small perturbation in the input. This measure is particularly useful in environments with noise, as it allows for consistent output generation for an input and its slightly altered, or noisy, version. 
Essentially, it provides a mechanism to assess the stability of a function's output in the presence of minor fluctuations in the input, making it a valuable tool in maintaining output integrity amidst input uncertainties. For a function f with several variables, we could define the relative condition number as:\n∥J(X)∥.∥X∥ ∥f (X)∥(1)\nwhere J(X) is the jacobian matrix of partial derivaties of f at x.It is important to note that in this study, we have defined the function f : R n → R. Consequently, the Jacobian matrix is equivalent to its gradient. Therefore, the relative condition number that we utilize is as follows:\n∥∇f (X)∥.∥X∥ ∥f (X)∥(2)\nIn the context of our research, we apply the relative condition number in a unique manner. We designate the function 'f' to correspond to the value function, and then calculate the gradient in relation to the parameters of the policy network. This methodology allows us to effectively leverage the relative condition number within our research framework. The expression related to this methodology for each state s t is provided below.\n∥∇ θ V alue π (s t )∥.∥s t ∥ ∥V alue π (s t )∥(3)\nFurthermore, In our analysis, we disregard the term ∥s t ∥, as it does not hold as much significance as the other quantities. The final metric is presented as follows:\n∥∇ θ V alue π (s t )∥ ∥V alue π (s t )∥(4)" }, { "figure_ref": [], "heading": "GP", "publication_ref": [], "table_ref": [], "text": "Gaussian processes offer a method for directly contemplating the overarching characteristics of functions that could align with our data. They enable us to integrate features such as rapid variation, periodicity, conditional independencies, or translation invariance. The procedure commences with the definition of a prior distribution over potential reasonable functions. This prior is not in search of functions that match the dataset, but rather it aims to specify plausible overarching characteristics of the solutions, like their rate of variation with inputs. Upon data observation, this prior aids in inferring a posterior distribution over functions that could align with the data. Posterior samples can be utilized for making predictions by averaging the values of every potential sample function from the posterior. The posterior mean can be employed for point predictions, and a representation of uncertainty can also be provided to indicate confidence in the predictions. In this study, we employ two metrics to determine the uncertainty or sensitivity of various random states. Utilizing these metrics, we then apply a Gaussian Process (GP) model to fit our data. Essentially, we calculate a score based on these metrics for the entire state space. The state with the highest score is then selected. This approach allows us to systematically evaluate and select the most significant states based on our established criteria." }, { "figure_ref": [], "heading": "Soft Actor Critic", "publication_ref": [], "table_ref": [], "text": "SAC incentivizes stochastic exploration through entropy regularization on its policy, this random action noise is insufficient for thoroughly exploring complex environments. Simply increasing entropy leads to undirected state space coverage, wasting samples in already-visited regions. Intelligently searching the state space requires more than injecting random noise into the policy. SAC lacks directed goals for seeking out novel and uncertain states. 
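The metric of Eq. (4) can be computed directly with automatic differentiation. The sketch below follows the spirit of Algorithm 1: approximate the state value by probability-weighting Q-values of actions sampled from the current policy, then take the gradient of that value with respect to the policy parameters. Here `policy` is assumed to be a `torch.nn.Module` whose forward pass returns the Gaussian action mean and standard deviation, `q_net(state, action)` a scalar critic, and `state` a single unbatched tensor; all of these interfaces are assumptions for illustration.

```python
import torch

def relative_condition_number(state, policy, q_net, n_actions=16):
    """||grad_theta V_pi(s)|| / |V_pi(s)| for one state, as in Eq. (4)."""
    mu, sigma = policy(state)                       # Gaussian policy head (assumed interface)
    dist = torch.distributions.Normal(mu, sigma)
    actions = dist.sample((n_actions,))             # candidate actions drawn from the policy

    # Probability-weighted value estimate, as described in Algorithm 1.
    probs = dist.log_prob(actions).sum(-1).exp()
    q_vals = torch.stack([q_net(state, a) for a in actions]).squeeze()
    value = (probs * q_vals).sum()

    grads = torch.autograd.grad(value, [p for p in policy.parameters() if p.requires_grad],
                                allow_unused=True)
    grad_norm = torch.sqrt(torch.stack([g.pow(2).sum() for g in grads if g is not None]).sum())
    return (grad_norm / value.abs().clamp_min(1e-8)).item()
```

States with a large ratio are those where a small change in the policy parameters noticeably moves the estimated value, which is exactly where Where2Start prefers to start an episode.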
To enable efficient exploration, the agent needs logic for identifying regions of unpredictability and information gain, setting intrinsic goals for visiting these areas. This targeted search across the state space could uncover new states faster than entropy-driven noise. Overall, SAC's core algorithm does not include mechanisms for focused, information-maximizing exploration." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we detail our methodologies for bolstering the robustness of our model against environmental noise. Despite comprehensive experiments, conventional off-policy reinforcement learning algorithms such as Soft Actor-Critic (SAC) often struggle to achieve robustness against random environmental disturbances. To tackle this, we put forth two distinct strategies. These strategies are specifically designed to enhance the robustness of SAC, thereby making it more resilient to environmental noise. The specifics of these strategies will be elaborated in the following sections." }, { "figure_ref": [], "heading": "The issue of environmental noise", "publication_ref": [], "table_ref": [], "text": "To begin with, we try to understand how the model could handle environmental noise. When some perturbation is applied to the agent's observation, the agent should perform as before. Additionally, the agent cannot handle some states in the environment or behave differently in them because it does not learn how to act in them. Seeing a variety of situations is therefore crucial to training an agent effectively and robustly. As a result, the agent is aware of how to behave in different situations, and when there is noise in the observation, which causes a different state for the agent, the agent should know how to react appropriately." }, { "figure_ref": [], "heading": "Random selection of initial states", "publication_ref": [], "table_ref": [], "text": "We propose an initial strategy that involves a more extensive random selection of starting states, marking a departure from traditional methods. While many existing reinforcement learning algorithms also employ random selection, they typically limit their initial states to a confined region within the environment. This restricted scope of random noise may not fully explore the state space, potentially limiting the model's resilience against random disturbances. Our approach, on the other hand, champions a wider selection of initial states with the goal of bolstering the model's robustness.\nOwing to this comprehensive exploration, our model encounters a variety of situations and is better equipped to handle noisier settings. It's important to note that this broader random selection is not available in existing baseline algorithms." }, { "figure_ref": [], "heading": "A new metric for measuring the importance of states", "publication_ref": [], "table_ref": [], "text": "In this section, we focus on a key measure of significance: the Relative condition number. This measure is used to assess the stability of the system. A lower condition number indicates a more stable system, while a higher number suggests potential instability. This metric provides a comprehensive understanding of the system's behavior and performance. To identify the state with the highest score using this method, we initially select various states at random and compute the specified scores for these states. 
Following this, we implement a Gaussian process on the states to estimate scores across the entire state space. The state yielding the highest score, as determined by the output of the Gaussian process, is then selected. This approach allows us to efficiently explore the state space and identify states of interest based on their scores. " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b22" ], "table_ref": [], "text": "We designed some experiments to evaluate the performance and measure the impacts of Our proposed method compared to state-of-the-art reinforcement learning algorithms. To demonstrate this enhancement we conducted extensive benchmarking experiments analyzing the performance of SAC [22] versus On-StabilitySAC using the condition number metric across three distinct environments. The environments examined were the OpenAI benchmarks Pendulum-v1, MountainCarContinuous-v0, and Deepmind's Swimmer-v3. Pendulum-v1 represents a typical environment that is readily solved by common algorithms. MountainCarContinuous-v0 provides a sparse reward landscape, often trapping algorithms in local minima as exemplified by SAC (1). Swimmer-v3 offers a high dimensional observation space inducing substantial computational complexity.\nWhere2Start: Leveraging initial States for Robust and Sample-Efficient Reinforcement Learning Note : In comparison to the baseline SAC, our method, SAC + RandomSelection, expands the exploration range from the initial state to cover the entire observation space while opting for random selection" }, { "figure_ref": [], "heading": "Robustness to different noise regimes", "publication_ref": [], "table_ref": [], "text": "Performance of SAC versus On-StabilitySAC using the condition number metric across the mentioned environments under four different noise regimes. The noise types tested were L 0 , L 2 , L infinity , and Gaussian noise, with the first three representing Lp noise models. These experiments aimed to demonstrate our model's robustness across diverse noise types. Overall, our contributions illustrate enhanced stability and noise resistance compared to established RL techniques on representative benchmark environments and noise models.1 Figure 1: Our experiments underscored the challenge of noise robustness in MountainCarContinuous, a sparse reward environment. Despite the low dimensional observation space, the SAC algorithm failed to identify the global minimum in MountainCarContinuous. This inability to converge highlights the difficulty reinforcement learning algorithms face in overcoming local minima and sparse rewards. Our model demonstrates superior noise resilience in this setting, reaching the global minimum under L0, L2, Linfinity, and Gaussian noise regimes." }, { "figure_ref": [], "heading": "Effect of exploration metric", "publication_ref": [], "table_ref": [], "text": "Our results indicate that the choice of exploration metric significantly impacts measured performance. We compared our model using two approaches for selecting initial states: random selection versus selection based on the condition number metric. This benchmarking demonstrated substantially different outcomes depending on the exploration metric used. When initial states were chosen randomly, our model showed modest improvements compared to baseline algorithms. 
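Putting the pieces together, the selection of the next initial state can be sketched as: score a small batch of randomly drawn states, fit a Gaussian process to those scores, and restart from the candidate with the highest predicted score (or the highest predictive uncertainty, mirroring the alternation in Algorithm 2). The scikit-learn regressor, the hook names `score_fn` / `sample_state_fn`, and the candidate counts are illustrative assumptions rather than the exact experimental setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def select_initial_state(score_fn, sample_state_fn, n_scored=32, n_candidates=512, explore=False):
    """Pick the next episode's initial state from GP predictions over candidate states.

    score_fn        : maps a state to its sensitivity score (e.g. the relative condition number).
    sample_state_fn : draws a random state from the whole observation space.
    """
    scored_states = np.stack([sample_state_fn() for _ in range(n_scored)])
    scores = np.array([score_fn(s) for s in scored_states])

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(scored_states, scores)

    candidates = np.stack([sample_state_fn() for _ in range(n_candidates)])
    mean, std = gp.predict(candidates, return_std=True)

    idx = np.argmax(std) if explore else np.argmax(mean)   # uncertainty vs. highest predicted score
    return candidates[idx]
```

The environment is then reset to the returned state at the start of the episode, as in Algorithm 2.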
However, conditioning initial states on the condition number revealed dramatic enhancements in stability and noise resilience.2 " }, { "figure_ref": [], "heading": "Scalability and Sample efficiency", "publication_ref": [], "table_ref": [], "text": "Duo to evaluate our model's data efficiency we take advantage of the training curves of our proposed method, on-stability SAC utilizing the condition number metric compared to the baseline SAC conditioning initial state on Random selection on both the Swimmer and MountainCarContinuous environments. Additionally, On-Stability SAC reaches its maximum performance in approximately 150,000 timesteps, while SAC requires over 300,000 interactions to converge. These results validate our approach's ability to attain superior asymptotic performance with fewer environmental interactions on complex tasks. The condition number metric enables more efficient learning, allowing our algorithm to surpass baseline SAC's performance with nearly 50% less experience.\nAlso by having the advantage of sample efficiency search, we have to compare the order of computation time per one episode of models, to ensure how scalable methods are versus complex environments. A comprehensive examination of environmental complexity, spanning from Pendulum-v1 to Swimmer-v3, reveals a heightened density in the observation space due to an increased number of dimensions. Consequently, the exploration of numerous states becomes essential to identify an optimal initial state. (It is noteworthy that the quantity of states considered for metric calculation exhibits an exponential relationship with the observation space's dimension.) As a result, the computational cost associated with our metric calculation is conditioned upon the extensive application of the backward() function on the value function of each state. This, in turn, leads to significant time-consuming training. Nonetheless, there is a conceptual approach wherein leveraging the backward function throughout the training process by baseline algorithms enables the calculation of gradients for the value function, allowing for their efficient reuse. Now, by incorporating sample efficiency and scalability through the reuse concept, we can assert that our method is poised to be applied effectively in complex environments." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this study, we introduced the innovative approach Where2Start to tackle pivotal challenges in deep reinforcement learning (RL), with a particular focus on mitigating vulnerabilities during the transition from simulated training environments to real-world applications. The persistent 'reality gap' dilemma highlights the imperative for RL systems to dynamically adapt to the intricacies and unpredictabilities inherent in diverse scenarios.\nWhere2Start takes a strategic approach to selecting initial states for training episodes based on the agent's suboptimal actions, providing a promising avenue for bolstering RL robustness. The method systematically exposes the agent to challenging scenarios influenced by conditional numbers, fostering a more comprehensive understanding of its environment.\nMoreover, Where2Start contributes to efficiency gains by significantly reducing the number of sampled trajectories. 
This efficiency, coupled with the adaptability of our approach to seamlessly integrate with various state-of-the-art methods, positions Where2Start as a valuable tool for achieving superior RL results in a condensed number of training steps. Notably, our experiments reveal that Where2Start can converge to a better sub-optimal agent up to 8 times faster than common approaches. However, it is essential to acknowledge that the computation cost of the conditional number at the start of each episode is up to 95 times higher than common approaches, a critical consideration for future research endeavors.\nThe emphasis on stability within our approach establishes a solid foundation for ongoing advancements in RL research and applications. As we navigate the evolving landscape of artificial intelligence, Where2Start signifies a significant stride toward more reliable, safer, and efficient RL systems, setting the stage for broader and more impactful real-world implementations." } ]
Reinforcement learning algorithms that focus on how to compute the gradient and choose the next actions have effectively improved the performance of agents. However, these algorithms are environment-agnostic: they do not exploit the knowledge captured by previously sampled trajectories. As a consequence, such algorithms must sample many trajectories to train the model. By considering the characteristics of the environment and how much the agent learns from each scenario in that environment, the learning strategy can be changed. The revised strategy retrieves more informative trajectories, so the agent can learn from fewer trajectory samples. We propose the Where2Start algorithm, which selects the initial state of each episode so that the agent experiences more instability in the vicinity of that state. We show that this kind of selection decreases the number of trajectories that must be sampled before the agent reaches an acceptable reward. Our experiments show that Where2Start can improve sample efficiency by up to 8 times. Moreover, Where2Start can be combined with most state-of-the-art algorithms and significantly improves their robustness and sample efficiency.
WHERE2START: LEVERAGING INITIAL STATES FOR ROBUST AND SAMPLE-EFFICIENT REINFORCEMENT LEARNING
[ { "figure_caption": "Figure 2 :2Figure 2: We compared the convergence rates across metrics in Pendulum-v1, MountainCarContinuous-v0, and Swimmer-v3 benchmark environments. The x-axis indicates training time in thousands of timesteps, while the y-axis shows the mean cumulative reward over 100 evaluation episodes. The condition number metric achieves superior cumulative rewards given equal interactions with the environment. Dashed lines demonstrate the substantial additional training time required for alternative metrics to reach parity with the condition number's performance.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "33", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: On-Stability SAC achieves substantially higher cumulative rewards over the course of training compared to SAC. Additionally, On-Stability SAC reaches its maximum performance in approximately 150,000 timesteps, while SAC requires over 300,000 interactions to converge. These results validate our approach's ability to attain superior asymptotic performance with fewer environmental interactions on complex tasks. The condition number metric enables more efficient learning, allowing our algorithm to surpass baseline SAC's performance with nearly 50% less experience.", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 Calculate Condition Number of Value FunctionInput: Observations state S 0 , policy parameters θ, Q-function parameters ϕ, number of sampled actions n µ, σ ← π θ (S 0 ) sampled actions ← N (µ, σ, n) sampled actions probability ← calculate probability(µ, σ, sampled actions) value function(S 0 ) ← action in sampled actions Q ϕ (S 0 , action) • sampled actions probability[action] gradients ← ∇ θ value function(S 0 ) output: condition number ← ||gradients||", "figure_data": "||value function||Algorithm 2 Training on-stability policy gradienti ← argmax(variance)initial_state ← X testielsei ← argmax(mean)initial_state ← X testiend ifobs ← reset env with the initial_statewhile True doaction ← π(obs)obs, reward, done, ← env.step(action)if done thenbreakend ifend whileπ ← update policy's parametersZ train ← Z testend for", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "On-Stability SAC + ConditionNumber, while exhibiting sample efficiency in search environments, is associated with a high computation time due to the complexity of environments. This complexity could potentially overshadow data efficiency in search, making it important to consider in particularly complex environments.", "figure_data": "1", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
Pouya Parsa; Zare Raoof; Moayedi; Mohammad Bornosi; Mohammad Mahdi Bejani
[ { "authors": "M Robert; French", "journal": "Trends in cognitive sciences", "ref_id": "b0", "title": "Catastrophic forgetting in connectionist networks", "year": "1999" }, { "authors": "Marek Petrik; Reazul Hasan; Russel ", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Beyond confidence regions: Tight bayesian ambiguity sets for robust mdps", "year": "2019" }, { "authors": "Reazul Hasan; Russel ; Bahram Behzadian; Marek Petrik", "journal": "", "ref_id": "b2", "title": "Entropic risk constrained soft-robust policy optimization", "year": "2020" }, { "authors": "", "journal": "", "ref_id": "b3", "title": "Where2Start: Leveraging initial States for Robust and Sample-Efficient Reinforcement Learning", "year": "" }, { "authors": "Esther Derman; Matthieu Geist; Shie Mannor", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b4", "title": "Twice regularized mdps and the equivalence between robustness and regularization", "year": "2021" }, { "authors": "Anay Pattanaik; Zhenyi Tang; Shuijing Liu; Gautham Bommannan; Girish Chowdhary", "journal": "", "ref_id": "b5", "title": "Robust deep reinforcement learning with adversarial attacks", "year": "2017" }, { "authors": "Lucas Schott; Hatem Hajri; Sylvain Lamprier", "journal": "IEEE", "ref_id": "b6", "title": "Improving robustness of deep reinforcement learning agents: Environment attack based on the critic network", "year": "2022" }, { "authors": "Sandy Huang; Nicolas Papernot; Ian Goodfellow; Yan Duan; Pieter Abbeel", "journal": "", "ref_id": "b7", "title": "Adversarial attacks on neural network policies", "year": "2017" }, { "authors": "Xiaobai Ma; Katherine Driggs-Campbell; Mykel; Kochenderfer", "journal": "IEEE", "ref_id": "b8", "title": "Improved robustness and safety for autonomous vehicle control with adversarial reinforcement learning", "year": "2018" }, { "authors": "Johannes Heinrich; Marc Lanctot; David Silver", "journal": "PMLR", "ref_id": "b9", "title": "Fictitious self-play in extensive-form games", "year": "2015-07-09" }, { "authors": "Johannes Heinrich; David Silver", "journal": "", "ref_id": "b10", "title": "Deep reinforcement learning from self-play in imperfect-information games", "year": "2016" }, { "authors": "Parameswaran Kamalaruban; Yu-Ting Huang; Ya-Ping Hsieh; Paul Rolland; Cheng Shi; Volkan Cevher", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b11", "title": "Robust reinforcement learning via adversarial training with langevin dynamics", "year": "2020" }, { "authors": "Lerrel Pinto; James Davidson; Rahul Sukthankar; Abhinav Gupta", "journal": "PMLR", "ref_id": "b12", "title": "Robust adversarial reinforcement learning", "year": "2017" }, { "authors": "Vincent François-Lavet; Peter Henderson; Riashat Islam; Marc G Bellemare; Joelle Pineau", "journal": "Foundations and Trends® in Machine Learning", "ref_id": "b13", "title": "An introduction to deep reinforcement learning", "year": "2018" }, { "authors": "Sham Kakade; John Langford", "journal": "", "ref_id": "b14", "title": "Approximately optimal approximate reinforcement learning", "year": "2002" }, { "authors": "Tim Salimans; Richard Chen", "journal": "", "ref_id": "b15", "title": "Learning montezuma's revenge from a single demonstration", "year": "2018" }, { "authors": "Arash Tavakoli; Vitaly Levdik; Riashat Islam; Christopher M Smith; Petar Kormushev", "journal": "", "ref_id": "b16", "title": "Exploring restart distributions", "year": "2018" }, { "authors": "Adrien Ecoffet; 
Joost Huizinga; Joel Lehman; Kenneth O Stanley; Jeff Clune", "journal": "", "ref_id": "b17", "title": "Go-explore: a new approach for hard-exploration problems", "year": "2019" }, { "authors": "Carlos Florensa; David Held; Markus Wulfmeier; Michael Zhang; Pieter Abbeel", "journal": "PMLR", "ref_id": "b18", "title": "Reverse curriculum generation for reinforcement learning", "year": "2017" }, { "authors": "Trey Smith; Reid Simmons", "journal": "", "ref_id": "b19", "title": "Focused real-time dynamic programming for mdps: Squeezing more out of a heuristic", "year": "2006" }, { "authors": "Andrew G Barto; Steven J Bradtke; Satinder P Singh", "journal": "Artificial Intelligence", "ref_id": "b20", "title": "Learning to act using real-time dynamic programming", "year": "1995" }, { "authors": "H ; Brendan Mcmahan; Maxim Likhachev; Geoffrey J Gordon", "journal": "Association for Computing Machinery", "ref_id": "b21", "title": "Bounded real-time dynamic programming: Rtdp with monotone upper bounds and performance guarantees", "year": "2005" }, { "authors": "Tuomas Haarnoja; Aurick Zhou; Pieter Abbeel; Sergey Levine", "journal": "PMLR", "ref_id": "b22", "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "year": "2018" } ]
[ { "formula_coordinates": [ 3, 278.5, 472.59, 262.17, 22.31 ], "formula_id": "formula_0", "formula_text": "∥J(X)∥.∥X∥ ∥f (X)∥(1)" }, { "formula_coordinates": [ 3, 274.62, 546.35, 266.05, 22.31 ], "formula_id": "formula_1", "formula_text": "∥∇f (X)∥.∥X∥ ∥f (X)∥(2)" }, { "formula_coordinates": [ 3, 260.09, 642.6, 280.58, 23.23 ], "formula_id": "formula_2", "formula_text": "∥∇ θ V alue π (s t )∥.∥s t ∥ ∥V alue π (s t )∥(3)" }, { "formula_coordinates": [ 3, 270.55, 702.11, 270.12, 23.23 ], "formula_id": "formula_3", "formula_text": "∥∇ θ V alue π (s t )∥ ∥V alue π (s t )∥(4)" } ]
[ { "figure_ref": [], "heading": "Table of contents", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Title page i", "publication_ref": [ "b6", "b35", "b8", "b7", "b17", "b33", "b20", "b9", "b21", "b34", "b18", "b5", "b11", "b4", "b9", "b43", "b10", "b3", "b1", "b40" ], "table_ref": [], "text": "Abstract ii Over the past few years, research interest in movement and trajectory analysis has increased considerably because researchers are looking to aid fields such as transporta-tion planning [7] and its dynamics [35], livestock monitoring [9], and robotics [8], to name a few. Trajectory data captures the change in the position of an object relative to time. The most common sources of collection of such data are devices such as AIS sensors, GPS sensors, and mobile devices [18], which rely heavily on internet connectivity. Several tasks are necessary to properly work with trajectory data in a data mining setup, including: (i) data fusion [33,21]; (ii) compression [10,22];(iii) seg-mentation [34,19,6]; (iv) classification [12,5]; (v) clustering [10,42]; and (vi) outlier detection [11,4,2] to name a few. Furthermore, the collection of trajectory data has also raised some privacy concerns regarding tracking human subjects and their movements [40]. As a result, researchers often find it difficult to collect high-quality and sufficiently large datasets required for data-intensive tasks such as machine learning and data mining." }, { "figure_ref": [], "heading": "Acknowledgements iii", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Table of contents v", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "List of tables vii", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "List of figures viii", "publication_ref": [ "b28", "b30", "b24", "b38", "b13", "b14", "b39", "b44", "b13" ], "table_ref": [], "text": "1\nA popular way of dealing with the lack of data in many computer science domains is to generate synthetic data by performing data augmentation on existing real data. Data augmentation is the process of generating synthetic data by applying transfor-mations to the original data. Data augmentation has been proven highly effective in computer vision for object detection and image classification tasks [28,30]. Not only have the data augmentation techniques been shown to increase the efficiency of the machine learning models, but they have also been shown to make the machine learning models more robust. Furthermore, data augmentation has been proven as an excellent alternative to enlarge datasets, cost-effectively using neural networks such as GANs (generative adaptive networks) [25].\nTo benefit from this idea, this work introduces AugmenTRAJ, a state-of-the-art and novel trajectory data augmentation package in python, that provides a collection of data augmentation techniques for trajectory data. AugmenTRAJ aims to provide the researchers with a streamlined process of augmenting trajectory data using popular Python 3 tools such as pandas [38] and PTRAIL [14,15]. In summary, this thesis contributes with the following:\n• AugmenTRAJ framework written in Python 3 that uses high Object Oriented Programming standards to allow for easy extension and ease of use in various environments.\n• The implementation of AugmenTRAJ framework consists of the following packages:\n1. 
Candidate Trajectory Selection Strategies • A subpackage of testing utilities that contains functions that can be used to set up the testing framework detailed in Section 3.4.\n• The strategies provided by the framework have been tested extensively for cor-rectness and accuracy using three datasets containing trajectory data from var-ious domains, namely the Starkey [39] dataset for animal movement tracking, Geolife [43] and Traffic [14] dataset for transportation analysis." }, { "figure_ref": [], "heading": "Chapter 2", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Literature Review", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Overview of Data Augmentation Techniques", "publication_ref": [ "b25", "b0" ], "table_ref": [], "text": "In recent years, data augmentation has gained much traction in the fields such as image processing, speech and audio processing, medical image processing, etc. As a result, immensely popular tools such as PyTorch [26] and TensorFlow [1] have embedded a dedicated sub-package for image data augmentation within their deep learning frameworks.\nGenerally, data augmentation techniques can be classified into two separate categories based on the time when augmentation is done in the machine learning pipeline that can be data augmentation during data preprocessing step 2.1.1 or data augmentation during model training step 2.1.2." }, { "figure_ref": [], "heading": "Data Augmentation During Data Preprocessing Step", "publication_ref": [ "b16" ], "table_ref": [], "text": "Data augmentation during preprocessing is a very popular strategy due to its relative ease of use compared to active learning data augmentation. Furthermore, there are a wide variety of data augmentation techniques for various data domains, which further aids the popularity of data augmentation during preprocessing. According to [17], such techniques are:" }, { "figure_ref": [], "heading": "Geometric Transformations", "publication_ref": [ "b22" ], "table_ref": [], "text": "• Geometric transformations for data augmentation include techniques such as data jittering, image cropping, image flipping, color distortion, geometric rotation, projection, and so on [23].\n• Geometric transformations are generally used in image and video process-ing and are often very useful in the field of medical image processing to generate synthetic data due to the limited availability of data because of privacy reasons." }, { "figure_ref": [], "heading": "Fourier Transformations", "publication_ref": [ "b24" ], "table_ref": [], "text": "• Fourier transformations are another immensely popular data augmentation technique primarily used in the image and sound processing domain.\n• Fourier transforms are generally applied by converting the image from the spatial domain to Fourier domains and then by applying methods such as Fast Fourier Transform and Gaussian Noise injection to generate a con-trolled amount of variance in the data [25]." 
}, { "figure_ref": [], "heading": "Time Series Augmentation", "publication_ref": [ "b24", "b12" ], "table_ref": [], "text": "• Time-series data is a collection of data points collected at certain time intervals and ordered chronologically along with the flow of time.\nTime-series data has become widely popular recently in domains such as financial analysis (stock markets and cryptocurrencies), livestock monitoring, and website usage monitoring.\n• In recent times, Time-series data has gained a lot of traction.\nTechniques such as time warping, slicing, permutation, and interpolation have become widely popular [25].\n4. Many other data augmentation techniques specific to domains have become immensely popular due to recent advancements in machine learning and artificial intelligence [13] but are not that relevant for the purposes of this work." }, { "figure_ref": [], "heading": "Data Augmentation During Model Training", "publication_ref": [ "b37", "b37", "b29", "b19", "b41" ], "table_ref": [], "text": "With the recent innovations and advancements in the field of computational capacity, along with the availability of faster computational chips, neural networks have become a popular alternative for generating synthetic data during the training phase of the model. Techniques such as Generative Adaptive Networks (GANs), Encoder-Decoder Networks have become very popular recently.\n1. Generative Adversarial Networks (GANs)\n• Generative Adversarial Networks often comprise a combination of the Gen-erator Network and the Discriminator network. The main aim of the generator network is to create samples of synthetic data, whereas the discriminator network tries to tell apart the real and the synthetic data [37].\n• As Wang et al. [37] have mentioned, the effectiveness of Generative Adversarial Networks stems from their ability to estimate the distribution of the given data and generate synthetic data from it.\n• As a result, several Generative Adversarial Network architectures such as DCGAN [29] (Deep Convolutional Generative Adversarial Network)\nand StyleGAN [20] for computer vision applications andTimeGAN [41] for time-series applications have become immensely popular." }, { "figure_ref": [], "heading": "Encoder-Decoder Networks", "publication_ref": [ "b15", "b32", "b2" ], "table_ref": [], "text": "• Encoder-Decoder networks are another popular alternative with an architecture similar to the Generative Adversarial Networks.\n• As the name suggests, encoder-decoder networks consist of two distinct structures; the encoder network, which transforms reduces the higher dimensional data into a lower dimension, whereas the decoder network tries to reconstruct the lower dimensional data into the original higher dimensional [16].\n• As a result of the above, encoder-decoder networks seldom generate purely synthetic data; rather, the data generated by encoder-decoder networks is usually a combination of features learned during the decoder process.\n3. Apart from the Generative Adversarial Networks and Encoder-Decoder networks, other methods such as Rule-Based data generation techniques such as Procedural Content Generation (PCG) [32] are also very popular in the gaming context. Apart from that, Bae et al. ( 2018) [3] have also used Perlin Noise to perform data augmentation on HRCT (High-Resolution Computed Tomogra-phy) images in the context of medical image analysis." 
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Trajectory data is usually stored as points spaced apart in time and location data at each point. Due to such intricate nature of the data, geometric augmentation methods cannot be applied to trajectory data. Furthermore, Fourier transformations are seldom used outside of the image and wave analysis domain and are unsuited for the trajectory analysis domain. However, time-series data closely resemble trajectory data because trajectory data is similar to time-series data as it has points spread apart in time. As a result, augmentation methods in the time-series domain can be modified to be applied to the trajectory domain.\nTherefore in this work, we took inspiration from such techniques and modified them to suit the needs of the trajectory data analysis domain.\nIn the next section, we will discuss the novel data augmentation techniques we propose using AugmenTRAJ for trajectory data. Section 4 will discuss the effect of augmenting trajectory data using AugmenTRAJ on machine learning tasks such as classification. Finally, in Section 5, we will summarize the AugmenTRAJ package along with a brief discussion about the future steps for AugmenTRAJ." }, { "figure_ref": [], "heading": "Chapter 3", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Materials and Methods", "publication_ref": [ "b13" ], "table_ref": [], "text": "Trajectory data is often represented in two formats, namely point-based format and segmented-based format [14]. In the point-based format, trajectory data is recorded at regular intervals with the location, i.e., latitude and longitude of the subject being a requirement. Other relevant data, such as altitude, time of the day, and data about the environment that the subject is in, are often stored with each point recorded. On the other hand, in the segment-based form of the trajectory data, the point-based form of the trajectory is usually divided into segments and represented as a row in the data table containing statistical values of the trajectory such as mean speed, total distance traveled, displacement from the start and so on. In a segment-based format, data is represented as one trajectory per row containing all the statistical data about the entire trajectory. Data augmentation in trajectory and movement analysis has not gained popularity due to the complex nature of trajectory data. However, due to the sequential nature of trajectory data, several data augmentation techniques, such as noise introduction, shifting, and scaling that are popular for generic time-series data, can also be applied to trajectory data with necessary modifications. Furthermore, as described before, a segment-based trajectory only contains the statistical values of the trajectory as the data. As a result, a point-based format is preferred to perform augmentation on trajectory data. Another point to be highlighted here is that applying modifications or noise in the segment-based format could lead to invalid instances. For example When augmenting data, the primary purpose is to generate samples close to the original data's distribution and represent the actual data very closely. Therefore, it is of utmost importance that we do not generate samples that can be easily discerned from the original data to work towards the end goal of improving the efficiency of the machine learning model. 
Trajectory data augmentation is affected by two processes, namely selection of trajectories to be augmented and modification techniques used to alter points in the original trajectory to generate the synthetic data.\nTo tackle the aforementioned challenges, we have developed several techniques to select trajectories to be augmented based on different criteria. Once the trajectories are selected, AugmenTRAJ allows users to augment them using several methods pro-vided out of the box. In the following subsection, we will describe all the selection and point-modification techniques provided by AugmenTRAJ." }, { "figure_ref": [], "heading": "Augmentation Candidate Trajectory Selection Techniques in AugmenTRAJ", "publication_ref": [], "table_ref": [], "text": "The selection of trajectories to be augmented plays a significant role in the trajectory data augmentation procedure as it can greatly affect the efficiency of the machine learning model. Furthermore, when the candidates are chosen correctly, it often helps balance the skewed data toward one of the instances and classes and can help the model discern the classes better. On the other hand, when the dataset is balanced, but the trajectories within each class are very similar, the machine learning model is prone to overfitting the training data and performing much worse on the testing data. In such cases, selecting and augmenting the right trajectories helps introduce variation in the samples to help the model learn better and not overshoot the local minima.\nIn machine learning, the model is usually trained on a subset of the original data, which is called the training data, whereas the efficiency inference of the model is done using a subset of data called the testing data that the model has not seen in the training stages. In the following subsections, we will describe all the trajectory selection methods that AugmenTRAJ provides out-of-the-box, but it is to be kept in mind that the selection of trajectories as well as the modification is made to the training data exclusively to increase the size, quality, and variation of the data to help training a better model. Hence, in the following sections, when we mention selection and modification of data, it is done exclusively on the training data. In AugmenTRAJ, we have developed the following four techniques for selecting the candidates for augmentation:" }, { "figure_ref": [], "heading": "Random Trajectory Selection", "publication_ref": [], "table_ref": [], "text": "As the naming suggests, in the random trajectory selection method, the trajectories to be augmented are selected randomly from the training dataset. The users can control the proportion of the training data they want to augment. For instance, if the training dataset has 100 trajectories, and the user wants to augment 20% of the data, 20 trajectories will be selected randomly from the dataset. Furthermore, the users can control the randomness for the reproducibility of results by passing in a seed to the random selection.\nRandom trajectory selection is often useful as the stepping-stone to determine if a different trajectory selection is warranted. Since random trajectory selection does not look at the distribution of the data, it is possible that it may select trajectories from a class that is a very large proportion of the data and may end up increasing the skewness of the data. In such cases, Dataset balancing can greatly help the model improve." 
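A minimal sketch of what random candidate selection can look like is shown below; it assumes the training split is a point-based pandas DataFrame with a `traj_id` column, and the function name and signature are illustrative rather than the exact AugmenTRAJ API.

```python
import random
import pandas as pd

def select_random_candidates(train_df: pd.DataFrame, proportion: float = 0.2,
                             seed: int = 42) -> list:
    """Randomly pick a proportion of trajectory ids from the training data.

    The seed makes the selection reproducible, mirroring the behaviour
    described above for random trajectory selection.
    """
    traj_ids = sorted(train_df["traj_id"].unique())
    n_selected = max(1, round(len(traj_ids) * proportion))
    rng = random.Random(seed)
    return rng.sample(traj_ids, n_selected)

# Example: 20% of the training trajectories chosen as augmentation candidates.
# candidates = select_random_candidates(train_points, proportion=0.2, seed=314)
```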
}, { "figure_ref": [], "heading": "Proportional Trajectory Selection", "publication_ref": [], "table_ref": [], "text": "In a proportional selection of trajectories, the user is allowed to select what proportion of trajectories are selected from each class of the data points for augmentation. For example, consider a dataset that has the following structure: " }, { "figure_ref": [ "fig_12" ], "heading": "Length-Based Trajectory Selection", "publication_ref": [], "table_ref": [], "text": "In the length-based trajectory selection method, a given proportion of the trajectories with the shortest length from the data are selected for augmentation. For instance, if a dataset has 100 trajectories and the user wants to select 20% of trajectories to be augmented, then the trajectories will be sorted according to their lengths in ascending order, and the 20 shortest trajectories will be selected for augmentation. Figure 3.3 depicts the fewest selection technique in AugmenTRAJ." }, { "figure_ref": [], "heading": "Figure 3.3: Fewest Selection Technique in AugmenTRAJ", "publication_ref": [], "table_ref": [], "text": "As we have mentioned before, trajectory data is often difficult to collect, maintain, and store, so we often get extremely short trajectories. Machine learning models usu-ally require a lot of good-quality data, and having shorter or incomplete trajectories can often lead to the model underfitting the data and performing worse on the test-ing set. In such cases, selecting the shortest trajectories and augmenting them can increase the size of the data and make the model more robust to variations in such smaller trajectories." }, { "figure_ref": [], "heading": "Representative Trajectory Selection", "publication_ref": [ "b13" ], "table_ref": [], "text": "In the representative trajectory selection technique, the trajectories are selected based on the closeness of an individual trajectory's statistics to that of the entire dataset.\nTo do so, first, the entire data is converted to segment-based form wherein each data row is one trajectory, and its statistics, such as mean, median, maximum, minimum, and so on, are calculated for kinematic features such as distance, displacement, speed, acceleration, and jerk. This is easily achieved using a single command available in the PTRAIL [14] package for trajectory data processing. Once the statistics for the entire dataset are calculated, each trajectory's statistics are compared with that of the entire dataset, and if the user-given proportion of trajectory statistics falls within the user-given tolerance level, then it is selected for augmentation. It must be noted how each selection mechanism provides the user with fine-grained control of the process, thereby following very high software engineering principles.\nThe representative selection technique works great in getting data distribution closer to a bell-curve representation. It is generally applicable when the dataset is well-balanced and representative classes well represented by increasing the number of samples in the training data. However, depending on the user-given tolerance for selection, representative selection may select all the trajectories in the dataset if the dataset is fairly balanced. 
Hence, representative selection should be tried with a few values of tolerance and select the most appropriate one where it selects a good number of trajectories but does not select all or most of them as it could slow down the augmentation process significantly and have little to no effect on the efficiency of the model." }, { "figure_ref": [], "heading": "Point Modification Techniques for", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Generating Synthetic Trajectories in", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "AugmenTRAJ", "publication_ref": [], "table_ref": [], "text": "Once the desired set of trajectories has been selected to be augmented, the next step is to modify the points within each selected trajectory to generate the synthetic trajectories. When augmenting data, it is useful to restrict the user-given parameters to the augmentation methods at sensible levels, and as such, it is always good to try out several techniques for data before choosing the best one because there is no one-size-fits-all solution when it comes to data augmentation. As a result, in AugmenTRAJ, we have developed the following four techniques for point modification:" }, { "figure_ref": [ "fig_12" ], "heading": "On-Circle Modification", "publication_ref": [], "table_ref": [], "text": "In the on-circle point modification technique, a point is modified using the following technique:\n• Select a circle of a given radius around the point in the original trajectory.\nThe radius is calculated using 10% of the distance between the current point and the next point in the trajectory.\n• Randomly select a degree for modifications and place the new point on the circle's circumference in the direction of the selected degree.\n• Figure 3.4 demonstrates the on-circle point modification technique in Augmen-TRAJ. In this modification technique, each point's location, i.e., latitude and longitude, of the selected trajectory is modified using the process above. However, the trajectory will have the same time intervals as the original trajectory as we only modify the location of the points, and the other features in the data are kept unmodified.\nA significant advantage of this technique is that the variance induced in the data is not huge, as the synthetic trajectories generated represent the original trajectories to some degree due to the nature of the modification technique. As a result, this technique is very useful when it is known that the data already has enough variations in each represented class. This technique will help increase the data samples with a limited variance." }, { "figure_ref": [ "fig_12" ], "heading": "In-Circle Modification", "publication_ref": [], "table_ref": [], "text": "The in-circle point modification technique is similar to the on-circle point modification technique, and the point is modified using the following technique:\n• Randomly select a distance in a circular region around the current point where the radius of that region is smaller than the distance between the current point and the next.\n• Randomly select a degree and place the point at the aforementioned distance in the direction of the selected degree.\n• Figure 3.5 demonstrates the in-circle point modification technique in Augmen-TRAJ." 
}, { "figure_ref": [], "heading": "Figure 3.5: In-Circle Point Modification Technique in AugmenTRAJ", "publication_ref": [], "table_ref": [], "text": "Similar to the on-circle point modification technique, only the points' locations, i.e., latitude and longitude, are modified for the entire trajectory with timestamps and other features remaining intact. However, as compared to the on-circle point modification technique, where only a limited degree of modifications to the point can be made due to the restriction of the new point being on the circumference of the selected circular region, the in-circle point modification technique allows for a higher degree of flexibility in terms of distance of the point as well as the direction as compared to the original point. The primary advantage of the in-circle method is that it allows for a higher level of variance to be introduced to the data compared to the on-circle modification method. As a result, this method is very useful when we want to introduce some amount of variance in the dataset along with increasing the number of samples of the data." }, { "figure_ref": [ "fig_12", "fig_12" ], "heading": "Point Stretching Modification", "publication_ref": [], "table_ref": [], "text": "The basic idea behind stretching modification is moving each point in the userspecified direction by a user-specified magnitude to generate synthetic trajectories. The points are modified using the following technique:\n• Based on the maximum allowed user-given distance, calculate the maximum latitudes and longitudes in each direction on a straight line that passes through the current point illustrated in Figure 3.6.\n• Once the bounds are calculated, the new point is calculated based on the user-given method. The user can choose one of the following methods for selecting the new point:\n1. Minimum Distance Point -Always select the point that is on the minimum side as displayed in Figure 3.6." }, { "figure_ref": [ "fig_12" ], "heading": "Maximum Distance Point", "publication_ref": [], "table_ref": [], "text": "-Always select the point that is on the maximum side as displayed in Figure 3.6." }, { "figure_ref": [ "fig_12" ], "heading": "Minimum/Maximum Distance Point Randomly", "publication_ref": [], "table_ref": [], "text": "-Randomly select either the maximum or the minimum point in the new trajectory as displayed in Figure 3.6." }, { "figure_ref": [ "fig_12" ], "heading": "Random Point between Minimum and Maximum", "publication_ref": [], "table_ref": [], "text": "-Randomly select a point between the minimum and the maximum bounds in Figure 3.6.\nThe point stretching method gives the user fine-grained control to the user in terms of choosing how the synthetic trajectory will be generated. Not only does it allow the user to select where the new point will be in terms of the distance, but it also allows the user to control the direction in which the new point will be generated as compared to the in-circle and on-circle methods where the user has no control over the direction and the distance of the new point.\nThe stretching modification method is generally useful in most situations as the user can control the variance induced in the data using the distance and the point selection method. As a result, the user can either generate more samples with minimal variance or more variance to enrich the data." 
}, { "figure_ref": [ "fig_12" ], "heading": "Point Dropping Modification", "publication_ref": [], "table_ref": [], "text": "In the point-dropping method, synthetic trajectories are generated by dropping points based on a user-given probability. The process is illustrated in Figure 3.7.\nThe point dropping drops the point from the trajectory, due to which the resultant trajectory is prone to abrupt jumps in the path of the object. As a result, the point-dropping method can potentially introduce the most variance in the data samples. Therefore, it is upon the user to control the probability of dropping the points, hence controlling the variance in the new dataset. This method is especially useful when the data samples in the same class and among different classes are very similar as it can introduce the required variance to help the modern learn the distinction among the classes better and not underfit in such cases. " }, { "figure_ref": [], "heading": "Dataset Balancing Techniques in AugmenTRAJ", "publication_ref": [], "table_ref": [], "text": "As we have briefly mentioned before, collection and storage of trajectory data is often a difficult and tedious task plagued by many issues such as loss of connection, faulty sensors, etc. Furthermore, the collection of movement data for humans also brings privacy concerns. With the world being connected more and more with each other through the internet, having a human's frequently traveled path along with its metadata can be highly risky for that person. Due to these reasons, movement datasets are often restricted to tracking animals, shifts, aircraft, etc.\nApart from that, when data for the movement of humans is available, the mode of transport is often modes of public transport such as trains, subways, buses, etc., and not personal transportation vehicles such as cars and bikes. In such cases, the machine learning model often underfits the data and performs worse at accurately classifying the trajectories since it does not have enough data for the classes with fewer data points. Furthermore, it is generally seen that machine learning models perform very well when the dataset is balanced or near-balanced, with each representative class having an equal number of samples. As a result, to deal with the aforementioned challenge and to take advantage of a balanced dataset, AugmenTRAJ provides techniques for the balanced dataset with a single method call. The functioning of dataset balancing techniques is as follows Consider a dataset with the following balance of classes: (i) Class 1: 50 trajectories; (ii) Class 2: 100 trajectories; (iii) Class 3: 75 trajectories, totaling 225 trajectories. If the user wants to balance the dataset, then an easy approach would be to augment samples in classes 1 and 3 such that each class finally has 100 samples. However, that will mean that classes 1 and 3 will have several synthetic trajectories and are induced into their distributions. However, it is generally ideal to induce variance into all the representative classes in the dataset using augmentation.\nTherefore, in AugmenTRAJ, the user is asked to specify a target number of trajectories for each class as a multiplier of the class with the highest number of samples. 
To simplify this, continuing with our example dataset above, if the user specifies a multiplier of 1.1, then we would have a total of 110 samples for each class because our Class 2 has the maximum number of samples with 100 samples and we will get 100 * 1.1 = 110 based on the maximum target multiplier of 1.1. Hence, our final dataset will look as follows: (i) Class 1: 110 trajectories; (ii) Class 2: 110 trajecto-ries; (iii) Class 3: 110 trajectories; a total of 330 trajectories. Since AugmenTRAJ functionalities are very flexible, all the techniques for modifying points are available for augmentation while balancing the datasets." }, { "figure_ref": [ "fig_12" ], "heading": "Experimental Setup", "publication_ref": [ "b27", "b39", "b44", "b13" ], "table_ref": [], "text": "To test the level of impact of synthetic data generation using AugmenTRAJ, we have implemented a setup wherein we used various machine learning models such as Random Forest, Gradient Boosting Classifier, Support Vector Classifier, and Decision Tree classifier to predict what class the object belongs to using the trajectory data as the input. The framework that we setup for testing is as follows:\n1. Select 80% of the trajectories from the given dataset that will be used as the training dataset, and the rest 20% of the trajectories will be set aside for the testing dataset.\n2. Once the training data is selected, pre-select and store the augmentation candi-dates in a dictionary using each of the methods described in Table A.1.\n3. Next, using each of the dataset balancing techniques, balance the datasets and store them in the aforementioned dictionary. 4. Once the dictionary above is created with the balanced datasets and augmenta-tion candidates using various techniques, we used 20 seeds (we used digits of Pi after the decimal point) to train various models for the training dataset without augmentation and then augmenting datasets using each of the techniques de-scribed in Table A.2. It should be noted that while performing augmentation, we augmented each selected candidate trajectory three times to generate three distinct samples based on the original trajectory.\n5. For each model, we calculate the f1 score and accuracy and compare those metrics with the dataset without augmentation and calculate the increase in accuracy and f1 score induced using augmentation.\n6. Figure 3.8 illustrates the testing framework described above.\nUsing this pipeline 1 , we trained 63 machine learning models (i.e., combining dis-tinct ML models, trajectory selection, and trajectory point modification strategies) for different variants of training data and compared them with the results of the models trained with un-augmented training data. Following core software engineering principles, the framework for testing the augmentation strategies have been engineered to be used with new datasets and different machine learning models available in the scikit-learn [27] python package. In the following section, we will discuss the results that we obtained by using the above framework on the Starkey animals dataset [39], a subset of Microsoft Geolife database [43] and the Traffic dataset available as one of the datasets in the PTRAIL library [14].\nFinally, it is important to describe the metrics we used to compare the accuracy of models in our experiments. To determine the enhancements caused by augmentation techniques, the following metrics were used: 1 Example Jupyter notebook containing the testing described above: Link. 
• The F1-score aims to combine two relatively simple metrics, precision and recall, to give a more complete score of how good the model is doing." }, { "figure_ref": [], "heading": "Accuracy", "publication_ref": [], "table_ref": [], "text": "• Accuracy measures how many correct predictions the model makes out of all the predictions made." }, { "figure_ref": [], "heading": "𝐴𝑐𝑐𝑢𝑟𝑎𝑐𝑦 =", "publication_ref": [], "table_ref": [], "text": "𝑁𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝐶𝑜𝑟𝑟𝑒𝑐𝑡 𝑃𝑟𝑒𝑑𝑖𝑐𝑡𝑖𝑜𝑛𝑠 𝑇𝑜𝑡𝑎𝑙 𝑃𝑟𝑒𝑑𝑖𝑐𝑡𝑖𝑜𝑛𝑠 (\nChapter 4" }, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [ "b31", "b36", "b23" ], "table_ref": [], "text": "This section presents the experiments' outcomes as outlined in Section 3.4. After training all models with different augmentation techniques, the performance improve-ment in accuracy and F1-score was assessed. Box plots were generated to facilitate comparison, showcasing the results for each machine learning model and augmenta-tion strategy combination using AugmenTRAJ. These box plots can be observed in Figures\nAs observed in Figures 4.1, 4.2, and 4.3, the implementation of augmentation strategies generally leads to a reduction in the error rate for all utilized datasets, signifying the potential of data enhancement in improving model performance. How-ever, it is crucial to acknowledge that augmentation does not invariably result in error rate improvement; in some cases, it may introduce higher data variance, consequently leading to underfitting.\nUpon initial inspection, the box plots do not demonstrate a substantial increase in accuracy for the models trained in the classification tasks. Nonetheless, it's essential to recognize that effective machine-learning tasks require systematically exploring var-ious model configurations. In this study, we trained different machine learning models under different controlled conditions using 20 seed values and identified the optimal configurations for each model. The Table B.1 demonstrates maximum performance improvement attained with AugmenTRAJ's augmentation strategies for the geolife dataset The controlled experiment with seed 7950 reveals striking improvements when us-ing augmentation strategies. The accuracy and f1-score for the base strategy, i.e., training data without augmentations, were extremely poor at 0.38 and 0.2, respectively. However, through the generation of synthetic data and dataset balancing using the point dropping method (Section 3.2.4), the accuracy and f1-score remarkably in-creased to 0.75 for both metrics. This emphasizes the significance of dataset balancing techniques, particularly in unbalanced class distributions within the geolife dataset. When the training data is skewed towards heavily represented classes, models tend to over-fit and under-perform on the testing dataset. Augmentation techniques address this issue by providing ample samples for each class, enhancing the training dataset's balance, and improving model performance. Additionally, the geolife dataset contains trajectories with very short lengths, posing challenges for machine learning models to identify meaningful patterns and classify them accurately. Augmenting shorter trajectories can significantly enhance the training dataset by providing more samples for the models to learn from. This, in turn, leads to substantial improvements in model performance on the testing set. Seed values 2643 and 4944 notably demonstrate such enhancements, where metrics improved by over 30%. 
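The per-seed comparisons reported here follow the evaluation recipe from Section 3.4; a condensed sketch of that loop with scikit-learn is shown below, where the feature matrix, the classifier choice, and the `augment_fn` hook are placeholders rather than the exact testing framework.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

def evaluate(features, labels, augment_fn=None, seed=7950):
    """Train a classifier with or without augmentation and report accuracy / weighted F1.

    `augment_fn(X_train, y_train)` stands in for any AugmenTRAJ selection plus
    modification pipeline that returns synthetic samples for the training split only.
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, random_state=seed, stratify=labels)
    if augment_fn is not None:
        X_syn, y_syn = augment_fn(X_tr, y_tr)
        X_tr = np.vstack([X_tr, X_syn])
        y_tr = np.concatenate([y_tr, y_syn])
    model = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr)
    preds = model.predict(X_te)
    return accuracy_score(y_te, preds), f1_score(y_te, preds, average="weighted")

# base_acc, base_f1 = evaluate(X, y)                        # no augmentation
# aug_acc, aug_f1 = evaluate(X, y, augment_fn=my_pipeline)  # with augmentation
```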
AugmenTRAJ's augmentation strategies offer valuable solutions for addressing the intricacies of the geolife dataset, mitigating class imbalances, and handling trajectories of varying lengths. By doing so, the framework enables the models to learn more effectively and achieve significantly improved classification performance.\nShorten and Taghi M. [31], and Taylor and Nitschke [36] have shown a reduction of error rate in their models by about 1% whereas Moreno et al. [24] have shown an increase in their models' test accuracy of up to 5% by using data augmentation in the domain of image classification. However, the improvements are significantly lower than 51% improvement that has been showcased using AugmenTRAJ in trajec-tory classification under controlled conditions. This emphasizes how the techniques of data augmentation can greatly aid the machine learning tasks in the movement data analysis domain, and when used correctly, it can greatly help in training models that perform very well on such data. Tables B.2 and B.3 summarize the maximum im-provement resulting from augmentation techniques in AugmenTRAJ for the Starkey and the traffic dataset similar to the Table B.1 Chapter 5" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This study has introduced AugmenTRAJ, a framework for data augmentation in the movement data analysis domain. While data augmentation has been extensively explored in various domains, such as image and time series, its potential in movement analysis remains largely untapped. The challenges associated with collecting move-ment data often complicate data mining, necessitating a specialized solution to address these issues.This research aimed to develop a framework that effectively addresses the unique complexities of movement data analysis, allowing machine learning models to perform at par with their counterparts in other computer science domains. Though data augmentation is not a panacea for all movement data analysis challenges, AugmenTRAJ's exceptional results have surpassed industry standards. This signifies the bright future of data augmentation in the movement data domain. Aug-menTRAJ's success opens up possibilities for advanced topics such as Generative Adversarial Networks (GANs) in movement data analysis, promising further advancements in the field. Furthermore, the availability of AugmenTRAJ as an open-source resource to developers worldwide is expected to drive increased interest and traction in data augmentation for movement data analysis. Collaboration and contributions from the community will enrich the framework and foster innovation in the field. While AugmenTRAJ represents a significant step forward, there is still much ground to cover in the movement data analysis domain. Future research should focus on enhancing the framework, exploring novel augmentation techniques, and fostering interdisciplinary collaborations to unlock the full potential of data augmentation in movement analysis.\nIn conclusion, AugmenTRAJ marks a pivotal contribution to the movement data analysis landscape, and we anticipate that its adoption will drive transformative ad-vancements and novel applications, making movement analysis a well-equipped field within the realm of computer science. " }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "-Thank you to my supervisor, Dr. 
Amilcar Soares, for their advice and guidance through this research, for helping make this dissertation possible, and for being an amazing mentor, professor, and friend throughout my time at Memorial University.\n-I would also like to thank Nicholas Jesperson for initiating the work for trajectory data augmentation and helping develop the initial prototype of AugmenTRAJ.\niii\nTo my Family and Yesha, thank you for your unwavering support." }, { "figure_ref": [], "heading": "Appendix A", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Summary of Techniques in AugmenTRAJ", "publication_ref": [], "table_ref": [], "text": "This appendix contains a summary of augmentation candidate selection and pointmodification techniques available in AugmenTRAJ for a quick reference. " }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Obtained with Augmentation", "publication_ref": [], "table_ref": [], "text": "In this appendix, we will showcase the bar plots for result comparisons that we dis-cussed in Chapter 4. The box plots are a complete seed and model-wise breakdown of the data summarized in Table B.1. For each seed, it can be seen that the metrics are generally improving and in some cases up-to 51% as we discussed." } ]
Data augmentation has emerged as a powerful technique in machine learning, strengthening model robustness while mitigating overfitting and underfitting issues by generating diverse synthetic data. Nevertheless, despite its success in other domains, data augmentation's potential remains largely untapped in mobility data analysis, primarily due to the intricate nature and unique format of trajectory data. Additionally, there is a lack of frameworks capable of point-wise data augmentation that can reliably generate synthetic trajectories while preserving the inherent characteristics of the original data. To address these challenges, this research introduces AugmenTRAJ, an open-source Python 3 framework designed explicitly for trajectory data augmentation. AugmenTRAJ offers a reliable and well-controlled approach to generating synthetic trajectories, thereby enabling researchers to harness the benefits of data augmentation in mobility analysis. This thesis presents a comprehensive overview of the methodologies employed in developing AugmenTRAJ and showcases the various data augmentation techniques available within the framework. By providing researchers with a practical and versatile tool for augmenting trajectory data, AugmenTRAJ opens new possibilities for enhancing the performance and generalization capabilities of mobility data analysis models. Its user-friendly implementation in Python 3 facilitates easy integration into existing workflows, offering the community an accessible resource to leverage the full potential of data augmentation in trajectory-based applications.
AugmenTRAJ: A framework for point-based trajectory data augmentation
[ { "figure_caption": "(a) Random Trajectory Selection -Randomly select a proportion of trajectories from the training dataset as augmentation candidates. (b) Proportional Trajectory Selection -Select an equal proportion of trajectories from each representative class in the training dataset as augmentation candidates. (c) Length-Based Trajectory Selection -Select a given proportion of trajectories that are shortest in length from the training dataset as augmentation candidates. (d) Representative Trajectory Selection -Select the trajectories from the training dataset whose segment form statistics fall within a user defined range of the entire training dataset's statistics. 2. Trajectory Point Modification Strategies (a) In-circle Trajectory Point Modification Technique -Create the new point by considering a circular region around the current point and selecting the new point on the circumference of the circular region in a random direction. (b) On-circle Trajectory Point Modification Technique -Create the new point by considering a circular region around the current point and selecting the new point within the radius of the circular region in a random direction and at a random distance from the center. (c) Trajectory Point Stretching Modification Technique -Create the new point by shifting the current point by a user-given distance (in meters) and a user-given direction. (d) Point Dropping Modification Technique -Create the new trajectory by dropping the points from the original trajectory with a user-given probability.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3.1a and Figures 3.1b respectively show the point-based and the segment-based format of the trajectory data from the Starkey [39] animals dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 . 1 :31Figure 3.1: Trajectory Formats", "figure_data": "", "figure_id": "fig_3", "figure_label": "31", "figure_type": "figure" }, { "figure_caption": "(i) Class A: 20 trajec-tories; (ii) Class B: 5 trajectories. If the user wants 20% of trajectories from each class, then 3 trajectories are selected from class A, and 1 trajectory is selected from class B. Figure 3.2 depicts the visual representation of this process.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 . 2 :32Figure 3.2: Proportional Selection Technique in AugmenTRAJ", "figure_data": "", "figure_id": "fig_5", "figure_label": "32", "figure_type": "figure" }, { "figure_caption": "Figure 3 . 4 :34Figure 3.4: On-Circle Point Modification Technique in AugmenTRAJ", "figure_data": "", "figure_id": "fig_6", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Figure 3 . 6 :36Figure 3.6: Minimum and Maximum Latitude-Longitude Calculation", "figure_data": "", "figure_id": "fig_7", "figure_label": "36", "figure_type": "figure" }, { "figure_caption": "Figure 3 . 7 :37Figure 3.7: Point Dropping Modification Technique in AugmenTRAJ", "figure_data": "", "figure_id": "fig_8", "figure_label": "37", "figure_type": "figure" }, { "figure_caption": "1. F1 Score• Mathematically, the f1 score is defined as the mean of the model's precision and recall.𝐹1 -𝑠𝑐𝑜𝑟𝑒 =2 * 𝑝𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛 * 𝑟𝑒𝑐𝑎𝑙𝑙 𝑝𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛 + 𝑟𝑒𝑐𝑎𝑙𝑙(3. 1) ", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "22 Figure 3 . 
8 :2238Figure 3.8: Code Snippet Illustrating the Testing Framework For AugmenTRAJ", "figure_data": "", "figure_id": "fig_10", "figure_label": "2238", "figure_type": "figure" }, { "figure_caption": "Figure 4 . 1 : 25 Figure 4 . 2 : 26 Figure 4 . 3 :4125422643Figure 4.1: Geolife Dataset Box Plot", "figure_data": "", "figure_id": "fig_11", "figure_label": "4125422643", "figure_type": "figure" }, { "figure_caption": "Figure B. 3 :3Figure B.3: Seed-wise Comparison between Base metrics and Maximum metrics for Traffic [14] Datase", "figure_data": "", "figure_id": "fig_12", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Augmentation Candidate Trajectory Selection Techniques in Augmen-TRAJ................................................................................................................... 3.1.1 Random Trajectory Selection............................................................. 3.1.2 Proportional Trajectory Selection...................................................... 11 3.1.3 Length-Based Trajectory Selection.................................................... 3.1.4 Representative Trajectory Selection.................................................. 3.2 Point Modification Techniques for Generating Synthetic Trajectories in AugmenTRAJ....................................................................................................", "figure_data": "3.1 List of tablesList of figuresChapter 1A.1 Summary of Selection Methods Available in AugmenTRAJA.2 Summary of Point Modification Methods Available in AugmenTRAJ . 
30 3.1 Trajectory Formats 93.2 Proportional Selection Technique in AugmenTRAJ IntroductionB.1 Summary of Best Results Obtained For Each Seed Using AugmenTRAJ3.3 Fewest Selection Technique in AugmenTRAJFor the Geolife[43] Dataset3.4 On-Circle Point Modification Technique in AugmenTRAJB.2 Summary of Best Results Obtained For Each Seed Using AugmenTRAJFor the Starkey[39] Dataset 3.5 In-Circle Point Modification Technique in AugmenTRAJ 1.1 Motivation and SignificanceB.3 Summary of Best Results Obtained For Each Seed Using AugmenTRAJ 3.6 Minimum and Maximum Latitude-Longitude CalculationFor the Traffic[14] Dataset 3.7 Point Dropping Modification Technique in AugmenTRAJ3.8 Code Snippet Illustrating the Testing Framework For AugmenTRAJ4.1 Geolife Dataset Box Plot4.2 Starkey Dataset Box PlotIntroduction 4.3 Traffic Dataset Box Plot11.1 Motivation and Significance1B.1 Seed-wise Comparison between Base metrics and Maximum metrics2 Literature Review for Geolife Dataset42.1 Overview of Data Augmentation Techniques B.2 Seed-wise Comparison between Base metrics and Maximum metrics for4Starkey [39] Dataset2.1.1 Data Augmentation During Data Preprocessing Step4352.1.2 Data Augmentation During Model Training5B.3 Seed-wise Comparison between Base metrics and Maximum metrics for2.1.3 Conclusion Traffic [14] Dataset73 Materials and Methods8viviiiv", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Summary of Point Modification Methods Available in AugmenTRA Table B.1: Summary of Best Results Obtained For Each Seed Using AugmenTRAJ For the Geolife[43] Dataset Figure B.1: Seed-wise Comparison between Base metrics and Maximum metrics for Geolife Dataset Table B.2: Summary of Best Results Obtained For Each Seed Using AugmenTRAJ For the Starkey[39] Dataset Figure B.2: Seed-wise Comparison between Base metrics and Maximum metrics for Starkey [39] Dataset Table B.3: Summary of Best Results Obtained For Each Seed Using AugmenTRAJ For the Traffic[14] Dataset", "figure_data": "NameUser Controlled Pa-DescriptionrametersSeedOn-Circle Modification Model Base Accu--BaseF1-MaximumCreate the new point by Maximum MaximumMaximumSeed SeedIn-Circle Modification Model (Classifier) Base Accu-racy (Classifier) racy Model Base Accu-SVC 0.59 (Classifier) racy Gradient Boosting 0.50 Decision Tree 0.50 SVC 0.67 Decision Tree 0.75 SVC 0.67 Gradient 0.88 Boosting Gradient 0.88-Base Score Score Base 0.58 Score 0.43 0.50 0.64 0.78 0.62 0.82 0.90F1-F1-Maximum Accuracy Accuracy Maximum 0.65 Accuracy 0.75 0.88 0.73 0.92 0.75 0.92 0.96selecting region original randomly selecting a a circular around the point and new point on the circumference of the circular region. 
Create the new point by Maximum F1-Score Maximum Accuracy Strategy Maximum F1-Score Accuracy Strategy F1-Score Maximum Maximum Maximum Strategy 0.64 fewest-fewest-Accuracy F1-Score F1-Score Strategy Strategy F1-Score Strategy 0.75 balanced-in balanced-in 0.88 balanced-drop balanced-drop selected-drop elected-drop 0.73 balanced-drop 0.90 balanced-balanced-balanced-drop drop drop 0.75 fewest-selected-stretch 0.90 proportional-proportional-fewest-selected-drop selected-drop selected-stretch 0.94 balanced-in balanced-inPoint Stretching Modifi-cation Gradient Boosting 0.62 Decision Tree 0.75 Random For-est 0.50 Random For-est 0.38 SVC 0.75 Boosting Decision Tree 0.88 Gradient Boosting 0.92 Decision Tree 0.76 Decision Tree 0.92 Decision Tree 0.79 SVC 0.73 Decision Tree 0.62 Random For-est 0.75 Decision Tree 0.75 SVC 0.73 Decision Tree 0.92 Decision Tree 0.86 Decision Tree 0.75 SVC 0.92Technique of stretching, max stretch (in meters) from the original point. 0.48 1.00 0.77 1.00 0.50 0.75 0.38 0.62 0.73 0.82 0.89 0.96 0.92 0.94 0.76 0.92 0.93 1.00 0.77 0.92 0.71 0.76 0.56 1.00 0.79 1.00 0.75 0.88 0.70 0.80 0.90 0.96 0.86 0.94 0.80 0.92 0.88 0.96selecting region original randomly selecting a a circular around the point and new point at a random distance within the circu-lar region and in a random direction. Create a new point at a user specified distance and in the user specified direction. 1.00 balanced-drop balanced-drop 1.00 random-selected-on random-selected-on 0.71 balanced-in balanced-in 0.63 balanced-drop balanced-0.82 fewest-selected-drop fewest-0.96 fewest-fewest-selected-drop 0.94 representative-selected-in selected-drop selected-drop representative-0.90 random-random-selected-in 1.00 proportional-selected-drop selected-in selected-in proportional-0.90 random-random-selected-drop 0.76 fewest-selected-selected-fewest-stretch stretch drop 1.00 fewest-selected-on fewest-selected-on 1.00 balanced-drop balanced-drop 0.87 balanced-balanced-selected-in selected-in 0.80 fewest-selected-in 0.96 fewest-fewest-fewest-selected-on selected-on selected-in 0.94 random-selected-stretch 0.92 proportional-proportional-random-selected-in selected-in selected-stretch 0.95 balanced-on balanced-onPoint Dropping Modifica-tion Decision Tree 0.62 Decision Tree 0.50 Decision Tree 0.62 SVC 0.69 Decision Tree 0.84 Decision Tree 0.86 Decision Tree 0.83Probability of dropping a point 0.61 0.88 0.53 0.88 0.60 0.62 0.66 0.75 0.88 0.92 0.86 0.92 0.81 0.88Drop points from the original probability with a user given probability. 
drop drop 0.88 balanced-in balanced-in 0.88 fewest-selected-drop fewest-selected-drop 0.60 balanced-balanced-0.74 fewest-selected-drop 0.94 fewest-proportional-fewest-selected-drop selected-drop selected-drop 0.92 fewest-selected-on 0.88 proportional-proportional-proportional-selected-selected-selected-stretch stretch stretchDecision Tree 0.62 Gradient Boosting 0.62 Gradient Boosting 0.62 Gradient 0.75 Gradient Boosting 0.90 Decision Tree 0.80 Gradient Boosting 0.88 Gradient 0.84 Boosting SVC 0.67 SVC 0.92 Decision Tree 0.80 Decision Tree 0.80 Gradient 0.880.56 0.56 0.58 0.75 0.89 0.81 0.88 0.84 0.63 0.88 0.80 0.77 0.891.00 0.88 0.88 0.88 0.92 0.92 0.94 0.96 0.73 0.96 0.90 0.84 0.961.00 0.87 0.88 0.88 0.92 0.91 0.94 0.96 0.73 0.95 0.90 0.83 0.95drop fewest-selected-drop balanced-on balanced-drop balanced-fewest-selected-drop fewest-selected-drop fewest-selected-on representative-selected-on balanced-in balanced-on representative-selected-stretch balanced-stretch proportional-drop fewest-selected-drop balanced-on balanced-drop balanced-fewest-fewest-selected-drop selected-drop representative-fewest-selected-on selected-on balanced-on balanced-in balanced-representative-stretch selected-stretch proportional-Boosting Gradient Boosting Random For-est Gradient Boosting Decision Tree 0.86 0.75 0.38 0.50 Boosting Gradient 0.76 SVC 0.63 Boosting Decision Tree 0.76 Gradient Boosting 0.50 Decision Tree 0.84 Gradient 0.92 SVC 0.67 Boosting Decision Tree 0.84 SVC 0.650.77 0.20 0.57 0.87 0.73 0.61 0.73 0.57 0.84 0.92 0.65 0.86 0.621.00 0.75 0.75 0.96 0.88 0.69 0.84 1.00 0.88 1.00 0.80 0.96 0.731.00 0.75 0.79 0.96 0.85 0.70 0.84 1.00 0.88 1.00 0.80 0.95 0.73stretch balanced-stretch balanced-drop balanced-stretch fewest-selected-on selected-drop random-balanced-drop selected-drop representative-selected-stretch random-selected-on random-selected-in balanced-in balanced-drop proportional-fewest-selected-dropstretch balanced-stretch balanced-drop selected-drop fewest-random-selected-on selected-drop fewest-representative-selected-selected-stretch stretch balanced-stretch random-selected-on random-balanced-in selected-in balanced-proportional-drop fewest-selected-dropselected-dropselected-drop", "figure_id": "tab_1", "figure_label": "A2", "figure_type": "table" } ]
Yaksh Jayeshkumar Haranwala; Amilcar Soares
[ { "authors": "M Abadi; A Agarwal; P Barham; E Brevdo; Z Chen; C Citro; G S Corrado; A Davis; J Dean; M Devin; S Ghemawat; I Goodfellow; A Harp; G Irving; M Isard; Y Jia; R Jozefowicz; L Kaiser; M Kudlur; J Levenberg; D Man´e; R Monga; S Moore; D Murray; C Olah; M Schuster; J Shlens; B Steiner; I Sutskever; K Talwar; P Tucker; V Vanhoucke; V Vasudevan; F Vi´egas; O Vinyals; P Warden; M Wattenberg; M Wicke; Y Yu; X Zheng", "journal": "", "ref_id": "b0", "title": "Ten-sorFlow: Large-scale machine learning on heterogeneous systems", "year": "2015" }, { "authors": "F H Abreu; A Soares; F V Paulovich; S Matwin", "journal": "ISPRS International Journal of Geo-Information", "ref_id": "b1", "title": "A trajectory scoring tool for local anomaly detection in maritime traffic using visual analytics", "year": "2021" }, { "authors": "H.-J Bae; C.-W Kim; N Kim; B Park; N Kim; J B Seo; S M Lee", "journal": "Scientific Reports", "ref_id": "b2", "title": "A perlin noise-based augmentation strategy for deep learning with small data samples of hrct images", "year": "2018-12" }, { "authors": "Y Djenouri; D Djenouri; J C W Lin", "journal": "ACM Transactions on Knowledge Discovery from Data (TKDD)", "ref_id": "b3", "title": "Trajectory outlier detection: New problems and solutions for smart cities", "year": "2021" }, { "authors": "M Etemad; Z Etemad; A Soares; V Bogorny; S Matwin; L Torgo", "journal": "Springer", "ref_id": "b4", "title": "Wise sliding window segmentation: A classification-aided approach for trajectory seg-mentation", "year": "2020-05-13" }, { "authors": "M Etemad; A Soares; E Etemad; J Rose; L Torgo; S Matwin", "journal": "GeoInformatica", "ref_id": "b5", "title": "Sws: an unsupervised trajectory segmentation algorithm based on change detection with interpolation kernels", "year": "2021" }, { "authors": "M Etemad; A Soares; S Junior; Matwin", "journal": "Springer", "ref_id": "b6", "title": "Predicting transportation modes of gps trajectories using feature engineering and noise removal", "year": "2018" }, { "authors": "M Etemad; N Zare; M Sarvmaili; A Soares; B Brandoli Machado; S Matwin", "journal": "Springer", "ref_id": "b7", "title": "Using deep reinforcement learning methods for autonomous vessels in 2d environments", "year": "2020-05-13" }, { "authors": "T Feldt; E Schlecht", "journal": "Pastoralism", "ref_id": "b8", "title": "Analysis of GPS trajectories to assess spatio-temporal differences in grazing patterns and land use preferences of domestic livestock in southwestern madagascar", "year": "2016-03" }, { "authors": "M D Ferreira; J Campbell; E Purney; A Soares; S Matwin", "journal": "International Journal of Geographical Information Science", "ref_id": "b9", "title": "Assessing compression algorithms to improve the efficiency of clustering analysis on ais vessel trajectories", "year": "2023" }, { "authors": "M D Ferreira; J N Campbell; S Matwin", "journal": "GIScience & Remote Sensing", "ref_id": "b10", "title": "A novel machine learning approach to analyzing geospatial vessel patterns using ais data", "year": "2022" }, { "authors": "M D Ferreira; G Spadon; A Soares; S Matwin", "journal": "Sensors", "ref_id": "b11", "title": "A semi-supervised method-ology for fishing activity detection using the geometry behind the trajectory of multiple vessels", "year": "2022" }, { "authors": "J Fonseca; F Bacao", "journal": "", "ref_id": "b12", "title": "Research trends and applications of data augmentation algorithms", "year": "2022" }, { "authors": "S Haidri; Y J Haranwala; V Bogorny; C Renso; V 
P Da Fonseca; A Soares", "journal": "SoftwareX", "ref_id": "b13", "title": "Ptrail-a python package for parallel trajectory data preprocessing", "year": "2022" }, { "authors": "Y J Haranwala; S Haidri; T S Tricco; V P Da Fonseca; A Soares", "journal": "IEEE", "ref_id": "b14", "title": "A dashboard tool for mobility data mining preprocessing tasks", "year": "2022" }, { "authors": "G Iglesias; E Talavera; A ´ Gonzalez-Prieto; A Mozo; S Gomez-Canaval", "journal": "Neural Computing and Applications", "ref_id": "b15", "title": "Data augmentation techniques in time series domain: a survey and taxonomy", "year": "2023-05" }, { "authors": "B K Iwana; S Uchida", "journal": "PLOS ONE", "ref_id": "b16", "title": "An empirical survey of data augmentation for time series classification with neural networks", "year": "2021-07" }, { "authors": "A S Junior; C Renso; S Matwin", "journal": "IEEE computer graphics and applications", "ref_id": "b17", "title": "Analytic: An active learning system for trajectory classification", "year": "2017" }, { "authors": "A S Junior; V C Times; C Renso; S Matwin; L A Cabral", "journal": "IEEE", "ref_id": "b18", "title": "A semisupervised approach for the semantic segmentation of trajectories", "year": "2018" }, { "authors": "T Karras; S Laine; T Aila", "journal": "", "ref_id": "b19", "title": "A style-based generator architecture for generative adversarial networks", "year": "2019" }, { "authors": "N Koutroumanis; G M Santipantakis; A Glenis; C Doulkeridis; G A Vouros", "journal": "GeoIn-formatica", "ref_id": "b20", "title": "Scalable enrichment of mobility data with weather information", "year": "2021" }, { "authors": "J Liu; H Li; Z Yang; K Wu; Y Liu; R W Liu", "journal": "IEEE Access", "ref_id": "b21", "title": "Adaptive douglaspeucker algorithm with automatic thresholding for ais-based vessel trajectory compression", "year": "2019" }, { "authors": "K Maharana; S Mondal; B Nemade", "journal": "", "ref_id": "b22", "title": "A review: Data pre-processing and data augmentation techniques", "year": "2022" }, { "authors": "F J Moreno-Barea; J M Jerez; L Franco", "journal": "Expert Systems with Applications", "ref_id": "b23", "title": "Improving classification accuracy using data augmentation on small data sets", "year": "2020" }, { "authors": "A Mumuni; F Mumuni", "journal": "Array", "ref_id": "b24", "title": "Data augmentation: A comprehensive survey of modern approaches", "year": "2022" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga; A Desmaison; A Kopf; E Yang; Z Devito; M Raison; A Tejani; S Chilamkurthy; B Steiner; L Fang; J Bai; S Chin-Tala", "journal": "", "ref_id": "b25", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b26", "title": "", "year": "2019" }, { "authors": "F Pedregosa; G Varoquaux; A Gramfort; V Michel; B Thirion; O Grisel; M Blondel; P Prettenhofer; R Weiss; V Dubourg; J Vanderplas; A Passos; D Cournapeau; M Brucher; M Perrot; E Duchesnay", "journal": "Journal of Machine Learning Research", "ref_id": "b27", "title": "Scikit-learn: Machine learning in Python", "year": "2011" }, { "authors": "L Perez; J Wang", "journal": "", "ref_id": "b28", "title": "The effectiveness of data augmentation in image classifi-cation using deep learning", "year": "2017" }, { "authors": "A Radford; L Metz; S Chintala", "journal": "", "ref_id": "b29", "title": "Unsupervised representation learning 
with deep convolutional generative adversarial networks", "year": "2016" }, { "authors": "J Shijie; W Ping; J Peiyi; H Siping", "journal": "", "ref_id": "b30", "title": "Research on data augmentation for image classification based on convolution neural networks", "year": "2017" }, { "authors": "C Shorten; T M Khoshgoftaar", "journal": "Journal of Big Data", "ref_id": "b31", "title": "A survey on image data augmentation for deep learning", "year": "2019-07" }, { "authors": "G Smith", "journal": "Association for Computing Machinery", "ref_id": "b32", "title": "Understanding procedural content generation: A design-centric analysis of the role of pcg in games", "year": "2014" }, { "authors": "A Soares; R Dividino; F Abreu; M Brousseau; A W Isenor; S Webb; S Matwin", "journal": "IEEE", "ref_id": "b33", "title": "Crisis: Integrating ais and ocean data streams using semantic web standards for event detection", "year": "2019" }, { "authors": "A Soares Junior; B N Moreno; V C Times; S Matwin; L D A F Cabral", "journal": "International Journal of Geographical Information Science", "ref_id": "b34", "title": "Grasp-uts: an algorithm for unsupervised trajectory segmentation", "year": "2015" }, { "authors": "G Spadon; M D Ferreira; A Soares; S Matwin", "journal": "IEEE Access", "ref_id": "b35", "title": "Unfolding ais transmission behavior for vessel movement modeling on noisy data leveraging machine learning", "year": "2022" }, { "authors": "L Taylor; G Nitschke", "journal": "", "ref_id": "b36", "title": "Improving deep learning with generic data augmentation", "year": "2018-11" }, { "authors": "K Wang; C Gou; Y Duan; Y Lin; X Zheng; F.-Y Wang", "journal": "IEEE/CAA Journal of Automatica Sinica", "ref_id": "b37", "title": "Generative adversarial networks: introduction and outlook", "year": "2017" }, { "authors": "Wes Mckinney", "journal": "", "ref_id": "b38", "title": "Data Structures for Statistical Computing in Python", "year": "2010" }, { "authors": "M J Wisdom", "journal": "", "ref_id": "b39", "title": "The starkey project: a synthesis of long-term studies of elk and mule deer", "year": "2005" }, { "authors": "H Ye; X Cheng; M Yuan; L Xu; J Gao; C Cheng", "journal": "", "ref_id": "b40", "title": "A survey of security and privacy in big data", "year": "2016-09" }, { "authors": "J Yoon; D Jarrett; M Van Der Schaar", "journal": "", "ref_id": "b41", "title": "Time-series generative adversarial networks", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b42", "title": "", "year": "2019" }, { "authors": "L Zhao; G Shi", "journal": "Ocean Engineer-ing", "ref_id": "b43", "title": "A trajectory clustering method based on douglas-peucker compression and density for marine traffic pattern recognition", "year": "2019" }, { "authors": "Y Zheng; H Fu; X Xie; W.-Y Ma; Q Li", "journal": "", "ref_id": "b44", "title": "Geolife GPS trajectory dataset -User Guide, geolife gps trajectories 1", "year": "2011-07" } ]
[]
10.1109/JBHI.2020.3012567
[ { "figure_ref": [], "heading": "2020", "publication_ref": [ "b10", "b0", "b1", "b11", "b12", "b1", "b16", "b17", "b6", "b13", "b14", "b1", "b4" ], "table_ref": [], "text": "). Communicable diseases are those that can transfer from one person to another. According to Sharma et al. (2017), non-communicable diseases are those that cannot be passed from one person to another. Estimating Blood Pressure (BP) is essential for detecting several disorders, making it one of the most important health indicators. Invasive and non-invasive approaches are used to estimate Blood Pressure, with the invasive method providing a higher estimation accuracy but with its own difficulties and restrictions. According to the World Health Organization's (WHO) 2015 estimate, 9.4 million people worldwide pass away from high Blood Pressure (hypertension), and 25% of women and 30% of men have BP (Argha et al., 2021, Farki et al., 2021).\nFollowing diabetes as the second most common cause of death from cardiovascular disease, hypertension is regarded as a silent killer disease because it has no symptoms. Most of the clinical settings routinely check the patient's Blood Pressure, and the same is done for elderly patients and those in the Intensive Care Unit (ICU). Regular Blood Pressure monitoring can help prevent diseases like heart failure, heart attacks, and stroke (Liu et al., 2017, Shahabi et al., 2015). Additionally, over time, hypertension harms human organs like the kidneys, eyes, and brains Farki et al., 2021). BP has an impact on the air pressure that builds up in the lungs. Additionally, Blood Pressure (BP) influences heart rate, and references to the relationship between heart rate and human speech recordings may be found in (Mesleh et al., 2012, Kim et al., 2004). As a result, it has been increasingly essential in recent years to investigate the BP using an auditory signal (Ankışhan, 2020).\nMachine learning is quite successful in classifying and predicting the acoustic signal. The temporal and frequency domains of the audio signal can be considered when classifying the audio (Song et al., 2012, Krizhevsky et al., 2017). The elimination of redundant information is the most important step for reducing computational complexity since preliminary processes like pre-processing and significant feature extraction increase prediction accuracy and cut down on calculation time.\nRemoving unnecessary features, cutting processing time, and data augmentation are the three most important phases in the preparatory step for BP estimation. However, while considering real-time applications, the enhanced features raise the computation's complexity. As a result, the clusteringbased approach's basic characteristics improve BP estimation accuracy (Farki et al., 2021) Synergistic Approach-Incremental Clustering with K-means and Fact Finding Algorithm:\nIn this section, we present a synergistic approach that combines incremental clustering with the power of the k-means algorithm and the Fact Finding Instructor Optimization algorithm. This innovative combination allows for dynamic and real-time clustering of time series speech data for accurate Blood Pressure estimation. By leveraging the strengths of each algorithm, we achieve continuous updates and improved clustering accuracy, providing valuable insights for healthcare professionals in diagnosing, monitoring, and managing patients' Blood Pressure levels (Bagirov et al., 2011)." 
}, { "figure_ref": [], "heading": "Proposed Fact Finding Instructor based BP Estimation:", "publication_ref": [ "b22", "b23" ], "table_ref": [], "text": "The Fact-Finding Instructor Optimization algorithm is developed by combining the investigative skills of a fact finder in identifying suspects of criminal offenses with the expertise and knowledge of an instructor to enhance the performance of a trainee (Rao et al., 2016). By leveraging these characteristics, the algorithm aims to optimize the BP estimation process.\nThe fact-finding aspect of the algorithm enables it to identify relevant patterns and features within the speech signals that are indicative of BP levels. This investigative approach helps in accurately determining the suspect (i.e., the BP value) from the speech signal data. By incorporating the instructor's knowledge, the algorithm ensures that the trainee (i.e., the BP estimation model) performs optimally and achieves high accuracy.\nBy hybridizing the fact finding and instructor components, the proposed algorithm aims to overcome the limitations of existing BP estimation techniques. It is designed to provide a more efficient and accurate estimation of BP, enabling better disease diagnosis and effective management of hypertension.\nIn the proposed Fact-Finding Instructor-based BP estimation method, the collected speech signal undergoes preprocessing to remove any noise and artifacts. This is achieved by applying an adaptive filter, which helps enhance the quality of the signal. Once the preprocessing step is complete, various features are extracted from the speech signal. These features include amplitude, frequency, and width formant features, which provide information about the characteristics of the speech signal.\nAdditionally, statistical features such as zero crossing, change in zero crossing, Haar Wavelet, pitch function, loudness function, entropy, Mel-LPC, variance, mean, and harmonic ratio are computed.\nThese features collectively form a feature vector that represents the speech signal. The next step in the proposed method involves clustering the BP using two algorithms: the K-means clustering algorithm and the proposed fact-finding instructor optimization algorithm. The K-means clustering algorithm groups similar data points together based on their similarity in feature space (Sinaga et al., 2020). It helps identify clusters or patterns within the extracted features that are indicative of different BP levels.\nThe fact-finding instructor optimization algorithm, which was introduced earlier, guides the clustering process by leveraging the characteristics of a fact finder and an instructor. This algorithm aims to find the optimal clustering solution by balancing intensification (focus on local best solutions) and diversification (exploration of the search space) phases. It considers the instructor's knowledge to enhance the accuracy and efficiency of the clustering process. " }, { "figure_ref": [], "heading": "Feature Extraction:", "publication_ref": [], "table_ref": [], "text": "In the proposed clustering-based BP estimation method, the first step involves extracting features from the patient's input audio signal. These features are essential for capturing relevant information related to BP levels. Before feature extraction, a preprocessing stage is performed to remove any artifacts or noise present in the audio signal.\nDuring feature extraction, both statistical-based features and formant features of the audio signal are considered. 
Statistical-based features include various measures that describe the statistical properties of the signal, such as mean, variance, entropy, and zero-crossing rate. These features provide information about the overall characteristics and distribution of the audio signal.\nFormant features, on the other hand, capture specific properties related to the frequency content and resonance of the audio signal. Formants are distinct frequency bands that represent the resonant frequencies of the vocal tract during speech production. By extracting formant features, the algorithm can capture the unique patterns and characteristics in the speech signal that are relevant to BP estimation.\nBy combining statistical-based features and formant features, the algorithm aims to capture a comprehensive representation of the audio signal while reducing computational complexity. This selection of features helps to focus on the most informative aspects of the signal for BP estimation, without overwhelming the algorithm with unnecessary data.\nThe extraction of these features from the pre-processed audio signal forms the basis for subsequent stages of the clustering-based BP estimation process. By incorporating both statistical-based and formant features, the algorithm can effectively represent the speech signal and extract meaningful information for accurate BP estimation." }, { "figure_ref": [], "heading": "Formant Features:", "publication_ref": [], "table_ref": [], "text": "In the proposed BP estimation method, formant features such as amplitude, width, and frequency are extracted from the patient's input audio signal. These formant features provide valuable information about the characteristics and properties of the speech signal, which can be correlated with BP levels." }, { "figure_ref": [], "heading": "Statistical Features:", "publication_ref": [], "table_ref": [], "text": "In addition to the formant features, the proposed BP estimation method incorporates various statistical features like Zero Crossing, Entropy, Change in Zero Crossing, Haar Wavelet, Loudness, Pitch, Mel-LPC, Harmonic Ratio, Mean, Variance are extracted from the audio signal. These features are chosen to provide relevant information about the signal characteristics and contribute to the accurate estimation of BP." }, { "figure_ref": [], "heading": "Motivation:", "publication_ref": [ "b21", "b22" ], "table_ref": [], "text": "The algorithm comprises a fact-finding phase that identifies audio signal features indicative of BP levels. Formant features, statistics, and pertinent data are extracted from speech signals. The subsequent chasing phase employs clustering to categorize these features, akin to a team pursuing leads. This process uncovers patterns and relationships, grouping data points into distinct BP categories. The collaboration between the fact-finding and chasing teams is mirrored in the interplay between feature extraction and clustering (Chou & Nguyen, 2020, Rao, 2016). The algorithm leverages instructor-like guidance to optimize feature extraction accuracy, enhancing its performance for managing hypertension. The instructor component ensures the optimal functioning of the BP estimation model. Integrating these elements bolsters the accuracy and reliability of BP estimations, contributing to more effective hypertension management. The instructor also enhances the exploration phase by sharing expertise with the estimation model. 
This collaboration amplifies the algorithm's capacity to explore the feature space comprehensively, identifying global optimal solutions for clustering BP classes in patients. This approach strikes a balance between exploration and exploitation, thereby sidestepping local optima and achieving more accurate estimations. The algorithm's advantages include rapid convergence due to an improved fact-finding phase, as well as comprehensive BP class clustering based on speech features. In sum, the proposed algorithm synergizes the instructor's expertise with the fact-finding team's insights to cluster BP classes. This hybrid approach enhances exploration, achieves a balanced trade-off, and ensures swift, accurate convergence. By harnessing speech features, the algorithm offers a holistic solution for BP estimation in patients." }, { "figure_ref": [ "fig_1" ], "heading": "Mathematical Modeling of the Fact-Finding algorithm:", "publication_ref": [], "table_ref": [], "text": "The proposed BP estimation algorithm involves a two-team approach: Fact-finders and chasers.\nFact-finders use their expertise to identify potential accurate BP estimation locations. They share their findings with the chasing team, who focus on these locations. The process is coordinated by headquarters, with Fact-finders guiding chasers and receiving updates. This collaboration ensures accurate identification through continuous feedback. The algorithm's core principle is illustrated in Fig. 2. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": ", here, the total suspect's location needs to be detected is referred as X . The location of the police corresponding to the chasing team is denoted as\n CT D , which  refers to CT X ,..., 2 , 1 = \nand the total number of police in the chasing team is notated as X . In the proposed Fact Finding Instructor Optimization algorithm, the total number of suspect's location and police are the same and is notated as \nFF X = CT X = X\n( )             + =  1 * 1 1 ' B D A D D B l FF FF FF    (1) where, Y l ,..... 2 , 1 =\n, the random number ranges between [-1,1] is indicated as A , and\n  1 ,....... 2 , 1 1 -  a B\n. In this, the random parameter A is expressed as ( ) . By evaluating the suspect's location based on the trial and error approach, the value of 1 B is assumed as 2 and then the location is rewritten as,\n2 5 . 0  - = rand A ,\n( ) ( )       + - + = 2 * ' pl FF ql FF l FF FF FF D D D A D D    (2)\nwhere  and p q , , are locations of the suspect p and q are chosen randomly." }, { "figure_ref": [], "heading": "   ", "publication_ref": [], "table_ref": [], "text": "X p q ,......, 2 , 1 , ,  \nIn the detection of the suspect location, the instructor's knowledge in instructing the trainee is incorporated for the detection of a more accurate hiding location of the suspect. The position of the instructor based on the trainee's score is expressed as, ii) Inquiry Direction: In this stage, the probability of the suspect's hiding location is investigated, in which the best possible location is indicated as min e and the worst possible location is indicated as max e . The best location is notated as min D , and the probability of each suspect's location is:\n( ) ( ) min max max e e C e D prob FF FF - - =   (3)\nHere, the location of the suspect is changed to increase the exploration area and hence the general formulation is expressed as in equation ( 3). 
The movement of the suspect location is affected by the best individual.\n( )\nkl FF B l FF D D D * 2 1 min '  + =   (4)\nwhere the individuals that affect the move are indicated as 2 B and ranges between\n  1 ,..... 2 , 1 2 -  a B ; 2 ,.... 2 , 1 B k = ; and ] 1 , 1 [- =  is considered as the coefficient of effectiveness. The value of 2\nB is equal to 3 and then the suspect's new location is evaluated as,\n( ) ( ) pl FF ql FF bl FF l FF D D A D D D - + + = * 5 min 2  (5)\nwhere the random number with the range [0,1] is referred to as 5 \nl l l CT Te CT Te D D D        = +    (8)\nWhere\n1 e D \n is the location directed by the instructor through the knowledge gained by the teaching process and it is given as:\n( )\n1 1 1 * l l l Te Te Te D D A D     - - = + -(9)\nWhere the 1 l Te D previous solution provided by the leaner,  is the teaching factor and  represents the mean.\nHence, by substituting equations ( 6) and ( 9) in equation ( 8) we get:\n  ( ) ' ' 1 1 , 0.5 3 0.5 * 2* 0.5 1 * 4 CT l l l CT Te CT Te Te D D D A A D A     - -         = + - + + -         (10)\nv) Termination: The termination of the optimization process in the proposed Fact-Finding\nInstructor Optimization algorithm occurs under two conditions:\n1. Proximity to the Objective Function: The algorithm stops when the solution acquired is close to the objective function. This means that the fitness or performance of the current solution is considered satisfactory, and further iterations are not necessary." }, { "figure_ref": [ "fig_4" ], "heading": "Maximum Iterations:", "publication_ref": [ "b9", "b2", "b3", "b4" ], "table_ref": [], "text": "The algorithm also terminates when the maximum specified number of iterations is reached. This is a predefined limit set at the beginning of the algorithm to control the computational resources and prevent excessive computation.\nThese termination conditions ensure that the optimization process in the Fact-Finding Instructor Optimization algorithm stops either when an acceptable solution is found (proximity to the objective function) or when the algorithm has reached the maximum allowed iterations.\nBy terminating the optimization process, the algorithm aims to strike a balance between computational efficiency and the quality of the obtained solution. It prevents unnecessary iterations while ensuring that the algorithm has sufficient opportunity to converge towards a satisfactory result.\nThe mathematical model of the FFI is shown in Fig. 3.\nIn the proposed Fact-Finding Instructor Optimization algorithm, the global best solution is determined using equation (10). This equation integrates the investigating characteristics of the instructor, along with the knowledge obtained through the learning characteristics of the instructor. In the pursuit of enhancing the clustering-based Blood Pressure estimation method, this research has undergone a transition in its approach, opting for the adoption of the Incremental K-means algorithm (Pham et al., 2004, Lin et al., 2004). The utilization of Incremental K-means introduces several compelling advantages that address key challenges encountered in traditional K-means clustering (Bagirov et al., 2011). 
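Before turning to the clustering stage, the outer loop of such an optimizer can be summarized in code. The sketch below is a generic population-based search skeleton that illustrates only the two termination criteria discussed above (proximity to the objective value and the iteration cap) around a placeholder update step; it is not the authors' exact fact-finding and chasing update rules, and the population size, tolerance, and perturbation scale are assumptions.

```python
import numpy as np

def optimize(fitness, dim, pop_size=20, max_iter=200, target=1.0, tol=1e-3, seed=0):
    """Generic population-based search skeleton.

    fitness: callable mapping a candidate vector to a score to maximize
             (e.g., clustering accuracy in [0, 1]).
    Stops when the best score is within `tol` of `target` (proximity to the
    objective) or when the iteration budget `max_iter` is exhausted.
    """
    rng = np.random.default_rng(seed)
    population = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    scores = np.array([fitness(c) for c in population])

    for iteration in range(max_iter):
        best_idx = int(np.argmax(scores))
        best, best_score = population[best_idx], scores[best_idx]

        # Termination criterion 1: solution close enough to the objective
        if abs(target - best_score) <= tol:
            return best, best_score, iteration

        # Placeholder update: move candidates toward the current best with a
        # small random perturbation (stands in for the fact-finding/chasing
        # and instructor-guided updates of the full algorithm).
        step = rng.uniform(0.0, 1.0, size=population.shape)
        population = (population + step * (best - population)
                      + 0.1 * rng.normal(size=population.shape))
        scores = np.array([fitness(c) for c in population])

    # Termination criterion 2: maximum number of iterations reached
    best_idx = int(np.argmax(scores))
    return population[best_idx], scores[best_idx], max_iter

if __name__ == "__main__":
    # Toy fitness: approaches 1.0 as the candidate nears the all-ones vector
    fit = lambda c: 1.0 / (1.0 + np.sum((c - 1.0) ** 2))
    best, score, iters = optimize(fit, dim=5)
    print("best score:", score, "iterations used:", iters)
```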
By harnessing the power of incremental learning and real-time data updates, Incremental K-means brings forth new opportunities to achieve more efficient and accurate BP estimation.\nThe goal of the incremental k-means algorithm is to minimize the within-cluster sum of squares, which represents the sum of the squared distances between each feature and its assigned cluster center." }, { "figure_ref": [], "heading": "Advantages of incremental K-means algorithm over traditional K-means algorithm:", "publication_ref": [ "b4" ], "table_ref": [], "text": "1. Real-time Updates: Incremental K-means allows for real-time updates as new data points arrive. It can efficiently handle data streams, making it suitable for dynamic and evolving datasets where data points are continuously added or updated. (Bagirov et al., 2011) 2. Memory Efficiency: Traditional K-means requires storing all data points in memory to calculate the centroids in each iteration. In contrast, incremental K-means updates the centroids on-the-fly, without the need to store the entire dataset in memory. This makes it more memory-efficient, especially for large datasets. -final_centroids: Updated cluster centroids -clusters: Assigned clusters for each data point\nStep 1: Initialize cluster centroids with provided centroids or randomly select K data points as initial centroids.\nStep 2: Assign each data point to the nearest centroid based on Euclidean distance. This step forms initial clusters.\nStep 3: Update the centroids by calculating the mean of data points in each cluster.\nStep 4: Repeat steps 2 and 3 until convergence or a specified number of iterations. Convergence can be determined by checking if the centroids remain unchanged between consecutive iterations." }, { "figure_ref": [], "heading": "Incremental K-means:", "publication_ref": [], "table_ref": [], "text": "Step 5: For each new data point (new_data) in the dataset:\na. Find the nearest centroid (nearest_centroid) to the new_data based on Euclidean distance.\nb. Update the centroid (nearest_centroid) by considering the new_data.\nTo do this, calculate the new mean of data points in the nearest_centroid cluster by incorporating the new_data. The updated centroid can be computed as:\nnearest_centroid = (current_sum_of_data_points_in_nearest_centroid + new_data) / (current_number_of_data_points_in_nearest_centroid + 1)(11)\nc. Reassign the new_data to the nearest_centroid.\nd. Repeat steps 2 and 3 with the updated centroid.\nStep 6: Continue the incremental process as new data points arrive." }, { "figure_ref": [], "heading": "Final Output:", "publication_ref": [ "b11", "b2", "b3", "b24" ], "table_ref": [], "text": "-final_centroids: Updated cluster centroids after incremental updates.\n-clusters: Assigned clusters for each data point after incremental updates.\nLet us consider the input feature vector for the k-means clustering Q and the cluster centers are notated as ak a a a ,..... 3 , 2 , 1\n. Here, from the total features concerning the patients, BP is subcategorized into 3 groups by considering the similarity among the extracted features. Here, by considering each patient's features Q , the following functions take place.\nStep 1: Initially, the features of patients are assigned to clusters, and the similarity between the assigned features and the cluster centers is evaluated. 
If the difference is substantial, the patient's features are assigned to a new cluster.\nStep 2: The evaluation of the new cluster using its center involves calculating the Euclidean distance, given by: Ei,j=√∑ (𝑓𝑖,𝑘 -𝑐 𝑗,𝑘) 2 𝑛 𝑘=1 (12) Where Ei,j is the Euclidean distance between the patient's features fi,k belonging to cluster i and the center cj,k of cluster j.\nStep 3: These steps are iteratively performed to minimize the loss through Euclidean distance-based evaluation, which can be formulated as:\n𝐿 = ∑ 𝑛 𝑖=1 ∑ 𝑑 𝑖,𝑗 𝑘 𝑗=1 ⋅ 𝐸 𝑖,𝑗 2(13)\nWhere N is the number of subjects, K is the number of clusters, di,j is a binary variable indicating if subject i is assigned to cluster j, and Ei,j is the Euclidean distance as calculated in Step2.\nStep 4: The iterations continue until termination criteria are met, typically when the cluster centers cease to change.\nFor BP estimation, the input feature vector encompasses various attributes extracted from subject's data, like amplitude, frequency, width, and statistical measures. This information-rich feature set characterizes patient data, reflecting relevant factors linked to their BP levels.\nWith Incremental K-means, the algorithm continuously adapts to evolving feature vectors and updates data point-cluster assignments (Pham et al., 2004, Lin et al., 2004). This ongoing learning approach accommodates real-time updates and changes in data patterns, enabling responsive tracking and categorization of BP levels based on corresponding patient features.\nLeveraging Incremental K-means enhances the efficiency and precision of BP estimation. This algorithm systematically organizes subjects with analogous feature patterns into clusters, providing a flexible and dynamic strategy for estimating BP levels (Kumalasari et al. 2020). This data-centric methodology ensures robust and up-to-date subject clustering, contributing to more accurate BP estimation and informed healthcare decisions." }, { "figure_ref": [], "heading": "Combining Clustered Approaches for Blood Pressure Classification:", "publication_ref": [], "table_ref": [], "text": "During the integration phase, the clusters derived from both the proposed Fact-Finding Instructor Optimization algorithm and the k-means algorithm are merged through a multiplication process.\nThis amalgamation aims to blend the distinct information and attributes obtained from each algorithm, culminating in an improved and accurate Blood Pressure (BP) estimation mechanism.\nUpon multiplication of the clusters, BP estimation is executed based on three distinct criteria: low BP, normal BP, and high BP. These criteria delineate the various ranges or classifications of Blood Pressure levels. By associating each cluster with one of these criteria, the estimation process categorizes patients' BP levels into these predefined groups.\nThe low BP category identifies patients with BP levels below the typical range, signifying hypotension. The normal BP encompasses patients with BP levels within the healthy range.\nMeanwhile, the high BP category pertains to patients with BP levels surpassing the standard range, indicating hypertension.\nBy leveraging these three criteria, the BP estimation process yields a comprehensive evaluation of patients' BP levels, facilitating enhanced identification and classification of their BP statuses. This information holds value for disease diagnosis, ongoing monitoring, and effective management. 
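A compact sketch of the incremental clustering step described above is given next, assuming NumPy feature vectors; the running-mean centroid update mirrors the form of equation (11), while the seed centroids, the data stream, and the association of the three clusters with low/normal/high BP are illustrative placeholders rather than the study's actual configuration.

```python
import numpy as np

class IncrementalKMeans:
    """Minimal incremental K-means: each centroid is updated as a running
    mean whenever a new feature vector arrives (cf. equation (11))."""

    def __init__(self, init_centroids):
        self.centroids = np.asarray(init_centroids, dtype=float)
        self.counts = np.ones(len(self.centroids))  # assume one seed point per centroid

    def assign(self, x):
        """Index of the nearest centroid by Euclidean distance."""
        d = np.linalg.norm(self.centroids - x, axis=1)
        return int(np.argmin(d))

    def partial_fit(self, x):
        """Assign a new feature vector and update that centroid on-the-fly."""
        x = np.asarray(x, dtype=float)
        j = self.assign(x)
        self.counts[j] += 1
        # Running mean: new_centroid = old_centroid + (x - old_centroid) / n
        self.centroids[j] += (x - self.centroids[j]) / self.counts[j]
        return j

    def inertia(self, X):
        """Within-cluster sum of squared distances for a batch of points."""
        labels = np.array([self.assign(x) for x in X])
        return float(np.sum((X - self.centroids[labels]) ** 2)), labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    seeds = [[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]]   # e.g., low / normal / high BP groups
    model = IncrementalKMeans(seeds)
    stream = np.vstack([rng.normal(s, 0.5, size=(50, 2)) for s in seeds])
    rng.shuffle(stream)
    for x in stream:                                 # data points arriving one by one
        model.partial_fit(x)
    wcss, labels = model.inertia(stream)
    print("centroids:\n", model.centroids, "\nWCSS:", wcss)
```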
It empowers healthcare practitioners to make well-informed choices and offer targeted interventions tailored to patients' varying BP conditions.\nTo conclude, the fusion of Incremental K-means into our clustering-oriented BP estimation framework equips us with the capacity to harness real-time data updates, adapt to dynamic subject features, and ultimately provide refined and dependable BP estimations." }, { "figure_ref": [], "heading": "Dataset description:", "publication_ref": [], "table_ref": [], "text": "The " }, { "figure_ref": [], "heading": "Results and Performance analysis:", "publication_ref": [], "table_ref": [], "text": "The evaluation of the introduced FFI-driven clustering method's effectiveness in distinguishing " }, { "figure_ref": [], "heading": "Comparative methods:", "publication_ref": [], "table_ref": [], "text": "In this study, the performance of the proposed FFI-based clustering technique for audio signal analysis is compared with several traditional methods. These methods include: " }, { "figure_ref": [], "heading": "Comparative discussion", "publication_ref": [], "table_ref": [], "text": "The FFI-based clustering technique consistently shows improved performance compared to the other traditional methods across multiple performance measures. It achieves higher scores in most of the measures, indicating its effectiveness in clustering and analysing audio signals for Blood Pressure estimation. Case Study-You Tube Videos:" }, { "figure_ref": [], "heading": "FFI Outperforms Traditional", "publication_ref": [], "table_ref": [], "text": "In the digital age, YouTube has emerged as one of the most popular platforms for content consumption, providing a vast array of videos ranging from entertainment and educational content to news and lifestyle vlogs. With billions of users and an ever-expanding library of videos, YouTube has become a powerful medium of communication, shaping the way we access information and experience emotions. Amidst this vast sea of diverse content, it is fascinating to explore the potential impact that YouTube videos can have on our emotional well-being and physiological responses.\nIn recent years, there has been a growing interest in understanding how various forms of media, including videos, can influence human emotions and even physiological parameters such as Blood Pressure (BP). Studies have shown that emotional experiences can significantly affect Blood Pressure levels, with heightened emotions often leading to temporary fluctuations in BP.\nIn this context, exploring the relationship between YouTube videos and their impact on Blood Pressure assumes great significance. YouTube's vast repository of content covers a spectrum of emotions, from heartwarming and humorous videos that elicit joy and laughter to intense and suspenseful videos that evoke fear or anxiety. The emotional content and intensity conveyed through these videos have the potential to elicit diverse physiological responses in viewers.\nThe research work aims to present an efficient BP estimation technique using time series speech data extracted from YouTube videos. The approach capitalizes on the Fact-Finding Instructor " }, { "figure_ref": [], "heading": "Methodology:", "publication_ref": [], "table_ref": [], "text": "Data Collection: A collection of daily videos featuring some cool, happy, motivational thoughts like Spiritual Gurus as well as some aggressiveness, was obtained from online sources." 
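The clustering-quality measures referenced in the performance analysis above can be reproduced with standard tooling. The sketch below assumes scikit-learn is available and substitutes synthetic placeholder data for the extracted speech features; Dunn's index is not provided by scikit-learn, so a simple pairwise-distance version is computed by hand, and the label-alignment step is an assumption needed to make the Jaccard score meaningful for arbitrary cluster indices.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import (davies_bouldin_score, silhouette_score,
                             homogeneity_score, completeness_score, jaccard_score)

def dunn_index(X, labels):
    """Dunn's index: smallest between-cluster distance divided by the
    largest within-cluster diameter (higher is better)."""
    clusters = [X[labels == k] for k in np.unique(labels)]
    diam = max(np.max(np.linalg.norm(c[:, None] - c[None, :], axis=-1))
               for c in clusters)
    sep = min(np.min(np.linalg.norm(a[:, None] - b[None, :], axis=-1))
              for i, a in enumerate(clusters) for b in clusters[i + 1:])
    return sep / diam

if __name__ == "__main__":
    # Placeholder feature matrix standing in for the extracted speech features
    X, y_true = make_blobs(n_samples=150, centers=3, random_state=0)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # Map each arbitrary cluster index to the majority true class so that
    # label-based scores such as Jaccard similarity are meaningful.
    aligned = np.empty_like(labels)
    for k in np.unique(labels):
        mask = labels == k
        aligned[mask] = np.bincount(y_true[mask]).argmax()

    print("Davies-Bouldin :", davies_bouldin_score(X, labels))   # lower is better
    print("Silhouette     :", silhouette_score(X, labels))       # higher is better
    print("Homogeneity    :", homogeneity_score(y_true, labels))
    print("Completeness   :", completeness_score(y_true, labels))
    print("Jaccard (macro):", jaccard_score(y_true, aligned, average="macro"))
    print("Dunn's index   :", dunn_index(X, labels))
```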
}, { "figure_ref": [], "heading": "Daily Emotional Segments:", "publication_ref": [ "b3" ], "table_ref": [], "text": "The time series analysis of the daily online videos commences with a preprocessing step, wherein adaptive filters are utilized to remove noise and artifacts, ensuring the quality of the speech signals for subsequent analysis. Feature extraction is then carried out to generate a feature vector comprising both statistical-based features (e.g., zero crossing, entropy) and formant features, specifically tailored to facilitate time series analysis and reduce computational complexity (Lin et al., 2004) Here are the results of extracted features from time series analysis of angry and calm audio clips: " }, { "figure_ref": [], "heading": "Mel Frequency Cepstral Coefficients", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "MFCC1 MFCC2 MFCC3", "publication_ref": [], "table_ref": [], "text": "To cluster the time series, feature vectors effectively, we employ a two-pronged approach. Initially, k-means clustering is applied to group similar feature vectors. Subsequently, the Fact-Finding Instructor Optimization algorithm is integrated into the clustering process, further enhancing the accuracy of the BP estimation. To optimize the model's performance, the clustering algorithm is fine-tuned using a real-time dataset with actual BP recordings, where ground truth labels are available for training. " }, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Multiple R:", "publication_ref": [], "table_ref": [], "text": "The multiple correlation coefficient (Multiple R) is a measure of the overall correlation between the independent variables and the dependent variable. In this case, the Multiple R value is approximately 0.894, indicating a strong positive relationship between the feature vector and the Blood Pressure." }, { "figure_ref": [], "heading": "R Square:", "publication_ref": [], "table_ref": [], "text": "The coefficient of determination (R Square) represents the proportion of the variance in the dependent variable that can be explained by the independent variables. Here, the R Square is approximately 0.92, which means that approximately 92% of the variability in the Blood Pressure is accounted for by the feature vector in the model." }, { "figure_ref": [], "heading": "Adjusted R Square:", "publication_ref": [], "table_ref": [], "text": "The adjusted R Square considers the number of independent variables and adjusts the R Square accordingly. In this case, the Adjusted R Square is approximately 0.744, indicating that the model is well-fitted to the data, and the chosen feature vector is strongly related to the Blood Pressure. " }, { "figure_ref": [], "heading": "Interpretation of Regression Analysis:", "publication_ref": [], "table_ref": [], "text": "The strong positive multiple R indicates a significant correlation between the independent variables and the dependent variable. 
The high R Square and Adjusted R Square values suggest that the independent variables in the model are effective predictors, explaining a substantial portion (approximately 92%) of the variability in the dependent variable.\nAdditionally, the relatively low standard error of approximately 4.339 indicates that the model's predictions are generally close to the actual values of the dependent variable.\nOverall, the regression analysis demonstrates a robust relationship between the extracted features from audio and the Blood Pressure. The model's strong performance suggests that the selected features are valuable for predicting the Blood Pressure, leading to accurate estimates or predictions. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "" } ]
Blood Pressure (BP) estimation plays a pivotal role in diagnosing various health conditions, highlighting the need for innovative approaches that overcome the challenges of conventional measurement. Leveraging machine learning and speech signals, this study investigates accurate BP estimation with a focus on preprocessing, feature extraction, and real-time applications. An advanced clustering-based strategy, incorporating the k-means algorithm and the proposed Fact-Finding Instructor optimization algorithm, is introduced to enhance accuracy, and the combined outcome of the two clustering techniques enables robust BP estimation. Extending beyond these insights, the study also examines contemporary digital content consumption. Platforms such as YouTube have emerged as influential spaces, presenting videos that evoke diverse emotions; from heartwarming and amusing content to intense narratives, YouTube captures a spectrum of human experiences and shapes both information access and emotional engagement. Within this context, the research investigates the interplay between YouTube videos and physiological responses, particularly Blood Pressure levels. By integrating the proposed BP estimation technique with the emotional dimensions of YouTube videos, the study enriches our understanding of how modern media environments intersect with health. Performance evaluation using the Davies-Bouldin score, homogeneity, completeness, Jaccard similarity, silhouette score, and Dunn's index demonstrates substantial improvements, particularly at a 90% training percentage. The method offers promising potential for accurate BP estimation, contributing to the evolution of assessment methodologies and ultimately enhancing healthcare outcomes.
Speech-Based Blood Pressure Estimation with Enhanced Optimization and Incremental Clustering
[ { "figure_caption": "Fig. 1 :1Fig. 1: Block Diagram of proposed Fact Finding Instructor based BP estimation. Finally, the output generated by both clustering operations is combined by multiplying them together. This combined output serves as the estimation of the BP. By leveraging the clustering results and the optimization algorithm, the proposed method aims to provide an accurate estimation of BP based on the speech signal. The proposed Fact-Finding Instructor-based BP estimation method is illustrated in Fig.1, which visually depicts the different stages of the algorithm and their interconnections. Overall, this method utilizes preprocessing, feature extraction, clustering algorithms, and optimization techniques to estimate BP from speech signals. By integrating the fact finding and instructor components, it aims to enhance the accuracy and reliability of BP estimation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig.2: General principal of the proposed FFI algorithm Let the fact-finding team is notated as FF and the chasing team is notated as CT . The position of the suspect that needs to be investigated is notated as  FF D , which  refers to the location of the", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "here the value of rand ranges between [0,1]. l FF D  is the location of the suspect location, the objective location (possible location) of the suspect is notated as  FF C , and the location of the new suspect is denoted as ( ) '  FF C", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Aand ( ) 2 l FF D  is the second location of the suspect generated by the fact-finding team.iii) Action: The fact-finding team provides the best location for the suspect to identify the suspect.The chasing team starts the suspect chasing in a coordinated fashion and the search agent named police  CT reaches the suspect's location hence the new location of the police agent is notated as, Fitness is evaluated for the attainment of the desired solution for solving optimization issues. In the proposed fact-finding optimization algorithm for clustering the BP of the patients, the accuracy of clustering is evaluated as a fitness function and is expressed as, positive is notated as tp BP true negative is notated as tn BP false positive is notated as fp BP and false negative is notated as fn BP and the fitness function is notated as fit BP . Equation (6) increases the convergence rate of the algorithm by considering the investing characteristics of the search agents. It is mentioned that the instructor should possess' high knowledge to direct the chasing team to avoid trapping in the local optima. Hence, the final updated equation of the algorithm is given by,", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig.3: Mathematical model of the FFI algorithm", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "3 .3Faster Convergence: Incremental K-means can converge faster than traditional K-means, as it updates the centroids incrementally with each new data point. This reduces the number of iterations needed to reach convergence. 4. Scalability: Incremental K-means is more scalable than traditional K-means, particularly for datasets with many data points or dimensions. 
It can process new data points efficiently without reprocessing the entire dataset (Pham et al., 2004). 5. Online Learning: Incremental K-means supports online learning, where the algorithm continuously learns from new data points, allowing for adaptive and up-to-date clustering models (Hong et al., 2008). 6. Handling Concept Drift: Incremental K-means is well-suited for handling concept drift, which refers to changes in the underlying data distribution over time. It can adapt to changes in the data distribution and update the clustering accordingly (Pham et al., 2004).7. Flexibility: Incremental K-means allows users to control the rate of incremental updates, making it more flexible in adapting to specific application requirements(Pham et al., 2004).8. Reduced Computation: As incremental K-means only updates the relevant clusters affectedby the new data point, it reduces unnecessary computations compared to traditional Kmeans, which recalculates all centroids in each iteration.Overall, incremental K-means is particularly advantageous for scenarios where the data is continuously changing, and real-time updates and scalability are essential. It is well-suited for applications such as online learning, stream data analysis, and handling large and dynamic datasets(Pham et al., 2004, Lin et al., 2004). The pseudocode of incremental clustering algorithm is given in Algorithm 5.1.", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "real-time dataset used in this study consists of Blood Pressure values and corresponding voice signals collected from 25 participants. The participants selected for the dataset fall within the age group of 20 to 65. Out of 25, there are 12 men and 13 female participants, and it is discovered that Blood Pressure has an impact on seven of the participants. By examining patient auditory recordings, the dataset seeks to let doctors precisely determine each patient's Blood Pressure. Each participant's data in the dataset includes their Blood Pressure measurement, which typically consists of Systolic and Diastolic values, and the corresponding voice signal captured during the measurement process. The voice signals are obtained using suitable recording devices or systems. By utilizing this real-time dataset, researchers and physicians can analyse the relationship between the voice signals and Blood Pressure values. This analysis can aid in developing techniques and algorithms for accurately estimating Blood Pressure based on audio signals.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Performance analysis based on epoch a) Davies Bouldin score. b) Homogeneity score c) Completeness score d) Jacquard similarity score e) Silhouette score f) Dunn's index.", "figure_data": "", "figure_id": "fig_7", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "1 .1Fuzzy C-means clustering (FCM): Haque and Kim (2013) proposed the FCM clustering algorithm for audio signal analysis. It is based on fuzzy logic and aims to partition the input audio signals into different clusters based on similarity. 2. K-means clustering: Kumalasari et al. (2020) utilized the K-means clustering algorithm for audio signal analysis. 
K-means is a popular unsupervised clustering algorithm that aims to partition data into K clusters based on minimizing the sum of squared distances between data points and cluster centroids.", "figure_data": "", "figure_id": "fig_8", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "3 . 5 .Fig. 5 :355Fig. 5 compares the proposed FFI-based clustering technique to various conventional clusteringbased strategies in a comparative analysis. The Davies Bouldin score for the FFI-based clustering methodology and the other conventional methods are shown in Fig. 5(a). At a training percentage of 90, the suggested method outperforms the existing TLO-based clustering strategy by 0.20 percent. The homogeneity score for the FFI-based clustering strategy in comparison to the other conventional methods is shown in Fig. 5(b). Comparing the suggested method to the existing TLO-based clustering strategy at a training percentage of 90, the performance improvement is 6.56%. The completeness score for the FFI-based clustering methodology and the other conventional methods are shown in Fig. 5(c). Comparing the suggested method to the existing TLO-based clustering strategy at a training percentage of 90, the performance improvement is 0.62%. The Jacquard similarity score for the FFI-based clustering methodology and the other conventional methods are shown in Fig. 5(d). At a training percentage of 90, the suggested method outperforms the existing TLO-based clustering strategy by 0.10 percent.", "figure_data": "", "figure_id": "fig_9", "figure_label": "355", "figure_type": "figure" }, { "figure_caption": "2 . 3 .23Methods: The FFI-based clustering technique achieves the highest scores in terms of Davies Bouldin score, homogeneity score, completeness score, Jacquard similarity score, silhouette score, and Dunn's index. This demonstrates its superiority in capturing the underlying patterns and grouping the input audio signals accurately. Strong Homogeneity and Completeness: The FFI-based clustering technique stands out in terms of homogeneity and completeness scores. It achieves a homogeneity score of 0.961 and completeness score of 0.961, indicating high consistency within the clusters and capturing all the data points within the respective clusters effectively. Consistent Improvement: Compared to the previous TLO-based clustering technique, the FFI-based clustering technique shows notable performance improvements across different (e) (f) measures. The performance improvement ranges from 0.10% to 6.56%, further highlighting the efficacy of the FFI-based approach. Time Series data analysis: Time series data refers to a collection of observations recorded over regular time intervals, forming a sequence of data points(Pham et al., 2004, Lin et al., 2004) . In our study, time series voice data captures the speech signals of different individuals at various time points, allowing us to examine how speech characteristics affect the individual Blood Pressure.", "figure_data": "", "figure_id": "fig_10", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Fig. 8 :Fig. 9 :89Fig.8: Formant 1 Frequency values of angry and calm audio signals", "figure_data": "", "figure_id": "fig_11", "figure_label": "89", "figure_type": "figure" }, { "figure_caption": ":The successful integration of both the daily online videos in time series format and the real-time dataset leads to improved BP estimation accuracy. 
The novel clustering algorithm, incorporating both k-means clustering and the Fact-Finding Instructor Optimization algorithm, effectively captures temporal patterns in the speech signals, contributing to the overall effectiveness of the proposed technique. The regression statistics provide valuable insights into the relationship between the Blood Pressure and the feature vector. Let's interpret the key regression statistics depicted in", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "4 .4Standard Error: The standard error estimates the average difference between the actual values of the dependent variable and the predicted values from the regression model. A lower standard error indicates a better fit of the model to the data. Here, the standard error is approximately 4.339, which means that the model's predictions might have an average error of around 4.339 units from the actual values of the dependent variable.", "figure_data": "", "figure_id": "fig_13", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "This research presents a comprehensive time series-based clustering approach for BP estimation using daily online videos. The incorporation of the novel clustering algorithm, fine-tuned with realtime dataset, showcases the adaptability and robustness of the proposed technique. By leveraging temporal patterns in the speech signals, the method offers promising advancements in healthcare maintenance and disease diagnosis in cardiovascular-related applications. Overall, this research highlights the potential of time series analysis in medical data analysis, paving the way for future developments in cardiovascular function evaluation using large-scale speech datasets. Overall, the FFI-based clustering technique demonstrates superior performance and outperforms the other traditional methods in accurately clustering and analysing audio signals for Blood Pressure estimation. It showcases its potential as an effective method for assisting physicians in determining the Blood Pressure of patients using audio signals. By integrating the expertise of the instructor and the investigative behaviour of the fact-finding team, the proposed Fact-Finding Instructor Optimization algorithm achieves more accurate solutions for clustering the BP features. The algorithm balances intensification and diversification phases to avoid getting trapped in local optima and ensures the discovery of the global best solution. This leads to optimal clustering of the patient's BP, ultimately contributing to accurate diagnosis.", "figure_data": "", "figure_id": "fig_14", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "possible location of the suspect's hiding place is analyzed in this stage and the first location of the suspect is denoted as ( )' ", "figure_data": "and is considered as the populationsize. The iteration of the proposed Fact Finding Instructor Optimization algorithm has been notatedi and its maximal value is indicated asi . 
The dimension of the parameter vector is notated as Ymax.i)  D , and the general formulation isFFexpressed as,", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Real-time dataset design types and objectives", "figure_data": "Design TypesStudy objectiveSampleHomo sapiensFactorDiagnosisMeasurementBlood Pressure analysisDevicesBoya BY-M1 Omni directional microphone, Omronhem 7120 Blood Pressure monitorTable 1 provides an overview of the design types and objectives associated with the real-time datasetused in the study.", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "To enhance the BP estimation accuracy, the model is trained using a real-time dataset with actual BP recordings as ground truth labels. The successful integration of both online video data and the real-time dataset demonstrates the potential of this time series-based clustering approach for improving BP estimation in medical research and healthcare applications.The proposed method integrates a newly developed clustering algorithm, combining k-means clustering with the Fact-Finding Instructor Optimization algorithm to effectively capture temporal patterns in the speech signals, leading to enhanced BP estimation accuracy. By considering YouTube speech data from the daily online videos in a time series format, this research aims to leverage the rich temporal information embedded in the data, furthering the understanding of BP variations.", "figure_data": "fine-tuned with both YouTube time series speech data and the real-time dataset, taking advantage ofboth dataset's characteristics.Time Series AnalysisOptimization algorithm, designed to handle time series data, and leverages both YouTube data'sabundance and real-time dataset's ground truth labelling.The proposed time series-based BP estimation method begins with preprocessing the YouTubespeech data to remove noise and artifacts using adaptive filters. Feature extraction is performed togenerate a feature vector, comprising statistical-based features (e.g., zero crossing, entropy) andformant features, tailored for time series analysis to reduce computational complexity. Subsequently,k-means clustering, and the Fact-Finding Instructor Optimization algorithm are applied to groupsimilar time series feature vectors together. To train the model, the real-time dataset with actualBlood Pressure recordings is used, providing ground truth labels for model optimization. The modelis trained on the real-time dataset to enhance BP estimation accuracy. The clustering algorithm is", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Several relevant features from both angry and calm audio clips are extracted. These features included the Zero Crossing Rate, Spectral Centroid, Energy, and the first three Mel-Frequency Cepstral Coefficients (MFCC1, MFCC2, and MFCC3).", "figure_data": "Zero Crossing RateRate Values0.02 0.03 0.04 0.05Crossing0 0.011 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27ZeroTime series dataAngryFig. 6: Zero Crossing Rate values of angry and calm audio signals3. MFCC Differences: The Mel-Frequency Cepstral Coefficients (MFCCs) also exhibitedvariations between the two emotional states. In particular, the MFCC1 coefficient, whichrepresents the overall energy level, tended to be lower in calm audio clips compared to angryones. 
Additionally, certain other MFCCs (e.g., MFCC2 and MFCC3) displayed differencesin their spectral characteristics.Spectral Centroid3200Values3000Centroid2600 2800Spectral2200 240020001 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27Additionally, we obtained the first two Formant Frequencies (Formant 1 and Formant 2) to Time series datacapture distinct characteristics of the vocal tract. Angry Calm2. Distinct Patterns: The analysis revealed distinct patterns in the extracted features betweenangry and calm audio clips. Notably, the Zero Crossing Rate, Spectral Centroid, and Energy Fig.7: Spectral Centroid values of angry and calm audio signalstended to show significant variations between the two emotional states. Calm audio clipsgenerally exhibited lower Zero Crossing Rate, lower Spectral Centroid, and lower Energyvalues compared to the corresponding features in angry audio clips.", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Regression Statistics", "figure_data": "Multiple R0.89418R Square0.9234Adjusted R Square0.743879Standard Error4.339477", "figure_id": "tab_4", "figure_label": "22", "figure_type": "table" } ]
Vaishali Rajput; Preeti Mulay; Rajeev Raje
[ { "authors": "A Argha; B G Celler; N H Lovell", "journal": "IEEE Journal of Biomedical and Health Informatics", "ref_id": "b0", "title": "A Novel Automated Blood Pressure Estimation Algorithm Using Sequences of Korotkoff Sounds", "year": "2021" }, { "authors": "A Farki; R B Kazemzadeh; E A Noughabi", "journal": "", "ref_id": "b1", "title": "A Novel Clustering-Based Algorithm for Continuous and Non-invasive Cuff-Less Blood Pressure Estimation", "year": "2021" }, { "authors": "D T Pham; S S Dimov; C T Nguyen", "journal": "Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science", "ref_id": "b2", "title": "An Incremental K-means algorithm", "year": "2004" }, { "authors": "J Lin; M Vlachos; E Keogh; D Gunopulos", "journal": "", "ref_id": "b3", "title": "Iterative incremental clustering of time series", "year": "2004" }, { "authors": "A M Bagirov; J Ugon; D Webb", "journal": "Pattern Recognition", "ref_id": "b4", "title": "Fast modified global k-means algorithm for incremental cluster construction", "year": "2011" }, { "authors": "Y Hong; S Kwong; Y Chang; Q Ren", "journal": "Pattern Recognition", "ref_id": "b5", "title": "Unsupervised feature selection using clustering ensembles and population based incremental learning algorithm", "year": "2008" }, { "authors": "H Ankışhan", "journal": "Biomedical Signal Processing and Control", "ref_id": "b6", "title": "Blood Pressure prediction from speech recordings", "year": "2020" }, { "authors": "Prableen Kaur; Manik Sharma", "journal": "Journal of medical systems", "ref_id": "b7", "title": "Diagnosis of human psychological disorders using supervised learning and nature-inspired computing techniques: a meta-analysis", "year": "2019" }, { "authors": "Ritu Gautam; Prableen Kaur; Manik Sharma", "journal": "Progress in Artificial Intelligence", "ref_id": "b8", "title": "A comprehensive review on nature inspired computing algorithms for the diagnosis of chronic disorders in human beings", "year": "2019" }, { "authors": "Ritu Gautam; Manik Sharma", "journal": "Journal of medical systems", "ref_id": "b9", "title": "Prevalence and diagnosis of neurological disorders using different deep learning techniques: a meta-analysis", "year": "2020" }, { "authors": "Manik Sharma; G Singh; R Singh", "journal": "IRBM", "ref_id": "b10", "title": "Stark assessment of lifestyle based human disorders using data mining based learning techniques", "year": "2017" }, { "authors": "M Liu; L M Po; H Fu", "journal": "International Journal of Computer Theory and Engineering", "ref_id": "b11", "title": "Cuffless Blood Pressure estimation based on photoplethysmography signal and its second derivative", "year": "2017" }, { "authors": "M Shahabi; V R Nafisi; F Pak", "journal": "", "ref_id": "b12", "title": "Prediction of intradialytic hypotension using PPG signal features", "year": "2015" }, { "authors": "L Song; A Smola; A Gretton; J Bedo; K Borgwardt", "journal": "Journal of Machine Learning Research", "ref_id": "b13", "title": "Feature Selection via Dependence Maximization", "year": "2012" }, { "authors": "A Krizhevsky; I Sutskever; G E Hinton", "journal": "Communications of the ACM", "ref_id": "b14", "title": "Imagenet classification with deep convolutional neural networks", "year": "2017" }, { "authors": "J Smith; A Tsiartas; E Shriberg; A Kathol; A Willoughby; M Zambotti", "journal": "", "ref_id": "b15", "title": "Analysis and prediction of heart rate using speech features from natural speech", "year": "2017" }, { "authors": "A 
Mesleh; D Skopin; S Baglikov; A Quteishat", "journal": "Journal of computer science and technology", "ref_id": "b16", "title": "Heart rate extraction from vowel speech signals", "year": "2012" }, { "authors": "K H Kim; S W Bang; S R Kim", "journal": "Medical and biological engineering and computing", "ref_id": "b17", "title": "Emotion recognition system using short-term monitoring of physiological signals", "year": "2004" }, { "authors": "F Miao; Z D Liu; J K Liu; B Wen; Q Y He; Liy ", "journal": "IEEE journal of biomedical and health informatics", "ref_id": "b18", "title": "Multi-sensor fusion approach for cuff-less Blood Pressure measurement", "year": "2019" }, { "authors": "Y Kurylyak; F Lamonaca; D Grimaldi", "journal": "", "ref_id": "b19", "title": "A Neural Network-based method for continuous Blood Pressure estimation from a PPG signal", "year": "2013" }, { "authors": "Sung - Eun; Jungyoon Jong; Kim", "journal": "Neural Computing and Applications", "ref_id": "b20", "title": "Development of intelligent healthcare system based on ambulatory Blood Pressure measuring device", "year": "2021" }, { "authors": "J S Chou; N M Nguyen", "journal": "Applied Soft Computing", "ref_id": "b21", "title": "FBI inspired meta-optimization", "year": "2020" }, { "authors": "R V Rao", "journal": "", "ref_id": "b22", "title": "Teaching-learning-based optimization algorithm", "year": "2016" }, { "authors": "Kristina P Sinaga; Miin-Shen Yang", "journal": "IEEE access", "ref_id": "b23", "title": "Unsupervised K-means clustering algorithm", "year": "2020" }, { "authors": "D Kumalasari; Abw Putra; Afo Gaffar", "journal": "Journal of Information technology", "ref_id": "b24", "title": "Speech classification using combination virtual center of gravity and k-means clustering based on audio feature extraction", "year": "2020" }, { "authors": "M A Haque; J M Kim", "journal": "Multimedia tools and applications", "ref_id": "b25", "title": "An analysis of content-based classification of audio signals using a fuzzy c-means algorithm", "year": "2013" }, { "authors": "N Liu; Z Xu; X J Zeng; P Ren", "journal": "Information Sciences", "ref_id": "b26", "title": "An agglomerative hierarchical clustering algorithm for linear ordinal rankings", "year": "2021" }, { "authors": "J Cai; H Wei; Yang ; H Zhao; X ", "journal": "IEEE Access", "ref_id": "b27", "title": "A novel clustering algorithm based on DPC and PSO", "year": "2020" }, { "authors": "R Nagarajan", "journal": "International Journal of Electrical Engineering and Technology (IJEET)", "ref_id": "b28", "title": "A Graph Based Text Document Clustering Using Harris Hawks Optimizer", "year": "2020" } ]
[ { "formula_coordinates": [ 7, 54.24, 176.52, 329.22, 30.33 ], "formula_id": "formula_0", "formula_text": " CT D , which  refers to CT X ,..., 2 , 1 = " }, { "formula_coordinates": [ 7, 218.54, 230.48, 47.42, 9.33 ], "formula_id": "formula_1", "formula_text": "FF X = CT X = X" }, { "formula_coordinates": [ 7, 54.24, 346.18, 292.31, 62.52 ], "formula_id": "formula_2", "formula_text": "( )             + =  1 * 1 1 ' B D A D D B l FF FF FF    (1) where, Y l ,..... 2 , 1 =" }, { "formula_coordinates": [ 7, 55.94, 416.64, 68.85, 10.1 ], "formula_id": "formula_3", "formula_text": "  1 ,....... 2 , 1 1 -  a B" }, { "formula_coordinates": [ 7, 294.74, 418.19, 72, 8.02 ], "formula_id": "formula_4", "formula_text": "2 5 . 0  - = rand A ," }, { "formula_coordinates": [ 7, 121.96, 505.88, 229.1, 31.62 ], "formula_id": "formula_5", "formula_text": "( ) ( )       + - + = 2 * ' pl FF ql FF l FF FF FF D D D A D D    (2)" }, { "formula_coordinates": [ 7, 58.41, 561.49, 71.44, 8.19 ], "formula_id": "formula_6", "formula_text": "X p q ,......, 2 , 1 , ,  " }, { "formula_coordinates": [ 8, 156.22, 158.05, 187.61, 26.13 ], "formula_id": "formula_7", "formula_text": "( ) ( ) min max max e e C e D prob FF FF - - =   (3)" }, { "formula_coordinates": [ 8, 165.97, 233.35, 177.66, 23.19 ], "formula_id": "formula_8", "formula_text": "kl FF B l FF D D D * 2 1 min '  + =   (4)" }, { "formula_coordinates": [ 8, 54.24, 277.86, 329.22, 27.54 ], "formula_id": "formula_9", "formula_text": "  1 ,..... 2 , 1 2 -  a B ; 2 ,.... 2 , 1 B k = ; and ] 1 , 1 [- =  is considered as the coefficient of effectiveness. The value of 2" }, { "formula_coordinates": [ 8, 146.64, 311.07, 201.51, 14.99 ], "formula_id": "formula_10", "formula_text": "( ) ( ) pl FF ql FF bl FF l FF D D A D D D - + + = * 5 min 2  (5)" }, { "formula_coordinates": [ 8, 164.41, 660.44, 187.01, 13.2 ], "formula_id": "formula_11", "formula_text": "l l l CT Te CT Te D D D        = +    (8)" }, { "formula_coordinates": [ 9, 80.32, 159.99, 12.22, 12.05 ], "formula_id": "formula_12", "formula_text": "1 e D " }, { "formula_coordinates": [ 9, 164.4, 194.99, 196.52, 11.04 ], "formula_id": "formula_13", "formula_text": "1 1 1 * l l l Te Te Te D D A D     - - = + -(9)" }, { "formula_coordinates": [ 9, 83.05, 266.25, 278.29, 46.78 ], "formula_id": "formula_14", "formula_text": "  ( ) ' ' 1 1 , 0.5 3 0.5 * 2* 0.5 1 * 4 CT l l l CT Te CT Te Te D D D A A D A     - -         = + - + + -         (10)" }, { "formula_coordinates": [ 12, 69.79, 546.88, 297.69, 21.7 ], "formula_id": "formula_15", "formula_text": "nearest_centroid = (current_sum_of_data_points_in_nearest_centroid + new_data) / (current_number_of_data_points_in_nearest_centroid + 1)(11)" }, { "formula_coordinates": [ 13, 154.15, 399.52, 201.78, 21.51 ], "formula_id": "formula_16", "formula_text": "𝐿 = ∑ 𝑛 𝑖=1 ∑ 𝑑 𝑖,𝑗 𝑘 𝑗=1 ⋅ 𝐸 𝑖,𝑗 2(13)" } ]
10.18653/v1/2022.insights-1.11
2023-11-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b12", "b14", "b0" ], "table_ref": [], "text": "The public release of large language model (LLM) products like ChatGPT has triggered a wave of enthusiasm for NLP technologies. As more people discover the wealth of opportunities enabled by Figure 1: The UMLS update process in (a) introduces atoms from individual sources into the original UMLS as synonyms of existing concepts or entirely new concepts. The UVA task is formulated as binary synonymy prediction (b) and was thus unable to tackle the realworld update task addressed by our UVI formulation.\nthese technologies, NLP systems will be expected to perform in a wide variety of real-world scenarios. However, even as LLMs get increasingly more capable, it is unlikely that they will lead to translational solutions alone. Although many aspects are crucial for an NLP system's success, we use this work to highlight one key aspect of building real-world systems which is sometimes taken for granted: formulating a problem in a way that is well-aligned with its real-world counterpart. To explore the effect of this key step in building realworld NLP systems, we provide a case study on the important task of UMLS vocabulary insertion. The Unified Medical Language System (UMLS) (Bodenreider, 2004) is a large-scale biomedical knowledge base that standardizes over 200 medical vocabularies. The UMLS contains approximately 16 million source-specific terms, referred to as atoms, grouped into over 4 million unique concepts, making it one of the most comprehensive publicly available biomedical knowledge bases and a crucial resource for biomedical interoperability. Many of the vocabularies which make up the UMLS are independently updated to keep up with the rapidly advancing biomedical research field. In order for this essential public resource to remain up-to-date, a team of expert editors painstakingly identify which new atoms should be integrated into existing UMLS concepts or added as new concepts, as shown in Figure 1a. This process, which we refer to as UMLS vocabulary insertion (UVI), involves inserting an average of over 300,000 new atoms into the UMLS and is carried out twice a year before each new UMLS version release.\nDespite its importance, scale and complexity, this task is accomplished by editors using lexical information (McCray et al., 1994), synonymy information provided by the source vocabularies and their own expertise. In order to improve this process, much work has been done to augment it with modern NLP techniques. In Nguyen et al. (2021), the authors introduce datasets and models which explore the task of UMLS vocabulary alignment (UVA). As seen in Figure 1b, the authors formulate the UVA task as a binary synonymy prediction task between two UMLS atoms, while the real-world task requires the whole UMLS to be considered and a concept to be predicted for each new atom (unless it is deemed a new concept atom). Unfortunately, while the UVA task has successfully explored biomedical synonymy prediction, its formulation has made it unable to yield practical improvements for the UVI process.\nIn this work, we attempt to address this gap with a novel UVI problem formulation, also depicted in Figure 1b. Our formulation follows the realworld task exactly by predicting whether a new atom should be associated with an existing concept or identified as a new concept atom. 
We introduce five datasets taken directly from actual UMLS updates starting from the second half of 2020 until the end of 2022. These datasets enabled us to measure the real-world practicality of our systems and led us to findings we could not have discovered otherwise. First, we find that adapting UVA models to perform the UVI task yields much higher error rates than in their original task, showing that their strong performance does not transfer to the real-world setting. Second, contrary to previous work (Bajaj et al., 2022), we find that biomedical language models (LMs) outperform previous UVA models. Thirdly, we discover that rule-based and deep learning frameworks greatly improve each other's performance. Finally, inspired by biomedical entity linking and the complementary nature of our baseline systems, we propose a null-aware and rule-enhanced re-ranking model which outperforms all other methods and achieves low error rates on all five UMLS update datasets. To show our model's practical utility, we quantitatively evaluate its robustness across UMLS update versions and semantic domains, conduct a comparative evaluation against the second best method and carry out a qualitative error analysis to more deeply understand its limitations. We hope that our case study helps researchers and practitioners reflect on the importance of problem formulation for the translational success of NLP systems." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "UMLS Vocabulary Alignment", "publication_ref": [ "b14", "b16", "b14", "b14", "b10", "b11", "b0" ], "table_ref": [], "text": "Previous work to improve UMLS editing formulates the problem as biomedical synonymy prediction through the UMLS vocabulary alignment task (Nguyen et al., 2021(Nguyen et al., , 2022;;Wijesiriwardene et al., 2022). These investigations find that deep learning methods are effective at predicting synonymy for biomedical terms, obtaining F1 scores above 90% (Nguyen et al., 2021). Although this formulation can help explore biomedical synonymy prediction, it does not consider the larger UMLS updating task and thus the strong performance of these models does not transfer to real-world tasks such as UVI.\nApart from the clear difference in scope between UVA and UVI shown in Figure 1b, major differences in evaluation datasets contribute to the gap in UVA's applicability to the UVI task. In Nguyen et al. (2021), the authors built a synonymy prediction dataset with almost 200 million training and test synonym pairs to approximate the largescale nature of UMLS editing. UVA dataset statistics can be found in Appendix A. Since the UVA test set was created using lexical similarity aware negative sampling, it does not hold the same distribution as all the negative pairs in the UMLS. Since the UVI task considers all of the UMLS, UVA sampling leads to a significant distribution shift between these tasks. This unfortunately diminishes the usefulness of model evaluation on the UVA dataset for the real-world task. Surprisingly, this gap results in biomedical language models like BioBERT (Lee et al., 2019) and SapBERT (Liu et al., 2021) underperforming previous UVA models in the UVA dataset Bajaj et al. (2022) while outperforming them in our experiments." 
}, { "figure_ref": [], "heading": "Biomedical Entity Linking", "publication_ref": [ "b4", "b8", "b9", "b5", "b11", "b20", "b19", "b11", "b20", "b19", "b11", "b15", "b3", "b11" ], "table_ref": [], "text": "In the task of biomedical entity linking, terms mentioned within text must be linked to existing concepts in a knowledge base, often UMLS. Our own task, UMLS vocabulary insertion, follows a similar process except for three key differences: 1) relevant terms come from biomedical vocabularies rather than text, 2) some terms can be new to the UMLS and 3) each term comes with source-specific information. Many different strategies have been used for biomedical entity linking such as expert-written rules (D'Souza and Ng, 2015), learning-to-rank methods (Leaman et al., 2013), models that combine NER and entity-linking signals (Leaman and Lu, 2016;Furrer et al., 2022) and language model fine-tuning (Liu et al., 2021;Zhang et al., 2022;Yuan et al., 2022). Due to the strong parallels between biomedical entity-linking and our task, we leverage the best performing LM based methods for the UVI task Liu et al. (2021); Zhang et al. (2022); Yuan et al. (2022). These methods finetune an LM to represent synonymy using embedding distance, enabling a nearest neighbor search to produce likely candidates for entity linking.\nThe first difference between biomedical entity linking and UVI is addressed by ignoring textual context as done in Liu et al. (2021), which we adopt as a strong baseline. The second difference, that some new atoms can be new to the UMLS, is addressed by work which includes un-linkable entities in the scope of their task (Ruas and Couto, 2022;Dong et al., 2023). In these, a cross-encoder candidate module introduced by Wu et al. ( 2020) is used to re-rank the nearest neighbors suggested by embedding methods like Liu et al. (2021) with an extra candidate which represents that the entity is unlinkable, or in our case, a new concept atom. The third difference has no parallel in biomedical entity linking since mentions do not originate from specific sources and is therefore one of our contributions in §4.6." }, { "figure_ref": [], "heading": "UMLS Vocabulary Insertion", "publication_ref": [], "table_ref": [], "text": "We refer to UMLS Vocabulary Insertion (UVI) as the process of inserting atoms from updated or new medical vocabularies into the UMLS. In this task, each new term encountered in a medical source vocabulary is introduced into the UMLS as either a synonym of an existing UMLS concept or as an entirely new concept. In this section, we describe our formulation of the UVI task, the baselines we adapted from previous work, as well as a thorough description of our proposed approach." }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "First, we define the version of the UMLS before the update as K := {c 1 , ..., c n }, a set of unique UMLS concepts c i . Each concept c i is defined as c i := {a i 1 , ..., a i k i } where each atom a i j , as they are referred to by the UMLS, is defined as the j th source-specific synonym for the i th concept in the UMLS.\nIn the UMLS Vocabulary Insertion (UVI) task, a set of m new atoms Q := {q 1 , ..., q m } must be integrated into the current set of concepts K. 
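As a hedged sketch of the embedding-distance candidate generation described above (not the original implementation), the snippet below encodes atom strings with a SapBERT-style checkpoint and retrieves nearest neighbours by cosine similarity. The model identifier, maximum sequence length, and use of the [CLS] vector are assumptions; any biomedical encoder fine-tuned for synonymy could be substituted.

```python
# Minimal bi-encoder candidate generation: embed strings, then nearest-neighbour search.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel

MODEL = "cambridgeltl/SapBERT-from-PubMedBERT-fulltext"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(MODEL)
enc = AutoModel.from_pretrained(MODEL).eval()

@torch.no_grad()
def embed(strings, batch_size=64):
    vecs = []
    for i in range(0, len(strings), batch_size):
        batch = tok(strings[i:i + batch_size], padding=True,
                    truncation=True, max_length=32, return_tensors="pt")
        cls = enc(**batch).last_hidden_state[:, 0]                  # [CLS] embeddings
        vecs.append(torch.nn.functional.normalize(cls, dim=-1))
    return torch.cat(vecs).numpy()

def top_k_candidates(new_atoms, umls_atoms, k=50):
    """Indices of the k most similar UMLS atoms for each new atom."""
    A, B = embed(new_atoms), embed(umls_atoms)
    sims = A @ B.T                                                  # cosine similarity
    return np.argsort(-sims, axis=1)[:, :k]
```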
Thus, we can now define the UVI task as the following function I which maps a new atom q j to its gold labelled concept c q j if it exists in the old UMLS K or to a null value if it is a new concept atom, as described by the following Equation 1.\nI(K, q j ) = c q j if c q j ∈ K ∅ otherwise (1)\n4 Experimental Setup test sets. We do stratified sampling to keep the distribution of semantic groups, categories defined by the UMLS, constant across splits within each insertion set. This is important since the distribution of semantic groups changes significantly across insertion datasets and preliminary studies showed that performance can vary substantially across categories. For details regarding the number of examples in each split and the distribution of semantic groups across different insertion sets, refer to Appendix B." }, { "figure_ref": [], "heading": "Metrics", "publication_ref": [], "table_ref": [], "text": "We report several metrics to evaluate our methods comprehensively on the UVI task: accuracy, new concept metrics and existing concept accuracy.\nAccuracy. It measures the percentage of correct predictions over the full insertion set Q. New Concept Metrics. These measure how well models predict new atoms as new concepts and they are described in Equation 2. The terms in Equation 2, subscripted by nc, refer to the number of true positive (TP), false positive (FP) and false negative (FN) examples, calculated by using the new concept label as the positive class.\nP nc = T P nc T P nc + F P nc R nc = T P nc T P nc + F N nc (2)\nExisting Concept Accuracy. This metric shows model performance on atoms in Q which were linked by annotators to the previous version of UMLS K, as shown in Equation 3. Let N ec be the number of concepts in Q which were linked to concepts in K.\nA ec = 1 N ec q j ∈Q ĉq j = c q j if c q j ∈ K 0 otherwise ĉq j := I(K, q j )\n(3)" }, { "figure_ref": [], "heading": "UVA Baselines", "publication_ref": [ "b14", "b12", "b14", "b21" ], "table_ref": [], "text": "We adapted several UVA specific system as baselines for our UMLS vocabulary insertion task.\nRule-based Approximation (RBA). (Nguyen et al., 2021) This system was designed to approximate the decisions made by UMLS editors regarding atom synonymy using three simple rules. Two atoms were deemed synonymous if 1) they were labelled as synonyms in their source vocabularies, 2) their strings have identical normalized forms and compatible semantics (McCray et al., 1994) and3) the transitive closure of the other two strategies. We thus define the I function for the UVI task as follows. We first obtain an unsorted list of atoms a i in K deemed synonymous with q j by the RBA. We then group these atoms by concept to make a smaller set of unique concepts c i . Since this predicted concept list is unsorted, if it contains more than one potential concept, we randomly select one of them as the predicted concept ĉq j . If the RBA synonym list is empty, we deem the new atom as not existing in the current UMLS version.\nLexLM. (Nguyen et al., 2021) The Lexical-Learning Model (LexLM) system was designed as the deep learning alternative to the RBA and trained for binary synonymy prediction using the UVA training dataset. Their proposed model consists of an LSTM encoder over BioWordVec (Zhang et al., 2019) embeddings which encodes two strings and calculates a similarity score between them. 
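A minimal rendering of the evaluation metrics defined above (overall accuracy, the new-concept precision and recall of Equation 2, and the existing-concept accuracy of Equation 3) is sketched below. Using None to mark a "new concept" decision is our own encoding choice for illustration, not the paper's code.

```python
# Sketch of the UVI metrics: predictions and gold labels are concept identifiers,
# with None standing for "new concept atom".
def uvi_metrics(preds, golds):
    assert len(preds) == len(golds)
    tp_nc = sum(p is None and g is None for p, g in zip(preds, golds))
    fp_nc = sum(p is None and g is not None for p, g in zip(preds, golds))
    fn_nc = sum(p is not None and g is None for p, g in zip(preds, golds))
    existing = [(p, g) for p, g in zip(preds, golds) if g is not None]

    accuracy = sum(p == g for p, g in zip(preds, golds)) / len(golds)
    p_nc = tp_nc / (tp_nc + fp_nc) if (tp_nc + fp_nc) else 0.0      # new concept precision
    r_nc = tp_nc / (tp_nc + fn_nc) if (tp_nc + fn_nc) else 0.0      # new concept recall
    a_ec = (sum(p == g for p, g in existing) / len(existing)) if existing else 0.0
    return {"accuracy": accuracy, "P_nc": p_nc, "R_nc": r_nc, "A_ec": a_ec}
```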
A threshold is used over the similarity score to determine the final synonymy prediction.
To adapt this baseline to the UVI task, we define the insertion function I as mapping a new atom q j to the concept in K, ĉq j , containing the atom with the highest similarity score to q j based on the LexLM representations. To allow the function I to predict that q j does not exist in the current UMLS and should be mapped to the empty set (∅), we select a similarity threshold for the most similar concept under which q j is deemed a new atom. For fairness in evaluation, the similarity threshold is selected using the 2020AB UVI training set." }, { "figure_ref": [], "heading": "LM Baselines", "publication_ref": [ "b0", "b6", "b11" ], "table_ref": [], "text": "Previous work finds that language models do not improve UVA performance (Bajaj et al., 2022). However, given our new formulation, we evaluate two language models in the more realistic UVI task using the same strategy described for the LexLM model above. For implementation details, we refer the interested reader to Appendix C.
[Figure 2: Overall architecture for our best performing approach on the new UVI task formulation. Our methodology leverages the best distance-based ranking model (SapBERT) as well as RBA signal. Additionally, our design allows new atoms to be identified as new concepts by introducing a 'New Concept' placeholder into the candidate list given to the re-ranking module as shown above. The diagram shows the UMLS (K) and a new atom (q) producing RBA and SapBERT candidate concepts plus a 'New Concept' option, which the re-ranker with RBA signal then scores.]
PubMedBERT (Gu et al., 2021). PubMedBERT is one of the most capable biomedical specific language models available due to its from scratch pre-training on biomedical data as well as its specialized biomedical tokenizer.
SapBERT (Liu et al., 2021). SapBERT is a language model designed for biomedical entity linking or concept normalization. It was developed by finetuning the original PubMedBERT on the 2020AA version of UMLS using a contrastive learning objective. This objective incentivizes synonymous entity representations in UMLS to be more similar than non-synonymous ones." }, { "figure_ref": [], "heading": "Augmented RBA", "publication_ref": [], "table_ref": [], "text": "Given that the neural representation baselines discussed above provide a ranking system missing from the RBA, we create a strong baseline by augmenting the RBA system with each neural ranking baseline. In these simple but effective baselines, the concepts predicted by the RBA are ranked based on their similarity to q j using each neural baseline system. New concept prediction uses the same method employed by the original RBA model." }, { "figure_ref": [], "heading": "Our Approach: Candidate Re-Ranking", "publication_ref": [ "b18", "b18" ], "table_ref": [], "text": "Our candidate re-ranking approach is inspired by some entity linking systems which use two distinct steps: 1) candidate generation, which uses a biencoder like the baselines described above, and 2) candidate re-ranking, in which a more computationally expensive model is used to rank the k most similar concepts obtained by the bi-encoder. Other work (Wu et al., 2020) encodes both new atoms and candidates simultaneously using language models, allowing for the encoding of one to be conditioned on the other.
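Before turning to the re-ranking details, the threshold-based adaptation of the similarity baselines described above can be written down directly: the new atom is linked to the concept of its most similar existing atom, or returned as a new concept when the best score falls under a threshold tuned on the 2020AB training split. The array-based data layout below is an assumption for illustration, not the released code.

```python
# Sketch of the insertion function I(K, q) realised with a similarity ranker.
def insert_atom(new_vec, atom_vecs, atom_to_concept, threshold):
    """new_vec: unit embedding of the new atom; atom_vecs: (N, d) unit embeddings
    of existing UMLS atoms; atom_to_concept: maps atom index -> concept id."""
    sims = atom_vecs @ new_vec                 # cosine similarities to all UMLS atoms
    best = int(sims.argmax())
    if sims[best] < threshold:
        return None                            # deemed a new concept atom
    return atom_to_concept[best]               # concept of the most similar atom
```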
Our cross-encoder is based on PubMedBERT2 and we use the most similar 50 atoms which represent unique concepts as measured by the best baseline, the RBA system augmented with SapBERT ranking. More concretely, the atom which represents each candidate concept a c i is appended to new atom q j and encoded as follows: [CLS] q j [SEP ] a c i . Since the number of RBA candidates differs for every new atom, if the RBA produces less that 50 candidates, the remaining candidates are selected from SapBERT's nearest neighbor candidates. We use the BLINK codebase (Wu et al., 2020) to train our re-ranking module. More information about our implementation can be found in Appendix C." }, { "figure_ref": [], "heading": "Null Injection", "publication_ref": [ "b3", "b15" ], "table_ref": [], "text": "In contrast with standard entity linking settings where every mention can be linked to a relevant entity, UVI requires some mentions or new atoms to be deemed absent from the relevant set of entities. To achieve this in our re-ranking framework, we closely follow unlinkable biomedical entity linking methods (Dong et al., 2023;Ruas and Couto, 2022) and introduce a new candidate, denoted by the NULL token, to represent the possibility that the atom is new to the UMLS." }, { "figure_ref": [], "heading": "RBA Enhancement", "publication_ref": [], "table_ref": [], "text": "Finally, given the high impact of the RBA system in preliminary experiments, we integrate rule-based information into the candidate re-ranking learning.\nThe RBA provides information in primarily two ways: 1) the absence of RBA synonyms sends a strong signal for a new atom being a novel concept in the UMLS and 2) the candidate concepts which the RBA predicted, rather than the ones predicted based solely on lexical similarity, have a higher chance of being the most appropriate concept for the new atom. Thus, we integrate these two information elements into the cross-encoder by 1) when no RBA synonyms exist, we append the string \"(No Preferred Candidate)\" to the new atom q j and 2) every candidate that was predicted by the RBA is concatenated with the string \"(Preferred)\". This way, the cross-encoder obtains access to vital RBA information while still being able to learn the decision-making flexibility which UMLS editors introduce through their expert knowledge." }, { "figure_ref": [], "heading": "Results & Discussion", "publication_ref": [], "table_ref": [], "text": "In this section, we first discuss performance of our baselines and proposed methods on the UMLS 2020AB test set. We then evaluate the generalizability of our methods across UMLS versions and biomedical subdomains. Finally, we provide a comparative evaluation and a qualitative error analysis to understand our model's potential benefits and limitations." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Baselines. As seen in Table 2, previous baselines such as RBA, LexLM and biomedical language models like PubMedBERT and SapBERT stay under the 80% mark in overall accuracy, with specially low performance in the existing concept accuracy metric. Even SapBERT, which is fine-tuned for the biomedical entity linking task, is unable to obtain high existing concept and new concept prediction scores when using a simple optimal similarity threshold method. Nevertheless, a simple baseline which combines the strengths of neural models and the rule-based system obtains surprisingly strong results. 
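The candidate construction and RBA markers described in the preceding subsections can be illustrated as follows. This is a simplified sketch with an assumed string layout, not the released implementation; in particular, it does not reproduce the SapBERT-based ordering of the RBA candidates, and the tokenizer later produces the "[CLS] query [SEP] candidate" encoding.

```python
# Assemble re-ranker inputs: up to 50 candidates (RBA candidates marked "(Preferred)",
# padded with SapBERT neighbours), plus a NULL entry for the "new concept" option.
NULL_CANDIDATE = "NULL"

def build_reranker_inputs(new_atom, rba_candidates, sapbert_candidates, k=50):
    # When the RBA finds no synonyms, the new atom is flagged for the cross-encoder.
    query = new_atom if rba_candidates else new_atom + " (No Preferred Candidate)"
    cands = [c + " (Preferred)" for c in rba_candidates[:k]]
    for c in sapbert_candidates:               # pad with nearest-neighbour candidates
        if len(cands) >= k:
            break
        if c not in rba_candidates:
            cands.append(c)
    cands.append(NULL_CANDIDATE)               # lets the model predict "new concept"
    return [(query, cand) for cand in cands]
```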
This is especially the case for augmenting the RBA with SapBERT which obtains a 90% overall accuracy and existing concept accuracy of 76%. We note that the new concept recall and precision of all RBA baselines is the same since the same rule-based mechanism is used.\nOur Approach. For the PubMedBERT-based reranking module, we find that the NULL injection mechanism enables it to outperform the models that rely solely on lexical information (LexLM, Pub-MedBERT and SapBERT) by a wide margin. However, it underperforms the best augmented RBA baseline substantially, underscoring the importance of RBA signal for the UVI task. Finally, we note that RBA enhancement allows the re-ranking module to obtain a 93.2% accuracy due to boosts in existing concept accuracy and new concept precision of almost 10% and 4% respectively. These improvements comes from several important features of our best approach which we discuss in more detail in §5.3, namely the ability to flexibly determine when a new atom exists in the current UMLS even when it has no RBA synonyms and to apply rules used by UMLS editors seen in the model's training data. This substantial error reduction indicates our method's potential as a useful tool for supporting UMLS editors." }, { "figure_ref": [ "fig_1" ], "heading": "Model Generalization", "publication_ref": [], "table_ref": [], "text": "In this section, we note the robust generalization of our re-ranking module across both UMLS versions and semantic groups (semantic categories defined by the UMLS).\nAcross Versions. In Figure 3, we see that the best performing baseline RBA + SapBERT and our best method obtain strong performance across all five UMLS insertion datasets. Even though our proposed approach obtains the largest gains in the 2020AB set in which it was trained, it achieves stable existing concept accuracy and new concept F1 score improvements across all sets and shows no obvious deterioration over time, demonstrating its practicality for future UMLS updates. Unfortunately, we do observe a significant dip in new concept F1 for all models in the 2021AA dataset mainly due to the unusually poor performance of the RBA in one specific source, Current Procedural Terminology (CPT), for that version. Across Subdomains. Apart from evaluating whether our proposed approach generalizes across UMLS versions, we evaluate how model performance changes across different semantic groups.\nTable 3 shows the results of our best baseline (RBA + SapBERT) compared against our best proposed approach (Re-Ranker + RBA Signal) on the nine most frequent semantic groups averaged over all development insertion sets. We report the results in detail over all insertion sets in Appendix E. Our evaluation reveals that even though our best baseline performs quite well across several semantic groups, performance drops in challenging categories like Drugs, Genes, Procedures and the more general Concepts & Ideas category. Our approach is able to improve performance across most groups to above 90%, with the exception of Genes and Procedures. Since the distribution of semantic groups can vary widely across UMLS updates, as seen in the dataset details in Appendix B, our model's improved semantic group robustness is vital for its potential in improving the efficiency of the UMLS update process. As for the categories in which our approach remained below 90% like Genes and Procedures, we find that they are mainly due to outlier insertion sets. 
Both the Genes and Procedures categories have one insertion set, 2022AA and 2021AA respectively, in which the performance of both systems drops dramatically due to a weak RBA signal which our methodology was unable to correct for. We refer the interested reader to Appendix E for these results and a more detailed discussion around this limitation." }, { "figure_ref": [], "heading": "Comparative Evaluation", "publication_ref": [], "table_ref": [ "tab_3", "tab_4", "tab_4" ], "text": "As mentioned in the main results, our best model outperforms the best baseline mainly through improvements in existing concept accuracy and new concept precision. In Table 4, we report the distribution of 2,943 examples incorrectly predicted by RBA + SapBERT amended by our best approach. We note that a large majority, around 60%, of the corrections are concept linking corrections, new atoms which are linked to an existing concept correctly while they were wrongly predicted as new concept atoms by the baseline. Most of the remain- The examples shown in Table 5 illustrate the benefits of our proposed approach more clearly. In the first two rows, we see two re-ranking corrections. In the first example, SapBERT incorrectly identifies '<eudicots>' as being closer to '<moth>' than '<angiosperm>' but our model has learned to interpret the disambiguation tags and correctly associates 'eudicots' with 'angiosperm' as levels of plant family classifications. In the second example, we observe that our trained model learns to link new atoms to concepts which have more comprehensive information such as the addition of the \"Regimen\" phrase. Although this is an editorial rule rather than an objective one, it is important to note that our model can adequately encode these.\nThe final two rows in Table 5 show concept linking corrections. These examples illustrate the most important feature of our proposed model, the ability to link new atoms to concepts even when the RBA would consider them a new concept atom. In these instances, the model must determine whether all the features in the new atom are present in any potential candidates without support from the RBA. In these two examples, the model is able to correctly identify synonymy by mapping 'removal of the right ovary' to 'right oophorectomy', 'NA' to 'Sodium' and 'TAB' to 'Oral Tablet. " }, { "figure_ref": [], "heading": "Error Analysis", "publication_ref": [], "table_ref": [ "tab_6", "tab_6" ], "text": "Given that our work focuses on a specific practical application, in this section, we aim to more deeply understand how our approach can be effectively adopted by UMLS editors in their vocabulary insertion task. To this end, we recruited a biomedical terminology expert familiar with the UMLS vocabulary insertion process to analyze the practical effectiveness and limitations of our system.\nWe first studied the calibration of our best model's output as a way to understand its error detection abilities. As shown in detail in Appendix F, we see a substantial drop in performance when model confidence, a softmax over candidate logit scores, drops below 90%. This drop could indicate that our model is well calibrated, however, our qualitative experiments reveal that this signal comes from a large number of annotation errors in the UMLS which are easily detected by our problem formulation.\nWe discovered this through a qualitative error analysis carried out with the help of the aforementioned biomedical terminology expert. 
We chose three sets of 30 randomly chosen example errors with different model confidence scores: high (90%-100%), medium (60%-70%) and low (30%-40%). Our expert editor reports several important findings. First, there was no substantial difference in example difficulty between different model confidence bins. Second, 70% of model errors are caused by the existence of UMLS concepts which have phrases that are equivalent to the new atoms, leading to ambiguous examples which can be found in the first section of Table 6. This arises from two types of annotation errors within the UMLS, either the new atom was incorrectly introduced into the UMLS or the phrase that is representing that concept was previouly introduced into UMLS incorrectly. Out of this study, the expert found 15 out of the 90 instances where our model's suggestions lead to detecting incorrect associations in the original UMLS vocabulary insertion process. This evaluation suggests that our model could be quite useful in supporting quality assurance for the UMLS.\nEven though most model errors are caused by annotation issues in the UMLS, there are still some which are due to complexity and ambiguity. In the bottom half of Table 6, we see examples that our model still struggles with. First, the new atom \"urea 400 MG/ML\" should have been mapped to \"urea 40%\" since the percentage is calculated as the number of grams in 100 mL. However, this decision requires not only the knowledge of this definition but also mathematical reasoning abilities. Finally, the last error in our table is caused by the ambiguity in deciding whether \"human echovirus\" and \"echovirus\" should be deemed equivalent. We note that both of these error types as well as the previously mentioned annotation errors show that our model's errors are occurring on scenarios which are either unsolvable or very challenging, shedding light on its potential as a practical system to support UMLS editors." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In conclusion, this paper emphasizes the importance of formulating NLP problems that align well with real-world scenarios in the midst of growing enthusiasm for NLP technologies. Focusing on the real-world task of UMLS vocabulary insertion, we demonstrate the importance of problem formulation by showcasing the differences between the UMLS vocabulary alignment formulation and our own UVI formulation. We evaluate existing UVA models as baselines and find that their performance differs significantly in the real-world setting. Additionally, we show that our formulation allows us to not only discover straightforward but exceptionally strong new baselines but also develop a novel nullaware and rule-enhanced re-ranking model which outperforms all other methods. Finally, we show that our proposed approach is highly translational by providing evidence for its robustness across UMLS versions and biomedical subdomains, exploring the reasons behind its superior performance over our baselines and carrying out a qualitative error analysis to understand its limitations. We hope our case study highlights the significance of problem formulation and offers valuable insights for researchers and practitioners for building effective and practical NLP systems." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "We acknowledge several limitations to our investigation, which we propose to address in future work. 
First, while our formulation aligns exactly with part of the insertion process, there are aspects of the full insertion of new terms into the UMLS which are out of our scope. While we do identify terms that are not linked to existing UMLS concepts, we do not attempt to group these terms into new concepts. The identification of synonymous terms for new concepts will be addressed in future work. Second, except for the RBA approach that leverages lexical information and source synonymy, our approach does not take advantage of contextual information available for new terms (e.g., hierarchical information provided by the source vocabulary). We plan to follow (Nguyen et al., 2022) and integrate this kind of information that has been shown to increase precision without detrimental effect on recall in the UVA task. Third, our approach uses a single term, the term closest to the new atom, as the representative for the concept for linking purposes. While this approach drastically simplifies processing, it also restricts access to the rich set of synonyms available for the concept. We plan to explore alternative trade offs in performance when including more concept synonyms. Finally, reliance on the RBA information had the potential for incorrectly identifying new concepts when RBA signal is not complete. Even though RBA signal is quite useful for this task, it is important to build systems robust to its absence. We plan to explore this robustness more actively in future work by including such incomplete signal in the training process." }, { "figure_ref": [], "heading": "A Original UVA Dataset", "publication_ref": [ "b14" ], "table_ref": [ "tab_7" ], "text": "Table 7 lists the basic statistics for the UMLS vocabulary alignment datasets. Since the UVA task was formulated and evaluated only as a binary classification task, the dataset is divided into positive and negative pairs. For more details about how the negative pairs were sampled from the UMLS, we refer the interested reader to §4.2 of Nguyen et al. (2021). " }, { "figure_ref": [], "heading": "B UVI Dataset Details", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "In Table 8 In terms of dataset construction, we reiterate that stratified sampling based on semantic groups was used to keep the original distributions intact. We adopt this technique due to the substantial changes in semantic group distribution across insertion sets, as seen in 4, as well as the high variance in model performance across semantic categories, as seen in §5.2 and Appendix E." }, { "figure_ref": [], "heading": "C Implementation Details", "publication_ref": [ "b17", "b7", "b7", "b18" ], "table_ref": [ "tab_9", "tab_10" ], "text": "In this section we discuss the implementation details for our baselines as well as our proposed approach. For the UMLS vocabulary alignment baselines, we use the same implementation of the Rule-Based Approximation (RBA) and LexLM used by the authors in Nguyen et al. ( 2021). To implement our language model baselines we use the Hug-gingFace Transformers library (Wolf et al., 2020). We use the FAISS library (Johnson et al., 2021) to speed up nearest neighbor search using GPUs when experimenting with LexLM, SapBERT and PubMedBERT embeddings (Johnson et al., 2021). We train our cross-encoder re-ranking module using BLINK (Wu et al., 2020), which uses a crossentropy loss to maximize the score of the correct candidate over the rest of the candidates. 
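The training objective just mentioned can be written compactly: the cross-encoder produces one score per candidate (including the NULL / new-concept candidate) and a cross-entropy loss pushes up the gold candidate. The snippet below is a simplified PyTorch stand-in for the BLINK training code, with assumed tensor shapes.

```python
# Schematic re-ranking loss: softmax cross-entropy over candidate scores.
import torch
import torch.nn.functional as F

def rerank_loss(logits, gold_index):
    """logits: tensor of shape (num_candidates,) with one cross-encoder score per
    candidate; gold_index: position of the correct candidate in that list."""
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([gold_index]))
```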
We use default hyperparameters listed in Table 9 to train our re-ranking module but perform early stopping using the accuracy metric on our 2020AB validation set. All experiments used an NVIDIA V100 GPU with 16 GB of VRAM. The models we used and the approximate amount of GPU hours used for each is listed in Table 10. " }, { "figure_ref": [], "heading": "D Latency Comparison", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "In Table 11, we report the inference latency for each baseline as well as our proposed approaches on the UVI task. As seen in the table, our approach has significantly slower inference than previous baselines. Nevertheless, since the UMLS insertion task happens only twice a year, variations in inference latency are not a significant concern as long as the process can be run within a reasonable amount of time on available computing resources. We hope that these numbers can help other researchers and practitioners understand the computing requirements on this or similar tasks. " }, { "figure_ref": [], "heading": "E Detailed Semantic Group Evaluation", "publication_ref": [], "table_ref": [ "tab_13", "tab_13" ], "text": "As mentioned in 5.2, different UMLS updates often contain completely different semantic group distributions since they depend entirely on independent source updates. Due to this, generalization across different semantic categories (semantic groups in the UMLS) is a crucial feature for a system to be successful in real-world UMLS vocabulary insertion. Table 13 provides a detailed report of the performance of our strongest baseline and our best proposed approach on all development insertion sets across the 9 most frequent semantic groups.\nAs seen in these detailed results, our proposed approach obtains stronger and more consistent results across all semantic groups compared to our best baseline.\nNevertheless, as discussed in the main text, our approach remained below 90% on average in categories like Genes and Procedures. In the broken down results in Table 13, we can more clearly see that these averaged results are caused by outlier insertion sets. For the Genes semantic group, our proposed approach improves performance considerably for all insertion sets except for 2022AA, in which its performance drops by more than 10 points. We note that the performance of the best baseline is also much lower than usual, potentially indicating a weak RBA signal and challenging atoms to link. For the Procedures category, we see a similar pattern in the 2021AA insertion set while the other sets see small but regular improvements with our system. These results indicate that, although our proposed approach can leverage the RBA signal more consistently when it is sufficiently strong, it fails to correct for it when it is very weak to begin with. It is therefore important to continue working on ways to correct or at least alert annotators about potential system failures in specific concept sub-groups." }, { "figure_ref": [], "heading": "F Model Calibration Details", "publication_ref": [], "table_ref": [ "tab_12" ], "text": "As discussed above, our re-ranker model's output confidence, defined as a softmax over candidate logit scores produced by our model, seemed correlated with model accuracy. In Table 12, we show model accuracy across different model confidence scores. We find that model confidence score is highly correlated with model accuracy, which drops to around 50% when model confidence drops below 90% and continues to drop after that. 
Through qualitative analysis, we find that this does not indicate successful model calibration but is actually mainly caused by annotation errors within UMLS which result in duplicate and ambiguous concepts. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors would like to thank the expert UMLS annotators from the NLM for their detailed error analysis. We also appreciate constructive comments from anonymous reviewers and our NLM and OSU NLP group colleagues. This research was supported in part by NIH R01LM014199, the Ohio Supercomputer Center (Center, 1987) and the Intramural Research Program of the NIH, National Library of Medicine." } ]
As the immense opportunities enabled by large language models become more apparent, NLP systems will be increasingly expected to excel in real-world settings. However, in many instances, powerful models alone will not yield translational NLP solutions, especially if the formulated problem is not well aligned with the real-world task. In this work, we study the case of UMLS vocabulary insertion, an important real-world task in which hundreds of thousands of new terms, referred to as atoms, are added to the UMLS, one of the most comprehensive open-source biomedical knowledge bases (Bodenreider, 2004). Previous work aimed to develop an automated NLP system to make this time-consuming, costly, and error-prone task more efficient. Nevertheless, practical progress in this direction has been difficult to achieve due to a problem formulation and evaluation gap between research output and the real-world task. In order to address this gap, we introduce a new formulation for UMLS vocabulary insertion which mirrors the real-world task, datasets which faithfully represent it and several strong baselines we developed through re-purposing existing solutions. Additionally, we propose an effective rule-enhanced biomedical language model which enables important new model behavior, outperforms all strong baselines and provides measurable qualitative improvements to editors who carry out the UVI task. We hope this case study provides insight into the considerable importance of problem formulation for the success of translational NLP solutions.
Solving the Right Problem is Key for Translational NLP: A Case Study in UMLS Vocabulary Insertion
[ { "figure_caption": "Figure 3 :3Figure3: Existing concept accuracy (left) and new concept F1 (right) of the best model from each baseline type and our best approach across 5 UVI datasets from 2020AB to 2022AB. All improvements over the best baseline are very highly significant (p-value < 0.001).", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: This figure shows the incidence of each of the most frequent 8 semantic groups across the 5 insertion sets explored in this work.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "UMLS Statistics from 2020AB to 2022AB. Our models are trained on the 2020AB insertion dataset.", "figure_data": "4.1 DatasetsTo evaluate the UVI task in the most realistic waypossible, we introduce a set of five insertion setsQ which contain all atoms which are inserted intothe UMLS from medical source vocabularies byexpert editors twice a year. Due to their real-worldnature, these datasets vary in size and new conceptdistribution depending on the number and type ofatoms that are added to source vocabularies beforeevery update as shown in Table 1. We note that theversion of the UMLS we use contains 8.5 ratherthan 16 million atoms because we follow previouswork and only use atoms that are in English, comefrom active vocabularies and are non-suppressible,features defined by UMLS editors.While most of our experiments focus on theUMLS 2020AB, we use the other four as test setsto evaluate temporal generalizability. We split the2020AB insertion dataset into training, dev andtest sets using a 50:25:25 ratio and the other in-sertion datasets using a 50:50 split into dev and", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison for rule-based, distance-based and combined baselines against our re-ranking approaches both with and without RBA-signal over all our metrics. All results reported above were calculated on the 2020AB UMLS insertion dataset. We find that all improvements of our best approach over the RBA+SapBERT baseline are very highly significant (p-value < 0.001) based on a paired t-test with bootstrap resampling.", "figure_data": "AccuracyNew Concept Recall PrecisionF1Existing Concept AccuracyRule Based Approximation (RBA)70.199.090.594.626.3LexLM63.289.592.490.922.4PubMedBERT68.499.167.380.220.7SapBERT77.494.179.286.052.0RBA + LexLM80.499.090.594.651.6RBA + PubMedBERT83.799.090.594.660.0RBA + SapBERT90.799.090.594.676.1Re-Ranker (PubMedBERT)85.596.391.693.968.4+ RBA Signal93.298.296.197.185.5", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Distribution of examples incorrectly predicted by the best baseline amended by our best model.", "figure_data": "Correction TypeCorrection %Concept Linking59.5Re-Ranking35.9New Concept Identification4.6", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Some examples which were incorrectly predicted by our best baseline (RBA + SapBERT), shown above in red, but corrected by our best proposed reranking model, shown above in green.ing corrections, 35.9%, are re-ranking corrections based on our model's ability to re-rank gold concept over other candidate concepts. 
The final 5% comes from new concept identification corrections in which a new atom is correctly identified as a new concept atom when it was incorrectly linked to an existing one by the best baseline.", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Some examples which were incorrectly predicted by our best proposed model, shown in red. Gold label concepts are marked with green. The first two rows show two errors caused by UMLS annotations while the final two are legitimate errors caused by complexity and ambiguity.", "figure_data": "", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Original UVA dataset statistics.", "figure_data": "UVA Pairs Positive Pairs Negative PairsTrain 192,400,46222,324,834170,075,628Test173,035,8625,581,209167,454,653", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": ", we report the size of our five UMLS vocabulary insertion dataset splits. We note that only the 2020AB version contains a training set, all other insertion sets only have development and test sets. Experimental split statistics for UMLS insertion dataset Q from 2,020 to 2,022.", "figure_data": "TrainDevTest2020AB 215,402 105,796 108,9372021AA-112,647 113,5632021AB-227,440 228,0532022AA-88,18687,8032022AB-138,107 137,735", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Hyperparameters selected for our cross-encoder re-ranking training for reproducibility.", "figure_data": "LearningTotalBatchWarmupRateEpochsSizeRatio2e-5310.1# of ParametersTotal GPU(millions)HoursLexLM0.25PubMedBERT100140SapBERT10040", "figure_id": "tab_9", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Total GPU Hours associated with our experiments. PubMedBERT GPU hours include both UMLS encoding and fine-tuning for our re-ranking module.", "figure_data": "", "figure_id": "tab_10", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Time spent on inference for each baseline as well as our proposed approach.", "figure_data": "ModelInference Latency (ms)Time for 300k Atoms (mins)RBA0.010.05LexLM1.286.40SapBERT2.5012.50RBA + LexLM1.296.45RBA + SapBERT2.5112.55Re-Ranker (RBA Signal)35.51177.5", "figure_id": "tab_11", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "The output probability of our best re-ranking approach (the probability of the highest scoring candidate concept) seemed to be correlated with high prediction accuracy but actually indicates annotation errors.", "figure_data": "ModelNumberConfidenceofAccuracy(%)Examples0238.7108022.52020632.53039736.0401,28256.05096455.16051148.77041150.68059055.39038,07692.110062,74399.8", "figure_id": "tab_12", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Breakdown for Table3over all insertion development sets and the most frequent semantic groups. These detailed results can help us more closely understand model failures across semantic groups compared to the aggregated results.", "figure_data": "", "figure_id": "tab_13", "figure_label": "13", "figure_type": "table" } ]
Bernal Jiménez Gutiérrez; Yuqing Mao; Vinh Nguyen; Kin Wah Fung; Yu Su; Olivier Bodenreider
[ { "authors": "Goonmeet Bajaj; Vinh Nguyen; Thilini Wijesiriwardene; Hong Yung Yip; Vishesh Javangula; Amit Sheth; Srinivasan Parthasarathy; Olivier Bodenreider", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Evaluating biomedical word embeddings for vocabulary alignment at scale in the UMLS Metathesaurus using Siamese networks", "year": "2022" }, { "authors": "Olivier Bodenreider", "journal": "Nucleic acids research", "ref_id": "b1", "title": "The Unified Medical Language System (UMLS): Integrating Biomedical Terminology", "year": "2004" }, { "authors": "", "journal": "", "ref_id": "b2", "title": "Ohio supercomputer center", "year": "1987" }, { "authors": "Hang Dong; Jiaoyan Chen; Yuan He; Yinan Liu; Ian Horrocks", "journal": "", "ref_id": "b3", "title": "Reveal the unknown: Out-ofknowledge-base mention discovery with entity linking", "year": "2023" }, { "authors": "D' Jennifer; Vincent Souza; Ng", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Sieve-based entity linking for the biomedical domain", "year": "2015" }, { "authors": "Lenz Furrer; Joseph Cornelius; Fabio Rinaldi", "journal": "BMC Bioinformatics", "ref_id": "b5", "title": "Parallel sequence tagging for concept recognition", "year": "2022" }, { "authors": "Yu Gu; Robert Tinn; Hao Cheng; Michael Lucas; Naoto Usuyama; Xiaodong Liu; Tristan Naumann; Jianfeng Gao; Hoifung Poon", "journal": "ACM Trans. Comput. Healthcare", "ref_id": "b6", "title": "Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing", "year": "2021" }, { "authors": "Jeff Johnson; Matthijs Douze; Hervé Jégou", "journal": "IEEE Transactions on Big Data", "ref_id": "b7", "title": "Billion-scale similarity search with gpus", "year": "2021" }, { "authors": "Robert Leaman; Rezarta Islamaj Dogan; Zhiyong Lu", "journal": "Bioinformatics", "ref_id": "b8", "title": "DNorm: disease name normalization with pairwise learning to rank", "year": "2013" }, { "authors": "Robert Leaman; Zhiyong Lu", "journal": "Bioinformatics", "ref_id": "b9", "title": "TaggerOne: joint named entity recognition and normalization with semi-Markov Models", "year": "2016" }, { "authors": "Jinhyuk Lee; Wonjin Yoon; Sungdong Kim; Donghyeon Kim; Sunkyu Kim; Chan Ho; So ; Jaewoo Kang", "journal": "Bioinformatics", "ref_id": "b10", "title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining", "year": "2019" }, { "authors": "Fangyu Liu; Ehsan Shareghi; Zaiqiao Meng; Marco Basaldella; Nigel Collier", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Self-alignment pretraining for biomedical entity representations", "year": "2021" }, { "authors": "Alexa T Mccray; Suresh Srinivasan; Allen C Browne", "journal": "", "ref_id": "b12", "title": "Lexical methods for managing variation in biomedical terminologies", "year": "1994" }, { "authors": "Hong Vinh Phu Nguyen; Goonmeet Yung Yip; Thilini Bajaj; Vishesh Wijesiriwardene; Srinivas Javangula; Amit P Parthasarathy; Olivier Sheth; Bodenreider", "journal": "", "ref_id": "b13", "title": "Context-enriched learning models for aligning biomedical vocabularies at scale in the umls metathesaurus", "year": "2022" }, { "authors": "Hong Vinh Phu Nguyen; Olivier Yung Yip; Bodenreider", "journal": "", "ref_id": "b14", "title": "Biomedical vocabulary alignment at scale in the umls metathesaurus", "year": "2021" }, { "authors": "Pedro Ruas; Francisco M Couto", "journal": "Journal of Biomedical Informatics", 
"ref_id": "b15", "title": "Nilinker: Attention-based approach to nil entity linking", "year": "2022" }, { "authors": "Thilini Wijesiriwardene; Phu Vinh; Goonmeet Nguyen; Hong Bajaj; Vishesh Yung Yip; Yuqing Javangula; Kin Wah Mao; Srinivas Fung; Amit P Parthasarathy; Olivier Sheth; Bodenreider", "journal": "", "ref_id": "b16", "title": "Ubert: A novel language model for synonymy prediction at scale in the umls metathesaurus", "year": "2022" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Remi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Ledell Wu; Fabio Petroni; Martin Josifoski; Sebastian Riedel; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Scalable zeroshot entity linking with dense entity retrieval", "year": "2020" }, { "authors": "Zheng Yuan; Zhengyun Zhao; Haixia Sun; Jiao Li; Fei Wang; Sheng Yu", "journal": "Journal of Biomedical Informatics", "ref_id": "b19", "title": "Coder: Knowledgeinfused cross-lingual medical term embedding for term normalization", "year": "2022" }, { "authors": "Sheng Zhang; Hao Cheng; Shikhar Vashishth; Cliff Wong; Jinfeng Xiao; Xiaodong Liu; Tristan Naumann; Jianfeng Gao; Hoifung Poon", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Knowledge-rich self-supervision for biomedical entity linking", "year": "2022" }, { "authors": "Yijia Zhang; Qingyu Chen; Zhihao Yang; Hongfei Lin; Zhiyong Lu", "journal": "Scientific Data", "ref_id": "b21", "title": "BioWordVec, improving biomedical word embeddings with subword information and MeSH", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 349.43, 426.22, 175.71, 26.44 ], "formula_id": "formula_0", "formula_text": "I(K, q j ) = c q j if c q j ∈ K ∅ otherwise (1)" }, { "formula_coordinates": [ 4, 130.67, 562.57, 159.19, 56.42 ], "formula_id": "formula_1", "formula_text": "P nc = T P nc T P nc + F P nc R nc = T P nc T P nc + F N nc (2)" }, { "formula_coordinates": [ 4, 83.88, 722.2, 178.64, 54.25 ], "formula_id": "formula_2", "formula_text": "A ec = 1 N ec q j ∈Q ĉq j = c q j if c q j ∈ K 0 otherwise ĉq j := I(K, q j )" }, { "formula_coordinates": [ 5, 81.97, 79.66, 432.76, 71.34 ], "formula_id": "formula_3", "formula_text": "UMLS (K) atom (q) concept RBA,1 … concept RBA,k concept RBA,i … concept RBA,j concept SapBERT,1 … concept SapBERT,N New Concept concept RBA,i 0.15 … … concept RBA,j 0.25 concept SapBERT,1 0.20 … … concept SapBERT,N 0.30 New Concept 0.10 Re-Ranker + RBA Signal RBA SapBERT concept SapBERT,1 … concept SapBERT,N" } ]
2023-11-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b24", "b18", "b11" ], "table_ref": [], "text": "In a diverse set of information retrieval applications, reaching certain minimal recall thresholds is key, when identifying which documents in a large corpus are relevant to an information need. For example, in intelligence and law enforcement use cases, searching for adverse effects of medicines, due diligence, and legal search applications, preventing false negatives is often more important than preventing false positives [25]. But, manually reviewing all documents exhaustively is cost intensive and error-prone [19]. Hence, technology is often used to reduce review effort when retrieving information.\nHowever, technologies aimed at reducing review effort (like \"technology assisted review\") can create false negatives, since they often rely on active learning systems that exclude documents automatically based on user feedback [12].\nTherefore, this research aims to evaluate a more recall-oriented approach for reducing review effort when cumulatively identifying relevant documents. For this, we combine two major components: textual similarity and relevance feedback. Here, the textual similarity is used to produce relevance rankings, whilst relevance feedback is used to iteratively improve those relevance rankings based on user feedback. Hence, our first research question is formulated as follows: what text similarity methods are most suitable for relevance feedback? Our second research question builds on that and reads: to what extent can relevance feedback help to reduce review effort?\nFor the textual similarity component, we evaluate TF-IDF and BERT-based dense-vector representations. For the relevance feedback component, we compare text-based and vector-based feedback strategies. Moreover, we experiment with different levels of textual granularity in both components. In the text similarity component this is done through paragraph-based document rankings and in the relevance feedback component this is done through feedback amplification.\nThis combination of representation methods, feedback strategies and levels of granularity is technically and methodologically novel, and leads to substantial improvements in results. Given these two components, our proposed method reduces review effort between 17.85% and 59.04% when compared to our baseline approach of giving no relevance feedback, given a recall target of 80%.\nThe remainder of this paper is structured as follows. Section 2 discusses related work. Next, we describe our methodology (section 3) and experimental setup (section 4). In section 5 we share the results of our experiments, and the paper ends with a discussion section (section 6) and conclusion (section 7)." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "In this section we discuss work related to the two main components of our method: text similarity methods and relevance feedback strategies. For the text similarity methods we discuss term-based and context-based approaches. Next, for the relevance feedback strategies, we review text-based and vector-based strategies." }, { "figure_ref": [], "heading": "Term-based similarity methods", "publication_ref": [ "b6", "b13", "b12", "b5", "b5", "b5", "b23" ], "table_ref": [], "text": "Term-based similarity methods compute text similarity through taking the common features between two pieces of text into account [7]. 
As a result, these methods don't incorporate contextual language properties like homonymy or synonymy in their similarity computations.\nA simple and commonly used [14] implementation of term-based similarity is Jaccard similarity [13], which computes text similarity based on how many features two texts have in common divided by the total number of features across both texts. The implementation of this similarity method is based on set algebra (using the intersection and the union operators). The formula for this is given below. Here, A and B refer to the set of unique words for both documents.\nJaccard(A, B) = |A ∩ B| |A ∪ B| = |A ∩ B| |A| + |B|(1)\nHowever, a shortcoming of Jaccard is that frequent terms (that are more likely to match) within a document are unlikely to be distinctive or important [6]. To deal with this shortcoming, there are two approaches. The first approach is based on manually filtering out frequent words based on a predefined list of words that are known to be common in a given language. These words are often referred to as \"stop words\". Removing stop words when searching for textually similar documents tends to have a beneficial effect [6]. Hence, we filter out stop words in our experiment.\nThe second approach is based on diminishing the value of words that are common in a corpus [6]. An advantage of this method compared to filtering stop words is that it's more dynamic. Certain words that are common within a specific context (e.g., the word \"patient\" in medical data) might not be included in predefined lists of stop words.\nThis approach is implemented in our TF-IDF-based similarity method through inverse document frequency. Here, terms are given a measure of uniqueness by dividing the number of documents in total by the number of documents that have a specific term. The formula for this metric is given below. Here, D refers to the number of documents in the dataset whereas d refers to the number of documents that contain term t. As a result, terms that appear in many documents (and are therefore less unique) are given a diminished value [24].\nidf (t, D) = log |D| 1 + |{d ∈ D : t ∈ d}| (2)" }, { "figure_ref": [], "heading": "Context-based similarity methods", "publication_ref": [ "b10", "b6", "b6", "b8", "b25", "b20", "b20", "b20" ], "table_ref": [], "text": "In this research we also use context-based similarity methods. In contrast to the term-based similarity methods mentioned previously, these methods do incorporate a form of semantic meaning. This is based on the \"distributional hypothesis\" [11], which is built on the idea that \"words that occur in the same contexts tend to have similar meanings\" [7]. Given this hypothesis, this research uses pre-trained word embeddings that provide vector-based representations of texts. This enables us to compute the textual similarity based on the similarity between the vectors (e.g., with cosine similarity) [7].\nIn this category of pre-trained word embeddings, we are specifically focused on BERT [9], [26]. The transformer-based architecture of BERT allows it to capture context-sensitive word properties (like homonymy and synonymy) in its embeddings. There are versions of BERT made for specific tasks. For example, Fig. 1: Simplified architecture of SBERT [21] Sentence-BERT (SBERT) is a BERT model specifically made for measuring text similarity using the cosine distance metric [21]. The key advantage SBERT has over the other BERT models for text similarity tasks is its reduction in computational overhead. 
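To make the term-based measures above concrete before continuing with the contextual methods, the sketch below computes the Jaccard similarity of Eq. (1) and the inverse document frequency of Eq. (2) on a toy corpus. It is only an illustration: the whitespace tokenizer, the toy documents, and the function names are assumptions of ours, and the actual experiments rely on Solr/Lucene functionality (described later) rather than on this code.

```python
import math

def tokenize(text, stop_words=frozenset()):
    # Lowercase, keep purely alphabetic tokens, drop stop words (mirrors the preprocessing described above).
    return [t for t in text.lower().split() if t.isalpha() and t not in stop_words]

def jaccard(doc_a, doc_b):
    # Eq. (1): shared unique words divided by the union of unique words.
    a, b = set(tokenize(doc_a)), set(tokenize(doc_b))
    return len(a & b) / len(a | b) if (a | b) else 0.0

def idf(term, corpus):
    # Eq. (2): log of the corpus size over (1 + number of documents containing the term).
    n_containing = sum(term in set(tokenize(d)) for d in corpus)
    return math.log(len(corpus) / (1 + n_containing))

corpus = ["the patient was discharged", "the patient received treatment", "stocks fell sharply"]
print(jaccard(corpus[0], corpus[1]))                  # 0.33: two of six unique words overlap
print(idf("patient", corpus), idf("stocks", corpus))  # a frequent term gets a lower idf value
```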
The researchers found that for finding the most similar pairs in a collection of 10.000 sentences, BERT would take ≈65 hours whereas SBERT would take 5 seconds. Moreover, it enables a similarity search between larger bodies of text (like sentences and paragraphs) through mean-pooling the word embeddings.\nA simplified overview of this architecture can be found in Figure 1. Here, the (word-level) BERT embeddings are first mean-pooled to create paragraph or sentence level embeddings. Thereafter, these embeddings are compared through their cosine similarity. Note, for the mean-pooling process, text that exceeds 384 words in length is truncated [21]. As a result, this approach can't be implemented on a level of text granularity that exceeds this amount." }, { "figure_ref": [], "heading": "Text-based feedback strategies", "publication_ref": [ "b17" ], "table_ref": [], "text": "Relevance feedback refers to changing the relevance ranking through user feedback, generally in multiple iterations [18]. A commonly used text-based relevance feedback strategy is \"keyword expansion\". Here, keywords from the selected search results are appended to the query. A frequently used implementation of this approach is based on the inverse document frequency mentioned earlier in this section. Here, the underlying assumption is that more unique words are more valuable to the search. In this research we will use this strategy for selecting keywords to expand our queries with." }, { "figure_ref": [], "heading": "Vector-based feedback strategies", "publication_ref": [ "b14", "b14", "b21", "b4", "b3" ], "table_ref": [], "text": "Assuming vector representations of the query and search results are available, user feedback can also be applied to the queried vector directly. This is referred to as vector-based pseudo-relevance feedback. In general, there are two commonly Fig. 2: Flowchart of the method used methods for this [15]. First, the queried vector can be averaged with the positive search results. Second, the queried vector can be summed with the selected search results. Both strategies are implemented in our research.\nA commonly used variation of averaging query vectors is based on Rocchio's method for relevance feedback [15]. The high-level idea of this method is to move the query vector towards the selected vectors through assigning different weights to selected and queried vectors [22]. The version of Rocchio implemented by most researchers today [5] [4] differs slightly from the original method, since it omits the negative feedback (i.e. non-selected documents) from the formula. As a result, this version of Rocchio can be seen as a weighted average between the (original) queried embedding and the (averaged) selected embeddings.\nThe weight of the queried embedding (α) and the weight of the averaged selected embeddings (β) can be set by a user. Still, the default/consensus values most research adheres to is α = 0.5 and β = 0.5 [4]. Hence, we use Rocchio with those parameter values as a baseline method in our experiment." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this research we evaluate different relevance feedback strategies and text similarity methods with the objective of reducing review effort. As shown in Figure 2, the method for accomplishing this is based on iteratively presenting the user with a set of (10) results to accept or decline. 
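As a reference for how the accepted results in this loop can update a query vector, the sketch below writes out the Rocchio variant described above (with the consensus weights α = β = 0.5) next to the plain averaging and summing updates evaluated later. The array shapes, toy vectors, and function names are assumptions and not the exact implementation used in our experiments.

```python
import numpy as np

def rocchio_update(query_vec, relevant_vecs, alpha=0.5, beta=0.5):
    # Weighted combination of the original query and the centroid of the accepted results
    # (negative feedback is omitted, as in most modern implementations).
    centroid = np.mean(relevant_vecs, axis=0)
    return alpha * query_vec + beta * centroid

def average_feedback(query_vec, relevant_vecs, cumulative=True):
    # "Cumulative" includes the original query vector in the average.
    vecs = ([query_vec] if cumulative else []) + list(relevant_vecs)
    return np.mean(vecs, axis=0)

def sum_feedback(query_vec, relevant_vecs, cumulative=True):
    vecs = ([query_vec] if cumulative else []) + list(relevant_vecs)
    return np.sum(vecs, axis=0)

# Toy example with 4-dimensional embeddings.
query = np.array([1.0, 0.0, 0.0, 0.0])
accepted = np.array([[0.8, 0.2, 0.0, 0.0], [0.6, 0.4, 0.0, 0.0]])
print(rocchio_update(query, accepted))
print(sum_feedback(query, accepted, cumulative=True))
```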
Thereafter, the accepted documents are used to improve the query (and consequently results) for the next iteration.\nFor returning the results (i.e relevance ranking) we experiment with different text similarity methods, levels of textual granularity, and ranking methods. These methods are explained in the first and second subsections. Next, for processing the feedback given after each iteration, we experiment with different feedback strategies. These are explained in the third and fourth subsections." }, { "figure_ref": [], "heading": "TF-IDF-based text similarity", "publication_ref": [ "b9" ], "table_ref": [], "text": "Our baseline method is based on TF-IDF, which vectorizes text based on multiplying the frequency of a word in a document with the inverse frequency of that word in the entire data set. Hence, TF-IDF looks at the \"uniqueness\" of a word instead of just the frequency. The result of TF-IDF is a sparse vector where all the words that exist in a document are given their TF-IDF value (words that don't appear in a specific document have zero values).\nIn our research we use TF-IDF to find similar documents using \"More-LikeThis\" (MLT). In summary, MLT queries the terms from a document (that have the highest TF-IDF values) individually using the document index. The documents returned by these queries are ranked based on combining their MLT scores, which is defined as the sum of the TF-IDF values of the matching terms between the queried document and the returned document [10].\nNote, we are aware that a commonly used implementation of TF-IDF to identify textually similar texts is based on computing the cosine distance between the TF-IDF vectors. However, computing the cosine distance between n vectors results in quadratic time and space complexity (O(n 2 )). Hence, we use MLT instead." }, { "figure_ref": [], "heading": "BERT-based text similarity", "publication_ref": [ "b19", "b16" ], "table_ref": [], "text": "Our third similarity method is based on Sentence-BERT (SBERT) embeddings. These embeddings can be used to find similar texts using \"dense-vector search\" (DVS). The distance metric used for this is cosine similarity, which computes the similarity between two vectors (i.e. embeddings) based on the cosine value of the angle between the two vectors [20]. This angle is computed through dividing the dot-product (which is the sum of products from both vectors) by the length of the vectors. This means that the potential values of this similarity measure range from -1 (completely opposite) to 1 (completely similar). The formula for this is given below.\ncosine similarity = cos(θ) = AB ∥A∥∥B∥ = n i=1 A i B i n i=1 (A i ) 2 n i=1 (B i ) 2(3)\nIn order to find embeddings with a high cosine similarity (or low cosine distance) in a large dataset efficiently we use Hierarchical Navigable Small Worlds (HNSW) based vector search [17]. HNSW is an algorithm that finds the k most similar documents to a query with logarithmic time and space complexity (O(log(N))." }, { "figure_ref": [], "heading": "Paragraph-based document rankings", "publication_ref": [ "b0" ], "table_ref": [], "text": "Because relevant information can be exclusive to a specific part of a document [1], we conduct experiments on two levels of text granularity: document and paragraph. Here, the paragraph level means that we query and retrieve paragraphs instead of documents. However, for the relevance ranking we only consider documents. 
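Before turning to how paragraph hits are converted into document rankings, the dense-vector search described above can be sketched as follows: paragraphs are embedded with a pre-trained SBERT model and indexed with HNSW for approximate cosine-similarity search. The model name, index parameters, and example texts below are assumptions rather than the exact configuration used in our experiments.

```python
import hnswlib
from sentence_transformers import SentenceTransformer

paragraphs = [
    "The central bank raised interest rates by a quarter point.",
    "The striker scored twice in the final minutes of the match.",
    "Inflation figures came in higher than analysts expected.",
]

# Mean-pooled paragraph embeddings from a pre-trained SBERT model (384 dimensions).
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(paragraphs, normalize_embeddings=True)

# HNSW index over cosine distance for approximate nearest-neighbour search.
index = hnswlib.Index(space="cosine", dim=embeddings.shape[1])
index.init_index(max_elements=len(paragraphs), ef_construction=200, M=16)
index.add_items(embeddings, ids=list(range(len(paragraphs))))
index.set_ef(50)

# Query by document: embed a paragraph and retrieve its nearest neighbours.
query = model.encode(["Rate hikes are expected to slow inflation."], normalize_embeddings=True)
labels, distances = index.knn_query(query, k=2)
print(labels, 1 - distances)  # hnswlib returns cosine distance, so similarity = 1 - distance
```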
As a result, the experiments on the paragraph level require a document ranking to be derived from the returned paragraphs. For this, we define two different paragraph-based document rankings.\nThe first ranking method is based on taking the highest ranked paragraph of a document in the ranking as the overall document ranking. For example, say we return 6 paragraphs from 3 unique documents in the following order:\n{d 1 , d 2 , d 2 , d 3 , d 3 , d 3 }\nwhere d i refers to a paragraph from document i. Then, our document ranking will be as follows: {1, 2, 3}. Note how the ranking from the first paragraph of a document determines its position in the document ranking.\nThe second ranking method is based on counting the number of paragraphs per document in the ranking, and using that to rank the documents. For example, say we return the same 6 paragraphs in the following order:\n{d 1 , d 2 , d 2 , d 3 , d 3 , d 3 }\nwhere d i refers to a paragraph from document i. Then, our document ranking will be as follows: {3, 2, 1}. Note how in contrast to the previous ranking method, the number of paragraphs per document determines the ranking." }, { "figure_ref": [], "heading": "Baseline feedback strategies", "publication_ref": [], "table_ref": [], "text": "In this research we have three baseline methods for the relevance feedback experiments. The first baseline method is to re-use the original queried embedding. If a user accepts/declines documents, the ranking of documents remains constant. This is referred to as \"no feedback\" or \"original\".\nThe second baseline method is based on keyword expansion. In keyword expansion, each iteration a number of keywords from the documents with positive pseudo-relevance feedback is selected and appended to the original query using an OR operator. As a result, the selected keywords serve as a pre-filter for the original query where the documents should contain at least 1 of the collected keywords to be considered. In our research, this selection is done based on the (highest) inverse document frequency value." }, { "figure_ref": [], "heading": "Vector-based feedback strategies", "publication_ref": [], "table_ref": [], "text": "Since the queried and collected texts have vectors (e.g., BERT or TF-IDF), the relevance feedback methods are based on vector operations. These vector operations are based on summing and averaging the vectors. For this, we experiment with both cumulative (i.e. include the queried vector in the average/sum) and non-cumulative feedback (i.e. exclude the queried vector in the average/sum).\nAlso, for text similarity methods implemented on the paragraph level, relevance feedback can be amplified to the document level. If a given paragraph receives positive relevance feedback, then that feedback can be extended to other" }, { "figure_ref": [], "heading": "Model name", "publication_ref": [ "b20" ], "table_ref": [], "text": "Size #Dimensions Speed (sentences/sec) all-mpnet-base-v2 420 MB 768 2800 all-MiniLM-L12-v2 120 MB 384 7500 all-MiniLM-L6-v2 80 MB 384 14200\nTable 1: Selected BERT models' characteristics according to [21] paragraphs that have the same parent document. In our research this will be referred to as \"amplified feedback\" (or \"amp\" in tabular formats). Note, feedback amplification is not applicable to any of our baseline methods." }, { "figure_ref": [], "heading": "Experimental setup", "publication_ref": [], "table_ref": [], "text": "This section provides an overview of the experiments and our data (and preprocessing). 
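Before the experimental setup, the two paragraph-based document rankings described above can be made concrete with a small sketch that reuses the worked example from the text; the function names are ours.

```python
def first_based_ranking(paragraph_ranking):
    # A document is ranked by the position of its highest-ranked (first-occurring) paragraph.
    seen, ranking = set(), []
    for doc_id in paragraph_ranking:
        if doc_id not in seen:
            seen.add(doc_id)
            ranking.append(doc_id)
    return ranking

def count_based_ranking(paragraph_ranking):
    # A document is ranked by how many of its paragraphs appear in the paragraph ranking.
    counts = {}
    for doc_id in paragraph_ranking:
        counts[doc_id] = counts.get(doc_id, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

# The worked example from the text: six returned paragraphs from three documents.
paragraph_ranking = [1, 2, 2, 3, 3, 3]
print(first_based_ranking(paragraph_ranking))  # [1, 2, 3]
print(count_based_ranking(paragraph_ranking))  # [3, 2, 1]
```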
The first experiment focuses on comparing the performances of different text similarity methods. The second experiment focuses on implementing the best performing similarity method using different relevance feedback strategies." }, { "figure_ref": [], "heading": "Data and preprocessing", "publication_ref": [ "b2", "b2" ], "table_ref": [], "text": "Our research uses the RCV-1 v2 [3] dataset in all the experiments. This dataset was made public in 2005 by Reuters News and consists of 806784 news articles. Due to hardware constraints, we did not use the complete dataset in our experiment. Instead, we randomly sampled 300 articles per topic. For this, we sampled 15 (unrelated) topics that are equally split in train, validation, and test. Moreover, besides these topics we also sampled 4 topics that share the same parent topics. This set will be referred to as the \"ambiguous\" set.\nAs for preprocessing, in TF-IDF we filter out stop words, numbers, and convert the text to lowercase. The stop word list used for this is publicly available on GitHub 3 . For the SBERT-based experiments, we only remove numbers and special characters.\nFinally, for the experiments on the paragraph level, we first split the text into sentences using the <p>...</p> tags in the XML files from RCV-1 v2 dataset [3]. Next, a paragraph is created through concatenating every 3 adjacent sentences of a document (and the remainder). For SBERT the number of words in a paragraph can't exceed 384. Hence, we verified that the paragraphs in the topic sets do not exceed that limit." }, { "figure_ref": [], "heading": "Text similarity methods", "publication_ref": [ "b20" ], "table_ref": [], "text": "For the text similarity methods, we compare two approaches: MLT (which is based on TF-IDF) and DVS (which is based on SBERT). For both methods, we iterate through our set using \"query by document\" (QBD). Each time we query 3 https://github.com/stopwords-iso/stopwords-en a document/paragraph to return other documents/paragraphs that belong to the same topic.\nFor MLT, we set two parameters. First, the minimum document frequency for words to be considered (minDf). Second, the maximum document frequency for words to be considered (maxDf). Due to limited computing resources, we didn't conduct a full grid search to find the optimal values for these parameters. Instead, we conducted a manual search on the train and validation sets and used the average of the most performant parameter values on our test (and ambiguous) set. Note, we conduct these experiments on the paragraph and document level.\nFor DVS, we don't have any parameters to set. Hence these experiments are conducted directly on our test set and our ambiguous set. However, we do experiment with three different pre-trained SBERT models. These are selected based on being the general-purpose models in the SBERT documentation [21]. An overview of these models can be found in Table 1. Note, due to SBERT's maximum context length, we conduct these experiments on the paragraph level only.\nFinally, we evaluate the similarity methods based on (the macro averaged) recall, precision and F1 scores. For these metrics, the definition of a \"true positive\" is a returned document that is of the same set as the queried document. This positive set is based on the annotations of the dataset. For example, if we query a document annotated as \"sports\", then the document returned (as similar) should also be annotated as \"sports\" to be considered a true positive." 
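To make this evaluation protocol explicit, a schematic version of the query-by-document scoring loop is sketched below. The retrieval callback, the label dictionary, and the variable names are assumptions; only the true-positive definition mirrors the description above.

```python
def evaluate_query_by_document(retrieve, labelled_docs, k=10):
    # labelled_docs: mapping from document id to its topic label.
    # retrieve(doc_id, k): returns the ids of the k most similar documents, excluding the query itself.
    precisions, recalls = [], []
    for query_id, topic in labelled_docs.items():
        relevant = {d for d, t in labelled_docs.items() if t == topic and d != query_id}
        retrieved = retrieve(query_id, k)
        true_pos = sum(1 for d in retrieved if labelled_docs.get(d) == topic)
        precisions.append(true_pos / len(retrieved) if retrieved else 0.0)
        recalls.append(true_pos / len(relevant) if relevant else 0.0)
    # Macro average over all queried documents.
    n = len(labelled_docs)
    return sum(precisions) / n, sum(recalls) / n

# Usage: precision, recall = evaluate_query_by_document(my_retrieval_fn, {"d1": "sports", ...}, k=10)
```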
}, { "figure_ref": [], "heading": "Relevance feedback", "publication_ref": [ "b22" ], "table_ref": [], "text": "For our relevance feedback experiments we implement a form of pseudo-relevance feedback. Here, the feedback is based on the same definition of a true positive as mentioned earlier.\nAn overview of the layout of the experiment can be found in Algorithm 1. Note how each iteration the already collected/declined documents are filtered from the search. Also, note how the query is updated each iteration based on the pseudo-relevance feedback. In our implementation of this experiment, we collect 10 documents/paragraphs each iteration.\nFinally, for performance evaluation, we record the average number of iterations needed to achieve exactly 80% recall (which is a commonly used threshold in information retrieval [23])." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "This section is an overview of the results from our experiments. In the first subsection, we discuss the results of the individual text similarity methods. In the second subsection, we discuss the results of the different relevance feedback strategies. it's apparent that deriving a document ranking from its highest ranked paragraph outperforms ranking documents based on counting the paragraphs. Moreover, for both methods querying the first paragraph of a document slightly outperforms querying random paragraphs. Next, for the TF-IDF-based method specifically, the experiments on the document level outperform the experiments on the paragraph level.\nFinally, when comparing the performance of the optimal configuration of both approaches, DVS (with a first-based ranking) outperforms all TF-IDF-based approaches. Next, when comparing the optimal configurations of the approaches on the ambiguous set, we again see that DVS outperforms the TF-IDF-based approach (see Figure 5)." }, { "figure_ref": [], "heading": "Relevance feedback", "publication_ref": [], "table_ref": [ "tab_0", "tab_2" ], "text": "For the second set of experiments, we experimented with different feedback strategies using the identified text similarity method (DVS). For these experiments, the average number of iterations (and standard deviation) needed to achieve 80% recall for all methods and datasets is shown in Table 2. Note, every iteration translates to a review effort of 10 paragraphs. Similar to the previous experiments, the results are available on the test set and the ambiguous set.\nFirst, the results on the test set. Here, it's apparent that the feedback methods based on vector operations require fewer iterations and have a lower standard deviation that the baseline methods. The best performing feedback method is summing the vectors. Next, the results on the ambiguous set. Here, all similarity methods require more iterations to reach 80% recall and have a higher standard deviation than they have on the test set. Still, feedback methods based on summing the vectors (cumulatively) gives the best results.\nIt's apparent that cumulative feedback methods outperform non-cumulative feedback methods for all vector-based relevance strategies. Moreover, the difference between averaging and summing vectors is smaller when using noncumulative strategies.\nA noteworthy finding when comparing the results from the test set and the ambiguous set is the gap between the minimal baseline method (of no feedback) and the optimal feedback method (of cumulatively summing the vectors). 
On the test set, the reduction of review effort (measured in the number of iterations to achieve 80% recall) is 17.85%. On the ambiguous set, however, this reduction of review effort equals 59.04%. Another noteworthy finding is that amplifying the feedback to sibling paragraphs reduces the standard deviation, but not the number of iterations needed to achieve 80% recall.\nFinally, for all methods we measured the average time taken per iteration. The results for this are shown in Table 3. These results show that relevance feedback strategies based on vector operations (average and sum) add little latency to the experiment compared to no feedback. Still, amplifying feedback to sibling paragraphs does add some latency to the experiment for both averaging and summing vectors. However, keyword expansion adds the most latency to the experiments." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "This section is a discussion of the results. First, we interpret the results for our two main experiments. Second, we discuss the limitations of our research and the resulting suggestions for future work." }, { "figure_ref": [], "heading": "Text similarity methods", "publication_ref": [ "b26", "b8" ], "table_ref": [], "text": "For all of the text similarity methods, some common denominators emerge from the results. First, querying the first paragraph gives better results than querying a random paragraph. This coincides with findings from related work, since prior research found that the effectiveness of \"query by document\" approaches depends on the prevalence of relevant terms in the queried text [27]. Combining this finding with our results, it's probable that the first paragraphs in the RCV-1 v2 dataset outperform random paragraphs because they have a higher prevalence of relevant terms. This property could be exclusive to the news articles used in our research, and therefore might not apply to the data used in other domains.\nAnother commonality between the methods is related to the paragraph-based document rankings. Here, we see that for all methods, ranking documents based on the first paragraph in the ranking gives better results than ranking documents based on counting their paragraphs in the ranking. Potentially, this could be related to differences in the number of paragraphs per document: if a document only has one paragraph, then it's always bound to be at the bottom of a count-based ranking, regardless of how related/similar that document is.\nAs for comparing the different text similarity methods, both sets of experiments show that DVS outperforms the TF-IDF-based approach. Considering our DVS approach is based on BERT instead of TF-IDF, this finding makes sense, because BERT's bidirectional self-attention mechanism [9] enables it to understand more ambiguous and context-sensitive language properties, for example the usage of synonymy, homonymy and referrals." }, { "figure_ref": [], "heading": "Relevance feedback", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "For the second experiment, the best performing text similarity method (DVS, with a first-based document ranking) was implemented for relevance feedback. For both the test set and the ambiguous set, the experiments show that relevance feedback methods based on cumulatively summing the vectors reduce review effort the most. Interestingly, this improvement does not seem to come at a computational cost.
Since Table 3 shows that the execution time of these methods (without feedback amplification to sibling paragraphs) is fairly similar to our minimal baseline method (of no feedback). Note, when optimizing for a low standard deviation, amplifying feedback to sibling paragraphs is beneficial, which does add latency to the experiment." }, { "figure_ref": [], "heading": "Relevance", "publication_ref": [ "b27", "b15", "b0" ], "table_ref": [], "text": "First, with regards to implementation, our results show that review effort can be decreased using text similarity-based relevance feedback methods. An important side note in this finding is that relevance feedback accomplishes this through only re-ranking documents. As a result, this strategy does not produce false negatives automatically without the awareness of the user (in contrast to an active learning based approach). This is particularly important in use-cases that require a high recall.\nSecond, with regards to scientific novelty and contributions, we should state that the concept of relevance feedback has been studied before [28]. However, certain parts within our implementation are (to our knowledge) novel and therefore contribute to science.\nFirst, the evaluation of different paragraph-based document rankings contribute to the domain of paragraph-based document-to-document retrieval. Given the rise of large language models (that are generally limited to the paragraph level [16]) and the fact that relevant information can be exclusive to a specific part of a document [1], these findings are applicable beyond the technologies/embeddings used in this research.\nSecond, our results show that the usage of sibling paragraphs in relevance feedback can reduce the standard deviation of review effort. To our knowledge, this technique and finding is novel. Moreover, a more \"stable\" reduction of review effort could be favorable in real-world scenarios. Hence, this finding is not just novel, but also applicable." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b7" ], "table_ref": [], "text": "The size of the dataset (only 300 articles per topic) is smaller than most realworld information retrieval scenarios. Moreover, our data only consists of news articles. Therefore, certain findings in our research (e.g., the fact that querying the first paragraph slightly outperforms querying random paragraphs) might not apply on other datasets. Still, given the fact that the RCV-1 v2 dataset is commonly used as a benchmark dataset in information retrieval [8], the results are still an adequate indication of our method's performance." }, { "figure_ref": [], "heading": "Future work", "publication_ref": [ "b1" ], "table_ref": [], "text": "Our first suggestion for future work is related to the size of our dataset. As mentioned, due to computational constraints we only sampled 300 articles per topic. However, datasets in real-world information retrieval applications can be much larger than that. As a result, future work could run these methods on larger datasets to research their scalability.\nNext, this research uses Solr's \"MoreLikeThis\" functionality to increase the speed of our TF-IDF-based similarity experiments. Meaning, we didn't use vector space scoring to compute the similarities between the TF-IDF vectors. 
The reasoning behind this is that computing the (cosine) similarities between n TF-IDF vectors results in quadratic (O(n 2 )) time and space complexity, which is simply not viable in a real-world application.\nHowever, recent innovations have made it possible to compute the pairwise similarities between (sparse) vectors much faster. An example of this is the ChunkDot Python library [2], which splits the TF-IDF matrix into chunks and computes the similarities in parallel. Future work could use this innovation to experiment with TF-IDF-based cosine similarity as an additional text similarity method." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b24" ], "table_ref": [], "text": "This research aimed to evaluate the impact of changing the (text similaritybased) relevance rankings based on relevance feedback. For this, the first research question was formulated as follows: what text similarity methods are most suitable for relevance feedback? Our results show that the most suitable text similarity method for this is DVS (using BERT-based dense-vector representations) where the highest ranked paragraph determines the document ranking.\nNext, the second research question was formulated as follows: to what extent can relevance feedback help to reduce review effort? Here, our results show that (compared to processing no relevance feedback) the relevance feedback method identified in this research reduces review effort between 17.85% and 59.04%, given a target recall level of 80%.\nGiven the recall-oriented nature of many information retrieval applications [25], the results for the relevance feedback experiments are very encouraging. Since, in contrast to an active learning based strategy (which typically used for this purpose), this approach reduces review effort through only re-ranking documents. As a result, there are no false negatives created without the awareness of the user." } ]
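As a pointer for the chunked TF-IDF similarity direction raised in the future work above, a rough sketch of the idea with scikit-learn is shown below. It is not the ChunkDot library itself, and the chunk size, preprocessing, and top-k interface are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def chunked_topk_similar(docs, k=5, chunk_size=1000):
    # For every document, find its k most cosine-similar documents from the TF-IDF matrix,
    # processing the rows in chunks so the full n-by-n similarity matrix is never materialised.
    tfidf = TfidfVectorizer(stop_words="english", lowercase=True).fit_transform(docs)
    top_ids, top_scores = [], []
    for start in range(0, tfidf.shape[0], chunk_size):
        sims = cosine_similarity(tfidf[start:start + chunk_size], tfidf)   # (chunk, n) dense block
        np.fill_diagonal(sims[:, start:start + chunk_size], -1.0)          # exclude self-matches
        idx = np.argsort(-sims, axis=1)[:, :k]
        top_ids.append(idx)
        top_scores.append(np.take_along_axis(sims, idx, axis=1))
    return np.vstack(top_ids), np.vstack(top_scores)

# Usage: ids, scores = chunked_topk_similar(list_of_documents, k=5, chunk_size=1000)
```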
In a number of information retrieval applications (e.g., patent search, literature review, due diligence, etc.), preventing false negatives is more important than preventing false positives. However, approaches designed to reduce review effort (like "technology assisted review") can create false negatives, since they are often based on active learning systems that exclude documents automatically based on user feedback. Therefore, this research proposes a more recall-oriented approach to reducing review effort. More specifically, through iteratively re-ranking the relevance rankings based on user feedback, which is also referred to as relevance feedback. In our proposed method, the relevance rankings are produced by a BERT-based dense-vector search and the relevance feedback is based on cumulatively summing the queried and selected embeddings. Our results show that this method can reduce review effort between 17.85% and 59.04%, compared to a baseline approach (of no feedback), given a fixed recall target.
Relevance feedback strategies for recall-oriented neural information retrieval
[ { "figure_caption": "Algorithm 1 : 5 for result in results do 6 if feedback for result is positive then 7 paragraphreturn iterations 5 . 1156751Pseudo relevance feedback experiment input : paragraph, maxRecall output: Iterations needed to achieve recall 1 iterations = 0 2 while recall(acceptedDocuments) ≤ maxRecall do 3 filter = acceptedDocuments + declinedDocuments 4 results = query(paragraph, filter) // Returns top 10 results. Text similarity methods For our TF-IDF-based approach, the manual search on the train and validation sets resulted in the parameter values of maxDf=0.8 and minDf=0 on both the paragraph and document level. Hence, the TF-IDF related results are based on these parameter values. Next, for the DVS experiments, all-mpnet-base v2 was the best performing pre-trained model. Hence, the results of this model are shown in this section. For our test set, the recall-precision graphs are shown in Figure 3 and Figure 4. In both Figures,", "figure_data": "", "figure_id": "fig_0", "figure_label": "156751", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :Fig. 4 :Fig. 5 :345Fig. 3: Different configurations for the TF-IDF-based approach on the test set", "figure_data": "", "figure_id": "fig_1", "figure_label": "345", "figure_type": "figure" }, { "figure_caption": "Iterations needed to achieve 80% recall (baselines are in the top cells, cumulative approaches are in the middle cells, non-cumulative approaches are in the bottom cells).", "figure_data": "Feedback strategyMean (Test)Std Dev (Test)Mean (Ambiguous)Std Dev (Ambiguous)No feedback33.36 8.8446.7710.04Keyword expansion32.24 8.0344.239.94Rocchio (α = 0.5, β = 0.5) 29.50 3.7937.985.11Average29.43 1.5132.652.97Average (amp)30.17 1.0332.991.80Sum28.29 0.7529.412.13Sum (amp)28.46 0.5030.541.31Average31.54 1.1438.684.41Average (amp)32.81 1.0838.412.64Sum32.80 1.1238.805.28Sum (amp)32.20 0.8738.203.77", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Average execution times for different relevant feedback strategies summing vectors. However, keyword expansion adds the most latency to the experiments.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
Timo Kats; Peter Van Der Putten; Jan Scholtes
[ { "authors": "K J Adebayo", "journal": "", "ref_id": "b0", "title": "Multimodal Legal Information Retrieval", "year": "2018" }, { "authors": "R Agundez", "journal": "", "ref_id": "b1", "title": "ChunckDot Python library", "year": "2023-05" }, { "authors": "Massih-Reza & Amini; C Goutte", "journal": "UCI Machine Learning Repository", "ref_id": "b2", "title": "Reuters RCV1 RCV2 Multilingual, Multiview Text Categorization Test collection", "year": "2013" }, { "authors": "A Arampatzis; G Peikos; S Symeonidis", "journal": "Information Retrieval Journal", "ref_id": "b3", "title": "Pseudo relevance feedback optimization", "year": "2021" }, { "authors": "T Cai; Z He; C Hong; Y Zhang; Y L Ho; J Honerlaw; A Geva; V A Panickan; A King; D R Gagnon", "journal": "Journal of Biomedical Informatics", "ref_id": "b4", "title": "Scalable relevance ranking algorithm via semantic similarity assessment improves efficiency of medical chart review", "year": "2022" }, { "authors": "C P Chai", "journal": "Natural Language Engineering", "ref_id": "b5", "title": "Comparison of text preprocessing methods", "year": "2023" }, { "authors": "D Chandrasekaran; V Mago", "journal": "ACM Computing Surveys", "ref_id": "b6", "title": "Evolution of semantic similarity-a survey", "year": "2022-03" }, { "authors": "V Deolalikar", "journal": "IEEE", "ref_id": "b7", "title": "How valuable is your data? A quantitative approach using data mining", "year": "2015" }, { "authors": "J Devlin; M Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b8", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "A S Foundation", "journal": "", "ref_id": "b9", "title": "Apache Lucene -scoring", "year": "2011" }, { "authors": "J Gorman; J R Curran", "journal": "", "ref_id": "b10", "title": "Scaling distributional similarity to large corpora", "year": "2006" }, { "authors": "M R Grossman; G Cormack", "journal": "The Journal", "ref_id": "b11", "title": "Continuous active learning for TAR", "year": "2016" }, { "authors": "P Jaccard", "journal": "Bull. Soc. Vaud. Sci. Nat. 
pp", "ref_id": "b12", "title": "Nouvelles recherches sur la distribution florale", "year": "1908" }, { "authors": "S Joshi; D Contractor; K Ng; P M Deshpande; T Hampp", "journal": "", "ref_id": "b13", "title": "Auto-Grouping Emails for Faster e-Discovery", "year": "2020-06" }, { "authors": "H Li; A Mourad; S Zhuang; B Koopman; G Zuccon", "journal": "ACM Transactions on Information Systems", "ref_id": "b14", "title": "Pseudo relevance feedback with deep language models and dense retrievers: Successes and pitfalls", "year": "2023" }, { "authors": "Y Liu; T Han; S Ma; J Zhang; Y Yang; J Tian; H He; A Li; M He; Z Liu", "journal": "", "ref_id": "b15", "title": "Summary of ChatGPT research and perspective towards the future of large language models", "year": "2023" }, { "authors": "Y A Malkov; D A Yashunin", "journal": "IEEE Computer Society", "ref_id": "b16", "title": "Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs", "year": "2020-04" }, { "authors": "C D Manning", "journal": "Syngress Publishing", "ref_id": "b17", "title": "Introduction to information retrieval", "year": "2008" }, { "authors": "O B Piramuthu", "journal": "", "ref_id": "b18", "title": "Multiple choice online algorithms for technology-assisted reviews", "year": "2023" }, { "authors": "F Rahutomo; T Kitasuka; M Aritsugi", "journal": "", "ref_id": "b19", "title": "Semantic cosine similarity", "year": "2012" }, { "authors": "N Reimers; I Gurevych", "journal": "CoRR", "ref_id": "b20", "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", "year": "2019" }, { "authors": "J J Rocchio", "journal": "", "ref_id": "b21", "title": "Document Retrieval System-Optimization and Evaluation", "year": "2009" }, { "authors": "H L Roitblat", "journal": "", "ref_id": "b22", "title": "Probably Reasonable Search in eDiscovery", "year": "2022" }, { "authors": "D Sánchez; M Batet", "journal": "Expert Systems with Applications", "ref_id": "b23", "title": "A semantic similarity method based on information content exploiting multiple ontologies", "year": "2013" }, { "authors": "J J Song; W Lee; J Afshar", "journal": "Data & Knowledge Engineering", "ref_id": "b24", "title": "An effective high recall retrieval method", "year": "2019" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; L Kaiser; I Polosukhin", "journal": "", "ref_id": "b25", "title": "Attention is all you need", "year": "2017" }, { "authors": "E Yang; D D Lewis; O Frieder; D A Grossman; R Yurchak", "journal": "DESIRES", "ref_id": "b26", "title": "Retrieval and Richness when Querying by Document", "year": "2018" }, { "authors": "H Zhang; G V Cormack; M R Grossman; M D Smucker", "journal": "Information Retrieval Journal", "ref_id": "b27", "title": "Evaluating sentencelevel relevance feedback for high-recall information retrieval", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 224.9, 127.66, 255.69, 22.31 ], "formula_id": "formula_0", "formula_text": "Jaccard(A, B) = |A ∩ B| |A ∪ B| = |A ∩ B| |A| + |B|(1)" }, { "formula_coordinates": [ 3, 230.78, 436.97, 249.81, 22.31 ], "formula_id": "formula_1", "formula_text": "idf (t, D) = log |D| 1 + |{d ∈ D : t ∈ d}| (2)" }, { "formula_coordinates": [ 6, 149.65, 544.26, 330.94, 29.23 ], "formula_id": "formula_2", "formula_text": "cosine similarity = cos(θ) = AB ∥A∥∥B∥ = n i=1 A i B i n i=1 (A i ) 2 n i=1 (B i ) 2(3)" }, { "formula_coordinates": [ 7, 134.77, 258.3, 90.03, 9.65 ], "formula_id": "formula_3", "formula_text": "{d 1 , d 2 , d 2 , d 3 , d 3 , d 3 }" }, { "formula_coordinates": [ 7, 390.56, 318.07, 90.03, 9.65 ], "formula_id": "formula_4", "formula_text": "{d 1 , d 2 , d 2 , d 3 , d 3 , d 3 }" } ]
2024-01-18
[ { "figure_ref": [ "fig_0", "fig_1", "fig_1", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b35", "b38", "b39", "b35", "b6", "b22" ], "table_ref": [], "text": "Identifying key anatomical structures (e.g. certain lesions or landmarks) from medical images plays a crucial role in the clin- Recently, a variety of methods have emerged for landmark detection using only a single exemplar annotation Yan et al. (2022); Yao et al. (2021Yao et al. ( , 2022)). In these methods, landmark detection is formulated as a template-query matching problem, which aims to directly associate landmarks with exemplars based on their similarity in an anatomical embedding space. To achieve this, voxel-wise self-supervised learning pipelines are designed. Two augmented views of the same image are fed into a Siamese network to produce a dense feature map for each of them. The objective is to ensure that corresponding voxels in these two views render similar features, while noncorresponding voxels exhibit distinctive features. Given the inherent structural similarities in the human body across subjects, jects.\nOne representative method within this context is the Selfsupervised Anatomical eMbedding (SAM) Yan et al. (2022), which can produce a unique embedding for each voxel in a medical image such as a CT or MRI image. During inference, the nearest-neighbor (NN) matching technique is employed to locate the desired landmark based on the exemplar annotation.\nThis approach has demonstrated promising results across various challenging tasks, including lesion tracking in longitudinal CT scans Cai et al. (2021), universal landmark matching (Fig. 1), and CT image registrationLiu et al. (2021); Li et al. (2023). Despite its success, it still has two major limitations. First, the discriminatory ability of SAM stems from self-supervised similarity measures of contextual appearance.\nHence it is difficult for SAM to distinguish the structures that share similar appearances but have different semantics (Fig. 2 (a)) and to match the structures with similar semantics but distinct appearances (Fig. 2 (b)). In such cases, SAM may produce significant matching errors. Second, SAM is unable to perform multi-modality anatomy matching, which limits its utility in multi-modality applications such as multi-modality lesion tracking and registration (Fig. 2 (c)).\nTo tackle these challenges, we propose universal anatomical embedding (UAE), a unified framework for learning appearance, semantic, and cross-modality embeddings. UAE introduces three key novelties.\n(1) Semantic embedding learning by prototypical supervised contrastive (SupCon) loss: To address the problem of similar appearance with difference semantics, we incorporate a semantic head to generate a semantic embedding vector for each voxel. The semantic head is trained using the prototypical SupCon loss, which is specifically designed to facilitate supervised contrastive learning at the voxel level. Notably, we train the semantic head using publicly available organ segmentation datasets, making it easily accessible and adaptable.\n(2) Fixed-point-based matching: To address the problem of similar semantics showing different appearance, we propose a fixed-point-based iterative matching technique to replace the naive NN matching strategy. 
We leverage the relation between the target structure and its surrounding \"stable\" structures to produce more reliable matching.\n(3) We propose a novel iterative cross-modality embedding learning method, which is designed to learn the correspondences between unregistered multi-modality images of the same subject without any manual annotation, even when they exhibit large field-of-view (FOV) differences.\nWe evaluate UAE on both single-modality and crossmodality tasks. Essentially, these methods optimize the same objective, which is defined based on treating the same location on two augmented views as a positive pair and considering two different locations as a negative pair. Therefore, they share the weakness of SAM, whereas our approach is designed to address these weaknesses." }, { "figure_ref": [], "heading": "Multi-modality data alignment", "publication_ref": [ "b18" ], "table_ref": [], "text": "The diagnostic accuracy can be significantly enhanced by utilizing the complementary information from aligned multimodality medical images. A notable example is that the aligned MRI and CT images can empower radiotherapists for precise treatment planning Khoo et al. (2000). Our solution to this chicken-and-egg issue involves a twophase learning process. Initially, we train a modality-agnostic UAE by applying strong contrast augmentation on single modality images. This strategy is used to simulate the appearance difference in multi-modality images (e.g., CT and MRI).\nSubsequently, we use this modality-agnostic UAE as a seed to initiate an iterative process that alternates between crossmodality embedding learning and cross-modality registration.\nIn the end, we obtain both a cross-modality UAE model and registered images. For new data, we can directly employ the trained cross-modality UAE for affine/rigid registration. The resulting images can then be fed into available deformable registration methods for voxel-wise alignment. By combining these two components, we create a robust, fully automated registration tool." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "The proposed UAE consists of UAE-S and UAE-M, which are designed for single-and multi-modality embedding learning, respectively. UAE-S can be further divided into two components: Semantic-Enhanced Anatomical embedding model " }, { "figure_ref": [], "heading": "Embedding training", "publication_ref": [], "table_ref": [], "text": "Prototypical SupCon loss\n𝑥 ! \" 𝑥 ! # 𝑥 ! $ 𝑐 \" 𝑐 $ 𝑐 # … … … …" }, { "figure_ref": [], "heading": "Semantic samples", "publication_ref": [], "table_ref": [], "text": "Appearance samples\n𝑥 % 𝑥 &\nVoxel-wise contrastive loss We now delve into the details of each module.\n𝑥 ! … … … … 𝑥 ! ′ 𝑥 % ′ 𝑥 ' ′ (a) (b)" }, { "figure_ref": [ "fig_5", "fig_1" ], "heading": "SEA", "publication_ref": [ "b19" ], "table_ref": [], "text": "SEA includes a semantic branch and an appearance branch, which share the same convolutional neural network (CNN) backbone (see Fig. 4 (a)). Note that we also tried to use transformer backbones but observed no significant performance gain. The appearance branch employs the same training strategy as the original SAM, where two overlapped and randomly augmented patches are fed into the CNN-backbone followed by the appearance head to generate the appearance embedding for each voxel. Let the embeddings of the positive pair be denoted by {x i , x ′ i }. 
The appearance branch aims to minimize the following voxel-wise contrastive loss\nL app = - npos i=1 log exp x i • x ′ i /τ a exp x i • x ′ i /τ a + nneg j=1 exp x i • x j /τ a ,(1)\nwhere n pos and n neg denote the number of positive and negative pairs, respectively, and τ a is the temperature parameter.\nThe self-supervised appearance head is able to learn the difference between two distinct body structures based on their ap-pearances. However, it cannot distinguish challenging cases such as adjacent tissues and organs, which share similar intensity distribution and texture (see Fig. 2(a)). To overcome this issue, we need higher-level semantic supervision to differentiate different tissues and organs.\nWe utilize a semantic branch to produce a fixed-length semantic embedding and treat it as a supplement to the appearance embedding. Our goal is to make the embeddings of the same organ closer than those from different organs. We can leverage any public organ segmentation dataset with arbitrary organ annotations. However, directly using the voxel-level supervised contrastive (SupCon) loss Khosla et al. (2020) is very expensive, since its complexity is O(n 2 ), where n is the number of voxels in a patch (∼300K). To address this issue, we design a prototypical SupCon loss by replacing the voxel-voxel pairs with prototype-voxel pairs. During training, given the predicted embeddings produced by the semantic head\n{x p i }, i ∈ [1, n p ], p ∈ [1, K],\nwhere x p i represents ith voxel embedding with semantic label p, n p is the number of voxels with label p, K is the number of semantic classes (i.e. organ labels), we formulate the prototypical SupCon loss as\nL s = K p=1 - 1 n p np i=1 log exp (c p • x p i /τ s ) K q=1 nq j=1 exp (c p • x q j /τ s ) ,(2)\nwhere c p = 1 n p n p i=1 x p i is the prototype of class p, and τ s is the 2 with a high similarity score using iterative NN matching. The process is then extended to the grid points around t A using batch-wise fixed-point iteration to obtain all fixed points (different starting points may converge to the same fixed point). Subsequently, we learn how the fixed points structure around t A is mapped to the corresponding structure in B through an affine transform T . By considering t A as an element of the fixed points structure in A, we can compute the matching point as q B = T t A , which is more accurate and reliable than the initial NN matching result q B 0 .\ntemperature parameter. In contrast to the original SupCon loss, prototypical SupCon loss reduces the complexity from O(n 2 ) to O(nK), enabling its usage on dense prediction tasks. We apply the L-2 normalization to the embeddings produced by both appearance and semantic heads, so that they can be directly concatenated and used as a unified embedding vector." }, { "figure_ref": [], "heading": "Fixed-point-based matching", "publication_ref": [], "table_ref": [], "text": "After learning the SEA model, we also need a robust and accurate point matching strategy. Current exemplar-based methods compute the inner product of template and query embeddings and uses NN matching. However, when the target structure in the query image is missing or significantly altered (see Fig. 2(b)), this simple strategy may not be accurate.\nTherefore, we propose an iterative structural inference method (see Fig. 
5) to improve matching performance, which takes into account the consistency of a match and the relationship between the target landmark and its surrounding structures.\nFeeding two images A and B into SEA, we obtain the voxel embeddings X A = {x A i } and X B = {x B i }, where i is the index of pixels. Let the template embedding vector on A be denoted by x A t . We can find the corresponding query embedding on B through NN matching\nx B q = argmax i∈B (x A t • x B i ).(3)\nFor simplicity, we ignore the notation of embedding x and represent the template point and its NN matching point on B as: t A = t A 0 and q B 0 . We have established the correspondence t A 0 → q B 0 . Let us consider the reverse process. Starting from q B 0 , can the NN matching method give us t A 0 ? If yes, we can conclude that we have a consistent forward-backward matching and that is probably a good matching. However, if the reverse process maps to another point, i.e., q B 0 → t A 1 and t A 1 t A 0 , then the first NN match is probably not reliable, since the similarity score of S (q B 0 → t A 1 ) is larger than S (q B 0 → t A 0 ). Formally, the forward-backward process can be formulated as\nt A i+1 = f (t A i , X A , X B ) ≜ t A i → q B i , q B i → t A i+1 .(4)\nMathematically, a fixed point of a function is an element that is mapped to itself by the functionGranas and Dugundji (2003).\nTherefore, a forward-backward consistent matching identifies a fixed point of f since t A 0 = t A 1 = f (t A 0 ). For the matchings where t A 1 t A 0 , although it is not a fixed point, we can always use it as a starting point to find a fixed point using the fixed-point iteration. Specifically, for any t A 0 , we compute a sequence of its f mappings\nt A 0 , t A 1 = f (t A 0 ), t A 2 = f (t A 1 ), t A i+1 = f (t A i ), ....(5)\nThis process gradually increases the similarity score until, after n fix iterations, the sequence converges to t A i+1 = t A i for all i ≥ n fix . In light of the fixed-point iteration, we propose an approach to locate the matching of t A by considering all fixed points surrounding it. Our method begins with selecting an L 3 cubic region centered at the template point t A and applying batched fixed-point iteration. Subsequently, we preserve fixed points with offsets to t A below a threshold τ dis . We treat these fixed points on image A as a structured element, and view their corresponding points on image B as another structure. We then estimate an affine transform T to describe their mapping relation. By including t A in the fixed-point structure on A, we can compute the matching point as q B = T t A . When a query point has a drastically different appearance, its NN matching result becomes unreliable. Our fixed-point based matching method can adaptively find highly reliable points surrounding the query point and aggregate this structural information to give the final matching result." }, { "figure_ref": [ "fig_3", "fig_7", "fig_7" ], "heading": "UAE-M", "publication_ref": [ "b5", "b42", "b0" ], "table_ref": [], "text": "In multi-modality scenarios, for example, we have a dataset comprising both CT and MRI images of each subject. Notably, each CT and MRI pair is not registered and could exhibit significant FOV differences, which is common in raw clinical data (CT scans often have larger FOV than MRI, see Fig. 3). 
UAE-M

In multi-modality scenarios, for example, we have a dataset comprising both CT and MRI images of each subject. Notably, each CT-MRI pair is not registered and may exhibit significant FOV differences, which is common in raw clinical data (CT scans often have a larger FOV than MRI, see Fig. 3). We aim to learn multi-modality anatomical embeddings on this dataset and thus find the correspondence between CT and MRI images.

A straightforward solution is to register the CT and MRI images of each subject and then learn the cross-modality correspondence. Existing registration methods, however, can hardly deal with image pairs that have significant FOV differences.

To address this issue, we introduce an iterative method for learning the UAE-M model, which shares the same design as the SEA model but employs a distinct learning approach. To maintain simplicity, we utilize only the appearance head of SEA in UAE-M, while the semantic head can be seamlessly integrated if necessary. Drawing inspiration from the concept of modality-agnostic learning Billot et al. (2023), we initially train a UAE-M model that is moderately modality-independent by applying strong, or even aggressive, contrast augmentation. Specifically, we randomly apply non-linear intensity transformations Zhou et al. (2021) and intensity reversal to each image. This process generates visually unrealistic intensity images (as shown in Fig. 6(a)) but preserves the anatomical structures. Furthermore, we include random affine transformation, resizing, and blurring in our augmentation pipeline.

During training, we first pick one CT or MRI image at random, crop two overlapping patches, apply different aggressive augmentations to the two patches, and finally use them to train the initial UAE-M (UAE-M_0) model. In this way, we force the model to learn features that are independent of intensity and focus on higher-level structural similarity. As both CT and MRI are structural images, this UAE-M_0 can roughly match regions with strong structural information.

Nonetheless, directly using this aggressively augmented UAE-M_0 for single-point matching is still locally inaccurate. Instead, we can estimate a reliable global affine/rigid transform from the matchings of multiple points. In SAME-affine Liu et al. (2021), evenly spaced points were employed to calculate the affine matrix between two images. Since our CT-MRI pair is from the same subject, we favor computing a rigid transform matrix to prevent unwanted stretching. Our task is thus to solve a rigid fitting problem of two 3-D point sets. Given two 3-D point sets p_i and p'_i, i = 1, ..., N, we assume

\[
p_i' = R p_i + t + n_i, \tag{6}
\]

where R is a rotation matrix, t is a translation vector, and n_i is a noise vector. We aim to find R and t that minimize

\[
\Sigma^2 = \sum_{i=1}^{N} \left\| p_i' - \left(R p_i + t\right) \right\|^2. \tag{7}
\]

We utilize a quick and robust non-iterative method Arun et al. (1987) to compute the rigid transform matrix, which enables us to map the scan with a small FOV to its corresponding scan with a large FOV and crop the latter. Due to the potential inaccuracy of the rigid transform, we dilate the body region of the small-FOV scan to reduce the risk of cropping away regions that actually overlap between the two scans. After cropping, both scans have similar FOVs. We then apply a widely used deformable registration method, DEEDS Heinrich et al. (2013), to refine the results. This process is referred to as AdaReg (see Fig. 6(b)).
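The rigid fitting problem of Eqs. (6)-(7) has the classic closed-form SVD solution of Arun et al. (1987). Below is a generic NumPy implementation of that textbook solution (the function name is ours), shown for completeness rather than as the authors' exact code.

```python
import numpy as np

def fit_rigid(p: np.ndarray, p_prime: np.ndarray):
    """Closed-form least-squares rigid fit: find R, t minimizing sum ||p'_i - (R p_i + t)||^2.

    p, p_prime: (N, 3) arrays of corresponding 3-D points (e.g. matched landmarks on MRI and CT).
    """
    c_p, c_q = p.mean(axis=0), p_prime.mean(axis=0)
    H = (p - c_p).T @ (p_prime - c_q)      # 3x3 cross-covariance of the centered point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_q - R @ c_p
    return R, t
```

Applying (R, t) to the body region of the small-FOV scan yields the corresponding crop box in the large-FOV scan, which AdaReg then dilates and refines with DEEDS.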
After registering CT-MRI pairs using AdaReg, we can learn the cross-modality correspondence. For the fine-level learning, we select N_pos^f positive pairs from the registered regions with overlapping areas and choose N_neg^f points to act as negative samples. Additionally, we choose N_fov^f points from the non-overlapping area of the large-FOV scan. The loss function is then defined as

\[
\mathcal{L} = -\sum_{i=1}^{N_{\mathrm{pos}}^{f}} \log \frac{\exp\left(x_i \cdot x_i' / \tau_c\right)}{\exp\left(x_i \cdot x_i' / \tau_c\right) + \sum_{j=1}^{N_{\mathrm{neg}}^{f} + N_{\mathrm{fov}}^{f}} \exp\left(x_i \cdot x_j / \tau_c\right)}, \tag{8}
\]

where x_i and x_i' are the embeddings of a positive pair, x_j represents a negative sample, and τ_c is a temperature parameter.

Similarly, for the coarse-level learning, we select N_pos^c positive pairs and N_neg^c negative samples for each positive pair. To fully utilize the data, we also include augmented intra-modality data as input and use the self-supervised training method. However, as the alignment of CT-MRI pairs is based on the aggressively augmented UAE-M_0, erroneous correspondences may exist in imperfectly aligned cases. To improve accuracy, we iterate the following two steps: run AdaReg using the UAE-M_{k-1} embedding, and learn UAE-M_k with image pairs produced by AdaReg, where k is the iteration number. In each iteration, we reduce the margin of the dilated body region of the small-FOV image, resulting in a closer FOV for the cropped pairs and a more accurate deformable fine-tuning process. Our final UAE-M is obtained when this process converges after several iterations.

Experiments

Implementation Details

The proposed UAE was implemented using PyTorch v1.9 and MMDetection v1.20. We adopted 3D ResNet-18 as our CNN backbone and 3D feature pyramid networks (FPN) as our semantic and appearance heads. The embedding length of each head is set to 128. For the appearance head, we generate both coarse-level and fine-level embeddings; for the semantic head, we only generate the fine-level embedding. To save GPU memory, the output size is half of the input volume size, and we use trilinear interpolation to re-slice the output to the original size. The network was optimized by SGD with a momentum of 0.9. The learning rate was set to 0.02, the batch size to 5, and the temperatures τ_a and τ_s to 0.5.

All CT volumes were re-sliced to an isotropic resolution of 2×2×2 mm^3. For fixed-point-based matching, we set L = 5. For registered pairs, the input is the entire overlapped region along with a randomly selected region in the non-overlapped area. Due to limited GPU memory, we only put one pair of data in each mini-batch. For fine-level embedding learning, we selected N_pos^f = 200 positive pairs and, for each positive pair, randomly selected N_neg^f = 500 and N_fov^f = 100 samples from the non-overlapped region. For coarse-level embedding learning, we set N_pos^c = 100, N_neg^c = 200, and τ_c = 0.5. For AdaReg, we set the dilation margin of the body mask to 10, 5, and 1 for each iteration.
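To tie the sampling sizes above (N_pos^f, N_neg^f, N_fov^f, τ_c) back to Eq. (8): the fine-level cross-modality objective is again an InfoNCE-style loss whose only twist is the extra negatives drawn from the non-overlapping FOV region. A minimal sketch, assuming the positives and both kinds of negatives have already been sampled (function name and tensor layout are our own):

```python
import torch
import torch.nn.functional as F

def cross_modality_infonce(anchor: torch.Tensor, positive: torch.Tensor,
                           negatives: torch.Tensor, tau_c: float = 0.5) -> torch.Tensor:
    """Eq. (8), averaged over the sampled positive pairs.

    anchor:    (P, D)    embeddings x_i, e.g. sampled from the CT scan.
    positive:  (P, D)    embeddings x_i' at the same anatomical location on the registered MRI scan.
    negatives: (P, M, D) per-anchor negatives: the in-overlap negatives concatenated with the
                         negatives drawn from the non-overlapping FOV region (M = N_neg^f + N_fov^f).
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_logit = (anchor * positive).sum(dim=-1, keepdim=True) / tau_c   # (P, 1)
    neg_logit = torch.einsum("pd,pmd->pm", anchor, negatives) / tau_c   # (P, M)

    logits = torch.cat([pos_logit, neg_logit], dim=1)                   # the positive is class 0
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)
```

The coarse-level loss takes the same form with the N_pos^c / N_neg^c samples.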
Results

Table 1 displays the lesion tracking results on the DLS test set.

Table 1. Comparison of lesion tracking methods and our UAE-S on the test set of the DLS dataset. Each row lists CPM@10mm (DLT-/TLT-results), CPM@Radius (DLT-/TLT-results), MED X (mm), MED Y (mm), MED Z (mm), and MED (mm):
- Li et al. (2019): 68.85/-, 80.31/-, 3.8 ± 4.8, 3.8 ± 4.8, 4.8 ± 7.5, 8.3 ± 9.2
- LENS-LesaNet, 2020, Yan et al. (2020, 2019): 70.00/-, 84.58/-, 2.7 ± 4.8, 2.6 ± 4.7, 5.7 ± 8.6, 7.8 ± 10.3
- DLT-Mix, 2021, Cai et al. (2021): 78.65/-, 88.75/-, 3.1 ± 4.4, 3.1 ± 4.5, 4.2 ± 7.6, 7.1 ± 9.2
- DLT, 2021, Cai et al. (2021): 78.85/-, 86.88/-, 3.5 ± 5.6, 2.9 ± 4.9, 4.0 ± 6.1, 7.0 ± 8.9
- TLT, 2022, Tang et al. (2022): -/87.37, -/95.32, 3.0 ± 6.2, 3.7 ± 5.2, 1.7 ± 2.1, 6.0 ± 7.7
- Affine, 2016, Marstal et al. (2016): 48.33/-, 65.21/-, 4.1 ± 5.0, 5.4 ± 5.6, 7.1 ± 8.3, 11.2 ± 9.9
- VoxelMorph, 2018, Balakrishnan et al. (2018): 49.90/-, 65.59/-, 4.6 ± 6.7, 5.2 ± 7.9, 6.6 ± 6.2, 10.9 ± 10.9
- DEEDS, 2013, Heinrich et al. (2013): 71.88/-, 85.52/-, 2.8 ± 3.7, 3.1 ± 4.1, 5.0 ± 6.8, 7.4 ± 8.1
- SAM, 2022, Yan et al. (2022): 81.87/86.14, 91.56/95.41, 2.6 ± 3.8, 2.3 ± 2.9, 4.0 ± 5.6, 6.2 ± 6.9
- Vizitiu et al. (2023): 83.13/-, 91.87/-, 2.9 ± 6.0, 2.2 ± 3.2, 3.1 ± 3.9, 5.9 ± 7.1
- UAE-S (proposed): 84.06/89.27, 93.02/96.77, 2.3 ± 3.1, 2.1 ± 2.7, 3.6 ± 4.7, 5.4 ± 5.7

The methods in the upper part use lesion annotations in training and thus have task-specific supervision; the methods in the lower part do not use lesion annotations. We observed discrepancies between the evaluation code presented in the TLT paper Tang et al. (2022) and that of DLT Cai et al. (2021). To ensure fairness, we report results with both evaluation codes using the format DLT-/TLT-results.

UAE-M provides a reliable rigid initialization for subsequent deformable registration in the highly challenging cross-modality and diverse-FOV scenario. The visualization of the registration results is shown in Fig. 9. UAE-M can be readily used in downstream tasks. We evaluated the performance of nasopharyngeal carcinoma (NPC) segmentation on the HaN dataset using UAE-M as the initial affine step. The results are presented in Table 4. They show that our method outperforms manual cropping, enabling fully automated CT-MRI-based lesion segmentation. UAE-M has also been used to learn the correspondence on multi-phase MRI images. Fig. 10 shows an example of liver lesion matching on multi-phase MRI images from the LLD-MMRI challenge dataset. We achieved second place in the challenge thanks to the reliable performance of UAE-M.

Discussion

Ablation Study

Table 5 reports the MED (mm) on the ABD dataset after each UAE-M iteration (Initial / UAE-M_0 / UAE-M_1 / UAE-M_2): 87.2 ± 39.6, 35.8 ± 18.0, 29.5 ± 14.9, and 25.7 ± 11.6 for the UAE-M rigid transform alone, and -, 17.8 ± 17.2, 11.8 ± 10.6, and 11.0 ± 9.1 when DEEDS deformable registration is added.

Analysis of the Fixed-point-based Matching

In the fixed-point-based matching approach, we select an L^3 cubic region centered at the template point as seeds to identify fixed-point pairs. Table 6 presents the lesion tracking results using different L values. The case L = 0 corresponds to UAE-S without fixed-point-based matching. As shown in the table, even when selecting fixed-point pairs in a small region (L = 3), performance gains can be observed. Considering both performance and computational cost, L = 5 is the optimal choice.

We also analyze the iterative cross-modality training qualitatively (Fig. 11). The UAE-M_0 model, trained using aggressive contrast augmentation, can correctly match the kidney example but fails in the liver example. Regarding the similarity maps, the highlighted regions are roughly correct but not concentrated, which causes single-point template-query matching to be inaccurate. Therefore, we employ the multi-point-based AdaReg method to find a better alignment and learn the UAE-M_k model iteratively.
As evident in Fig. 11(g) and (i), the iterative training results in more concentrated highlights in the similarity maps, indicating more confident and accurate matching.

Conclusion

Self-supervised exemplar-based landmark detection is an emerging topic, as it does not need landmark annotations to train the model and can detect arbitrary anatomical points accurately and conveniently. We propose UAE to address the limitations of current methods, improving accuracy on intra-modality tasks and extending the approach to the multi-modality scenario. UAE introduces three key innovations: (1) semantic embedding learning with a prototypical supervised contrastive loss to equip the anatomical embeddings with semantic information; (2) a fixed-point-based structural matching mechanism for more precise inference; and (3) a robust iterative pipeline for cross-modality anatomical embedding learning. UAE has shown promising results on various medical image tasks, including lesion tracking in longitudinal CT scans, one-shot landmark detection, and cross-modality rigid registration with downstream tasks. We look forward to more applications being developed with our released code and models.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 62171377, and in part by the National Key R&D Program of China under Grants 2022YFC2009903 / 2022YFC2009900. The authors would like to thank Tony C. W. Mok, Zi Li, Minfeng Xu, Le Lu, and Dakai Jin for their invaluable help and suggestions.
Identifying specific anatomical structures (e.g., lesions or landmarks) in medical images plays a fundamental role in medical image analysis. Exemplar-based landmark detection methods are receiving increasing attention because they can detect arbitrary anatomical points at inference time while requiring no landmark annotations during training. They use self-supervised learning to acquire a discriminative embedding for each voxel within the image. These approaches can identify corresponding landmarks through nearest-neighbor matching and have demonstrated promising results across various tasks. However, current methods still face challenges in: (1) differentiating voxels with similar appearance but different semantic meanings (e.g., two adjacent structures without clear borders); (2) matching voxels with similar semantics but markedly different appearance (e.g., the same vessel before and after contrast injection); and (3) cross-modality matching (e.g., CT-MRI landmark-based registration). To overcome these challenges, we propose universal anatomical embedding (UAE), a unified framework designed to learn appearance, semantic, and cross-modality anatomical embeddings. Specifically, UAE incorporates three key innovations: (1) semantic embedding learning with a prototypical contrastive loss; (2) a fixed-point-based matching strategy; and (3) an iterative approach for cross-modality embedding learning. We thoroughly evaluated UAE across intra- and inter-modality tasks, including one-shot landmark detection, lesion tracking on longitudinal CT scans, and CT-MRI affine/rigid registration with varying fields of view. Our results suggest that UAE outperforms state-of-the-art methods, offering a robust and versatile approach for landmark-based medical image analysis tasks.
UAE: Universal Anatomical Embedding on Multi-modality Medical Images
Figure and table captions (recovered from the source)

Fig. 1. Two examples of exemplar-based landmark detection. (a) In the intra-patient case, a lesion annotation from one CT is used to locate the corresponding lesion on the follow-up scan. (b) In the inter-patient case, an anatomical structure annotation (aortic valve) from one CT is used to identify the same anatomical structure in another subject.

Fig. 2. Hard cases for self-supervised anatomical embeddings. (a) In a non-contrast CT, the liver and kidney exhibit similar texture and intensity distribution, making it difficult to distinguish them using self-supervised appearance features. (b) The use of contrast agents greatly alters the appearance of vessels, leading to confusion with ribs. (c) Matching anatomical structures across modalities (e.g., MRI and CT).

(Related-work note recovered from the source:) Given a 3D image, SAM extracts two partially-overlapping patches and augments them through scaling and intensity transforms. Voxels that appear at the same location on both patches are considered positive pairs, while other voxels (beyond a threshold distance to the positive pairs) on both patches are treated as negative samples. SAM deliberately selects hard and diverse negative samples to form the negative pairs. Finally, the InfoNCE loss Oord et al. (2018) is applied to pull positive pairs together and push negative pairs apart. This results in all voxels of the 3D image having a distinct embedding representation, with the same location on different augmented views having similar embeddings. Due to the high structural similarity of the human body across individuals, SAM can also output similar embeddings for the same anatomical structure on different subjects' scans. A recent study by Vizitiu et al. (2023) showed that SAM-style methods can also benefit from optional supervision, such as automatically extracted anatomical landmarks.

Fig. 3. Two examples of multi-modality medical images with substantially different FOVs. Left: abdominal CT and MRI images; right: head-and-neck CT and MRI images. We can achieve rough alignment of different images by matching the same anatomical landmarks on them.

Fig. 4. Overview of SEA. (a) SEA contains a semantic head and an appearance head. The semantic head generates semantic embeddings; the appearance head generates both coarse and fine appearance (App.) embeddings. The dimension of the embeddings is reduced to 3 using PCA, so each embedding can be shown as an RGB image. (b) Illustration of the voxel-wise contrastive loss and the prototypical supervised contrastive loss, where x_i and x_i'

Fig. 5. Illustration of the fixed-point-based matching method. We first show single-point fixed-point iteration. Starting from the template point t^A = t_0^A, we identify a fixed point t_2^A with a high similarity score using iterative NN matching. The process is then extended to the grid points around t^A using batch-wise fixed-point iteration to obtain all fixed points (different starting points may converge to the same fixed point). Subsequently, we learn how the fixed-point structure around t^A is mapped to the corresponding structure in B through an affine transform T. By considering t^A as an element of the fixed-point structure in A, we can compute the matching point as q^B = T t^A, which is more accurate and reliable than the initial NN matching result q_0^B.

Fig. 6. Illustration of the UAE-M framework. (a) Results of applying aggressive intensity augmentation to MRI and CT images. (b) The iterative training procedure.

(Experimental-setup notes recovered from the source:)
Datasets. We trained UAE-S on two public datasets. The NIH-Lymph Node (NIH-LN) dataset Yan et al. (2022) includes 176 chest-abdomen-pelvis CT scans, and the TotalSegmentator dataset Wasserthal et al. (2022) contains 1204 CT images with labels of 104 anatomical structures. We evaluated UAE-S on two tasks: lesion tracking and landmark detection on CT. The lesion tracking task aims to match the same lesion on longitudinal CT scans. We used the publicly available deep longitudinal study (DLS) dataset Cai et al. (2021), which contains 3008, 403, and 480 lesion pairs for training, validation, and testing, respectively. For landmark detection, we used the ChestCT dataset, which contains contrast-enhanced (CE) and non-contrast (NC) CT scans of 94 subjects. For UAE-M, we use two in-house datasets for training and testing. The head-and-neck (HaN) dataset consists of 120 paired T1 MRI and non-contrast CT images that were not co-registered. Each MRI image has a voxel size of around 0.5×0.5×6 mm^3, and each CT image has a voxel size of around 1×1×3 mm^3. The MRI images have a limited FOV and mainly capture the region between the nose and the second cervical vertebra, while the CT images cover the region from the top of the head to a portion of the lungs. We used 100 cases for training and reserved 20 for testing. The abdomen (ABD) dataset contains 98 pairs of T2 MRI and non-contrast CT scans. The voxel sizes of the CT and MRI images are 0.7×0.7×5 mm and 0.8×0.8×8 mm, respectively. The MRI images have a small FOV that covers only a portion of the liver and kidney, while the CT images cover the region from the bottom of the lung to the thighbone.

Performance metrics. The accuracy of lesion tracking was assessed using the Center Point Matching (CPM) method Tang et al. (2022); Cai et al. (2020). A match is deemed correct if the Euclidean distance between the predicted and ground-truth centers is smaller than a threshold, set to either 10 mm or the lesion radius Tang et al. (2022); Cai et al. (2020). Other performance metrics are the Mean Euclidean Distance (MED) between the predicted and ground-truth centers and its projections in each direction (referred to as MED X, MED Y, and MED Z). For ChestCT landmark matching, we use the same setting as SAM by calculating the mean distance over 19 predefined landmarks using template-query matching. We assessed the performance of cross-modality affine/rigid registration by comparing the voxel-level MED of the same landmarks on the registered pairs. Specifically, we annotated 12 landmarks on both CT and MRI images of the HaN dataset, including the lacrimal gland, the endpoint of the temporomandibular joint, the top and bottom of the C2 spine, the middle point of the jawbone, and the intersection of the lateral pterygoid muscle and upper jawbone. On the ABD dataset, we annotated 6 landmarks, including the top and bottom points of the liver and spleen, as well as the top points of the kidneys.

Fig. 7. Box-plot of the MED error of CT-MRI rigid registration on (a) the HaN and (b) the ABD datasets.

(Results notes recovered from the source:) Table 1 displays the lesion tracking results on the DLS test set. We compared with supervised lesion tracking methods Li et al. (2019); Yan et al. (2020, 2019); Cai et al. (2021); Tang et al. (2022), registration methods Marstal et al. (2016); Balakrishnan et al. (2018); Heinrich et al. (2013), SAM Yan et al. (2022), and the improved SAM Vizitiu et al. (2023). It reveals that UAE-S outperforms all competing methods, though it does not use any task-specific supervision. Note that the organ masks used to train the semantic head contain no lesion information. The lesion tracking results produced by SAM and UAE-S are visualized in Fig. 8, which shows that the difficult cases for SAM can be effectively handled by UAE-S.

Fig. 8. A visual comparison of lesion tracking results of SAM and UAE-S. We show two challenging cases with deformed organs and lesions, and altered contrast and image quality. Green boxes show the true lesion regions. Red circles indicate the true and tracked lesion centers. (a) and (d) are template (baseline) CT images. (b)(e) and (c)(f) are tracking results of SAM and UAE-S on the follow-up CT scans, respectively. (The caption of Fig. 9, referenced in the text as showing CT-MRI registration results, is merged into this entry in the source.)

Fig. 10. Two lesion matching examples using UAE-M on eight MRI modalities from the LLD-MMRI challenge. (a) Template pre-contrast phase images. (b) Arterial phase. (c) Venous phase. (d) Delay phase. (e) DWI. (f) In-phase. (g) Out-phase. (h) T2WI. The lesions are marked with red circles.

Fig. 11. Template-query matching by different models. (a) Template image; the red dots represent the template points. (b)(c) Matching results and similarity map using the SAM method. (d)(e) UAE-M_0 results. (f)(g) UAE-M_1 results. (h)(i) UAE-M_2 results.

Table 4 (note recovered from the source). CT-only means using only the manually cropped CT for segmentation. CT-MRI-M means manually cropping the CT volume to the rough region between the nose and the second cervical vertebra, followed by registering MRI to the CT volume. CT-MRI-C denotes our UAE-M method, which involves automatic cropping of the region based on the FOV of the MRI volume. For all methods, we used DEEDS for deformable registration and nnUNet Isensee et al. (2021) for segmentation. Partial rows recovered (column headers missing): CT-only: ...5, 8.8±5.7, 76.4±16.3, 77.6±15.6; CT-MRI-M: 75.0±8.2, 8.5±4.8, 79.2±15.4, 75.9±14.6; CT-MRI-C: 76.4±8.1, 7.7±3.6, 83.0±13.8, 74.7±14.0.

Fig. 12. Examples of cross-modality registration under FOV differences. MRI images (green) are overlaid on CT images (red). (a) UAE-M_0 + DEEDS. (b) UAE-M_2 + DEEDS.

(Discussion note recovered from the source, referring to Fig. 11:) As shown in the figure, SAM fails on both examples. The similarity maps reveal that SAM cannot even highlight the corresponding region in the kidney example, although it appears to be easier than the liver example.

Table 1. Comparison of lesion tracking methods and our UAE-S on the test set of the DLS dataset.

Table 2. Comparison of anatomical landmark detection methods and our UAE-S on the ChestCT dataset. As in the original SAM paper Yan et al. (2022), we report mean error ± std. and max error. Partial rows recovered (columns: CE-CE, NC-NC, CE-NC, NC-CE; on the 19 test cases of Yan et al. (2022)): Affine, Marstal et al. (2016): 8.4±5.2 32.9, 8.5±5.3 33.1, -, -; DEEDS, Heinrich et al. (2013): 4.6±3.3 18.8, 4.7±3.4 24.4, -, -; VoxelMorph, Balakrishnan et al. (2018): 7.3±3.6 20.1, 7.4±3.7 20.2, -, -; SAM, Yan et al.
Accompanying text: Table 2 gives the performance of landmark detection on the ChestCT dataset. Previously, in Yan et al. (2022), the performance was examined exclusively on 19 test cases, focusing solely on intra-phase settings. By contrast, we conducted validation on all 94 cases, testing both intra- and inter-phase matching scenarios. Table 2 shows that the proposed UAE-S consistently outperforms SAM across all settings.

Table 3. Ablation study of UAE-S on the full DLS dataset Cai et al. (2021) using the evaluation code of TLT Tang et al. (2022). Rows (CPM@10mm, CPM@Radius, MED X, MED Y, MED Z, MED): SAM, Yan et al. (2022): 88.94, 93.44, 2.4±3.0, 2.5±3.1, 3.6±3.8, 5.8±5.0; UAE-S w/o fixed-point-based matching: 89.37, 93.99, 2.3±2.6, 2.5±2.8, 3.5±3.7, 5.6±4.5; UAE-S w/o semantic head: 89.66, 94.44, 2.2±2.4, 2.3±2.8, 3.4±3.7, 5.3±4.4; UAE-S: 91.05, 95.45, 2.2±2.2, 2.1±2.2, 3.2±3.4, 5.1±3.8.

Table 5. MED (mm) after each UAE-M iteration on the ABD dataset. Top row: UAE-M rigid transform results. Bottom row: adding DEEDS for deformable registration.

Table 6. Performance comparison of different L values in the fixed-point-based matching on the full DLS dataset. L = 0 / 3 / 5 / 7: CPM@10mm 89.37, 90.02, 91.05, 90.99; CPM@Radius 93.99, 94.65, 95.45, 95.27.
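Complementing the evaluation notes recovered above, the CPM and MED metrics can be computed in a few lines from predicted and ground-truth lesion centers. This is a small illustrative sketch (millimeter coordinates assumed, function names ours), not the official evaluation code:

```python
import numpy as np

def med(pred: np.ndarray, gt: np.ndarray):
    """Per-axis MED (MED X/Y/Z) and overall MED; pred, gt: (N, 3) centers in mm."""
    per_axis = np.abs(pred - gt).mean(axis=0)
    overall = np.linalg.norm(pred - gt, axis=1).mean()
    return per_axis, overall

def cpm(pred: np.ndarray, gt: np.ndarray, threshold) -> float:
    """Center Point Matching accuracy: fraction of predictions within `threshold`
    (a scalar, e.g. 10 mm for CPM@10mm, or an (N,) array of lesion radii for CPM@Radius)."""
    dist = np.linalg.norm(pred - gt, axis=1)
    return float((dist <= np.asarray(threshold)).mean())
```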
Xiaoyu Bai; Fan Bai; Xiaofei Huo; Jia Ge; Jingjing Lu; Xianghua Ye; Ke Yan; Yong Xia
[ { "authors": "K S Arun; T S Huang; S D Blostein", "journal": "IEEE Transactions on pattern analysis and machine intelligence", "ref_id": "b0", "title": "Least-squares fitting of two 3-d point sets", "year": "1987" }, { "authors": "B B Avants; N Tustison; G Song", "journal": "Insight j", "ref_id": "b1", "title": "Advanced normalization tools (ants)", "year": "2009" }, { "authors": "G Balakrishnan; A Zhao; M R Sabuncu; J Guttag; A V Dalca", "journal": "", "ref_id": "b2", "title": "An unsupervised learning model for deformable medical image registration", "year": "2018" }, { "authors": "G Balakrishnan; A Zhao; M R Sabuncu; J Guttag; A V Dalca", "journal": "IEEE transactions on medical imaging", "ref_id": "b3", "title": "Voxelmorph: a learning framework for deformable medical image registration", "year": "2019" }, { "authors": "B Bier; M Unberath; J N Zaech; J Fotouhi; M Armand; G Osgood; N Navab; A Maier", "journal": "Springer", "ref_id": "b4", "title": "X-ray-transform invariant anatomical landmark detection for pelvic trauma surgery", "year": "2018-09-16" }, { "authors": "B Billot; D N Greve; O Puonti; A Thielscher; K Van Leemput; B Fischl; A V Dalca; J E Iglesias", "journal": "Medical Image Analysis", "ref_id": "b5", "title": "Synthseg: Segmentation of brain mri scans of any contrast and resolution without retraining", "year": "2023" }, { "authors": "J Cai; Y Tang; K Yan; A P Harrison; J Xiao; G Lin; L Lu", "journal": "", "ref_id": "b6", "title": "Deep lesion tracker: monitoring lesions in 4d longitudinal imaging studies", "year": "2021" }, { "authors": "J Cai; K Yan; C T Cheng; J Xiao; C H Liao; L Lu; A P Harrison", "journal": "Springer", "ref_id": "b7", "title": "Deep volumetric universal lesion detection using light-weight pseudo 3d convolution and surface point regression", "year": "2020" }, { "authors": "M Caron; I Misra; J Mairal; P Goyal; P Bojanowski; A Joulin", "journal": "Advances in neural information processing systems", "ref_id": "b8", "title": "Unsupervised learning of visual features by contrasting cluster assignments", "year": "2020" }, { "authors": "H Chen; R Wang; X Wang; J Li; Q Fang; H Li; J Bai; Q Peng; D Meng; L Wang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b9", "title": "Unsupervised local discrimination for medical images", "year": "2023" }, { "authors": "J Chen; E C Frey; Y He; W P Segars; Y Li; Y Du", "journal": "Medical image analysis", "ref_id": "b10", "title": "Transmorph: Transformer for unsupervised medical image registration", "year": "2022" }, { "authors": "T Chen; S Kornblith; M Norouzi; G Hinton", "journal": "PMLR", "ref_id": "b11", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "A Granas; J Dugundji", "journal": "Springer", "ref_id": "b12", "title": "Fixed point theory", "year": "2003" }, { "authors": "M P Heinrich; M Jenkinson; M Brady; J A Schnabel", "journal": "IEEE transactions on medical imaging", "ref_id": "b13", "title": "Mrf-based deformable registration and ventilation estimation of lung ct", "year": "2013" }, { "authors": "M Hoffmann; B Billot; D N Greve; J E Iglesias; B Fischl; A V Dalca", "journal": "IEEE transactions on medical imaging", "ref_id": "b14", "title": "Synthmorph: learning contrast-invariant registration without acquired images", "year": "2021" }, { "authors": "W Huang; H Yang; X Liu; C Li; I Zhang; R Wang; H Zheng; S Wang", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b15", "title": "A 
coarse-to-fine deformable transformation framework for unsupervised multi-contrast mr image registration with dual consistency constraint", "year": "2021" }, { "authors": "F Isensee; P F Jaeger; S A Kohl; J Petersen; K H Maier-Hein", "journal": "Nature Methods", "ref_id": "b16", "title": "nnU-Net: A Self-Configuring Method for Deep Learning-Based Biomedical Image Segmentation", "year": "2021" }, { "authors": "Y Jiang; Y Li; X Wang; Y Tao; J Lin; H Lin", "journal": "Springer", "ref_id": "b17", "title": "Cephalformer: Incorporating global structure constraint into visual features for general cephalometric landmark detection", "year": "2022" }, { "authors": "V S Khoo; E J Adams; F Saran; J L Bedford; J R Perks; A P Warrington; M Brada", "journal": "International Journal of Radiation Oncology* Biology* Physics", "ref_id": "b18", "title": "A comparison of clinical target volumes determined by ct and mri for the radiotherapy planning of base of skull meningiomas", "year": "2000" }, { "authors": "P Khosla; P Teterwak; C Wang; A Sarna; Y Tian; P Isola; A Maschinot; C Liu; D Krishnan", "journal": "Advances in neural information processing systems", "ref_id": "b19", "title": "Supervised contrastive learning", "year": "2020" }, { "authors": "S Klein; M Staring; K Murphy; M A Viergever; J P Pluim", "journal": "IEEE transactions on medical imaging", "ref_id": "b20", "title": "Elastix: a toolbox for intensity-based medical image registration", "year": "2009" }, { "authors": "B Li; W Wu; Q Wang; F Zhang; J Xing; J S Yan", "journal": "", "ref_id": "b21", "title": "Evolution of siamese visual tracking with very deep networks", "year": "2019" }, { "authors": "Z Li; L Tian; T C Mok; X Bai; P Wang; J Ge; J Zhou; L Lu; X Ye; K Yan", "journal": "Springer", "ref_id": "b22", "title": "Samconvex: Fast discrete optimization for ct registration using self-supervised anatomical embedding and correlation pyramid", "year": "2023" }, { "authors": "F Liu; K Yan; A P Harrison; D Guo; L Lu; A L Yuille; L Huang; G Xie; J Xiao; X Ye", "journal": "Springer", "ref_id": "b23", "title": "Same: Deformable image registration based on self-supervised anatomical embeddings", "year": "2021-09-27" }, { "authors": "B C Lowekamp; D T Chen; L Ibáñez; D Blezek", "journal": "Frontiers in neuroinformatics", "ref_id": "b24", "title": "The design of simpleitk", "year": "2013" }, { "authors": "F Maes; D Vandermeulen; P Suetens", "journal": "", "ref_id": "b25", "title": "Medical image registration using mutual information", "year": "2003" }, { "authors": "K Marstal; F Berendsen; M Staring; S Klein", "journal": "", "ref_id": "b26", "title": "Simpleelastix: A user-friendly, multi-lingual library for medical image registration", "year": "2016" }, { "authors": "T C Mok; A Chung", "journal": "", "ref_id": "b27", "title": "Affine medical image registration with coarseto-fine vision transformer", "year": "2022" }, { "authors": "T C Mok; A C Chung", "journal": "Springer", "ref_id": "b28", "title": "Large deformation diffeomorphic image registration with laplacian pyramid networks", "year": "2020-10-04" }, { "authors": "A Q O'neil; A Kascenas; J Henry; D Wyeth; M Shepherd; E Beveridge; L Clunie; C Sansom; E Seduikyte Keith Muir; I Poole", "journal": "", "ref_id": "b29", "title": "Attaining human-level performance with atlas location autocontext for anatomical landmark detection in 3d ct data", "year": "2018" }, { "authors": "A V D Oord; Y Li; O Vinyals", "journal": "", "ref_id": "b30", "title": "Representation learning with contrastive predictive 
coding", "year": "2018" }, { "authors": "Q Quan; Q Yao; J Li; S K Zhou", "journal": "", "ref_id": "b31", "title": "Which images to label for few-shot medical landmark detection?", "year": "2022" }, { "authors": "W Tang; H Kang; H Zhang; P Yu; C W Arnold; R Zhang", "journal": "Springer", "ref_id": "b32", "title": "Transformer lesion tracker", "year": "2022-09-18" }, { "authors": "A Vizitiu; A T Mohaiu; I M Popdan; A Balachandran; F C Ghesu; D Comaniciu", "journal": "Springer", "ref_id": "b33", "title": "Multi-scale self-supervised learning for longitudinal lesion tracking with optional supervision", "year": "2023" }, { "authors": "J Wasserthal; M Meyer; H C Breit; J Cyriac; S Yang; M Segeroth", "journal": "", "ref_id": "b34", "title": "Totalsegmentator: robust segmentation of 104 anatomical structures in ct images", "year": "2022" }, { "authors": "K Yan; J Cai; D Jin; S Miao; D Guo; A P Harrison; Y Tang; J Xiao; J Lu; L Lu", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b35", "title": "Sam: Self-supervised learning of pixel-wise anatomical embeddings in radiological images", "year": "2022" }, { "authors": "K Yan; J Cai; Y Zheng; A P Harrison; D Jin; Y Tang; Y Tang; L Huang; J Xiao; L Lu", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b36", "title": "Learning from multiple datasets with heterogeneous and partial labels for universal lesion detection in ct", "year": "2020" }, { "authors": "K Yan; Y Peng; V Sandfort; M Bagheri; Z Lu; R M Summers", "journal": "", "ref_id": "b37", "title": "Holistic and comprehensive annotation of clinically significant findings on diverse ct images: learning from radiology reports and label ontology", "year": "2019" }, { "authors": "Q Yao; Q Quan; L Xiao; Kevin Zhou; S ", "journal": "Springer", "ref_id": "b38", "title": "One-shot medical landmark detection", "year": "2021-09-27" }, { "authors": "Q Yao; J Wang; Y Sun; Q Quan; H Zhu; S K Zhou", "journal": "", "ref_id": "b39", "title": "Relative distance matters for one-shot landmark detection", "year": "2022" }, { "authors": "Z Yin; P Gong; C Wang; Y Yu; Y Wang", "journal": "Springer", "ref_id": "b40", "title": "One-shot medical landmark localization by edge-guided transform and noisy landmark refinement", "year": "2022" }, { "authors": "Z Zhong; J Li; Z Zhang; Z Jiao; X Gao", "journal": "Springer", "ref_id": "b41", "title": "An attention-guided deep regression model for landmark detection in cephalograms, in: Medical Image Computing and Computer Assisted", "year": "2019-10-13" }, { "authors": "Z Zhou; V Sodha; J Pang; M B Gotway; J Liang", "journal": "Medical image analysis", "ref_id": "b42", "title": "Models genesis", "year": "2021" }, { "authors": "H Zhu; Q Yao; S K Zhou", "journal": "", "ref_id": "b43", "title": "Datr: Domain-adaptive transformer for multi-domain landmark detection", "year": "2022" } ]
2023-11-25
Introduction

Driven by advances in generative image modeling with diffusion models [38, 68, 71, 76], there has been significant recent progress on generative video models both in research [9, 42, 82, 95] and real-world applications [54, 74]. Broadly, these models are either trained from scratch [41] or finetuned (partially or fully) from pretrained image models with additional temporal layers inserted [9, 32, 43, 82]. Training is often carried out on a mix of image and video datasets [41].

While research around improvements in video modeling has primarily focused on the exact arrangement of the spatial and temporal layers [9, 41, 43, 82], none of the aforementioned works investigate the influence of data selection. This is surprising, especially since the significant impact of the training data distribution on generative models is undisputed [13, 105]. Moreover, for generative image modeling, it is known that pretraining on a large and diverse dataset and finetuning on a smaller but higher-quality dataset significantly improves performance [13, 71]. Since many previous approaches to video modeling have successfully drawn on techniques from the image domain [9, 42, 43], it is noteworthy that the effect of data and training strategies, i.e., the separation of video pretraining at lower resolutions and high-quality finetuning, has yet to be studied. This work directly addresses these previously uncharted territories.

We believe that the significant contribution of data selection is heavily underrepresented in today's video research landscape despite being well recognized among practitioners when training video models at scale. Thus, in contrast to previous works, we draw on simple latent video diffusion baselines [9] for which we fix architecture and training scheme and assess the effect of data curation. To this end, we first identify three different video training stages that we find crucial for good performance: text-to-image pretraining, video pretraining on a large dataset at low resolution, and high-resolution video finetuning on a much smaller dataset with higher-quality videos. Borrowing from large-scale image model training [13, 64, 66], we introduce a systematic approach to curate video data at scale and present an empirical study on the effect of data curation during video pretraining. Our main findings imply that pretraining on well-curated datasets leads to significant performance improvements that persist after high-quality finetuning.

A general motion and multi-view prior. Drawing on these findings, we apply our proposed curation scheme to a large video dataset comprising roughly 600 million samples and train a strong pretrained text-to-video base model, which provides a general motion representation. We exploit this and finetune the base model on a smaller, high-quality dataset for high-resolution downstream tasks such as text-to-video (see Figure 1, top row) and image-to-video, where we predict a sequence of frames from a single conditioning image (see Figure 1, mid rows).
Human preference studies reveal that the resulting model outperforms state-of-the-art image-to-video models.

Furthermore, we also demonstrate that our model provides a strong multi-view prior and can serve as a base to finetune a multi-view diffusion model that generates multiple consistent views of an object in a feedforward manner and outperforms specialized novel view synthesis methods such as Zero123XL [14, 57] and SyncDreamer [58]. Finally, we demonstrate that our model allows for explicit motion control, both by specifically prompting the temporal layers with motion cues and by training LoRA modules [32, 45] on datasets resembling specific motions only, which can be efficiently plugged into the model. To summarize, our core contributions are threefold: (i) We present a systematic data curation workflow to turn a large uncurated video collection into a quality dataset for generative video modeling. Using this workflow, we (ii) train state-of-the-art text-to-video and image-to-video models, outperforming all prior models. Finally, we (iii) probe the strong prior of motion and 3D understanding in our models by conducting domain-specific experiments. Specifically, we provide evidence that pretrained video diffusion models can be turned into strong multi-view generators, which may help overcome the data scarcity typically observed in the 3D domain [14].

Background

Most recent works on video generation rely on diffusion models [38, 84, 87] to jointly synthesize multiple consistent frames from text- or image-conditioning. Diffusion models implement an iterative refinement process by learning to gradually denoise a sample from a normal distribution and have been successfully applied to high-resolution text-to-image [13, 64, 68, 71, 75] and video synthesis [9, 29, 41, 82, 95].

In this work, we follow this paradigm and train a latent [71, 92] video diffusion model [9, 23] on our video dataset. We provide a brief overview of related works which utilize latent video diffusion models (Video-LDMs) in the following paragraph; a full discussion that includes approaches using GANs [10, 30] and autoregressive models [43] can be found in App. B. An introduction to diffusion models can be found in App. D.

Latent Video Diffusion Models. Video-LDMs [9, 31, 32, 35, 97] train the main generative model in a latent space of reduced computational complexity [22, 71]. Most related works make use of a pretrained text-to-image model and insert temporal mixing layers of various forms [1, 9, 29, 31, 32] into the pretrained architecture. Ge et al. [29] additionally rely on temporally correlated noise to increase temporal consistency and ease the learning task. In this work, we follow the architecture proposed in Blattmann et al. [9] and insert temporal convolution and attention layers after every spatial convolution and attention layer. In contrast to works that only train temporal layers [9, 32] or are completely training-free [52, 114], we finetune the full model.
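To illustrate the interleaving described above, namely a temporal mixing layer after each pretrained spatial layer, here is a minimal PyTorch sketch of a temporal attention block acting on a (batch, frames, channels, height, width) feature map. It is a simplified, generic Video-LDM-style block, not the authors' exact architecture; the class name and the plain residual connection are our own simplifications.

```python
import torch
import torch.nn as nn
from einops import rearrange

class TemporalAttention(nn.Module):
    """Self-attention over the frame axis, applied independently at every spatial location."""

    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, C, H, W) -> treat each spatial position as a length-T sequence of C-dim tokens.
        b, t, c, h, w = x.shape
        seq = rearrange(x, "b t c h w -> (b h w) t c")
        seq_norm = self.norm(seq)
        out, _ = self.attn(seq_norm, seq_norm, seq_norm)
        out = rearrange(out, "(b h w) t c -> b t c h w", b=b, h=h, w=w)
        return x + out  # residual: the block can fall back to purely spatial behaviour
```

In the full model, such a block (together with a temporal convolution) follows every spatial convolution and attention layer, and, unlike temporal-layer-only finetuning, all parameters are trained.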
For text-to-video synthesis in particular, most works directly condition the model on a text prompt [9, 97] or make use of an additional text-to-image prior [23, 82]. In our work, we follow the former approach and show that the resulting model is a strong general motion prior, which can easily be finetuned into an image-to-video or multi-view synthesis model. Additionally, we introduce micro-conditioning [64] on frame rate. We also employ the EDM framework [51] and significantly shift the noise schedule towards higher noise values, which we find to be essential for high-resolution finetuning. See Section 4 for a detailed discussion of the latter.

Data Curation. Pretraining on large-scale datasets [80] is an essential ingredient for powerful models in several tasks such as discriminative text-image [66, 105] and language [27, 63, 67] modeling. By leveraging efficient language-image representations such as CLIP [47, 66, 105], data curation has similarly been successfully applied to generative image modeling [13, 64, 80]. However, discussions of such data curation strategies have largely been missing from the video generation literature [41, 43, 82, 94], and processing and filtering strategies have been introduced in an ad-hoc manner. Among the publicly accessible video datasets, the WebVid-10M [7] dataset has been a popular choice [9, 82, 115] despite being watermarked and suboptimal in size. Additionally, WebVid-10M is often used in combination with image data [80] to enable joint image-video training. However, this amplifies the difficulty of separating the effects of image and video data on the final model. To address these shortcomings, this work presents a systematic study of methods for video data curation and further introduces a general three-stage training strategy for generative video models, producing a state-of-the-art model.

Curating Data for HQ Video Synthesis

In this section, we introduce a general strategy to train a state-of-the-art video diffusion model on large datasets of videos. To this end, we (i) introduce data processing and curation methods, for which we systematically analyze the impact on the quality of the final model in Section 3.1, and (ii) identify three different training regimes: image pretraining, video pretraining on large amounts of videos at low resolution, and video finetuning on a small subset of high-quality videos at higher resolution. We study the importance of each regime separately in Sections 3.2 to 3.4.

Data Processing and Annotation

We collect an initial dataset of long videos, which forms the base data for our video pretraining stage and which we refer to as LVD. To avoid cuts and fades leaking into synthesized videos, we apply a cut detection pipeline in a cascaded manner at three different FPS levels. Figure 2, left, provides evidence for the need for cut detection: after applying our cut-detection pipeline, we obtain a significantly higher number (∼4×) of clips, indicating that many video clips in the unprocessed dataset contain cuts beyond those obtained from metadata. Next, we annotate each clip with three different synthetic captioning methods; first among these, we use the image captioner CoCa [108] to annotate the mid-frame of each clip. However, further investigation reveals that the resulting dataset contains examples that can be expected to degrade the performance of our final video model, such as clips with little motion, excessive text presence, or generally low aesthetic value.
We therefore additionally annotate our dataset with dense optical flow [24, 48], which we calculate at 2 FPS and with which we filter out static scenes by removing any videos whose average optical flow magnitude is below a certain threshold. Indeed, when considering the motion distribution of LVD via optical flow scores (see Figure 2, right), we identify a subset of close-to-static clips therein. Moreover, we apply optical character recognition [5] to weed out clips containing large amounts of written text. Lastly, we annotate the first, middle, and last frames of each clip with CLIP [66] embeddings, from which we calculate aesthetics scores [80] as well as text-image similarities. Statistics of our dataset, including the total size and average duration of clips, are provided in Tab. 1.

Stage I: Image Pretraining

We consider image pretraining as the first stage in our training pipeline. Thus, in line with concurrent work on video models [9, 41, 82], we ground our initial model on a pretrained image diffusion model, namely Stable Diffusion 2.1 [71], to equip it with a strong visual representation. To analyze the effects of image pretraining, we train and compare two identical video models as detailed in App. D on a 10M subset of LVD: one with and one without pretrained spatial weights. We compare these models using a human preference study (see App. E for details) in Figure 3a, which clearly shows that the image-pretrained model is preferred in both quality and prompt-following.

Stage II: Curating a Video Pretraining Dataset

A systematic approach to video data curation. For multimodal image modeling, data curation is a key element of many powerful discriminative [66, 105] and generative [13, 40, 69] models. However, since there are no equally powerful off-the-shelf representations available in the video domain to filter out unwanted examples, we rely on human preferences as a signal to create a suitable pretraining dataset. Specifically, we curate subsets of LVD using the different methods described below and then consider the human-preference-based ranking of latent video diffusion models trained on these datasets.

More specifically, for each type of annotation introduced in Section 3.1 (i.e., CLIP scores, aesthetic scores, OCR detection rates, synthetic captions, optical flow scores), we start from an unfiltered, randomly sampled 9.8M-sized subset of LVD, LVD-10M, and systematically remove the bottom 12.5, 25 and 50% of examples. Note that for the synthetic captions, we cannot filter in this sense; instead, we assess Elo rankings [21] for the different captioning methods from Section 3.1. To keep the number of total subsets tractable, we apply this scheme separately to each type of annotation. We train models with the same training hyperparameters on each of these filtered subsets and compare the results of all models within the same class of annotation with an Elo ranking [21] for human preference votes.
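Elo ranking turns pairwise human preference votes into a single rating per model or captioning method. A minimal sketch of the standard update rule follows; the K-factor and the initial rating are conventional chess defaults, not values reported here.

```python
def elo_update(rating_a: float, rating_b: float, winner: str, k: float = 32.0):
    """One pairwise-vote update of Elo ratings; winner is 'a' or 'b'."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    score_a = 1.0 if winner == "a" else 0.0
    rating_a += k * (score_a - expected_a)
    rating_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return rating_a, rating_b

def rank_models(votes, models, start: float = 1000.0):
    """votes: iterable of (model_i, model_j, winning_model) tuples collected from human raters."""
    ratings = {m: start for m in models}
    for a, b, winning in votes:
        ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], "a" if winning == a else "b")
    return sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)
```

Sorting the final ratings yields the preference ranking that is used to compare the filtered subsets and captioning methods.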
Based on these votes, we consequently select the best-performing filtering threshold for each annotation type. The details of this study are presented and discussed in App. E. Applying this filtering approach to LVD results in a final pretraining dataset of 152M training examples, which we refer to as LVD-F, cf. Tab. 1.

Curated training data improves performance. In this section, we demonstrate that the data curation approach described above improves the training of our video diffusion models. To show this, we apply the filtering strategy described above to LVD-10M and obtain a four times smaller subset, LVD-10M-F. Next, we use it to train a baseline model that follows our standard architecture and training schedule, and evaluate the preference scores for visual quality and prompt-video alignment compared to a model trained on uncurated LVD-10M.

(Figure 4 caption:) Pretraining on curated datasets consistently boosts the performance of generative video models during video pretraining at small (Figures 4a and 4b) and larger scales (Figures 4c and 4d). Remarkably, this performance improvement persists even after 50k steps of video finetuning on high-quality data (Figure 4e).

We visualize the results in Figure 3b, where we can see the benefits of filtering: in both categories, the model trained on the much smaller LVD-10M-F is preferred. To further show the efficacy of our curation approach, we compare the model trained on LVD-10M-F with similar video models trained on WebVid-10M [7], which is the most recognized research-licensed dataset, and InternVid-10M [100], which is specifically filtered for high aesthetics. Although LVD-10M-F is also four times smaller than these datasets, the corresponding model is preferred by human evaluators in both spatiotemporal quality and prompt alignment, as shown in Figure 4b.

Data curation helps at scale. To verify that our data curation strategy from above also works on larger, more practically relevant datasets, we repeat the experiment above and train a video diffusion model on a filtered subset with 50M examples and a non-curated one of the same size. We conduct a human preference study and summarize the results of this study in Figure 4c, where we can see that the advantages of data curation also come into play with larger amounts of data. Finally, we show that dataset size is also a crucial factor when training on curated data in Figure 4d, where a model trained on 50M curated samples is superior to a model trained on LVD-10M-F for the same number of steps.

Stage III: High-Quality Finetuning

In the previous section, we demonstrated the beneficial effects of systematic data curation for video pretraining. However, since we are primarily interested in optimizing the performance after video finetuning, we now investigate how these differences after Stage II translate to the final performance after Stage III. Here, we draw on training techniques from latent image diffusion modeling [13, 64] and increase the resolution of the training examples. Moreover, we use a small finetuning dataset comprising 250K pre-captioned video clips of high visual fidelity.

To analyze the influence of video pretraining on this last stage, we finetune three identical models, which differ only in their initialization. We initialize the weights of the first with a pretrained image model and skip video pretraining, a common choice among many recent video modeling approaches [9, 82].
The remaining two models are initialized with the weights of the latent video models from the previous section, specifically, the ones trained on 50M curated and uncurated video clips. We finetune all models for 50K steps and assess human preference rankings early during finetuning (10K steps) and at the end to measure how performance differences progress in the course of finetuning. We show the obtained results in Figure 4e, where we plot the Elo improvements of user preference relative to the model ranked last, which is the one initialized from an image model. Moreover, the finetuning resumed from curated pretrained weights ranks consistently higher than the one initialized from video weights after uncurated training.\nGiven these results, we conclude that i) the separation of video model training in video pretraining and video finetuning is beneficial for the final model performance after finetuning and that ii) video pretraining should ideally occur on a large scale, curated dataset, since performance differences after pretraining persist after finetuning." }, { "figure_ref": [], "heading": "Training Video Models at Scale", "publication_ref": [ "b13", "b56", "b57" ], "table_ref": [], "text": "In this section, we borrow takeaways from Section 3 and present results of training state-of-the-art video models at scale. We first use the optimal data strategy inferred from ablations to train a powerful base model at 320 × 576 in App. D.2. We then perform finetuning to yield several strong state-of-the-art models for different tasks such as text-to-video in Section 4.2, image-to-video in Section 4.3 and frame interpolation in Section 4.4. Finally, we demonstrate that our video-pretraining can serve as a strong implicit 3D prior, by tuning our image-to-video models on multi-view generation in Section 4.5 and outperform concurrent work, in particular Zero123XL [14,57] and Sync-Dreamer [58] in terms of multi-view consistency. " }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "FVD (↓)", "publication_ref": [ "b42", "b42", "b81", "b8", "b114", "b28" ], "table_ref": [], "text": "CogVideo (ZH) [43] 751.34 CogVideo (EN) [43] 701.59 Make-A-Video [82] 367.23 Video LDM [9] 550.61 MagicVideo [115] 655.00 PYOCO [29] 355.20 " }, { "figure_ref": [], "heading": "Pretrained Base Model", "publication_ref": [ "b70", "b43", "b86", "b50", "b50", "b43", "b87" ], "table_ref": [], "text": "As discussed in Section 3.2, our video model is based on Stable Diffusion 2.1 [71] (SD 2.1). Recent works [44] show that it is crucial to adopt the noise schedule when training image diffusion models, shifting towards more noise for higher-resolution images. As a first step, we finetune the fixed discrete noise schedule from our image model towards continuous noise [87] using the network preconditioning proposed in Karras et al. [51] for images of size 256 × 384. After inserting temporal layers, we then train the model on LVD-F on 14 frames at resolution 256 × 384. We use the standard EDM noise schedule [51] for 150k iterations and batch size 1536. Next, we finetune the model to generate 14 320 × 576 frames for 100k iterations using batch size 768. We find that it is important to shift the noise schedule towards more noise for this training stage, confirming results by Hoogeboom et al. [44] for image models. For further training details, see App. D. 
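To make the temporal-layer insertion described above more concrete, the following Python sketch is given in the spirit of Blattmann et al. [9]; it is an illustration, not the exact implementation. The module names, the choice of eight attention heads, and the assumption that the wrapped spatial block preserves its input shape are all assumptions made only for this example.

# Illustrative sketch only: a pretrained spatial block processes frames as a batch of
# images, and a newly inserted temporal attention layer then mixes information across
# the frame axis. Names and shapes are hypothetical.
import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    """Self-attention over the time axis for features of shape (B, T, C, H, W)."""

    def __init__(self, channels: int, num_heads: int = 8):
        # channels is assumed to be divisible by num_heads
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        # Attend across the T frames independently for every spatial position.
        tokens = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        normed = self.norm(tokens)
        out, _ = self.attn(normed, normed, normed)
        tokens = tokens + out  # residual connection around the new temporal layer
        return tokens.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)


class SpatioTemporalBlock(nn.Module):
    """Wraps a (pretrained) spatial block and appends a temporal mixing layer."""

    def __init__(self, spatial_block: nn.Module, channels: int):
        super().__init__()
        self.spatial_block = spatial_block                # weights taken from the image model
        self.temporal_attn = TemporalAttention(channels)  # newly inserted parameters

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        x = self.spatial_block(x.reshape(b * t, c, h, w)).reshape(b, t, c, h, w)
        return self.temporal_attn(x)

Processing the frames as a batch of images keeps the spatial computation identical to the image model, while the inserted temporal layer learns to mix information across the frames of a clip.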
We refer to this model as our base model which can be easily finetuned for a variety of tasks as we show in the following sections. The base model has learned a powerful motion representation, for example, it significantly outperforms all baselines for zero-shot textto-video generation on UCF-101 [88] (Tab. 2). Evaluation details can be found in App. E." }, { "figure_ref": [ "fig_6" ], "heading": "High-Resolution Text-to-Video Model", "publication_ref": [], "table_ref": [], "text": "We finetune the base text-to-video model on a high-quality video dataset of ∼ 1M samples. Samples in the dataset generally contain lots of object motion, steady camera motion, and well-aligned captions, and are of high visual quality altogether. We finetune our base model for 50k iterations at resolution 576 × 1024 (again shifting the noise schedule towards more noise) using batch size 768. Samples in Figure 5, more can be found in App. E. " }, { "figure_ref": [ "fig_6" ], "heading": "High Resolution Image-to-Video Model", "publication_ref": [ "b38", "b72", "b35", "b53" ], "table_ref": [], "text": "Besides text-to-video, we finetune our base model for image-to-video generation, where the video model receives a still input image as a conditioning. Accordingly, we replace text embeddings that are fed into the base model with the CLIP image embedding of the conditioning. Additionally, we concatenate a noise-augmented [39] version of the conditioning frame channel-wise to the input of the UNet [73]. We do not use any masking techniques and simply copy the frame across the time axis. We finetune two models, one predicting 14 frames and another one predicting 25 frames; implementation and training details can be found in App. D. We occasionally found that standard vanilla classifier-free guidance [36] can lead to artifacts: too little guidance may result in inconsistency with the conditioning frame while too much guidance can result in oversaturation. Instead of using a constant guidance scale, we found it helpful to linearly increase the guidance scale across the frame axis (from small to high). Details can be found in App. D. Samples in Figure 5, more can be found in App. E.\nIn Section 4.5 we compare our model with state-of-theart, closed-source video generative models, in particular GEN-2 [23, 74] and PikaLabs [54], and show that our model is preferred in terms of visual quality by human voters. Details on the experiment, as well as many more image-tovideo samples, can be found in App. E." }, { "figure_ref": [ "fig_8" ], "heading": "Camera Motion LoRA", "publication_ref": [ "b1" ], "table_ref": [], "text": "To facilitate controlled camera motion in image-to-video generation, we train a variety of camera motion LoRAs within the temporal attention blocks of our model [32]; see App. D for exact implementation details. We train these additional parameters on a small dataset with rich cameramotion metadata. In particular, we use three subsets of the data for which the camera motion is categorized as \"hori- zontally moving\", \"zooming\", and \"static\". In Figure 7 we show samples of the three models for identical conditioning frames; more samples can be found in App. E." }, { "figure_ref": [], "heading": "Frame Interpolation", "publication_ref": [ "b8" ], "table_ref": [], "text": "To obtain smooth videos at high frame rates, we finetune our high-resolution text-to-video model into a frame interpolation model. We follow Blattmann et al. 
[9] and concatenate the left and right frames to the input of the UNet via masking. The model learns to predict three frames within the two conditioning frames, effectively increasing the frame rate by four. Surprisingly, we found that a very small number of iterations (≈ 10k) suffices to get a good model. Details and samples can be found in App. D and App. E, respectively." }, { "figure_ref": [ "fig_10", "fig_10", "fig_9", "fig_11" ], "heading": "Multi-View Generation", "publication_ref": [ "b13", "b110", "b19", "b110", "b56", "b13", "b57", "b111", "b65" ], "table_ref": [], "text": "To obtain multiple novel views of an object simultaneously, we finetune our image-to-video SVD model on multi-view datasets [14,15,111]. Datasets. We finetuned our SVD model on two datasets, where the SVD model takes a single image and outputs a sequence of multi-view images: (i) A subset of Objaverse [15] consisting of 150K curated and CC-licensed synthetic 3D objects from the original dataset [15]. For each object, we rendered 360 • orbital videos of 21 frames with randomly sampled HDRI environment map and elevation angles between [-5 • , 30 • ]. We evaluate the resulting models on an unseen test dataset consisting of 50 sampled objects from Google Scanned Objects (GSO) dataset [20]. and (ii) MVImgNet [111] consisting of casually captured multiview videos of general household objects. We split the videos into ∼200K train and 900 test videos. We rotate the frames captured in portrait mode to landscape orientation.\nThe Objaverse-trained model is additionally conditioned on the elevation angle of the input image, and outputs orbital videos at that elevation angle. The MVImgNet-trained models are not conditioned on pose and can choose an arbitrary camera path in their generations. For details on the pose conditioning mechanism, see App. E.\nModels. We refer to our finetuned Multi-View model as SVD-MV. We perform an ablation study on the importance of the video prior of SVD for multi-view generation. To this effect, we compare the results from SVD-MV i.e. from a video prior to those finetuned from an image prior i.e. the text-to-image model SD2.1 (SD2.1-MV), and that trained without a prior i.e. from random initialization (Scratch-MV). In addition, we compare with the current state-of-the-art multiview generation models of Zero123 [57], Zero123XL [14], and SyncDreamer [58].\nMetrics. We use the standard metrics of Peak Signal-to-Noise Ratio (PSNR), LPIPS [112], and CLIP [66] Similarity scores (CLIP-S) between the corresponding pairs of ground truth and generated frames on 50 GSO test objects.\nTraining. We train all our models for 12k steps (∼16 hours) with 8 80GB A100 GPUs using a total batch size of 16, with a learning rate of 1e-5.\nResults. Figure 9(a) shows the average metrics on the GSO test dataset. The higher performance of SVD-MV compared to SD2.1-MV and Scratch-MV clearly demonstrates the advantage of the learned video prior in the SVD model for multi-view generation. In addition, as in the case of other models finetuned from SVD, we found that a very small number of iterations (≈ 12k) suffices to get a good model. Moreover, SVD-MV is competitive w.r.t state-ofthe-art techniques with lesser training time (12k iterations in 16 hours), whereas existing models are typically trained for much longer (for example, SyncDreamer was trained for four days specifically on Objaverse). Figure 9(b) shows convergence of different finetuned models. 
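For reference, the per-frame metrics used here can be computed with a sketch like the one below; the package choices (lpips, open_clip), the CLIP backbone, and the preprocessing are assumptions and may differ from the exact evaluation setup.

# Sketch of the per-frame evaluation metrics (PSNR, LPIPS, CLIP similarity) between
# aligned pairs of ground-truth and generated frames. Backbones and preprocessing
# below are assumptions, not the paper's exact choices.
import torch
import lpips
import open_clip

lpips_fn = lpips.LPIPS(net="vgg")  # expects inputs scaled to [-1, 1]
clip_model, _, _ = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")


def psnr(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """PSNR for frames in [0, 1] of shape (B, 3, H, W)."""
    mse = torch.mean((pred - target) ** 2, dim=(1, 2, 3)).clamp(min=1e-10)
    return 10.0 * torch.log10(1.0 / mse)


def clip_s(pred_clip: torch.Tensor, target_clip: torch.Tensor) -> torch.Tensor:
    """Cosine similarity of CLIP image embeddings; inputs must already be CLIP-preprocessed
    (resized to 224x224 and normalized with the CLIP statistics)."""
    with torch.no_grad():
        f_p = clip_model.encode_image(pred_clip)
        f_t = clip_model.encode_image(target_clip)
    f_p = f_p / f_p.norm(dim=-1, keepdim=True)
    f_t = f_t / f_t.norm(dim=-1, keepdim=True)
    return (f_p * f_t).sum(dim=-1)


def evaluate_pair(pred, target, pred_clip, target_clip):
    """pred/target: (B, 3, H, W) in [0, 1]; *_clip: the same frames after CLIP preprocessing."""
    return {
        "psnr": psnr(pred, target).mean().item(),
        "lpips": lpips_fn(pred * 2 - 1, target * 2 - 1).mean().item(),
        "clip_s": clip_s(pred_clip, target_clip).mean().item(),
    }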
After only 1k iterations, SVD-MV has much better CLIP-S and PSNR scores than its image-prior and no-prior counterparts.\nFigure 8 shows a qualitative comparison of multi-view generation results on a GSO test object and Figure 10 on an MVImgNet test object. As can be seen, our generated frames are multi-view consistent and realistic. More details on the experiments, as well as more multi-view generation samples, can be found in App. E. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b57" ], "table_ref": [], "text": "We present Stable Video Diffusion (SVD), a latent video diffusion model for high-resolution, state-of-the-art text-tovideo and image-to-video synthesis. To construct its pretraining dataset, we conduct a systematic data selection and scaling study, and propose a method to curate vast amounts of video data and turn large and noisy video collection into suitable datasets for generative video models. Furthermore, we introduce three distinct stages of video model training which we separately analyze to assess their impact on the final model performance. Stable Video Diffusion provides a powerful video representation from which we finetune video models for state-of-the-art image-to-video synthesis and other highly relevant applications such as LoRAs for camera control. Finally we provide a pioneering study on multi-view finetuning of video diffusion models and show that SVD constitutes a strong 3D prior, which obtains stateof-the-art results in multi-view synthesis while using only a fraction of the compute of previous methods. We hope these findings will be broadly useful in the generative video modeling literature. A discussion on our work's broader impact and limitations can be found in App. A. generative model for multi-view generation compared to adapting an image-generative model. In addition, the temporal attention layers in our video model naturally assist in the generation of consistent multi-views of an object without needing any explicit 3D structures like in [58]." }, { "figure_ref": [], "heading": "C. Data Processing", "publication_ref": [ "b79" ], "table_ref": [], "text": "In this section, we provide more details about our processing pipeline including their outputs on a few public video examples for demonstration purposes. The design of our processing pipeline addresses the above points. Thus, to ensure temporal quality, we detect cuts with a cascaded approach directly after download, clip the videos accordingly, and estimate optical flow for each resulting video clip. After that, we apply three synthetic captioners to every clip and further extract frame-level CLIP similarities to all of these text prompts to be able to filter out outliers. Finally, visual quality at the frame level is assessed by using a CLIPembeddings-based aesthetics score [80]. We describe each step in more detail in what follows." }, { "figure_ref": [ "fig_1" ], "heading": "Source Video", "publication_ref": [ "b99", "b90", "b47", "b23", "b88" ], "table_ref": [], "text": "Cut Detected? w/o cascade w/ cascade (ours)\n✓ ✓ ✓ ✓ ✗ ✓ ✗ ✓ Figure 11.\nComparing a common cut detector with our cascaded approach, shows the benefits of our cascaded method: While normal single-fps cut detection can only detect sudden changes in scene, more continuous transitions tend to remain undetected, what is in contrast with our approach which reliably also detects the latter transitions.\nCascaded Cut Detection. 
Similar to previous work [100], we use PySceneDetect2 to detect cuts in our base video clips. However, as qualitatively shown in Figure 11 we observe many fade-ins and fade-outs between consecutive scenes, which are not detected when running the cut detector at a unique threshold and only native fps. Thus, in contrast to previous work, we apply a cascade of 3 cut detectors which are operating at different frame rates and different thresholds to detect both sudden changes and slow ones such as fades.\nKeyframe-Aware Clipping. We clip the videos using FFMPEG [91] directly after cut detection by extracting the timestamps of the keyframes in the source videos and snapping detected cuts onto the closest keyframe timestamp, which does not cross the detected cut. This allows us to quickly extract clips without cuts via seeking and isn't prohibitively slow at scale like inserting new keyframes in each video.\nSource Video Optical Flow Score 0.043 Optical Flow. As motivated in Section 3.1 and Figure 2 it is crucial to provide means for filtering out static scenes. To enable this, we extract dense optical flow maps at 2fps using the OpenCV [48] implementation of the Farnebäck algorithm [24].\nTo further keep storage size tractable we spatially downscale the flow maps such that the shortest side is at 16px resolution. By averaging these maps over time and spatial coordinates, we further obtain a global motion score for each clip, which we use to filter out static scenes by using a threshold for the minimum required motion, which is chosen as detailed on App. E.2.2. Since this only yields rough approximate, for the final Stage III finetuning, we compute more accurate dense optical flow maps using RAFT [89] at 800 × 450 resolution. The motion scores are then computed similarly. Since the highquality finetuning data is relatively much smaller than the pretraining dataset, this makes the RAFT-based flow computation tractable." }, { "figure_ref": [], "heading": "Source Video Caption", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "CoCa", "publication_ref": [ "b84", "b64", "b65", "b79", "b79" ], "table_ref": [], "text": "VBLIP LLM there is a piece of wood on the floor next to a tape measure .\na person is using a ruler to measure a piece of wood A person is using a ruler to measure a piece of wood on the floor next to a tape measure.\ntwo men sitting on a rock near a river . one is holding a stick and the other is holding a pole .\ntwo people are fishing in a river Two men are fishing in a river. One is holding a stick and the other is holding a pole.\nFigure 13. Comparison of various synthetic captioners. We observe that CoCa often captures good spatial details, whereas VBLIP tends to capture temporal details. We use an LLM to combine these two, and experiment with all three types of synthetic captions.\nSynthetic Captioning. At a million-sample scale, it is not feasible to hand-annotate data points with prompts. Hence we resort to synthetic captioning to extract captions. 
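Before moving on to captioning and filtering, the global motion score described above can be sketched as follows. The 2 fps sampling and the 16 px shortest side of the downscaled flow follow the description in the text, while the Farnebäck parameters shown are OpenCV defaults and therefore assumptions.

# Minimal sketch of the dense-optical-flow motion score: Farnebäck flow at a reduced
# frame rate, with the downscaled flow magnitude averaged over space and time.
import cv2
import numpy as np


def motion_score(video_path: str, sample_fps: float = 2.0, target_short_side: int = 16) -> float:
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or sample_fps
    step = max(int(round(native_fps / sample_fps)), 1)

    magnitudes, prev_gray, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev_gray is not None:
                flow = cv2.calcOpticalFlowFarneback(
                    prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0
                )
                mag = np.linalg.norm(flow, axis=-1)
                # Downscale so the shortest side has `target_short_side` pixels, as described above.
                h, w = mag.shape
                scale = target_short_side / min(h, w)
                mag = cv2.resize(mag, (max(int(w * scale), 1), max(int(h * scale), 1)))
                magnitudes.append(mag.mean())
            prev_gray = gray
        idx += 1
    cap.release()
    return float(np.mean(magnitudes)) if magnitudes else 0.0

Thresholding this score then allows filtering out (nearly) static clips, as is done to construct LVD-F.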
However in light of recent insights on the importance of caption diversity [85] and taking potential failure cases of these synthetic captioning models into consideration, we extract three captions per clip by using i) the image-only captioning model CoCa [65], which describes spatial aspects well, ii) -to also capture temporal aspects -the video-captioner VideoBLIP [109] and iii) to combine these two captions and like that, overcome potential flaws in each of them, a lightweight LLM. Examples of the resulting captions are shown in Figure 13.\nCaption similarities and Aesthetics. Extracting CLIP [66] image and text representations have proven to be very helpful for data curation in the image domain since computing the cosine similarity between the two allows for assessment of textimage alignment for a given example [80] and thus to filter out examples with erroneous captions. Moreover, it is possible to extract scores for visual aesthetics [80]. Although CLIP is only able to process images, and this consequently is only possible on a single frame level we opt to extract both CLIP-based i) text-image similarities and ii) aesthetics scores of the first, center, and last frames of each video clip. As shown in Section 3.3 and App. E.2.2, using training text-video models on data curated by using these scores improves i) text following abilities and ii) visual quality of the generated samples compared to models trained on unfiltered data.\nText Detection. In early experiments, we noticed that models trained on earlier versions of LVD-F obtained a tendency to generate videos with excessive amounts of written text depicted which is arguably not a desired feat for a text-to-video model. To this end, we applied the off-the-shelf text-detector CRAFT [5] to annotate the start, middle, and end frames of each clip in our dataset with bounding box information on all written text. Using this information, we filtered out all clips with a total area of detected bounding boxes larger than 7% to construct the final LVD-F." }, { "figure_ref": [ "fig_5" ], "heading": "Source Video", "publication_ref": [], "table_ref": [], "text": "Text Area Ratio 0.102 Figure 14. An example of a video with lots of unwanted text. We apply text-detection and annotate bounding boxes around text, and then compute the ratio between the area of all the boxes and the size of the frame." }, { "figure_ref": [], "heading": "D. Model and Implementation Details D.1. Diffusion Models", "publication_ref": [ "b50", "b86", "b86", "b50", "b36", "b36" ], "table_ref": [], "text": "In this section, we give a concise summary of DMs. We make use of the continuous-time DM framework [51,87]. Let p data (x 0 ) denote the data distribution and let p(x; σ) be the distribution obtained by adding i.i.d. σ 2 -variance Gaussian noise to the data. Note that or sufficiently large σ max , p(x; σ max 2 ) ≈ N (0, σ max 2 ). DM uses this fact and, starting from high variance Gaussian noise x M ∼ N (0, σ max 2 ), sequentially denoise towards σ 0 = 0. In practice, this iterative refinement process can be implemented through the numerical simulation of the Probability Flow ordinary differential equation (ODE) [87] \ndx = -σ(t)σ(t)∇ x log p(x; σ(t)) dt,(1)\nwhere ∇ x log p(x; σ) is the score function [46]. DM training reduces to learning a model s θ (x; σ) for the score function ∇ x log p(x; σ). 
The model can, for example, be parameterized as $\nabla_x \log p(x;\sigma) \approx s_\theta(x;\sigma) = (D_\theta(x;\sigma) - x)/\sigma^2$ [51], where $D_\theta$ is a learnable denoiser that tries to predict the clean $x_0$. The denoiser $D_\theta$ is trained via denoising score matching (DSM)\n$\mathbb{E}_{(x_0,c)\sim p_{\mathrm{data}}(x_0,c),\,(\sigma,n)\sim p(\sigma,n)}\big[\lambda_\sigma\,\lVert D_\theta(x_0+n;\sigma,c) - x_0\rVert_2^2\big],\quad(2)$\nwhere $p(\sigma,n) = p(\sigma)\,\mathcal{N}(n;0,\sigma^2)$, and $p(\sigma)$ can be a probability distribution or density over noise levels $\sigma$. It is possible to use either a discrete set or a continuous range of noise levels. In this work, we use both options, which we further specify in App. D.2. $\lambda_\sigma\colon \mathbb{R}^+ \to \mathbb{R}^+$ is a weighting function, and $c$ is an arbitrary conditioning signal. In this work, we follow the EDM preconditioning framework [51], parameterizing the learnable denoiser $D_\theta$ as\n$D_\theta(x;\sigma) = c_{\mathrm{skip}}(\sigma)\,x + c_{\mathrm{out}}(\sigma)\,F_\theta\big(c_{\mathrm{in}}(\sigma)\,x;\,c_{\mathrm{noise}}(\sigma)\big),\quad(3)$\nwhere $F_\theta$ is the network to be trained. Classifier-free guidance. Classifier-free guidance [37] is a method used to guide the iterative refinement process of a DM towards a conditioning signal $c$. The main idea is to mix the predictions of a conditional and an unconditional model\n$D^w(x;\sigma,c) = w\,D(x;\sigma,c) - (w-1)\,D(x;\sigma),\quad(4)$\nwhere $w \geq 0$ is the guidance strength. The unconditional model can be trained jointly alongside the conditional model in a single network by randomly replacing the conditional signal $c$ with a null embedding in Eq. (2), e.g., 10% of the time [37].\nIn this work, we use classifier-free guidance, for example, to guide video generation toward text conditioning." }, { "figure_ref": [], "heading": "D.2. Base Model Training and Architecture", "publication_ref": [ "b70", "b50", "b52", "b33", "b55", "b33", "b50", "b78", "b50", "b8", "b58", "b35" ], "table_ref": [], "text": "As discussed above, we start from the publicly available Stable Diffusion 2.1 [71] (SD 2.1) model. In the EDM framework [51], SD 2.1 has the following preconditioning functions:\n$c^{\mathrm{SD2.1}}_{\mathrm{skip}}(\sigma) = 1,\quad(5)$\n$c^{\mathrm{SD2.1}}_{\mathrm{out}}(\sigma) = -\sigma,\quad(6)$\n$c^{\mathrm{SD2.1}}_{\mathrm{in}}(\sigma) = \frac{1}{\sqrt{\sigma^2+1}},\quad(7)$\n$c^{\mathrm{SD2.1}}_{\mathrm{noise}}(\sigma) = \arg\min_{j\in[1000]} \lvert \sigma - \sigma_j \rvert,\quad(8)$\nwhere $\sigma_{j+1} > \sigma_j$. The distribution over noise levels $p(\sigma)$ used for the original SD 2.1 training is a uniform distribution over the 1000 discrete noise levels $\{\sigma_j\}_{j\in[1000]}$. One issue with the training of SD 2.1 (and in particular its noise distribution $p(\sigma)$) is that even for the maximum discrete noise level $\sigma_{1000}$ the signal-to-noise ratio [53] is still relatively high, which results in issues when, for example, generating very dark images [34,56]. Guttenberg and CrossLabs [34] proposed offset noise, a modification of the training objective in Eq. (2) that makes $p(n \mid \sigma)$ a non-isotropic Gaussian. In this work, we instead opt to modify the preconditioning functions and the distribution over training noise levels altogether. Image model finetuning. We replace the above preconditioning functions with\n$c_{\mathrm{skip}}(\sigma) = (\sigma^2+1)^{-1},\quad(10)$\n$c_{\mathrm{out}}(\sigma) = \frac{-\sigma}{\sqrt{\sigma^2+1}},\quad(11)$\n$c_{\mathrm{in}}(\sigma) = \frac{1}{\sqrt{\sigma^2+1}},\quad(12)$\n$c_{\mathrm{noise}}(\sigma) = 0.25 \log \sigma,\quad(13)$\nwhich can be recovered in the EDM framework [51] by setting $\sigma_{\mathrm{data}} = 1$; the preconditioning functions were originally proposed in [79]. We also use the noise distribution and weighting function proposed in Karras et al. [51], namely $\log\sigma \sim \mathcal{N}(P_{\mathrm{mean}}, P_{\mathrm{std}}^2)$ and $\lambda(\sigma) = (1+\sigma^2)\sigma^{-2}$, with $P_{\mathrm{mean}} = -1.2$ and $P_{\mathrm{std}} = 1$. We then finetune the neural network backbone $F_\theta$ of SD 2.1 for 31k iterations using this setup.
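The preconditioning of Eqs. (10)-(13), together with the log-normal noise sampling and the weighting λ(σ) = (1 + σ²)σ⁻², can be summarized in the following sketch. It is a simplified illustration rather than the actual training code; the shape handling and the treatment of the conditioning signal c are schematic.

# Sketch of the EDM-style preconditioning and DSM loss described above; F_theta stands
# in for the UNet backbone. Only the formulas stated in the text are used, everything
# else is illustrative.
import torch


def precondition(f_theta, x_noisy, sigma, cond):
    c_skip = 1.0 / (sigma**2 + 1.0)                 # Eq. (10)
    c_out = -sigma / torch.sqrt(sigma**2 + 1.0)     # Eq. (11)
    c_in = 1.0 / torch.sqrt(sigma**2 + 1.0)         # Eq. (12)
    c_noise = 0.25 * torch.log(sigma)               # Eq. (13); would typically be flattened to (B,)
    return c_skip * x_noisy + c_out * f_theta(c_in * x_noisy, c_noise, cond)  # Eq. (3)


def dsm_loss(f_theta, x0, cond, p_mean=-1.2, p_std=1.0):
    b = x0.shape[0]
    # log sigma ~ N(P_mean, P_std^2), broadcast over the remaining dimensions.
    sigma = torch.exp(p_mean + p_std * torch.randn(b, device=x0.device))
    sigma = sigma.view(b, *([1] * (x0.ndim - 1)))
    noise = torch.randn_like(x0) * sigma
    denoised = precondition(f_theta, x0 + noise, sigma, cond)
    weight = (1.0 + sigma**2) / sigma**2            # lambda(sigma) = (1 + sigma^2) sigma^-2
    return (weight * (denoised - x0) ** 2).mean()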
For the first 1k iterations, we freeze all parameters of F θ except for the time-embedding layer and train on SD2.1's original training resolution of 512 × 512. This allows the model to adapt to the new preconditioning functions without unnecessarily modifying the internal representations of F θ too much. Afterward, we train all layers of F θ for another 30k iterations on images of size 256 × 384, which is the resolution used in the initial stage of video pretraining.\nVideo pretraining. We use the resulting model as the image backbone of our video model. We then insert temporal convolution and attention layers. In particular, we follow the exact setup from [9], inserting a total of 656M new parameters into the UNet bumping its total size (spatial and temporal layers) to 1521M parameters. We then train the resulting UNet on 14 frames on resolution 256 × 384 for 150k iters using AdamW [59] with learning rate 10 -4 and a batch size of 1536. We train the model for classifier-free guidance [36] and drop out the text-conditioning 15% of the time. Afterward, we increase the spatial resolution to 320 × 576 and train for an additional 100k iterations, using the same settings as for the lower-resolution training except for a reduced batch size of 768 and a shift of the noise distribution towards more noise, in particular, we increase P mean = 0. During training, the base model and the high-resolution Text/Image-to-Video models are all conditioned on the input video's frame rate and motion score. This allows us to vary the amount of motion in a generated video at inference time." }, { "figure_ref": [], "heading": "D.3. High-Resolution Text-to-Video Model", "publication_ref": [], "table_ref": [], "text": "We finetune our base model on a high-quality dataset of ∼ 1M samples at resolution 576 × 1024. We train for 50k iterations at a batch size of 768, learning rate 3 × 10 -5 , and set P mean = 0.5 and P std = 1.4. Additionally, we track an exponential moving average of the weights at a decay rate of 0.9999. The final checkpoint is chosen using a combination of visual inspection and human evaluation." }, { "figure_ref": [], "heading": "D.4. High-Resolution Image-to-Video Model", "publication_ref": [ "b38", "b72" ], "table_ref": [], "text": "We can finetune our base text-to-video model for the image-to-video task. In particular, during training, we use one additional frame on which the model is conditioned. We do not use text-conditioning but rather replace text embeddings fed into the base model with the CLIP image embedding of the conditioning frame. Additionally, we concatenate a noise-augmented [39] version of the conditioning frame channel-wise to the input of the UNet [73]. In particular, we add a small amount of noise of strength log σ ∼ N (-3.0, 0.5 2 ) to the conditioning frame and then feed it through the standard SD 2.1 encoder. The mean of the encoder distribution is then concatenated to the input of the UNet (copied across the time axis). Initially, we finetune our base model for the image-to-video task on the base resolution (320 × 576) for 50k iterations using a batch size of 768 and learning rate 3 × 10 -5 . Since the conditioning signal is very strong, we again shift the noise distribution towards more noise, i.e., P mean = 0.7 and P std = 1.6. Afterwards, we fintune the base image-to-video model on a high-quality dataset of ∼ 1M samples at 576 × 1024 resolution. We train two versions: one to generate 14 frames and one to generate 25 frames. 
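A sketch of the conditioning-frame preparation described above is given below. The vae.encode(...).latent_dist.mean accessor follows the diffusers convention and, like the function names and the omission of any latent scaling factor, is an assumption rather than the exact implementation.

# Sketch of the image-to-video conditioning: noise-augment the conditioning frame,
# encode it with the (frozen) SD 2.1 autoencoder, take the mean of the latent
# distribution, copy it across the time axis, and concatenate it channel-wise to the
# UNet input. API names are assumptions.
import torch


def prepare_image_conditioning(vae, cond_frame, num_frames,
                               log_sigma_mean=-3.0, log_sigma_std=0.5):
    # cond_frame: (B, 3, H, W) in [-1, 1]
    sigma = torch.exp(
        log_sigma_mean
        + log_sigma_std * torch.randn(cond_frame.shape[0], 1, 1, 1, device=cond_frame.device)
    )
    noisy_cond = cond_frame + sigma * torch.randn_like(cond_frame)   # noise augmentation
    with torch.no_grad():
        cond_latent = vae.encode(noisy_cond).latent_dist.mean        # (B, 4, h, w)
    # Copy across the time axis: (B, T, 4, h, w)
    return cond_latent.unsqueeze(1).repeat(1, num_frames, 1, 1, 1)


def unet_input(noisy_video_latents, cond_latents):
    # noisy_video_latents: (B, T, 4, h, w); channel-wise concatenation -> (B, T, 8, h, w)
    return torch.cat([noisy_video_latents, cond_latents], dim=2)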
We train both models for 50k iterations at a batch size of 768, learning rate 3 × 10 -5 , and set P mean = 1.0 and P std = 1.6. Additionally, we track an exponential moving average of the weights at a decay rate of 0.9999. The final checkpoints are chosen using a combination of visual inspection and human evaluation." }, { "figure_ref": [ "fig_15" ], "heading": "D.4.1 Linearly Increasing Guidance", "publication_ref": [ "b35" ], "table_ref": [], "text": "We occasionally found that standard vanilla classifier-free guidance [36] (see Eq. ( 4)) can lead to artifacts: too little guidance may result in inconsistency with the conditioning frame while too much guidance can result in oversaturation. Instead of using a constant guidance scale, we found it helpful to linearly increase the guidance scale across the frame axis (from small to high). A PyTorch implementation of this novel technique can be found in Figure 15." }, { "figure_ref": [ "fig_22" ], "heading": "D.4.2 Camera Motion LoRA", "publication_ref": [ "b1" ], "table_ref": [], "text": "To facilitate controlled camera motion in image-to-video generation, we train a variety of camera motion LoRAs within the temporal attention blocks of our model [32]. In particular, we train low-rank matrices of rank 16 for 5k iterations. Additional samples can be found in Figure 20." }, { "figure_ref": [], "heading": "D.5. Interpolation Model Details", "publication_ref": [ "b72", "b70", "b8", "b51", "b8", "b40", "b81", "b58" ], "table_ref": [], "text": "Similar to the text-to-video and image-to-video models, we finetune our interpolation model starting from the base textto-video model, cf . App. D.2. To enable interpolation, we reduce the number of output frames from 14 to 5, of which we use the first and last as conditioning frames, which we feed to the UNet [73] backbone of our model via the concatconditioning-mechanism [71]. To this end, we embed these frames into the latent space of our autoencoder, resulting in two image encodings z s , z e ∈ R c×h×w , where c = 4, h = 52, w = 128. To form a latent frame sequence that is of the same shape as the noise input of the UNet, i.e. R 5×c×h×w , we use a learned mask embedding z m ∈ R c×h×w and form a latent sequence z = {z s , z m , z m , z m , z e } ∈ R 5×c×h×w . We concatenate this sequence channel-wise with the noise input and additionally with a binary mask where 1 indicates the presence of a conditioning frame and 0 that of a mask embedding. The final input for the UNet is thus of shape (5,9,52,128). In line with previous work [9,41,82], we use noise augmentation for the two conditioning frames, which we apply in the latent space. Moreover, we replace the CLIP text representation for the crossattention conditioning with the corresponding CLIP image representation of the start frame and end frame, which we concatenate to form a conditioning sequence of length 2.\nWe train the model on our high-quality dataset at spatial resolution 576 × 1024 using AdamW [59] with a learning rate of 10 -4 in combination with exponential moving averaging at decay rate 0.9999 and use a shifted noise schedule with P mean = 1 and P std = 1.2. Surprisingly, we find this model, which we train with a comparably small batch size of 256, to converge extremely fast and to yield consistent and smooth outputs after only 10k iterations. We take this as another evidence of the usefulness of the learned motion representation our base text-to-video model has learned." }, { "figure_ref": [], "heading": "D.6. 
Multi-view generation", "publication_ref": [], "table_ref": [], "text": "We finetuned the high-resolution image-to-video model on our specific rendering of the Objaverse dataset. We render 21 frames per orbit of an object in the dataset at 576 × 576 resolution and finetune the 25-frame Image-to-Video model to generate these 21 frames. We feed one view of the object as the image condition. In addition, we feed the elevation of the camera as conditioning to the model. We first pass the elevation through a timestep embedding layer that embeds the sine and cosine of the elevation angle at various frequencies and concatenates them into a vector. This vector is finally concatenated to the overall vector condition of the UNet.\nWe trained for 12k iterations with a total batch size of 16 across 8 A100 GPUs of 80GB VRAM at a learning rate of 1 × 10 -5 ." }, { "figure_ref": [], "heading": "E. Experiment Details E.1. Details on Human Preference Assessment", "publication_ref": [], "table_ref": [], "text": "For most of the evaluation conducted in this paper, we employ human evaluation as we observed it to contain the most reliable signal. For text-to-video tasks and all ablations conducted for the base model, we generate video samples from a list of 64 test prompts. We then employ human annotators to collect preference data on two axes: i) visual quality and ii) prompt following. More details on how the study was conducted App. E.1.1 and the rankings computed App. E.1.2 are listed below." }, { "figure_ref": [ "fig_17" ], "heading": "E.1.1 Experimental Setup", "publication_ref": [], "table_ref": [], "text": "Given all models in one ablation axis (e.g. four models of varying aesthetic or motion scores), we compare each prompt for each pair of models (1v1). For every such comparison, we collect on average three votes per task from different annotators, i.e., three each for visual quality and prompt following, respectively. Performing a complete assessment between all pairwise comparisons gives us robust and reliable signals on model performance trends and the effect of varying thresholds. Sample interfaces that the annotators interact with are shown in Figure 16. The order of prompts and the order between models are fully randomized. Frequent attention checks are in place to ensure data quality. " }, { "figure_ref": [], "heading": "E.1.2 Elo Score Calculation", "publication_ref": [ "b20", "b2", "b5" ], "table_ref": [], "text": "To calculate rankings when comparing more than two models based on 1v1 comparisons as outlined in App. E.1.1, we use Elo Scores (higher-is-better) [21], which were originally proposed as a scoring method for chess players but have more recently also been applied to compare instruction-tuned generative LLMs [3,6]. For a set of competing players with initial ratings R init participating in a series of zero-sum games, the Elo rating system updates the ratings of the two players involved in a particular game based on the expected and actual outcome of that game. Before the game with two players with ratings R 1 and R 2 , the expected outcome for the two players is calculated as\nE 1 = 1 1 + 10 R 2 -R 1 400 ,(15)\nE 2 = 1 1 + 10 R 1 -R 2 400 . (16\n)\nAfter observing the result of the game, the ratings R i are updated via the rule\nR ′ i = R i + K • (S i -E i ) , i ∈ {1, 2}(17)\nwhere S i indicates the outcome of the match for player i. In our case, we have S i = 1 if player i wins and S i = 0 if player i loses. The constant K can be seen as weight emphasizing more recent games. 
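A minimal sketch of this ranking procedure, combining the update rule of Eqs. (15)-(17) with the choices described in the surrounding text (K = 1, initial rating R_init = 1000, and 1000 bootstrap rounds over randomly shuffled game orders), is given below; averaging the bootstrapped ratings is one possible way of aggregating them.

# Sketch of the Elo ranking over 1v1 human preference votes: every vote is treated as a
# zero-sum game, and the final ranking is bootstrapped over shuffled game orders.
import random
from collections import defaultdict


def elo_ranking(games, models, k=1.0, r_init=1000.0, bootstrap_rounds=1000, seed=0):
    """games: list of (winner, loser) model-name pairs from the 1v1 comparisons."""
    rng = random.Random(seed)
    accumulated = defaultdict(float)

    for _ in range(bootstrap_rounds):
        ratings = {m: r_init for m in models}
        shuffled = games[:]
        rng.shuffle(shuffled)
        for winner, loser in shuffled:
            e_w = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400.0))  # Eq. (15)
            e_l = 1.0 - e_w                                                         # Eq. (16)
            ratings[winner] += k * (1.0 - e_w)                                      # Eq. (17), S = 1
            ratings[loser] += k * (0.0 - e_l)                                       # Eq. (17), S = 0
        for m in models:
            accumulated[m] += ratings[m] / bootstrap_rounds  # average over bootstrap rounds

    return dict(sorted(accumulated.items(), key=lambda kv: kv[1], reverse=True))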
We choose K = 1 and bootstrap the final Elo ranking for a given series of comparisons based on 1000 individual Elo ranking calculations in a randomly shuffled order. Before comparing the models, we choose the start rating for every model as R init = 1000. " }, { "figure_ref": [ "fig_2" ], "heading": "E.2. Details on Experiments from Section 3 E.2.1 Architectural Details", "publication_ref": [ "b72", "b8", "b8", "b58", "b33", "b36", "b36", "b85" ], "table_ref": [], "text": "Architecturally, all models trained for the presented analysis in Section 3 are identical. To insert create a temporal UNet [73] based on an existing spatial model, we follow Blattmann et al. [9] and add temporal convolution and (cross-)attention layers after each corresponding spatial layer. As a base 2D-UNet, we use the architecture from Stable Diffusion 2.1, whose weights we further use to initialize the spatial layers for all runs except the second one presented in Figure 3a, where we intentionally skip this initialization to create a baseline for demonstrating the effect of image-pretraining. Unlike Blattmann et al. [9], we train all layers, including the spatial ones, and do not freeze the spatial layers after initialization. All models are trained with the AdamW [59] optimizer with a learning rate of 1.e -4 and a batch size of 256. Moreover, in contrast to our models from Section 4, we do not translate the noise process to continuous time but use the standard linear schedule used in Stable Diffusion 2.1, including offset noise [34], in combination with the v-parameterization [37]. We omit the text-conditioning in 10% of the cases to enable classifier-free guidance [37] during inference. To generate samples for the evaluations, we use 50 steps of the deterministic DDIM sampler [86] with a classifier guidance scale of 12 for all models." }, { "figure_ref": [ "fig_18" ], "heading": "E.2.2 Calibrating Filtering Thresholds", "publication_ref": [ "b107" ], "table_ref": [], "text": "Here, we present the outcomes of our study on filtering thresholds presented in Section 3.3. As stated there, we conduct experiments for the optimal filtering threshold for each type of annotation while not filtering for any other types. The only difference here is our assessment of the most suitable captioning method, where we simply compare all used captioning methods. We train each model on videos consisting of 8 frames at resolution 256 × 256 for exactly 40k steps with a batch size of 256, roughly corresponding to 10M training examples seen during training. For evaluation, we create samples based on 64 pre-selected prompts for each model and conduct a human preference study as detailed in App. E.1. Figure 17 shows the ranking results of these human preference studies for each annotation axis for spatiotemporal sample quality and prompt following. Additionally, we show an averaged 'aggregated' score.\nFor captioning, we see that -surprisingly -the captions generated by the simple clip-based image captioning method CoCa of Yu et al. [108] clearly have the most beneficial influence on the model. However, since recent research recommends using more than one caption per training example, we sample one of the three distinct captions during training. We nonetheless reflect the outcome of this experiment by shifting the captioning sampling distribution towards CoCa captions by using p CoCa = 0.5; p V-BLIP = 0.25; p LLM = 0.25; .\nFor motion filtering, we choose to filter out 25% of the most static examples. 
However, the aggregated preference score of the model trained with this filtering method does not rank as high in human preference as the non-filtered score. The rationale behind this is that non-filtered ranks best primarily because it ranks best in the category 'prompt following' which is less important than the 'quality' category when assessing the effect of motion filtering. Thus, we choose the 25% threshold, as mentioned above, since it achieves both competitive performances in 'prompt following' and 'quality'.\nFor aesthetics filtering, where, as for motion thresholding, the 'quality' category is more important than the 'prompt following'-category, we choose to filter out the 25 % with the lowest aesthetics score, while for CLIP-score thresholding we omit even 50% since the model trained with the corresponding threshold is performing best. Finally, we filter out the 25% of samples with the largest text area covering the videos since it ranks highest both in the 'quality' category and on average.\nUsing these filtering methods, we reduce the size of LVD by more than a factor of 3, cf . Tab. 1, but obtain a much cleaner dataset as shown in Section 3. For the remaining experiments in Section 3.3, we use the identical architecture and hyperparameters as stated above. We only vary the dataset as detailed in Section 3.3." }, { "figure_ref": [], "heading": "E.2.3 Finetuning Experiments", "publication_ref": [], "table_ref": [], "text": "For the finetuning experiments shown in Section 3.4, we again follow the architecture, training hyperparameters, and sampling procedure stated at the beginning of this section. The only notable differences are the exchange of the dataset and the increase in resolution from the pretraining resolution 256 × 256 to 512 × 512 while still generating videos consisting of 8 frames. We train all models presented in this section for 50k steps." }, { "figure_ref": [], "heading": "E.3. Human Eval vs SOTA", "publication_ref": [ "b73", "b53", "b63" ], "table_ref": [], "text": "For comparison of our image-to-video model with state-of-the-art models like Gen-2 [74] and Pika [54], we randomly choose 64 conditioning images generated from a 1024×576 finetune of SDXL [64]. We employ the same framework as in App. E.1.1 to evaluate and compare the visual quality generated samples with other models.\nFor Gen-2, we sample the image-to-video model from the web UI. We fixed the same seed of 23, used the default motion value of 5 (on a scale of 10), and turned on the \"Interpolate\" and \"Remove watermark\" features. This results in 4-second samples at 1408 × 768. We then resize the shorter side to yield 1056 × 576 and perform a center-crop to match our resolution of 1024 × 576. For our model, we sample our 25-frame image-to-video finetune to give 28 frames and also interpolate using our interpolation model to yield samples of 3.89 seconds at 28 FPS. We crop the Gen-2 samples to 3.89 seconds to avoid biasing the annotators.\nFor Pika, we sample the image-to-video model from the Discord bot. We fixed the same seed of 23, used the motion value of 2 (on a scale of 0-4), and specified a 16:9 aspect ratio. This results in 3-second samples at 1024 × 576, which matches our resolution. For our model, we sample our 25-frame image-to-video finetune to give 28 frames and also interpolate using our interpolation model to yield samples of 3.89 seconds at 28 FPS. We crop our samples to 3 seconds to match Pika and avoid biasing the annotators. 
Since Pika samples have a small \"Pika Labs\" watermark in the bottom right, we pad that region with black pixels for both Pika and our samples to also avoid bias." }, { "figure_ref": [], "heading": "E.4. UCF101 FVD", "publication_ref": [ "b87", "b10", "b9" ], "table_ref": [], "text": "This section describes the zero-shot UCF101 FVD computation of our base text-to-video model. The UCF101 dataset [88] consists of 13,320 video clips, which are classified into 101 action categories. All videos are of frame rate 25 FPS and resolution 240 × 320. To compute FVD, we generate 13,320 videos (16 frames at 25 FPS, classifier-free guidance with scale w = 7) using the same distribution of action categories, that is, for example, 140 videos of \"TableTennisShot\", 105 videos of \"PlayingPiano\", etc. We condition the model directly on the action category (\"TableTennisShot\", \"PlayingPiano\", etc.) and do not use any text modification. Our samples are generated at our model's native resolution 320 × 576 (16 frames), and we downsample to 240 × 432 using bilinear interpolation with antialiasing, followed by a center crop to 240 × 320. We extract features using a pretrained I3D action classification model [11], in particular we are using a torchscript 3 provided by Brooks et al. [10]. " }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "E.5.4 Temporal Prompting via Temporal Cross-Attention Layers", "publication_ref": [ "b8", "b17", "b37" ], "table_ref": [], "text": "Our architecture follows Blattmann et al. [9], who introduced dedicated temporal cross-attention layers, which are used interleaved with the spatial cross-attention layers of the standard 2D-UNet [18,38]. During probing our Text-to-Video model from Section 4.2, we noticed that it is possible to independently prompt the model spatially and temporally by using different text-prompts as inputs for the spatial and temporal cross-attention conditionings, see Figure 21. To achieve this, we use a dedicated spatial prompt to describe the general content of the scene to be depicted while the motion of that scene is fed to the model via a separate temporal prompt, which is the input to the temporal cross-attention layers. We provide an example of these first experiments indicating this implicit disentanglement of motion and content in Figure 21, where we show that varying the temporal prompt while fixing random seed and spatial prompt leads to spatially similar scenes that obtain global motion properties following the temporal prompt. 21. Text-to-video samples using the prompt \"Flowers in a pot in front of a mountainside\" (for spatial cross-attention). We adjust the camera control by replacing the prompt in the temporal attention using \"\", \"panning\", \"rotating\", and \"zooming\" (from top to bottom). While not being trained for this inference task, the model performs surprisingly well. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "Special thanks to Emad Mostaque for his excellent support on this project. Many thanks go to our colleagues Jonas Müller, Axel Sauer, Dustin Podell and Rahim Entezari for fruitful discussions and comments. Finally, we thank Harry Saini and the one and only Richard Vencu for maintaining and optimizing our data and computing infrastructure." }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. 
Broader Impact and Limitations", "publication_ref": [ "b40", "b60", "b78" ], "table_ref": [], "text": "Broader Impact: Generative models for different modalities promise to revolutionize the landscape of media creation and use. While exploring their creative applications, reducing the potential to use them for creating misinformation and harm are crucial aspects before real-world deployment. Furthermore, risk analyses need to highlight and evaluate the differences between the various existing model types, such as interpolation, text-to-video, animation, and long-form generation. Before these models are used in practice, a thorough investigation of the models themselves, their intended uses, safety aspects, associated risks, and potential biases is essential. Limitations: While our approach excels at short video generation, it comes with some fundamental shortcomings w.r.t. long video synthesis: Although a latent approach provides efficiency benefits, generating multiple keyframes at once is expensive both during training but also inference, and future work on long video synthesis should either try a cascade of very coarse frame generation or build dedicated tokenizers for video generation. Furthermore, videos generated with our approach sometimes suffer from too little generated motion. Lastly, video diffusion models are typically slow to sample and have high VRAM requirements, and our model is no exception. Diffusion distillation methods [41,61,79] are promising candidates for faster synthesis." }, { "figure_ref": [], "heading": "B. Related Work", "publication_ref": [ "b3", "b11", "b16", "b25", "b54", "b7", "b18", "b32", "b42", "b101", "b102", "b103", "b106", "b9", "b24", "b49", "b59", "b76", "b77", "b82", "b89", "b92", "b95", "b97", "b109", "b3", "b7", "b11", "b16", "b18", "b25", "b54", "b59", "b89", "b92", "b95", "b109", "b10", "b87", "b105", "b6", "b40", "b41", "b81", "b93", "b8", "b40", "b81", "b114", "b8", "b81", "b114", "b28", "b96", "b98", "b8", "b40", "b81", "b40", "b28", "b38", "b8", "b112", "b114", "b70", "b1", "b112", "b1", "b96", "b112", "b15", "b48", "b61", "b115", "b56", "b80", "b57", "b56", "b80", "b57" ], "table_ref": [], "text": "Video Synthesis. Many approaches based on various models such as variational RNNs [4,12,17,26,55], normalizing flows [8,19], autoregressive transformers [28, 33,43,[102][103][104]107], and GANs [10,25,50,60,77,78,83,90,93,96,98,110] have tackled video synthesis. Most of these works, however, have generated videos either on low-resolution [4,8,12,17,19,26,55,60,90,93,96,110] or on comparably small and noisy datasets [11,88,106] which were originally proposed to train discriminative models.\nDriven by increasing amounts of available compute resources and datasets better suited for generative modeling such as WebVid-10M [7], more competitive approaches have been proposed recently, mainly based on well-scalable, explicit likelihood-based approaches such as diffusion [41,42,82] and autoregressive models [94]. Motivated by a lack of available clean video data, all these approaches are leveraging joint image-video training [9,41,82,115] and most methods are grounding their models on pretrained image models [9,82,115]. Another commonality between these and most subsequent approaches to (text-to-)video synthesis [29,97,99] is the usage of dedicated expert models to generate the actual visual content at a coarse frame rate and to temporally upscale this low-fps video to temporally smooth final outputs at 24-32 fps [9,41,82]. 
Similar to the image domain, diffusion-based approaches can be mainly separated into cascaded approaches [41] following [29,39] and latent diffusion models [9,113,115] translating the approach of Rombach et al. [71] to the video domain. While most of these works aim at learning general motion representation and are consequently trained on large and diverse datasets, another well-recognized branch of diffusion-based video synthesis tackles personalized video generation based on finetuning of pretrained text-to-image models on more narrow datasets tailored to a specific domain [32] or application, partly including non-deep motion priors [113]. Finally, many recent works tackle the task of image-to-video synthesis, where the start frame is already given, and the model has to generate the consecutive frames [32,97,113]. Importantly, as shown in our work (see Figure 1) when combined with off-the-shelf text-to-image models, image-to-video models can be used to obtain a full text-(to-image)-to-video pipeline.\nMulti-View Generation Motivated by their success in 2D image generation, diffusion models have also been used for multi-view generation. Early promising diffusion-based results [2, 16,49,62,101,116] have mainly been restricted by lacking availability of useful real-world multi-view training data. To address this, more recent works such as Zero-123 [57], MVDream [81], and SyncDreamer [58] propose techniques to adapt and finetune pretrained image generation models such as Stable Diffusion (SD) for multi-view generation, thereby leveraging image priors from SD. One issue with Zero-123 [57] is that the generated multi-views can be inconsistent with respect to each other as they are generated independently with poseconditioning. Some follow-up works try to address this view-consistency problem by jointly synthesizing the multi-view images. MVDream [81] proposes to jointly generate four views of an object using a shared attention module across images. SyncDreamer [58] proposes to estimate a 3D voxel structure in parallel to the multi-view image diffusion process to maintain consistency across the generated views.\nDespite rapid progress in multi-view generation research, these approaches rely on single image generation models such as SD. We believe that our video generative model is a better candidate for the multi-view generation as multi-view images form a specific form of video where the camera is moving around an object. As a result, it is much easier to adapt a video-" } ]
Stability AI Figure 1. Stable Video Diffusion samples. Top: Text-to-Video generation.
Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets
[ { "figure_caption": "3 and Section 3.4, and (ii), identify three different training regimes for generative video modeling. In particular, these regimes consist of • Stage I: image pretraining, i.e. a 2D text-to-image diffusion model [13, 64, 71]. • Stage II: video pretraining, which trains on large amounts of videos. • Stage III: video finetuning, which refines the model on a", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure2. Our initial dataset contains many static scenes and cuts which hurts training of generative video models. Left: Average number of clips per video before and after our processing, revealing that our pipeline detects lots of additional cuts. Right: We show the distribution of average optical flow score for one of these subsets before our processing, which contains many static clips.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Effects of image-only pretraining and data curation on video-pretraining on LVD-10M: A video model with spatial layers initialized from a pretrained image model clearly outperforms a similar one with randomly initialized spatial weights as shown in Figure 3a. Figure 3b emphasizes the importance of data curation for pretraining, since training on a curated subset of LVD-10M with the filtering threshold proposed in Section 3.3 improves upon training on the entire uncurated LVD-10M.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Summarized findings of Sections 3.3 and 3.4: Pretraining on curated datasets consistently boosts performance of generative video models during video pretraining at small (Figures4a and 4b) and larger scales (Figures4c and 4d). Remarkably, this performance improvement persists even after 50k steps of video finetuning on high quality data (Figure4e).", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Samples at 576 × 1024. Top: Image-to-video samples (conditioned on leftmost frame). Bottom: Text-to-video samples.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "SVDFigure 6 .6Figure 6. Our 25 frame Imageto-Video model is preferred by human voters over GEN-2[74] and PikaLabs[54].", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Applying three camera motion LoRAs (horizontal, zooming, static) to the same conditioning frame (on the left).", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Generated multi-view frames of a GSO test object using our SVD-MV model (i.e. SVD finetuned for Multi-View generation), SD2.1-MV [72], Scratch-MV, SyncDreamer [58], and Zero123XL [14].", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. (a) Multi-view generation metrics on Google Scanned Objects (GSO) test dataset. SVD-MV outperforms image-prior (SD2.1-MV) and no-prior (Scratch-MV) variants, as well other state-of-the-art techniques. (b) Training progress of multi-view generation models with CLIP-S (solid, left axis) and PSNR (dotted, right axis) computed on GSO test dataset. 
SVD-MV shows better metrics consistently from the start of finetuning.", "figure_data": "", "figure_id": "fig_10", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. Generated novel multi-view frames for MVImgNet dataset using our SVD-MV model, SD2.1-MV [72], Scratch-MV.", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Data for HQ Video Synthesis 3.1. Data Processing and Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2. Stage I: Image Pretraining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3. Stage II: Curating a Video Pretraining Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4. Stage III: High-Quality Finetuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4. Training Video Models at Scale 4.1. Pretrained Base Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2. High-Resolution Text-to-Video Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3. High Resolution Image-to-Video Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3.1 Camera Motion LoRA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4.4. Frame Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.5. Multi-View Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . and Implementation Details D.1. Diffusion Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D.2. Base Model Training and Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D.3. High-Resolution Text-to-Video Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D.4. High-Resolution Image-to-Video Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D.4.1 Linearly Increasing Guidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D.4.2 Camera Motion LoRA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D.5. Interpolation Model Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D.6. Multi-view generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E. Experiment Details E.1. Details on Human Preference Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.1.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.1.2 Elo Score Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.2. Details on Experiments from Section 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.2.1 Architectural Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.2.2 Calibrating Filtering Thresholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.2.3 Finetuning Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.3. Human Eval vs SOTA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . E.4. UCF101 FVD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.5. Additional Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.5.1 Additional Text-to-Video Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.5.2 Additional Image-to-Video Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.5.3 Additional Camera Motion LoRA Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.5.4 Temporal Prompting via Temporal Cross-Attention Layers . . . . . . . . . . . . . . . . . . . . . . . E.5.5 Additional Samples on Multi-View Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "MotivationWe start from a large collection of raw video data which is not useful for generative text-video (pre)training [70, 100] because of the following adverse properties: First, in contrast to discriminative approaches to video modeling, generative video models are sensitive to motion inconsistencies such as cuts of which usually many are contained in raw and unprocessed video data, cf . Figure 2, left. Moreover, our initial data collection is biased towards still videos as indicated by the peak at zero motion in Figure 2, right. Since generative models trained on this data would obviously learn to generate videos containing cuts and still scenes, this emphasizes the need for cut detection and motion annotations to ensure temporal quality. Another critical ingredient for training generative text-video models are captions -ideally more than one per video [85] -which are well-aligned with the video content. The last essential component for generative video training which we are considering here is the high visual quality of the training examples.", "figure_data": "", "figure_id": "fig_13", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 12 .12Figure 12. Examples for a static video. Since such static scenes can have a negative impact on generative video-text (pre-)training, we filter them out.", "figure_data": "", "figure_id": "fig_14", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 15 .15Figure 15. PyTorch code for our novel linearly increasing guidance technique.", "figure_data": "", "figure_id": "fig_15", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "(a) Sample instructions for evaluating visual quality of videos. (b) Sample instructions for evaluating the prompt following of videos.", "figure_data": "", "figure_id": "fig_16", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 16 .16Figure 16. Our human evaluation framework, as seen by the annotators. The prompt & task order and model choices are fully randomized.", "figure_data": "", "figure_id": "fig_17", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 17 .17Figure 17. Results of the dedicated experiments conducted to identify most useful filtering thresholds for each ablation axis. For of these ablation studies we train four identical models using the architecture detailed in App. 
E.2.2 on different subset of LVD-10M, which we create by systematically increasing the thresholds which corresponds to filter out more and more examples.", "figure_data": "", "figure_id": "fig_18", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 18 .18Figure 18. Additional Text-to-Video samples. Captions from top to bottom: \"A hiker is reaching the summit of a mountain, taking in the breathtaking panoramic view of nature.\", \"A unicorn in a magical grove, extremely detailed.\", \"Shoveling snow\", \"A beautiful fluffy domestic hen sitting on white eggs in a brown nest, eggs are under the hen.\", and \"A boat sailing leisurely along the Seine River with the Eiffel Tower in background by Vincent van Gogh\".", "figure_data": "", "figure_id": "fig_19", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "E. 5 . 555Additional Samples on Multi-View Synthesis In Figures 22 to 25, we show additional visual examples for SVD-MV, trained on our renderings of Objaverse and MVIma-geNet datasets as described in Section 4.5.", "figure_data": "", "figure_id": "fig_20", "figure_label": "55", "figure_type": "figure" }, { "figure_caption": "Figure 19 .19Figure 19. Additional Image-to-Video samples. Leftmost frame is use for conditioning.", "figure_data": "", "figure_id": "fig_21", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Figure 20 .20Figure 20. Additional Image-to-Video samples with camera motion LoRAs (conditioned on leftmost frame). The first, second, and thirs rows correspond to horizontal, static, zooming, respectively.", "figure_data": "", "figure_id": "fig_22", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "FigureFigure21. Text-to-video samples using the prompt \"Flowers in a pot in front of a mountainside\" (for spatial cross-attention). We adjust the camera control by replacing the prompt in the temporal attention using \"\", \"panning\", \"rotating\", and \"zooming\" (from top to bottom). While not being trained for this inference task, the model performs surprisingly well.", "figure_data": "", "figure_id": "fig_23", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 22 .22Figure 22. Additional image-to-multi-view generation samples from GSO test dataset, using our SVD-MV model trained on Objaverse, and comparison with other methods.", "figure_data": "", "figure_id": "fig_24", "figure_label": "22", "figure_type": "figure" }, { "figure_caption": "Figure 23 .23Figure 23. Additional image-to-multi-view generation samples from GSO test dataset, using our SVD-MV model trained on Objaverse", "figure_data": "", "figure_id": "fig_25", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 24 .24Figure 24. 
Text-to-image-to-multi-view generation samples: text to image using SDXL with the prompt \"Centered 3D model of a cute anthropomorphic sunflower figure (plain background, unreal engine render 4k)\", and image-to-multi-view using our SVD-MV model trained on Objaverse", "figure_data": "", "figure_id": "fig_26", "figure_label": "24", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Comparison of our dataset before and after fitering with publicly available research datasets.", "figure_data": "LVD LVD-F LVD-10M LVD-10M-F WebVid InternVid#Clips577M 152M9.8M2.3M10.7M 234MClip Duration (s)11.58 10.5312.1110.9918.011.7Total Duration (y)212.09 50.643.760.785.9486.80Mean #Frames325301335320--Mean Clips/Video11.09 4.761.21.11.032.96Motion Annotations? ✓✓✓✓✗✗V-BLIP [109] to obtain a video-based caption. Finally, wegenerate a third description of the clip via an LLM-basedsummarization of the first two captions.The resulting initial dataset, which we dub Large VideoDataset (LVD), consists of 580M annotated video clip pairs,forming 212 years of content.", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "models. However, since there are no", "figure_data": "User Preference0.2 0.4 0.6 0.8 1.0w/ Image Pretraining w/o Image PretrainingUser Preference0.1 0.2 0.3 0.4 0.5 0.6 0.7LVD-10M-F LVD-10M0.0Prompt Alignment QualityAggregated0.0Prompt Alignment QualityAggregated(a) Initializing spatial layers from(b) Video data curation boosts per-pretrained images models greatlyformance after video pretraining.improves performance.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" } ]
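The motivation caption above notes that raw clips are biased towards still scenes and contain many cuts, which is why motion annotations and cut detection are needed before video pretraining. Below is a minimal, hedged sketch of how a per-clip motion score could be computed with Farnebäck dense optical flow (OpenCV and Farnebäck are both in the reference list; the paper also cites RAFT for denser flow). The frame stride and the static-clip threshold are illustrative assumptions, not values from the paper.

```python
# Sketch: score a clip's motion with Farneback dense optical flow and flag
# near-static clips for removal. The stride and threshold are illustrative
# assumptions, not the paper's settings.
import cv2
import numpy as np

def motion_score(frames, stride=4):
    """Mean optical-flow magnitude over sampled frame pairs (frames: list of BGR arrays)."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames[::stride]]
    mags = []
    for prev, nxt in zip(grays[:-1], grays[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        mags.append(float(mag.mean()))
    return float(np.mean(mags)) if mags else 0.0

def is_static(frames, threshold=0.5):
    """Flag clips whose average motion falls below an (assumed) threshold."""
    return motion_score(frames) < threshold
```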
Andreas Blattmann; Tim Dockhorn; Sumith Kulal; Daniel Mendelevitch; Maciej Kilian; Dominik Lorenz; Yam Levi; Zion English; Vikram Voleti; Adam Letts; Varun Jampani; Robin Rombach
[ { "authors": "Jie An; Songyang Zhang; Harry Yang; Sonal Gupta; Jia-Bin Huang; Jiebo Luo; Xi Yin", "journal": "", "ref_id": "b0", "title": "Latent-shift: Latent diffusion with temporal shift for efficient text-to-video generation", "year": "2023" }, { "authors": "Titas Anciukevičius; Zexiang Xu; Matthew Fisher; Paul Henderson; Hakan Bilen; J Niloy; Paul Mitra; Guerrero", "journal": "", "ref_id": "b1", "title": "Renderdiffusion: Image diffusion for 3d reconstruction, inpainting and generation", "year": "2023" }, { "authors": "Amanda Askell; Yuntao Bai; Anna Chen; Dawn Drain; Deep Ganguli; Tom Henighan; Andy Jones; Nicholas Joseph; Ben Mann; Nova Dassarma; Nelson Elhage; Zac Hatfield-Dodds; Danny Hernandez; Jackson Kernion; Kamal Ndousse; Catherine Olsson; Dario Amodei; Tom Brown; Jack Clark; Sam Mccandlish; Chris Olah; Jared Kaplan", "journal": "", "ref_id": "b2", "title": "A general language assistant as a laboratory for alignment", "year": "2021" }, { "authors": "Mohammad Babaeizadeh; Chelsea Finn; Dumitru Erhan; Roy H Campbell; Sergey Levine", "journal": "", "ref_id": "b3", "title": "Stochastic variational video prediction", "year": "2018" }, { "authors": "Youngmin Baek; Bado Lee; Dongyoon Han; Sangdoo Yun; Hwalsuk Lee", "journal": "", "ref_id": "b4", "title": "Character region awareness for text detection", "year": "2019" }, { "authors": "Yuntao Bai; Andy Jones; Kamal Ndousse; Amanda Askell; Anna Chen; Nova Dassarma; Dawn Drain; Stanislav Fort; Deep Ganguli; Tom Henighan; Nicholas Joseph; Saurav Kadavath; Jackson Kernion; Tom Conerly; Sheer El-Showk; Nelson Elhage; Zac Hatfield-Dodds; Danny Hernandez; Tristan Hume; Scott Johnston; Shauna Kravec; Liane Lovitt; Neel Nanda; Catherine Olsson; Dario Amodei; Tom Brown; Jack Clark; Sam Mccandlish; Chris Olah; Ben Mann; Jared Kaplan", "journal": "", "ref_id": "b5", "title": "Training a helpful and harmless assistant with reinforcement learning from human feedback", "year": "2022" }, { "authors": "Max Bain; Arsha Nagrani; Gül Varol; Andrew Zisserman", "journal": "", "ref_id": "b6", "title": "Frozen in time: A joint video and image encoder for end-to-end retrieval", "year": "2022" }, { "authors": "Andreas Blattmann; Timo Milbich; Michael Dorkenwald; Björn Ommer", "journal": "", "ref_id": "b7", "title": "ipoke: Poking a still image for controlled stochastic video synthesis", "year": "2021" }, { "authors": "Andreas Blattmann; Robin Rombach; Huan Ling; Tim Dockhorn; Seung Wook Kim; Sanja Fidler; Karsten Kreis", "journal": "", "ref_id": "b8", "title": "Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models", "year": "2023" }, { "authors": "Tim Brooks; Janne Hellsten; Miika Aittala; Ting-Chun; Timo Wang; Jaakko Aila; Ming-Yu Lehtinen; Alexei A Liu; Tero Efros; Karras", "journal": "NeurIPS", "ref_id": "b9", "title": "Generating long videos of dynamic scenes", "year": "2022" }, { "authors": "Joao Carreira; Andrew Zisserman", "journal": "", "ref_id": "b10", "title": "Quo vadis, action recognition? 
a new model and the kinetics dataset", "year": "2017" }, { "authors": "Lluis Castrejon; Nicolas Ballas; Aaron Courville", "journal": "", "ref_id": "b11", "title": "Improved conditional vrnns for video prediction", "year": "2019" }, { "authors": "Xiaoliang Dai; Ji Hou; Chih-Yao Ma; Sam Tsai; Jialiang Wang; Rui Wang; Peizhao Zhang; Simon Vandenhende; Xiaofang Wang; Abhimanyu Dubey; Matthew Yu; Abhishek Kadian; Filip Radenovic; Dhruv Mahajan; Kunpeng Li; Yue Zhao; Vladan Petrovic; Mitesh Kumar Singh; Simran Motwani; Yi Wen; Yiwen Song; Roshan Sumbaly; Vignesh Ramanathan; Zijian He; Peter Vajda; Devi Parikh", "journal": "", "ref_id": "b12", "title": "Emu: Enhancing image generation models using photogenic needles in a haystack", "year": "2023" }, { "authors": "Matt Deitke; Ruoshi Liu; Matthew Wallingford; Huong Ngo; Oscar Michel; Aditya Kusupati; Alan Fan; Christian Laforte; Vikram Voleti; Samir Yitzhak Gadre", "journal": "", "ref_id": "b13", "title": "Objaverse-XL: A universe of 10m+ 3d objects", "year": "2023" }, { "authors": "Matt Deitke; Dustin Schwenk; Jordi Salvador; Luca Weihs; Oscar Michel; Eli Vanderbilt; Ludwig Schmidt; Kiana Ehsani; Aniruddha Kembhavi; Ali Farhadi", "journal": "", "ref_id": "b14", "title": "Objaverse: A universe of annotated 3d objects", "year": "2023" }, { "authors": "Congyue Deng; Chiyu Jiang; Xinchen Charles R Qi; Yin Yan; Leonidas Zhou; Dragomir Guibas; Anguelov", "journal": "", "ref_id": "b15", "title": "Nerdi: Single-view nerf synthesis with language-guided diffusion as general image priors", "year": "2023" }, { "authors": "Emily Denton; Rob Fergus", "journal": "", "ref_id": "b16", "title": "Stochastic video generation with a learned prior", "year": "2018" }, { "authors": "Prafulla Dhariwal; Alex Nichol", "journal": "", "ref_id": "b17", "title": "Diffusion Models Beat GANs on Image Synthesis", "year": "2021" }, { "authors": "Michael Dorkenwald; Timo Milbich; Andreas Blattmann; Robin Rombach; Konstantinos G Derpanis; Björn Ommer", "journal": "", "ref_id": "b18", "title": "Stochastic image-to-video synthesis using cinns", "year": "2021" }, { "authors": "Laura Downs; Anthony Francis; Nate Koenig; Brandon Kinman; Ryan Hickman; Krista Reymann; Thomas B Mchugh; Vincent Vanhoucke", "journal": "IEEE", "ref_id": "b19", "title": "Google scanned objects: A high-quality dataset of 3d scanned household items", "year": "2022" }, { "authors": "E Arpad; Elo", "journal": "Arco Pub", "ref_id": "b20", "title": "The Rating of Chessplayers, Past and Present", "year": "1978" }, { "authors": "Patrick Esser; Robin Rombach; Björn Ommer", "journal": "", "ref_id": "b21", "title": "Taming transformers for high-resolution image synthesis", "year": "2020" }, { "authors": "Patrick Esser; Johnathan Chiu; Parmida Atighehchian; Jonathan Granskog; Anastasis Germanidis", "journal": "", "ref_id": "b22", "title": "Structure and content-guided video synthesis with diffusion models", "year": "2023" }, { "authors": "Gunnar Farnebäck", "journal": "", "ref_id": "b23", "title": "Two-frame motion estimation based on polynomial expansion", "year": "2003" }, { "authors": "Gereon Fox; Ayush Tewari; Mohamed Elgharib; Christian Theobalt", "journal": "", "ref_id": "b24", "title": "Stylevideogan: A temporal generative model using a pretrained stylegan", "year": "2021" }, { "authors": "Jean-Yves Franceschi; Edouard Delasalles; Mickaël Chen; Sylvain Lamprier; Patrick Gallinari", "journal": "", "ref_id": "b25", "title": "Stochastic latent residual video prediction", "year": "2020" }, { "authors": "Leo Gao; Stella 
Biderman; Sid Black; Laurence Golding; Travis Hoppe; Charles Foster; Jason Phang; Horace He; Anish Thite; Noa Nabeshima; Shawn Presser; Connor Leahy", "journal": "", "ref_id": "b26", "title": "The Pile: An 800gb dataset of diverse text for language modeling", "year": "2020" }, { "authors": "Songwei Ge; Thomas Hayes; Harry Yang; Xi Yin; Guan Pang; David Jacobs; Jia-Bin Huang; Devi Parikh", "journal": "Springer Nature Switzerland", "ref_id": "b27", "title": "Long video generation with time-agnostic vqgan and timesensitive transformer", "year": "2022" }, { "authors": "Seungjun Songwei Ge; Guilin Nah; Tyler Liu; Andrew Poon; Bryan Tao; David Catanzaro; Jia-Bin Jacobs; Ming-Yu Huang; Yogesh Liu; Balaji", "journal": "", "ref_id": "b28", "title": "Preserve your own correlation: A noise prior for video diffusion models", "year": "2023" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "Advances in neural information processing systems", "ref_id": "b29", "title": "Generative adversarial nets", "year": "2014" }, { "authors": "Jiaxi Gu; Shicong Wang; Haoyu Zhao; Tianyi Lu; Xing Zhang; Zuxuan Wu; Songcen Xu; Wei Zhang; Yu-Gang Jiang; Hang Xu", "journal": "", "ref_id": "b30", "title": "Reuse and diffuse: Iterative denoising for text-to-video generation", "year": "2023" }, { "authors": "Yuwei Guo; Ceyuan Yang; Anyi Rao; Yaohui Wang; Yu Qiao; Dahua Lin; Bo Dai", "journal": "", "ref_id": "b31", "title": "Animatediff: Animate your personalized text-to-image diffusion models without specific tuning", "year": "2023" }, { "authors": "Sonam Gupta; Arti Keshari; Sukhendu Das", "journal": "", "ref_id": "b32", "title": "Rv-gan: Recurrent gan for unconditional video generation", "year": "2022" }, { "authors": "Nicholas Guttenberg; Crosslabs ", "journal": "", "ref_id": "b33", "title": "Diffusion with offset noise", "year": "2023" }, { "authors": "Yingqing He; Tianyu Yang; Yong Zhang; Ying Shan; Qifeng Chen", "journal": "", "ref_id": "b34", "title": "Latent video diffusion models for highfidelity long video generation", "year": "2023" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b35", "title": "Classifier-free diffusion guidance", "year": "2021" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b36", "title": "Classifier-Free Diffusion Guidance", "year": "2022" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "", "ref_id": "b37", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Jonathan Ho; Chitwan Saharia; William Chan; David J Fleet; Mohammad Norouzi; Tim Salimans", "journal": "", "ref_id": "b38", "title": "Cascaded diffusion models for high fidelity image generation", "year": "2021" }, { "authors": "Jonathan Ho; William Chan; Chitwan Saharia; Jay Whang; Ruiqi Gao; Alexey Gritsenko; P Diederik; Ben Kingma; Mohammad Poole; David J Norouzi; Tim Fleet; Salimans", "journal": "", "ref_id": "b39", "title": "Imagen Video: High Definition Video Generation with Diffusion Models", "year": "2022" }, { "authors": "Jonathan Ho; William Chan; Chitwan Saharia; Jay Whang; Ruiqi Gao; Alexey Gritsenko; P Diederik; Ben Kingma; Mohammad Poole; David J Norouzi; Tim Fleet; Salimans", "journal": "", "ref_id": "b40", "title": "Imagen video: High definition video generation with diffusion models", "year": "2022" }, { "authors": "Jonathan Ho; Tim Salimans; Alexey Gritsenko; William Chan; Mohammad Norouzi; David J ", "journal": "", 
"ref_id": "b41", "title": "Fleet. Video diffusion models", "year": "2022" }, { "authors": "Wenyi Hong; Ming Ding; Wendi Zheng; Xinghan Liu; Jie Tang", "journal": "", "ref_id": "b42", "title": "Cogvideo: Large-scale pretraining for text-tovideo generation via transformers", "year": "2022" }, { "authors": "Emiel Hoogeboom; Jonathan Heek; Tim Salimans", "journal": "", "ref_id": "b43", "title": "simple diffusion: End-to-end diffusion for high resolution images", "year": "" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b44", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Aapo Hyvärinen; Peter Dayan", "journal": "Journal of Machine Learning Research", "ref_id": "b45", "title": "Estimation of Non-Normalized Statistical Models by Score Matching", "year": "2005" }, { "authors": "Gabriel Ilharco; Mitchell Wortsman; Ross Wightman; Cade Gordon; Nicholas Carlini; Rohan Taori; Achal Dave; Vaishaal Shankar; Hongseok Namkoong; John Miller; Hannaneh Hajishirzi; Ali Farhadi; Ludwig Schmidt", "journal": "", "ref_id": "b46", "title": "Openclip", "year": "2021" }, { "authors": "", "journal": "", "ref_id": "b47", "title": "Itseez. Open source computer vision library", "year": "2015-04-17" }, { "authors": "Heewoo Jun; Alex Nichol", "journal": "", "ref_id": "b48", "title": "Shap-e: Generating conditional 3d implicit functions", "year": "2023" }, { "authors": "Emmanuel Kahembwe; Subramanian Ramamoorthy", "journal": "Neural Networks", "ref_id": "b49", "title": "Lower dimensional kernels for video discriminators", "year": "2020" }, { "authors": "Tero Karras; Miika Aittala; Timo Aila; Samuli Laine", "journal": "", "ref_id": "b50", "title": "Elucidating the Design Space of Diffusion-Based Generative Models", "year": "2022" }, { "authors": "Levon Khachatryan; Andranik Movsisyan; Vahram Tadevosyan; Roberto Henschel; Zhangyang Wang; Shant Navasardyan; Humphrey Shi", "journal": "", "ref_id": "b51", "title": "Text2video-zero: Textto-image diffusion models are zero-shot video generators", "year": "2023" }, { "authors": "Diederik Kingma; Tim Salimans; Ben Poole; Jonathan Ho", "journal": "Advances in neural information processing systems", "ref_id": "b52", "title": "Variational diffusion models", "year": "2021" }, { "authors": "", "journal": "", "ref_id": "b53", "title": "Pika labs", "year": "2023" }, { "authors": "Alex X Lee; Richard Zhang; Frederik Ebert; Pieter Abbeel; Chelsea Finn; Sergey Levine", "journal": "", "ref_id": "b54", "title": "Stochastic adversarial video prediction", "year": "2018" }, { "authors": "Shanchuan Lin; Bingchen Liu; Jiashi Li; Xiao Yang", "journal": "", "ref_id": "b55", "title": "Common Diffusion Noise Schedules and Sample Steps are Flawed", "year": "2023" }, { "authors": "Ruoshi Liu; Rundi Wu; Basile Van Hoorick; Pavel Tokmakov; Sergey Zakharov; Carl Vondrick", "journal": "", "ref_id": "b56", "title": "Zero-1-to-3: Zero-shot one image to 3d object", "year": "2023" }, { "authors": "Yuan Liu; Cheng Lin; Zijiao Zeng; Xiaoxiao Long; Lingjie Liu; Taku Komura; Wenping Wang", "journal": "", "ref_id": "b57", "title": "Syncdreamer: Generating multiview-consistent images from a single-view image", "year": "2023" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b58", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Pauline Luc; Aidan Clark; Sander Dieleman; Diego De Las; Yotam Casas; Albin Doron; 
Karen Cassirer; Simonyan", "journal": "", "ref_id": "b59", "title": "Transformation-based adversarial video prediction on large-scale data", "year": "2020" }, { "authors": "Chenlin Meng; Robin Rombach; Ruiqi Gao; P Diederik; Stefano Kingma; Jonathan Ermon; Tim Ho; Salimans", "journal": "", "ref_id": "b60", "title": "On distillation of guided diffusion models", "year": "2023" }, { "authors": "Alex Nichol; Heewoo Jun; Prafulla Dhariwal; Pamela Mishkin; Mark Chen", "journal": "", "ref_id": "b61", "title": "Point-e: A system for generating 3d point clouds from complex prompts", "year": "2022" }, { "authors": "Guilherme Penedo; Quentin Malartic; Daniel Hesslow; Ruxandra Cojocaru; Alessandro Cappelli; Hamza Alobeidli; Baptiste Pannier; Ebtesam Almazrouei; Julien Launay", "journal": "", "ref_id": "b62", "title": "The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only", "year": "" }, { "authors": "Dustin Podell; Zion English; Kyle Lacey; Andreas Blattmann; Tim Dockhorn; Jonas Müller; Joe Penna; Robin Rombach", "journal": "", "ref_id": "b63", "title": "SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis", "year": "2023" }, { "authors": "Giovanni Puccetti; Maciej Kilian; Romain Beaumont", "journal": "LAION blog", "ref_id": "b64", "title": "Training contrastive captioners", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b65", "title": "Learning Transferable Visual Models From Natural Language Supervision", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "", "ref_id": "b66", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2019" }, { "authors": "Aditya Ramesh", "journal": "", "ref_id": "b67", "title": "How dall•e 2 works", "year": "2022" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b68", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b69", "title": "Hierarchical Text-Conditional Image Generation with CLIP Latents", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b70", "title": "High-Resolution Image Synthesis with Latent Diffusion Models", "year": "1920" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b71", "title": "High-resolution image synthesis with latent diffusion models", "year": "2021" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b72", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation", "year": "2015" }, { "authors": " Runwayml", "journal": "", "ref_id": "b73", "title": "Gen-2 by runway", "year": "2023" }, { "authors": "Chitwan Saharia; Jonathan Ho; William Chan; Tim Salimans; David J Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b74", "title": "Image super-resolution via iterative refinement", "year": "2021" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily Denton; 
Seyed Kamyar; Seyed Ghasemipour; Burcu Karagol Ayan; S Sara Mahdavi; Rapha Gontijo Lopes; Tim Salimans; Jonathan Ho; David J Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b75", "title": "Photorealistic text-toimage diffusion models with deep language understanding", "year": "2022" }, { "authors": "Masaki Saito; Eiichi Matsumoto; Shunta Saito", "journal": "", "ref_id": "b76", "title": "Temporal generative adversarial nets with singular value clipping", "year": "2017" }, { "authors": "Masaki Saito; Shunta Saito; Masanori Koyama; Sosuke Kobayashi", "journal": "International Journal of Computer Vision", "ref_id": "b77", "title": "Train sparsely, generate densely: Memoryefficient unsupervised training of high-resolution temporal gan", "year": "2020" }, { "authors": "Tim Salimans; Jonathan Ho", "journal": "", "ref_id": "b78", "title": "Progressive Distillation for Fast Sampling of Diffusion Models", "year": "2022" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; Mehdi Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b79", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Yichun Shi; Peng Wang; Jianglong Ye; Mai Long; Kejie Li; Xiao Yang", "journal": "", "ref_id": "b80", "title": "Mvdream: Multi-view diffusion for 3d generation", "year": "2023" }, { "authors": "Uriel Singer; Adam Polyak; Thomas Hayes; Xi Yin; Jie An; Songyang Zhang; Qiyuan Hu; Harry Yang; Oron Ashual; Oran Gafni; Devi Parikh; Sonal Gupta; Yaniv Taigman", "journal": "", "ref_id": "b81", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data", "year": "1920" }, { "authors": "Ivan Skorokhodov; Sergey Tulyakov; Mohamed Elhoseiny", "journal": "", "ref_id": "b82", "title": "Stylegan-v: A continuous video generator with the price, image quality and perks of stylegan2", "year": "2022" }, { "authors": "Jascha Sohl-Dickstein; Eric A Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "", "ref_id": "b83", "title": "Deep Unsupervised Learning using Nonequilibrium Thermodynamics", "year": "2015" }, { "authors": "Gowthami Somepalli; Vasu Singla; Micah Goldblum; Jonas Geiping; Tom Goldstein", "journal": "", "ref_id": "b84", "title": "Understanding and mitigating copying in diffusion models", "year": "2023" }, { "authors": "Yang Song; Stefano Ermon", "journal": "", "ref_id": "b85", "title": "Improved Techniques for Training Score-Based Generative Models", "year": "2020" }, { "authors": "Yang Song; Jascha Sohl-Dickstein; P Diederik; Abhishek Kingma; Stefano Kumar; Ben Ermon; Poole", "journal": "", "ref_id": "b86", "title": "Score-Based Generative Modeling through Stochastic Differential Equations", "year": "2020" }, { "authors": "Khurram Soomro; Mubarak Amir Roshan Zamir; Shah", "journal": "", "ref_id": "b87", "title": "Ucf101: A dataset of 101 human actions classes from videos in the wild", "year": "2012" }, { "authors": "Zachary Teed; Jia Deng", "journal": "Springer", "ref_id": "b88", "title": "Raft: Recurrent all-pairs field transforms for optical flow", "year": "2020" }, { "authors": "Yu Tian; Jian Ren; Menglei Chai; Kyle Olszewski; Xi Peng; Dimitris N Metaxas; Sergey Tulyakov", "journal": "", "ref_id": "b89", "title": "A good image generator is what you need for high-resolution video synthesis", "year": "2021" }, { "authors": "Suramya Tomar", "journal": "Linux Journal", "ref_id": "b90", 
"title": "Converting video formats with ffmpeg", "year": "2006" }, { "authors": "Arash Vahdat; Karsten Kreis; Jan Kautz", "journal": "", "ref_id": "b91", "title": "Score-based generative modeling in latent space", "year": "2021" }, { "authors": "Ruben Villegas; Jimei Yang; Seunghoon Hong; Xunyu Lin; Honglak Lee", "journal": "ICLR", "ref_id": "b92", "title": "Decomposing motion and content for natural video sequence prediction", "year": "2017" }, { "authors": "Ruben Villegas; Mohammad Babaeizadeh; Pieter-Jan Kindermans; Hernan Moraldo; Han Zhang; Mohammad Taghi Saffar; Santiago Castro; Julius Kunze; Dumitru Erhan", "journal": "", "ref_id": "b93", "title": "Phenaki: Variable length video generation from open domain textual description", "year": "2022" }, { "authors": "Vikram Voleti; Alexia Jolicoeur-Martineau; Christopher Pal", "journal": "", "ref_id": "b94", "title": "Mcvd: Masked conditional video diffusion for prediction, generation, and interpolation", "year": "2022" }, { "authors": "Carl Vondrick; Hamed Pirsiavash; Antonio Torralba", "journal": "", "ref_id": "b95", "title": "Generating videos with scene dynamics", "year": "2016" }, { "authors": "Jiuniu Wang; Hangjie Yuan; Dayou Chen; Yingya Zhang; Xiang Wang; Shiwei Zhang", "journal": "", "ref_id": "b96", "title": "Modelscope text-to-video technical report", "year": "2023" }, { "authors": "Yaohui Wang; Piotr Bilinski; Francois Bremond", "journal": "", "ref_id": "b97", "title": "Antitza Dantcheva. G3an: Disentangling appearance and motion for video generation", "year": "2020" }, { "authors": "Yaohui Wang; Xinyuan Chen; Xin Ma; Shangchen Zhou; Ziqi Huang; Yi Wang; Ceyuan Yang; Yinan He; Jiashuo Yu; Peiqing Yang", "journal": "", "ref_id": "b98", "title": "High-quality video generation with cascaded latent diffusion models", "year": "2023" }, { "authors": "Yi Wang; Yinan He; Yizhuo Li; Kunchang Li; Jiashuo Yu; Xin Ma; Xinyuan Chen; Yaohui Wang; Ping Luo; Ziwei Liu; Yali Wang; Limin Wang; Yu Qiao", "journal": "", "ref_id": "b99", "title": "Internvid: A large-scale video-text dataset for multimodal understanding and generation", "year": "2023" }, { "authors": "Daniel Watson; William Chan; Ricardo Martin-Brualla; Jonathan Ho; Andrea Tagliasacchi; Mohammad Norouzi", "journal": "", "ref_id": "b100", "title": "Novel view synthesis with diffusion models", "year": "2022" }, { "authors": "Dirk Weissenborn; Oscar Täckström; Jakob Uszkoreit", "journal": "", "ref_id": "b101", "title": "Scaling autoregressive video models", "year": "2020" }, { "authors": "Chenfei Wu; Lun Huang; Qianxi Zhang; Binyang Li; Lei Ji; Fan Yang; Guillermo Sapiro; Nan Duan", "journal": "", "ref_id": "b102", "title": "Godiva: Generating open-domain videos from natural descriptions", "year": "2021" }, { "authors": "Chenfei Wu; Jian Liang; Lei Ji; Fan Yang; Yuejian Fang; Daxin Jiang; Nan Duan", "journal": "Springer", "ref_id": "b103", "title": "Nüwa: Visual synthesis pretraining for neural visual world creation", "year": "2022" }, { "authors": "Hu Xu; Saining Xie; Ellen Xiaoqing; Po-Yao Tan; Russell Huang; Vasu Howes; Shang-Wen Sharma; Gargi Li; Luke Ghosh; Christoph Zettlemoyer; Feichtenhofer", "journal": "", "ref_id": "b104", "title": "Demystifying clip data", "year": "2023" }, { "authors": "Jun Xu; Tao Mei; Ting Yao; Yong Rui", "journal": "", "ref_id": "b105", "title": "Msr-vtt: A large video description dataset for bridging video and language", "year": "2016" }, { "authors": "Wilson Yan; Yunzhi Zhang; Pieter Abbeel; Aravind Srinivas", "journal": "", "ref_id": "b106", "title": 
"Videogpt: Video generation using vq-vae and transformers", "year": "2021" }, { "authors": "Jiahui Yu; Zirui Wang; Vijay Vasudevan; Legg Yeung; Mojtaba Seyedhosseini; Yonghui Wu", "journal": "", "ref_id": "b107", "title": "Coca: Contrastive captioners are image-text foundation models", "year": "2022" }, { "authors": "Keunwoo Peter; Yu ", "journal": "", "ref_id": "b108", "title": "Videoblip", "year": "2023" }, { "authors": "Sihyun Yu; Jihoon Tack; Sangwoo Mo; Hyunsu Kim; Junho Kim; Jung-Woo Ha; Jinwoo Shin", "journal": "", "ref_id": "b109", "title": "Generating videos with dynamics-aware implicit generative adversarial networks", "year": "2022" }, { "authors": "Xianggang Yu; Mutian Xu; Yidan Zhang; Haolin Liu; Chongjie Ye; Yushuang Wu; Zizheng Yan; Chenming Zhu; Zhangyang Xiong; Tianyou Liang", "journal": "", "ref_id": "b110", "title": "Mvimgnet: A large-scale dataset of multi-view images", "year": "2023" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b111", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "Shiwei Zhang; Jiayu Wang; Yingya Zhang; Kang Zhao; Hangjie Yuan; Zhiwu Qin; Xiang Wang; Deli Zhao; Jingren Zhou", "journal": "", "ref_id": "b112", "title": "I2vgen-xl: High-quality image-to-video synthesis via cascaded diffusion models", "year": "2023" }, { "authors": "Yabo Zhang; Yuxiang Wei; Dongsheng Jiang; Xiaopeng Zhang; Wangmeng Zuo; Qi Tian", "journal": "", "ref_id": "b113", "title": "Controlvideo: Training-free controllable text-to-video generation", "year": "2023" }, { "authors": "Daquan Zhou; Weimin Wang; Hanshu Yan; Weiwei Lv; Yizhe Zhu; Jiashi Feng", "journal": "", "ref_id": "b114", "title": "Magicvideo: Efficient video generation with latent diffusion models", "year": "2022" }, { "authors": "Zhizhuo Zhou; Shubham Tulsiani", "journal": "", "ref_id": "b115", "title": "Sparsefusion: Distilling view-conditioned diffusion for 3d reconstruction", "year": "2023" }, { "authors": "E ", "journal": "D.2 and Secs", "ref_id": "b116", "title": "Additional Samples Here, we show additional samples for the models introduced in App", "year": "" }, { "authors": "E ", "journal": "", "ref_id": "b117", "title": "1 Additional Text-to-Video Samples In Figure 18", "year": "" } ]
[ { "formula_coordinates": [ 16, 50.11, 412.35, 463.8, 162.66 ], "formula_id": "formula_0", "formula_text": "✓ ✓ ✓ ✓ ✗ ✓ ✗ ✓ Figure 11." }, { "formula_coordinates": [ 18, 222.29, 540.79, 322.83, 9.65 ], "formula_id": "formula_1", "formula_text": "dx = -σ(t)σ(t)∇ x log p(x; σ(t)) dt,(1)" }, { "formula_coordinates": [ 18, 319.41, 569.93, 225.71, 11.23 ], "formula_id": "formula_2", "formula_text": "∇ x log p(x; σ) ≈ s θ (x; σ) = (D θ (x; σ) -x)/σ 2 [51]," }, { "formula_coordinates": [ 18, 170.21, 612.1, 374.9, 12.7 ], "formula_id": "formula_3", "formula_text": "E (x0,c)∼p data (x0,c),(σ,n)∼p(σ,n) λ σ ∥D θ (x 0 + n; σ, c) -x 0 ∥ 2 2 ,(2)" }, { "formula_coordinates": [ 18, 184.82, 700.61, 360.29, 9.65 ], "formula_id": "formula_4", "formula_text": "D θ (x; σ) = c skip (σ)x + c out (σ)F θ (c in (σ)x; c noise (σ)),(3)" }, { "formula_coordinates": [ 19, 202.26, 117.73, 342.86, 11.03 ], "formula_id": "formula_5", "formula_text": "D w (x; σ, c) = wD(x; σ, c) -(w -1)D(x; σ),(4)" }, { "formula_coordinates": [ 19, 235.03, 232.43, 310.08, 12.69 ], "formula_id": "formula_6", "formula_text": "c SD2.1 skip (σ) = 1,(5)" }, { "formula_coordinates": [ 19, 235.03, 249.12, 310.08, 12.69 ], "formula_id": "formula_7", "formula_text": "c SD2.1 out (σ) = -σ ,(6)" }, { "formula_coordinates": [ 19, 235.03, 265.18, 310.08, 23.48 ], "formula_id": "formula_8", "formula_text": "c SD2.1 in (σ) = 1 √ σ 2 + 1 ,(7)" }, { "formula_coordinates": [ 19, 235.03, 291.98, 310.08, 34.95 ], "formula_id": "formula_9", "formula_text": "c SD2.1 noise (σ) = arg min j∈[1000] (σ -σ j ) ,(8) (9)" }, { "formula_coordinates": [ 19, 249.53, 428.89, 295.58, 13.76 ], "formula_id": "formula_10", "formula_text": "c skip (σ) = σ 2 + 1 -1 ,(10)" }, { "formula_coordinates": [ 19, 252.05, 446.38, 293.06, 23.48 ], "formula_id": "formula_11", "formula_text": "c out (σ) = -σ √ σ 2 + 1 ,(11)" }, { "formula_coordinates": [ 19, 256.88, 472.79, 288.24, 23.48 ], "formula_id": "formula_12", "formula_text": "c in (σ) = 1 √ σ 2 + 1 ,(12)" }, { "formula_coordinates": [ 19, 246.21, 501.15, 298.91, 23.9 ], "formula_id": "formula_13", "formula_text": "c noise (σ) = 0.25 log σ,(13) (14)" }, { "formula_coordinates": [ 22, 254.93, 549.66, 290.18, 25 ], "formula_id": "formula_14", "formula_text": "E 1 = 1 1 + 10 R 2 -R 1 400 ,(15)" }, { "formula_coordinates": [ 22, 254.93, 577.17, 286.04, 25 ], "formula_id": "formula_15", "formula_text": "E 2 = 1 1 + 10 R 1 -R 2 400 . (16" }, { "formula_coordinates": [ 22, 540.96, 584.22, 4.15, 8.64 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 22, 217.8, 638.9, 327.31, 14.34 ], "formula_id": "formula_17", "formula_text": "R ′ i = R i + K • (S i -E i ) , i ∈ {1, 2}(17)" } ]
2023-12-04
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b7", "b9", "b11", "b6", "b12" ], "table_ref": [], "text": "Precision agriculture relies heavily on the accuracy of crop-type maps, as they serve as the foundation for informed decision-making in farming practices Becker-Reshef et al. [2023]. High-quality croptype maps enable farmers to optimize resource allocation, monitor crop health, and maximize yields while minimizing environmental impacts. However, generating accurate crop-type maps is a resourceintensive and expensive endeavor, often requiring laborious manual annotation or sophisticated supervised deep learning models. Therefore, there is an ongoing effort at the confluence of precision agriculture and deep learning to develop efficient and reliable automated methods for crop-type map prediction using abundant remote sensing satellite imagery Qadeer et al. [2021]. Meta AI's state-of-the-art Segment Anything Model (SAM) Kirillov et al. [2023] has garnered significant attention for its remarkable performance in automatically segmenting various types of images, including natural scenes, medical images, and satellite images Mazurowski et al. [2023], Wang et al. [2023], Jing et al. [2023]. SAM, with its prompt-based interface and automatic mask generator 2.3, has showcased impressive results even in zero-shot settings. Nevertheless, applying SAM to the challenging task of predicting crop-type maps presents unique challenges.\nSAM is limited to images of up to 3 channels and was trained on an extensive dataset of RGB images. One of the primary difficulties lies in this inherent limitations of using only the RGB spectra of a rich, multi-spectral satellite imagery stack. Distinguishing between different crop types using only spectral information from RGB channels is challenging as crops often exhibit similar color characteristics, especially during early growth stages. Moreover, crop-type maps are traditionally produced using the temporal evolution of the normalized difference vegetation index (NDVI) over the whole growing season Wei et al. [2023], Ghosh et al. [2021b] and not just using an RGB snapshot of the crop fields at a single moment in time. Furthermore, SAM's class-agnostic nature complicates the direct application of its zero-shot automatic mask generator to generate crop-type maps as, unlike typical image segmentation models, it does not provide labels for pixels and instead outputs a set of boolean masks. This paper seeks to investigate these challenges by proposing the use of clustering consensus metrics to quantify SAM's zero-shot performance on the task. While direct crop-type map generation may be challenging, we envision leveraging SAM's strengths to produce fast and accurate shape maps outlining individual fields within a large agricultural area of interest in a satellite image. These shape maps, despite not directly representing crop types, can serve as a valuable foundation for subsequent crop type classification and map generation processes.\nIn this paper, we will present the methodology and experiments conducted to assess SAM's performance, highlighting the insights gained from using clustering consensus metrics. The rest of this paper is organized as follows -In Section 2, we setup the preliminaries for a brief overview of crop-type mapping using remote sensing imagery and the Segment Anything model, followed by our experimental setup and analysis in Section 3. 
Finally, we conclude with our findings and specify some future directions in Section 4." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b1", "b5" ], "table_ref": [], "text": "Terminology:\n1. AOI: Area of Interest; depending on the spatial resolution at which the satellite captures data, each pixel in the AOI represents physical land area (typically measured in meters 2 ).\n2. NDVI: Normalized difference vegetation index; it is a measure of \"greenness\" which quantifies vegetation by measuring the difference between near-infrared (which vegetation strongly reflects) and red light (which vegetation absorbs).\nN DV I Sentinel2 = (B8 -B4) (B8 + B4)\nwhere B8: near-infrared band and B4: red band of the Sentinel-2 satellite measurements.\n2.1 Crop Data Layer (CDL): Data Product for high-quality crop-type maps\nThe Cropland Data Layer (CDL) Boryan et al. [2011], hosted on CropScape Han et al. [2012], provides a raster, geo-referenced, crop-specific land cover map for the continental United States.\nCDL is an annual data product created at the end of the growing season by the US Department of Agriculture (USDA), and National Agricultural Statistics Service (NASS). It is produced at a 30 m resolution and provides pixel-level classification across several hundred crop-types grown in the US." }, { "figure_ref": [], "heading": "Sentinel-2 Satellite Imagery", "publication_ref": [ "b2" ], "table_ref": [], "text": "The European Space Agency (ESA) provides an open release of the multi-spectral spatio-temporal earth observation data captured by their Sentinel-2 satellites Drusch et al. [2012] at 10 m, 20 m and 60 m resolutions across visible, near infrared, and short wave infrared bands of the spectrum. In Ghosh et al. [2021a], the authors release the CalCrop21 dataset, which contains a portion of the Sentinel-2 data as well as the corresponding CDL (available here). The datasets consists of 367 tiles (i.e. samples), each representing 1098 pixels x 1098 pixels AOIs spanning agricultural fields of Central Valley, California. Each sample represents the multi-spectral spatio-temporal stack of the AOI for the entire growing season of the year 2018. We will use this dataset for our analyses in this paper. See section 3.1 for more details." }, { "figure_ref": [], "heading": "Automatic mask generation with SAM", "publication_ref": [ "b7" ], "table_ref": [], "text": "SAM is the latest amongst the so-called foundation models for the crucial computer vision task of image segmentation (i.e. pixel classification). It is trained on the massive SA-1B dataset Kirillov et al. [2023] consisting of 11 million RGB images and 1 billion masks resulting in its state-of-the-art performance in zero-shot setting on a variety of tasks. However, there are a few key ways in which SAM differs from traditional segmentation models that play a crucial role in its proposed use-case for our task -1. Zero-shot inference: SAM has been trained on a massive dataset and can be used in the so-called zero-shot setting i.e. without training or fine-tuning on a large, task-specific dataset.\n2. Prompt-based interface: SAM is designed to segment objects in an input image based on a set of prompts. The prompts can be in form of points and/or boxes that a user can provide for a given image which can guide the model to isolate and segment objects in/around the prompted region in the input image.\n3. Lack of class labels: SAM is class-agnostic. 
Its output is a set of boolean masks and it does not identify the objects it segments with any semantic class labels.\nIn its segment everything setting, lacking any user provided prompts indicating a region-of-interest within the image to be segmented, SAM's Automatic Mask Generator (AMG) returns a set of boolean masks (and associated metadata like predicted Intersection over Union (IoU) score) given an input image by prompting the model with a grid of uniformly distributed point prompts. There are a few tunable parameters available in the AMG that are relevant to our analysis -1. Points per side (PPS): The number of points to be sampled along one side of the image. The total number of points is PPS 2 . Higher PPS values ensure more unambiguous point prompts available per region in the input image at the cost of higher mask generation time.\n2. Minimum Mask Region Area (MMRA): Removes disconnected regions and holes in masks with area smaller than the MMRA value." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "A sample in the CalCrop21 dataset is a 4-D tensor X ∈ R T ×W ×H×C , Y ∈ I W ×H where W = image width, H = image height, T = no. of timesteps, C = no. of spectral channels representing the multispectral spatio-temporal stack (X) that spans a 1098 x 1098 pixels AOI over 24 timesteps and across 10 spectral channels and the corresponding CDL (Y ). Whereas, SAM is limited to upto 3 channel inputs and was trained on the massive SA-1B dataset of 11 million RGB images. Thus, we are limited to using the red, green and blue channels from our samples. Furthermore, out of the 24 timesteps available, we choose the timestep where the NDVI is maximum to produce a temporal snapshot of the crop fields when the crops are at peak \"greenness\". With this limited choices, we compute X RGB . X RGB = X(t max , :, :, [i, j, k]) where X RGB ∈ R H×W ×3 such that t max ∈ T is the maximum NDVI timestep and i, j, k ∈ C correspond to the red, green, blue channels respectively. We use the ground-truth (Y ) as is. Thus, a sample representing the input image provided to SAM and the corresponding ground-truth CDL used to evaluate the prediction performance are (X RGB , Y ). See figure 1; the left and middle plots show an example input image and the corresponding ground-truth CDL.\nNote: The CalCrop21 dataset has 367 samples, however, after computing X RGB we have deemed 20 of those tiles unusable due to the cloud cover present at the maximum NDVI timestep (t max )." }, { "figure_ref": [], "heading": "Testing SAM's Automatic Mask Generator for crop-maps prediction", "publication_ref": [], "table_ref": [], "text": "Crop-map prediction requires us to assign a crop-type label to each pixel in the image. Traditionally, this translates to the popular multi-class semantic segmentation task that's well-studied in computer vision. However, SAM is class-agnostic. As described in section 2.3, when used in zero-shot, segment-everything, uniformly-prompted setting, it produces a set of boolean masks and we cannot know the one-to-one mapping between them and the crop-type classes in the groundtruth CDL. Therefore, we cannot compute the traditional evaluation metrics used in supervised learning settings viz. accuracy, dice coefficient, Intersection-Over-Union (IoU). 
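To make the construction of X_RGB above concrete, here is a minimal sketch that selects the maximum-NDVI timestep from the (T, H, W, C) Sentinel-2 stack (NDVI = (B8 − B4)/(B8 + B4) as defined in Section 2), assembles an RGB snapshot, and runs SAM's automatic mask generator with the PPS and MMRA knobs from Section 2.3. The band positions in the stack, the use of the tile-mean NDVI to pick t_max, the 8-bit scaling, the checkpoint path, and the particular PPS/MMRA values are illustrative assumptions, not settings from the paper.

```python
# Sketch: pick the max-NDVI timestep from the (T, H, W, C) Sentinel-2 stack,
# build an RGB snapshot, and run SAM's automatic mask generator on it.
# Band indices, 8-bit scaling, checkpoint path, and PPS/MMRA values are
# illustrative assumptions.
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

RED, GREEN, BLUE, NIR = 2, 1, 0, 6  # assumed positions of B4, B3, B2, B8 in the stack

def max_ndvi_rgb(stack):
    """stack: (T, H, W, C) reflectance array -> (H, W, 3) uint8 RGB at the greenest timestep."""
    ndvi = (stack[..., NIR] - stack[..., RED]) / (stack[..., NIR] + stack[..., RED] + 1e-6)
    t_max = int(np.argmax(ndvi.mean(axis=(1, 2))))   # timestep with peak tile-mean NDVI
    rgb = stack[t_max][..., [RED, GREEN, BLUE]].astype(np.float32)
    rgb = np.clip(rgb / rgb.max(), 0.0, 1.0)          # naive scaling for an 8-bit image
    return (rgb * 255).astype(np.uint8)

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # assumed local checkpoint
amg = SamAutomaticMaskGenerator(sam,
                                points_per_side=32,        # PPS
                                min_mask_region_area=100)  # MMRA, in pixels
# masks = amg.generate(max_ndvi_rgb(stack))  # list of dicts with 'segmentation', 'predicted_iou', ...
```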
To evaluate the quality of the predicted masks compared to the ground-truth mask, first, we have to post-process this collection of boolean masks into a single multi-class mask2 .\nFurthermore, there can be different no. of unique classes/labels in the ground-truth CDL compared to the predicted post-processed multi-class mask. Therefore, we have chosen to use clustering consensus metrics to quantify the agreement between the ground truth and predicted post-processed multi-class mask 3 . We flatten the ground-truth and predicted multi-class masks then treat them as two sets of clusterings of the pixels in the input image. A consensus metric would thus quantify agreement between the two sets, providing us an indirect measure of how closely SAM can predict the ground-truth CDL using just RGB images of the crop fields. We evaluate the multi-class mask that we derive from SAM's output on a variety of different clustering consensus metrics across varying √ AOI with respect to prompts density and minimum mask region area as a fraction of image length and image area respectively.\nObservation: Preliminary testing suggests that SAM can segment a semantically identical region (i.e. belonging to a single ground-truth class) as a single mask if that region remains spatially contiguous and occupies a relatively large fraction of the AOI in the input image (See Appendix C). We'll term this type of samples easier for SAM to segment as expected4 .\nThe original tiles in the CalCrop21 dataset have √ AOI = 1098 pixels and we have created sub-tiles with 2x, 4x and 8x smaller" }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "√", "publication_ref": [], "table_ref": [], "text": "AOIs with an overlapping sliding window over the original tiles to create easier samples as the √ AOI decreases. This results in samples with sub-tile dimensions (549 x 549), (274 x 274) and (137 x 137) respectively. We randomly select 300 samples from each set to perform our analysis. As shown in figure 3.2, we observe some indicative trends across 4 different metrics. There are two types of trends to be noted -1. overall trend across varying √ AOI, 2. trend for a specific √ AOI across varying PPS%.\nFowlkes-Mallows Index (FMI): FMI measures the geometric mean of precision and recall between the ground-truth and predicted clusters. FMI is sensitive to the number of true positive pairs and penalizes both false positives and false negatives. FMI ranges from 0 (random) to 1 (perfect consensus). As shown in figure 3.2 (A), the relative decrease in mean FMI at increasing values of √ AOI suggests that the predicted clusters are becoming less accurate in terms of both false positives (pairs that are in the same predicted cluster but not in the same ground-truth cluster) and false negatives (pairs that are in the same ground-truth cluster but not in the same predicted cluster). The relative decrease in FMI for a given" }, { "figure_ref": [ "fig_2" ], "heading": "√", "publication_ref": [], "table_ref": [], "text": "AOI at increasing values of PPS% can be potentially explained by insufficient prompts density at the lower PPS% leading to singular large mask that lowers the precision and/or recall.\nAdjusted Rand Index (ARI): ARI quantifies the agreement between pairs of data points in terms of whether they are in the same or different clusters in both ground-truth and predicted clusterings while accounting for chance. ARI ranges from -1 (no agreement) to 1 (perfect agreement), with 0 indicating random agreement. 
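A minimal sketch of the two steps just described: collapsing SAM's boolean masks into a single integer-labelled mask and scoring it against the flattened ground-truth CDL with clustering consensus metrics (here via scikit-learn). Painting larger masks first and leaving uncovered pixels as label 0 are assumptions, since the exact post-processing is only referenced in a footnote.

```python
# Sketch: collapse SAM's boolean masks into one integer-labelled mask and score it
# against the flattened CDL with clustering consensus metrics. Painting larger masks
# first and leaving uncovered pixels as label 0 are assumptions.
import numpy as np
from sklearn.metrics import (fowlkes_mallows_score, adjusted_rand_score,
                             normalized_mutual_info_score, v_measure_score)

def masks_to_labels(masks, shape):
    """masks: list of SAM output dicts -> (H, W) int array; 0 = not covered by any mask."""
    labels = np.zeros(shape, dtype=np.int32)
    for i, m in enumerate(sorted(masks, key=lambda m: m["area"], reverse=True), start=1):
        labels[m["segmentation"]] = i      # smaller masks painted later take precedence
    return labels

def consensus_scores(cdl, pred_labels):
    """Treat the flattened CDL and predicted mask as two clusterings of the pixels."""
    y_true, y_pred = cdl.ravel(), pred_labels.ravel()
    return {
        "FMI": fowlkes_mallows_score(y_true, y_pred),
        "ARI": adjusted_rand_score(y_true, y_pred),
        "NMI": normalized_mutual_info_score(y_true, y_pred),
        "V-Measure": v_measure_score(y_true, y_pred),
    }
```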
As shown in figure 3.2 (C), the relative decrease in mean ARI at increasing values of √ AOI indicates that the clustering is poor at capturing the overall structure of the data w.r.t. the ground-truth. The relative increase in mean ARI for a given" }, { "figure_ref": [ "fig_2" ], "heading": "√", "publication_ref": [], "table_ref": [], "text": "AOI at increasing values of PPS% suggests that the lowest prompt density value is not sufficient but the performance plateaus at the higher prompt densities suggesting no further gains can be made in terms of pairwise agreements between pixels in terms of cluster membership in ground-truth versus the prediction.\nV-Measure: V-Measure is the harmonic mean of homogeneity and completeness, capturing both the quality of individual clusters and how well they cover the ground-truth classes. Homogeneity measures whether each cluster contains only data points that are members of a single ground-truth class. Completeness measures whether all data points that are members of a given ground-truth class are assigned to the same cluster. As shown in figure 3.2 (D), the mean V-Measure remains the same across increasing values of √ AOI indicates that the predicted clusters are not becoming more homogeneous and complete as the √ AOI increases. The relative increase in mean V-Measure for a given" }, { "figure_ref": [ "fig_2", "fig_2", "fig_2" ], "heading": "√", "publication_ref": [], "table_ref": [], "text": "AOI at increasing values of PPS% suggests that the lowest prompt density value is not sufficient, but the performance plateaus at the higher prompt densities suggesting no further gains can be made in terms of homogeneity and completeness.\nNormalized Mutual Information (NMI): NMI measures the mutual information between the ground-truth and predicted clusters, normalized by entropy terms. It quantifies the amount of information shared between the two clusterings. NMI ranges from 0 (no mutual information) to 1 (perfect agreement). NMI is correlated to V-Measure, so as shown in figure 3.2(B), we see similar trends at differing √ AOI and PPS%. Overall, the average clustering consensus is low across all metrics with really long tails which are indicative of outlier samples that align very well (or misalign terribly) with the ground-truth purely by chance across different AOIs (see figure 3). The highest scoring samples at the smallest AOI tend to have a few large semantically identical regions that remain spatially contiguous and therefore get segmented in a way that aligns with the ground-truth CDL (see figure 3, leftmost plots)." }, { "figure_ref": [ "fig_4", "fig_0" ], "heading": "SAM's Automatic Mask Generator for rapid, automatic crop fields shape-maps generation", "publication_ref": [ "b1", "b13", "b8" ], "table_ref": [], "text": "The CDL is produced as a pixel-level classification by decision tree classifiers using the timeseries and derived features of the temporal evolution of the NDVI information captured by LandSat satellite imagery Boryan et al. [2011]. CDL is a high quality data product, however due to this pixel-level classification, which essentially lacks the spatial neighborhood context that modern deep convolutional neural networks use to great success for image processing tasks, the CDL has pixel noise (see figures 1 and 6, middle column). 
In modern efforts towards training a deep learning model to automatically predict the CDL using the satellite imagery input, a critical preprocessing step in producing good training examples is denoising the CDL of this pixel noise. There are various proposed methods for this preprocessing, ranging from manual approaches to more automatic supervised learning-based approaches Ghosh et al. [2021a], Zhang et al. [2020], Lin et al. [2022]. One convenient way to perform this preprocessing swiftly and accurately is to do field-level aggregation of the crop types, the reasoning being that farmers typically plant a single crop in a field. To perform this field-level aggregation, shape maps of the crop fields in an AOI are used. For the state of California, these shape maps are produced by the California Department of Water Resources annually via on-site surveys as described here (see figure 5). Based on the analysis presented in this paper, we can envision a promising use-case for SAM to produce these shape maps of crop fields, as the class-agnostic nature of zero-shot SAM's automatic mask generation applies more naturally to accelerating shape-map generation. As shown in figure 1 (right column), SAM is successful at identifying individual crop fields in an AOI separated by field borders, roads, waterways or other structures, even though it struggles to identify semantically identical crop fields (i.e. fields containing the same crop-type) in an AOI as a single mask. Moreover, the resultant shape map is also low in pixel noise.
As the CalCrop21 dataset does not provide the shape files corresponding to the samples, we cannot perform a quantitative analysis for this proposed use-case and can only provide a qualitative assessment. Therefore, we leave the quantitative analysis to future work." }, { "figure_ref": [], "heading": "Conclusions and Future Directions", "publication_ref": [], "table_ref": [], "text": "Our findings in this paper indicate that, while direct crop-type map generation using SAM's automatic mask generator (AMG) with uniformly distributed prompts is infeasible, we foresee a promising alternative in using it for shape-map generation instead. Our experiments demonstrate that SAM can be a valuable tool for producing fast and noise-free shape maps outlining individual fields within a large agricultural AOI in a satellite image. These shape maps, which are currently created manually as an annual data product, while not directly representing crop types, can serve as a foundational step in the crop-type map generation process. Although SAM's AMG enables swift annotations for features/objects of interest in the AOI, currently the class-agnostic output limits us from predicting "true" multi-class masks where there is a one-to-one correspondence between the ground-truth and predicted labels. For a future direction, we can envision a use-case where we use the ground-truth CDL to prompt SAM in a "CDL-informed" fashion one crop-type at a time and consolidate the classwise binary masks into a "true" multi-class mask.
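The field-level aggregation described in Section 3.3 and the "CDL-informed" consolidation suggested above could, for example, take the following form: treat each SAM mask as a field footprint and assign it the majority CDL label it covers. The majority-vote rule and passing original CDL values through for uncovered pixels are assumptions, not steps prescribed by the paper.

```python
# Sketch: use SAM's field masks as aggregation units and assign each field the
# majority CDL label inside it, giving a field-level (denoised) crop-type map.
# Majority voting and keeping CDL values for uncovered pixels are assumptions.
import numpy as np

def field_level_crop_map(cdl, masks):
    """cdl: (H, W) int ground-truth labels; masks: SAM output dicts -> (H, W) crop map."""
    out = cdl.copy()                       # uncovered pixels keep their original CDL label
    for m in masks:
        field = m["segmentation"]          # boolean (H, W) footprint of one field
        if not field.any():
            continue
        values, counts = np.unique(cdl[field], return_counts=True)
        out[field] = values[np.argmax(counts)]   # majority crop type within the field
    return out
```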
In conclusion, our work provides steps towards bridging the gap between state-of-the-art image segmentation models like SAM and the specific needs of the agriculture industry, offering a potential avenue for more efficient and cost-effective tools for precision agriculture practices. Ultimately, our work contributes to the development of innovative solutions that enhance sustainability and productivity in farming while addressing the challenges of producing high-quality crop-type maps. " }, { "figure_ref": [], "heading": "B Effect of varying MMRA on clustering consensus", "publication_ref": [], "table_ref": [], "text": "As shown in figure7, the overall trend in FMI (and to a lesser extent ARI) shows decreasing consensus over increasing √ AOI. However, for a given √ AOI, unlike prompt density (PPS%), varying the minimum mask region area (MMRA%) to eliminate smaller masks did not demonstrate any effect across our experiments. " }, { "figure_ref": [], "heading": "C Some high and low scoring examples", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "* Research was supported by the Agriculture and Food Research Initiative Competitive Grant no. 2020-69012-31914 from the USDA National Institute of Food and Agriculture and by the National Science Foundation CREST Center for Multidisciplinary Research Excellence in Cyber-Physical Infrastructure Systems (MECIS) grant no. 2112650. (NeurIPS 2023)." } ]
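As a purely hypothetical illustration of the "CDL-informed" prompting direction mentioned above, the helper below samples point prompts from the ground-truth CDL one crop type at a time through SAM's `SamPredictor` interface and consolidates the class-wise binary masks into a multi-class mask. It is a sketch of the idea, not an implementation evaluated in this paper, and all names and parameter values are assumptions.

```python
import numpy as np
from segment_anything import SamPredictor

def cdl_informed_mask(sam, rgb_image: np.ndarray, cdl: np.ndarray,
                      points_per_class: int = 5, seed: int = 0) -> np.ndarray:
    """Prompt SAM one crop type at a time with points sampled from the CDL and
    consolidate the class-wise binary masks into a single multi-class mask."""
    rng = np.random.default_rng(seed)
    predictor = SamPredictor(sam)
    predictor.set_image(rgb_image)                       # HxWx3 uint8 RGB
    out = np.zeros_like(cdl)
    for crop_class in np.unique(cdl):
        if crop_class == 0:
            continue                                     # assume 0 is background/no-data
        ys, xs = np.nonzero(cdl == crop_class)
        idx = rng.choice(len(ys), size=min(points_per_class, len(ys)), replace=False)
        coords = np.stack([xs[idx], ys[idx]], axis=1)    # SAM expects (x, y) pixel coords
        masks, _, _ = predictor.predict(
            point_coords=coords.astype(np.float32),
            point_labels=np.ones(len(coords), dtype=np.int32),
            multimask_output=False,
        )
        out[masks[0]] = crop_class                       # masks: 1xHxW boolean array
    return out
```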
Climate change is increasingly disrupting worldwide agriculture, making global food production less reliable. To tackle the growing challenges in feeding the planet, cutting-edge management strategies, such as precision agriculture, empower farmers and decision-makers with rich and actionable information to increase the efficiency and sustainability of their farming practices. Crop-type maps are key information for decision-support tools but are challenging and costly to generate. We investigate the capabilities of Meta AI's Segment Anything Model (SAM) for the crop-type map prediction task, acknowledging its recent successes at zero-shot image segmentation. However, SAM is limited to inputs of up to 3 channels, and its zero-shot usage is class-agnostic in nature, which poses unique challenges to using it directly for crop-type mapping. We propose using clustering consensus metrics to assess SAM's zero-shot performance in segmenting satellite imagery and producing crop-type maps. Although direct crop-type mapping is challenging using SAM in the zero-shot setting, our experiments reveal SAM's potential for swiftly and accurately outlining fields in satellite images, serving as a foundation for subsequent crop classification. This paper highlights a use-case of state-of-the-art image segmentation models like SAM for crop-type mapping and the related specific needs of the agriculture industry, offering a potential avenue for automatic, efficient, and cost-effective data products for precision agriculture practices.
Can SAM recognize crops? Quantifying the zero-shot performance of a semantic segmentation foundation model on generating crop-type maps using satellite imagery for precision agriculture *
[ { "figure_caption": "Figure 1 :1Figure 1: (left) Examples of input images (AOI = 1098 x 1098) created using the red-green-blue channels at the maximum NDVI timestep from the 4D multispectral spatiotemporal imagery stack from Sentinel-2 satellite, (middle) the ground-truth crop-type maps (CDL) depicting crop types and other related classes, (right) zero-shot predicted masks using SAM's automatic mask generator.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Aggregate plots of clustering consensus between ground-truth CDL and multi-class mask produced by SAM's Automatic Mask Generator across 4 different metrics -(A) FMI: Fowlkes-Mallows Index; declines with increasing √ AOI as well as over increasing PPS% for a given √ AOI, (B) NMI: Normalized Mutual Information; unchanged with increasing √ AOI, (C) ARI: Adjusted Rand Index; declines with increasing √ AOI, (D) V-Measure: Correlated to NMI.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Examining samples from the tails of the distribution of FMI scores. Top tails: (Leftmost plots, green outline) Samples with high FMI scores tend to have semantically identical regions that remain spatially contiguous and occupy a large subarea in the AOI. Bottom tails: (Middle plots, red outline) Samples with low FMI scores tend to have semantically identical regions that don't remain spatially contiguous as they are separated by crop-field boundaries, roads, or other structures.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Examining a sample with mean FMI score over increasing prompts density (PPS%):V-Measure, ARI and NMI improve as the prompt density increases while the FMI declines suggesting that although the clusters in the predicted mask become more homogenous and complete, the predicted mask captures the overall structure better in terms of pairwise agreements between cluster memberships of pixels and shared information between pixels in ground-truth and predicted masks, the predicted mask gets less accurate in terms of precision and recall.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: An example AOI from agricultural fields of California overlaid with the shape-maptypically a shape file with a collection of polygons -depicting the extent and borders of crop fields in the AOI.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Additional examples: (left) Input images (AOI = 1098 x 1098) created using the red-greenblue channels at the maximum NDVI timestep from the 4D multispectral spatiotemporal imagery stack from Sentinel-2 satellite, (middle) the ground-truth crop-type maps (CDL) depicting crop types and other related classes, (right) zero-shot predicted masks using SAM's automatic mask generator.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: MMRA does not have any effect on the mean scores for a given √ AOI across all 4 metrics.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 (8Figure 8(A) shows a selection of samples with high consensus scores while figure 8(B) shows a 
selection of low scoring samples.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 8: Samples where semantically identical regions remain spatially contiguous and span a large subarea in the AOI are segmented appropriately by SAM in its zero-shot setting with uniformly distributed prompts.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" } ]
Rutuja Gurav; Het Patel; Zhuocheng Shang; Ahmed Eldawy; Jia Chen; Elia Scudiero; Evangelos Papalexakis
[ { "authors": "Inbal Becker-Reshef; Brian Barker; Alyssa Whitcraft; Patricia Oliva; Kara Mobley; Christina Justice; Ritvik Sahajpal", "journal": "Scientific Data", "ref_id": "b0", "title": "Crop type maps for operational global agricultural monitoring", "year": "2023" }, { "authors": "Claire Boryan; Zhengwei Yang; Rick Mueller; Mike Craig", "journal": "Geocarto International", "ref_id": "b1", "title": "Monitoring us agriculture: the us department of agriculture, national agricultural statistics service, cropland data layer program", "year": "2011" }, { "authors": "Matthias Drusch; Umberto Del Bello; Sébastien Carlier; Olivier Colin; Veronica Fernandez; Ferran Gascon; Bianca Hoersch; Claudia Isola; Paolo Laberinti; Philippe Martimort", "journal": "Remote sensing of Environment", "ref_id": "b2", "title": "Sentinel-2: Esa's optical high-resolution mission for gmes operational services", "year": "2012" }, { "authors": "Rahul Ghosh; Praveen Ravirathinam; Xiaowei Jia; Ankush Khandelwal; David Mulla; Vipin Kumar", "journal": "IEEE", "ref_id": "b3", "title": "Calcrop21: A georeferenced multi-spectral dataset of satellite imagery and crop labels", "year": "2021" }, { "authors": "Rahul Ghosh; Praveen Ravirathinam; Xiaowei Jia; Chenxi Lin; Zhenong Jin; Vipin Kumar", "journal": "IEEE", "ref_id": "b4", "title": "Attention-augmented spatio-temporal segmentation for land cover mapping", "year": "2021" }, { "authors": "Weiguo Han; Zhengwei Yang; Liping Di; Richard Mueller", "journal": "Computers and Electronics in Agriculture", "ref_id": "b5", "title": "Cropscape: A web service based application for exploring and disseminating us conterminous geospatial cropland data products for decision support", "year": "2012" }, { "authors": "Yongcheng Jing; Xinchao Wang; Dacheng Tao", "journal": "", "ref_id": "b6", "title": "Segment anything in non-euclidean domains: Challenges and opportunities", "year": "2023" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b7", "title": "Segment anything", "year": "2023" }, { "authors": "Li Lin; Liping Di; Chen Zhang; Liying Guo; Yahui Di; Hui Li; Anna Yang", "journal": "Scientific Data", "ref_id": "b8", "title": "Validation and refinement of cropland data layer using a spatial-temporal decision tree algorithm", "year": "2022" }, { "authors": "Haoyu Maciej A Mazurowski; Hanxue Dong; Jichen Gu; Nicholas Yang; Yixin Konz; Zhang", "journal": "Medical Image Analysis", "ref_id": "b9", "title": "Segment anything model for medical image analysis: an experimental study", "year": "2023" }, { "authors": "Salar Muhammad Usman Qadeer; Murtaza Saeed; Abubakr Taj; Muhammad", "journal": "IEEE", "ref_id": "b10", "title": "Spatio-temporal crop classification on volumetric data", "year": "2021" }, { "authors": "Di Wang; Jing Zhang; Bo Du; Dacheng Tao; Liangpei Zhang", "journal": "", "ref_id": "b11", "title": "Scaling-up remote sensing segmentation dataset with segment anything model", "year": "2023" }, { "authors": "Peng Wei; Huichun Ye; Shuting Qiao; Ronghao Liu; Chaojia Nie; Bingrui Zhang; Lijuan Song; Shanyu Huang", "journal": "Remote Sensing", "ref_id": "b12", "title": "Early crop mapping based on sentinel-2 time-series data and the random forest algorithm", "year": "2023" }, { "authors": "Chen Zhang; Zhengwei Yang; Liping Di; Li Lin; Pengyu Hao", "journal": "The International Archives of the Photogrammetry, Remote 
Sensing and Spatial Information Sciences", "ref_id": "b13", "title": "Refinement of cropland data layer using machine learning", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 260.31, 589.75, 126.04, 22.31 ], "formula_id": "formula_0", "formula_text": "NDVI_{Sentinel-2} = (B8 - B4) / (B8 + B4)" } ]
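The formula above is typically evaluated per pixel and per timestep from the Sentinel-2 near-infrared (B8) and red (B4) bands; the helper below is an illustrative sketch (array names and the epsilon guard are assumptions).

```python
import numpy as np

def ndvi_sentinel2(b8: np.ndarray, b4: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), with B8 as NIR and B4 as Red for Sentinel-2."""
    b8 = b8.astype(np.float32)
    b4 = b4.astype(np.float32)
    return (b8 - b4) / (b8 + b4 + eps)  # eps guards against division by zero
```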
2023-12-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7" ], "table_ref": [], "text": "Imagine a young pianist learning to perform a complex piece of music. Initially, she listens to her experienced piano teacher's rendition, absorbing the nuances and techniques. Yet, the most crucial learning comes from recording and listening to her own performances. When she listens to her own recordings, she can identify where her performance differs from the perfect melody she has in mind -the ideal, real-world performance. These differences are more valuable for her improvement than simply comparing her performance to the ideal. This is because these disparities directly reflect her unique challenges, and addressing them (with the teacher's help) will have the most direct impact on improving her performance. This learning scenario is akin to the process of Knowledge Distillation [1]. The simpler problems that the pianist initially tackles resemble the easily learnable features of a dataset, which a student model can grasp without assistance. The complex music pieces, however, symbolize the hard-to-grasp parts of the data, where the student model benefits immensely from the teacher model's guidance.\nAs the pianist matures artistically, she discerns that mastering known pieces alone does not capture the full essence of her musical journey. Wanting to elevate her compositions, she begins to intertwine lyrics into her melodies. This is not a mere juxtaposition of words and tunes; it is a realization that lyrics can heighten the clarity and expressiveness of her music. In the same vein, the harmonization of visual content and natural language in the CLIP model is not just for the sake of fusion, but because language can significantly augment the nuances of visual recognition.\nDeep learning models, much like the pianist, might occasionally struggle when exposed to unfamiliar terrains or \"genres\" in the form of out-of-domain data. Despite the proficiency of deep learning in various tasks, they can often falter when faced with such \"novel compositions\". This situation frequently arises in real-world applications. Numerious algorithms have been developed to ensure consistency across distributions [2,3] and regularize the models to learn Preprint domain-invariant features [4,5,6], but they often yield only modest improvements [7] over traditional Empirical Risk Minimization (ERM) technique [8].\nInspired by the pianist's pursuit of harmony between melody and lyrics, and her introspective approach to identify discrepancies in her performance to perfect her craft, our work similarly seeks to focus on challenging aspects of training data and incorporate semantic information to better capture the domain-invariant features. In this paper, we present the Selective Cross-Modality Distillation (SCMD) framework.\nRather than relying on the soft target distribution from the teacher model, SCMD emphasizes the discrepancies, specifically, the gap between the student's performance and real-world expectations. Just as the pianist hones in on the variations between her rendition and the ideal composition, SCMD selects hard-to-learn samples in the training data, targeting those for knowledge distillation. 
This approach, we believe, not only enhances the learning process, but also arms the student model with robust feature representations, crucial for navigating unfamiliar terrains.\nWe have chosen to utilize the CLIP model as a key component of our approach, not only for its ability to combine visual and linguistic information, but mainly for its proficiency in matching images with textual descriptions. This special capacity of CLIP enhances our framework, offering a more comprehensive knowledge base from which our student models can extract and learn." }, { "figure_ref": [], "heading": "Contributions:", "publication_ref": [], "table_ref": [], "text": "• We present the Selective Cross-Modality Distillation (SCMD) framework, an innovative departure from traditional methods, emphasizing adaptive sample treatments over the uniform approaches commonly found in current knowledge distillation techniques.\n• We underscore the significance of intricate and hard-to-learn training samples and provide a comprehensive theoretical foundation for our selection strategies.\n• We propose a cross-modality distillation module within our SCMD framework to leverage the unique capabilities of the CLIP model, seamlessly integrating linguistic comprehension with visual perception for a more nuanced learning paradigm.\n• We substantiate the effectiveness of our SCMD method through empirical evaluations, demonstrating its superior performance and capability to establish new standards on the various benchmarks.\n2 Related Work" }, { "figure_ref": [], "heading": "Domain Generalization", "publication_ref": [ "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b22", "b23", "b8", "b26", "b27", "b3", "b28", "b29", "b30", "b31", "b32", "b33", "b34" ], "table_ref": [], "text": "Domain Generalization (DG) [9] has recently emerged as a key area in machine learning, focused on training models with data from multiple source distributions for application on a distinct target distribution. This paradigm has spurred methods for training distribution invariance, either through explicit regularization-based techniques [10,11,12,13,14,15,16,17,18,19] or data augmentation [20,21,22,23,24,25], rooted in the assumption that distributional invariance augments model generalization.\nDG is generally classified into three facets: data manipulation, representation learning, and learning strategy [26].\nData manipulation [23,24] refines inputs to facilitate learning generalizable representations. Representation learning techniques [9,27,28,4] aim for domain-invariant feature representations to enhance generalization. Some methods employ learning strategies like meta-learning [29] to simulate domain shifts during training, improving the model's adaptability to unseen domains. Recent studies indicate that weight averaging can further boost DG task performance [30,31], contributing to more stable out-of-domain test results.\nSeveral previous studies have leveraged vision-language models to enhance Domain Generalization (DG) performance, closely aligning with our contribution. For instance, Domain Prompt Learning (DPL) [32] employs a lightweight prompt adaptor to automatically generate a prompt estimating domain-specific features from unlabeled examples across distributions. However, these generated prompts often lack clear semantic meanings, potentially limiting their effectiveness in certain contexts [33]. 
Other research [34] dispatches appropriate pretrained models, including CLIP, to each sample based on their generalization ability. Another approach [35] reformulates the DG objective using mutual information with oracle models, including CLIP.\nWhile numerous domain generalization techniques have been explored in the literature, our approach is unique in that it uses knowledge distillation to achieve this purpose. We show the versatility and potential of knowledge distillation by using CLIP as the teacher model and training a student model afterwards. This combination of ideas, though seemingly straightforward, is a new direction that has not been explored much in prior research." }, { "figure_ref": [], "heading": "Contrastive Language-Image Pre-Training", "publication_ref": [ "b35", "b35", "b36", "b37", "b38", "b39", "b40", "b35", "b41" ], "table_ref": [], "text": "CLIP [36], a vision-language model, has recently garnered significant attention for its potential to enhance the robustness of machine learning models. Models like CLIP employ a contrastive loss to align visual and text encoders within a shared feature space, demonstrating promise for generic visual representation learning and zero-shot transfer via prompts [36,37,38,39,40,41].\nPretraining on 400 million image-text pairs [36] enables CLIP to align semantic meanings between images and sentences, excelling particularly in zero-shot image classification. For instance, it matches a fully-supervised ResNet101 model's performance with a 76.2% top-1 accuracy rate on the ImageNet [42] validation set and outperforms it on the ImageNet Sketch Dataset with 60.2% accuracy.\nCLIP's exceptional capabilities, derived from pretraining on a vast number of image-text pairs, have shown its prowess especially in tasks like zero-shot image classification. It excels in aligning images and text prompts. Yet, the true potential of such a powerful model lies not just in its standalone performance but in how its extensive knowledge can be transferred to other architectures.\nIn this paper, we present a novel method to distill knowledge from CLIP, a multi-modal vision-language model, into a single-modal student model. By transitioning from multi-modal to single-modal distillation, we aim to enhance the student model's domain generalization, opening up new avenues for leveraging these potent models." }, { "figure_ref": [], "heading": "Knowledge Distillation", "publication_ref": [ "b0", "b42", "b43", "b44", "b45", "b46", "b47", "b48", "b49", "b50", "b51", "b52", "b53", "b54", "b55", "b56", "b57", "b58", "b59", "b60", "b41" ], "table_ref": [], "text": "Knowledge distillation, introduced by Hinton et al. [1], is a pivotal technique for balancing model performance and computational complexity by training a smaller student network with the soft output of a larger teacher model. This approach has spurred extensive research in model compression [43] and knowledge transfer [44].\nNumerous distillation techniques have emerged, including feature-based knowledge transfer methods [45,46,47,48,49] that align embeddings from certain layers, [50] that train teacher and student models concurrently. 
The design of loss functions operating on the outputs of both models has also been a significant research area, with notable methods including l 1 [51], l 2 [52,53,54], Maximum Mean Discrepancy (MMD) [55], KL divergence [56,57,58], and crossentropy losses [59,60].\nHowever, most studies focus on homologous-architecture distillation, leaving cross-architecture distillation relatively untapped. Recently, Liu et al. [61] made significant progress in this area by mapping a CNN's feature space into a transformer's attention and feature space using a partially cross-attention projector and a group-wise linear projector. With cross-view robust training, they achieved remarkable performance on ImageNet [42].\nDiverging from the norm, our approach offers a fresh perspective on knowledge distillation through two innovations.\nFirstly, we incorporate a novel selection mechanism that identifies hard-to-learn samples, enhancing the student model's depth of understanding. Secondly, leveraging on CLIP's multi-modal capabilities, we employ a cross-modality module. This strategy enables a profound transfer of both visual and linguistic knowledge, greatly enhancing the domain generalization prowess of the student model." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [ "b35" ], "table_ref": [], "text": "In this section, we provide a detailed description of our Selective Cross-Modality Distillation Framework, which leverages a pretrained and freezed CLIP model [36] as the guiding teacher model." }, { "figure_ref": [], "heading": "Vanilla Knowledge Distillation", "publication_ref": [], "table_ref": [], "text": "The vanilla knowledge distillation process is\nθKD = arg min θ (xi,yi)∈(X,Y) L(ϕ(x i ), f (x i ; θ)) +λ 1 H(f (x i ; θ), y) +λ 2 R(ϕ, f, p, q, x i , y i ) (1)\nwhere ϕ is the teacher model, f (•; θ) is the student model. (X, Y) is the standard dataset that is used for training the student model. L is a generic distance function (can be the KL divergence between soft target distributions), H represents a generic loss function (usually cross-entropy loss), and R is an arbitrary designed regularization. p and q correspond to certain layers or attention maps.\nFigure 1: SCMD that features a selection mechanism to focus on hard-to-learn samples and a cross-modality module that projects the student's feature into CLIP multimodal space for alignment." }, { "figure_ref": [], "heading": "Selection Mechanism", "publication_ref": [ "b5", "b61", "b62", "b63" ], "table_ref": [], "text": "S = x i : x i ∈ X, i ∈ I where I = i : H(f (x i ), y i ) ≥ τ(2)\nIn the preceding equation, X denotes the batch of samples, x i an individual sample, and y i its true label. The set S consists of selected samples. The function H(f (x i ; θ), y i ) computes the cross-entropy loss for the i-th sample, while I contains indices of samples in the batch with a cross-entropy loss exceeding the threshold τ .\nCross-entropy loss quantifies the divergence between the predicted probability distribution and the actual labels. High cross-entropy indicates challenging samples or model uncertainty. In our optimization, we use this loss to identify hard-to-learn samples. During each forward pass, samples with higher losses are selected for adjustment, a methodology also adopted in prior works [6,62,63,64].\nThe uniqueness of our method lies not in the recognition of hard-to-learn samples but in its integration within knowledge distillation. 
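Concretely, the selection in Eq. 2 can be realised as a top-ρ percentile of per-sample cross-entropy within each batch, as in Algorithm 1 of the appendix. The snippet below is a minimal PyTorch sketch with illustrative names, not the exact training code.

```python
import torch
import torch.nn.functional as F

def select_hard_samples(logits: torch.Tensor, labels: torch.Tensor, rho: float) -> torch.Tensor:
    """Return indices of the hardest rho-fraction of the batch, ranked by
    per-sample cross-entropy (Eq. 2 with a per-batch percentile threshold)."""
    per_sample_ce = F.cross_entropy(logits, labels, reduction="none")  # shape (B,)
    k = max(1, int(rho * logits.size(0)))
    return per_sample_ce.topk(k).indices                               # hardest-k indices

# Illustrative use inside a training step:
# idx = select_hard_samples(student_logits, y, rho=0.5)
# loss = distillation_objective(student_logits[idx], teacher_outputs[idx], y[idx])
```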
By doing so, we harness the teacher model's rich knowledge more efficiently, optimizing the student model's learning. We delve into the theoretical underpinning of our selection mechanism in Section 4.\nThe \"hard-to-learn\" tag for the samples can change each iteration. However, to ensure holistic learning across the entire training dataset, we switch to full-batch training for the final k% of the training epochs." }, { "figure_ref": [], "heading": "Cross-Modality Module", "publication_ref": [ "b44", "b45", "b46", "b47", "b48", "b35", "b35", "b35" ], "table_ref": [], "text": "Various feature-based knowledge distillations have been explored [45,46,47,48,49], however, direct alignment of classification features often presents challenges. To address this, we exploit the robust cross-modal alignment capabilities of the CLIP [36] model and employ a cross-modality distillation strategy.\nIn the context of our method, the student features are transformed into the CLIP's [36] multimodal space using a linear projection. This is analogous to how CLIP projects its image features into a multimodal space to achieve alignment with text embeddings. This transformation bridges the semantic gap between the student model and the teacher model, facilitating a more effective knowledge-transfer process. After projection, we calculate the scaled pairwise cosine similarity with the text embeddings derived from the CLIP [36] model.\nOur cross-modality loss is expressed as follows:\nL CM = D KL (p t ||(p s ) ′ )\nwhere p t is the soft target distribution of CLIP and\n(p s ) ′ = σ(γ • P (e(x i ; θ e )) • ϕ text ; T = t)(3)\nIn this equation, γ is a scale factor, which adjusts the magnitude of the projected student feature. e represents the backbone of the student model. A linear projection P is applied to the student feature e(x i ; θ e ), and ϕ text represents the text embedding of CLIP. σ is the softmax function parameterized by the distillation temperature T .\nIn order to generate unbiased text features using CLIP, we use a generic template: \"this is a photo of a {class}\". This method helps us avoid incorporating any human prior knowledge about the dataset, ensuring that the feature generation process remains objective and is not influenced by any preconceived human understanding of the data." }, { "figure_ref": [], "heading": "Preprint", "publication_ref": [], "table_ref": [], "text": "During the inference phase, we omit the feature projector, relying solely on the student model's backbone and its associated classifier for generating predictions." }, { "figure_ref": [], "heading": "SCMD", "publication_ref": [], "table_ref": [], "text": "Figure 1 illustrates the overall framework of our proposed method.\nThe final training objective can be summarized as follows:\nθSCMD = arg min θ (xi,yi)∈(X,Y) λ 1 H(f (x i ; θ), y i ) +λ 2 L logits + λ 3 L CM\nwhere L logits = D KL (p t ||p s ) and x i and y i are the selected samples and corresponding labels (4)" }, { "figure_ref": [], "heading": "Theoretical Evidence for Selection Strategy", "publication_ref": [], "table_ref": [], "text": "To be consistent with the notation, we let (X, Y) denote the standard data set and (x i , y i ) be one of the samples. We let P denote a distribution and P denote the distribution of distributions. We let f (•; θ) denote the student model, ϕ denote the teacher model, and r(•) denote the risk. 
For convenience of notation, we allow r(•) to be parameterized by a distribution or by a dataset.\nWhen r(•) is parameterized by a dataset, we have the empirical risk as\nr (X, Y) = 1 n (xi,yi)∈(X,Y) L(f (x i ; θ), y i ),\nwhere L is a generic loss function.\nWhen r(•) is parameterized by a distribution, we have the expected risk as\nr P = E (xi,yi)∼P L(f (x i ; θ), y i ),\nFor simplicity of discussion, we only use r P, ϵ to denote robustness performance when we do not need to specify how the test distribution deviates from the training distribution. Assumption 1. for any data pair (x i , y i ) studied in the context of this paper, there is a gold standard labeling function (albeit unknown) that\ny i = f (x i ).\nWe believe this assumption is fundamental for numerous works studying the robustness behaviors of models with respect to feature perturbations, especially in the context of OOD robustness, where the test dataset is manually collected rather than generated by noise addition. Intuitively, this assumption stipulates that a musical piece recognized in the training phase must also be identified as the same piece in the testing phase, despite substantial shifts in the performance style or instrument used. In other words, these variations in representation, akin to distribution shifts in data, should not alter the fundamental recognition of the piece, preserving the semantics of the data. Lemma 4.1. Given Assumptions A1 such that there is a gold standard labeling function for source and target domains.\nFor two arbitrary distributions P ′ and P, r(P ′ ) ≤ r(P) + tv(P ′ , P)\nwhere tv denotes the total variation.\nProof. We leave the proof in Appendix A Lemma 4.2. Given the assumption that samples are independent and identically distributed, hypothesis space Θ and any δ > 0, with probability at least 1 -δ, we have\nr P ′ ) ≤ r (X, Y) P + tv(P ′ , P) + ξ(n (X,Y) P , Θ, δ)\nwhere we let n (X,Y) P denote the number of sample sizes in the finite dataset (X, Y) P , ξ is a vanilla term that connects the number of samples and hypothesis space with generalization error bound." }, { "figure_ref": [], "heading": "Preprint", "publication_ref": [ "b5", "b61", "b62", "b63" ], "table_ref": [], "text": "Proof. We leave the proof in Appendix A\nThe above results demonstrates that empirical robustness is determined by three factors: the divergence between training and test distributions, the measurable empirical error on the training distribution, and a technical term influenced by sample size and hypothesis space. Therefore, the critical term that will bound the robustness performance is how the training distribution deviates from the testing distribution. This intuitively gives us the idea that training with the distributions that are the most similar to the test distribution will benefit the model most.\nThe above results apply to arbitrary distributions P ∼ P. However, this does not necessarily encode the characteristics of the cases we are studying: some samples are hard for the model to learn.\nTo address this, we consider datasets generated by multiple distributions, some of which present more challenging learning scenarios. We represent these as a set P , consisting of m distributions, i.e., P = {P 1 , P 2 , . . . , P m }. Each data point is considered to be sampled from these distributions. For the convenience of discussion, we use tv(P ′ , P ) to denote the average divergence between the distributions within the set. tv(P ′ , P ) := m i tv(P ′ , P i )/m, ∀P i ∈ P . 
Finally, we use s() to denote the distribution selection mechanism, and compare two selection mechanisms: selecting the hard-to-learn samples (denoted as s 1 ) and selecting random samples (denoted as s 2 ). Lemma 4.3. P is continuous and has a finite expected value; for the two selection mechanism that are formally defined as tv(s 1 (P ), P ) = sup P∈P tv(P, P ), E P tv(s 2 (P ), P ) = 0 for a fixed testing dataset P ′ , with the assumption that tv(P, P ′ ) = tv(P, P) + tv(P, P ′ ), ∀P ∈ P we have E P tv(s 1 (P ), P ′ ) ≤ E P tv(s 2 (P ), P ′ ) Proof. We leave the proof in Appendix A Our result compares the upper-bounded differences between the two training distribution selection strategies, and our results suggest that selecting the hard-to-learn samples will lead to a tighter generalization error bound.\nAnother important factor to note is that, given assumption A1 and Lemma 4.1 and 4.2, the selection strategy applicable to our theoretical discussion (i.e. tv(s 1 (P ), P ) = sup P∈P tv(P, P )) is only when selecting the hard-to-learn samples according to the label of the samples (thus cross-entropy loss). Other selection strategies such as selecting based on KL-divergence or distillation loss (experimented in Section 6.2) despite might following a similar goal, does not strictly match our theoretical discussion with will likely lead to an error bound in between s 1 and s 2 . Therefore, with the support of the theoretical discussion, we argue that the most effective hard-to-learn selection mechanism is to be based on cross-entropy loss.\nAnother possible question is that the assumption tv(P, P ′ ) = tv(P, P) + tv(P, P ′ ), ∀P ∈ P might appear strong. In fact, the proof will hold with any assumptions that describe the concept that the more different one distribution is from the average of the training set, the more it will benefit the testing distribution. In the typical domain generalization setting, where there are no guaranteed connections between training and testing distributions, we believe this is one of the practical assumptions we can consider, also widely used in practical by other domain generalization literature [6,62,63,64]." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [ "b6" ], "table_ref": [], "text": "In this section, we demonstrate the effectiveness of our proposed method using the DomainBed [7] benchmark and compare it to the current state-of-the-art DG techniques." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b6", "b68", "b69", "b70", "b71", "b72", "b71", "b6", "b73", "b35", "b29", "b30" ], "table_ref": [], "text": "We adhere to the protocol set out in [7] for our experimental setup and assess the performance of SCMD using VLCS [69], PACS [70], OfficeHome [71], TerraIncognita [72], and DomainNet [73]. It is noteworthy that CLIP does not perform well on the TerraIncognita [72] dataset, and we will explore these results in the discussion section.\nDue to the intensive computing requirements of DomainBed's [7] hyperparameter search protocol, we take a more simplified approach. We restrict our research to five distinct hyperparameter combinations, each tested three times. We assign 80% of the data for training and 20% for validation, choose the model based on the training-domain validation and report the results on the held-out domain. 
In order to make a fair comparison with other methods, we use ResNet50 [74] as the student model and CLIP [36], with ViT-B/32 as its image encoder, as the teacher model, which is in line with existing research.\nIn line with the findings of previous studies [30,31], we incorporate weight averaging into our experiments to access SCMD performance. This technique has been shown to mitigate the discrepancy between training-validation performance and out-of-domain test performance.\nEight RTX 3090 GPUs are utilized for all experiments." }, { "figure_ref": [], "heading": "Experimental results", "publication_ref": [ "b69", "b68", "b70", "b71", "b72", "b41", "b67", "b67", "b67" ], "table_ref": [ "tab_0" ], "text": "We compare SCMD with other domain generalization algorithms on five datasets: PACS [70], VLCS [69], Office-Home [71], TerraIncognita [72], and DomainNet [73]. We use ResNet50 pretrained on ImageNet1k [42] as the backbone.\nTable 1 shows that our proposed method achieves the best performance on the DomainBed benchmark for the ResNet50 model. It outperforms the existing methods on all datasets, with Model ratatouille [68] coming in second.\nModel Ratatouille [68] utilizes a technique that adjusts the model on multiple extra tasks to obtain different weight initializations. These weights are then adjusted for the desired tasks, and the final model is created by taking the average of these weights. This is demonstrated by Model Ratatouille (Uniform) [68], which averages a total of 60 models to achieve the final result.\nIn contrast, our proposed method employs a teacher model and evaluates performance on a single model. Our method is orthogonal to existing DG methods, potentially providing additional avenues and perspectives in the broader landscape of DG research." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [ "b69", "b69" ], "table_ref": [], "text": "We carry out a thorough analysis of the SCMD algorithm by breaking it down into its components and examining them using the PACS dataset [70]. To ensure consistency and comparability with the main experiment, we use the same standardized hyperparameter search protocol.\nWe conduct a series of experiments on PACS [70] to evaluate the effectiveness of the proposed cross-modality module and the selection mechanism." }, { "figure_ref": [], "heading": "Impact of the Cross-Modality Module", "publication_ref": [ "b0" ], "table_ref": [ "tab_1" ], "text": "Table 2 (Top Section) presents the comprehensive results of our method alongside its different variations.\n• \"Vanilla KD\" [1] denotes the conventional knowledge distillation technique where the KL divergence between the predicted distributions of the student and teacher models is minimized. • 'SCMD (logits)\" is the combination of the selection mechanism and the minimization of KL divergence.\n• \"SCMD (logits + CM)\" represents the full version of our method, including all our proposed components." }, { "figure_ref": [], "heading": "Preprint", "publication_ref": [ "b29", "b30", "b0" ], "table_ref": [ "tab_1" ], "text": "We have included weight averaging [30,31] into the Vanilla KD to guarantee a fair comparison and show the effectiveness of our proposed components. This technique has been previously demonstrated to enhance performance.\nAs shown in the top part of Table 2, our selection mechanism alone leads to a 0.5% improvement in comparison to the average performance of Vanilla KD. Additionally, our cross-modality module further boosts the performance by 0.6%. 
When both are combined, our proposed methodology offers a significant increase in performance, surpassing Vanilla KD by a total of 1.1%. These results demonstrate the combined power and effectiveness of our proposed approach.\nAlgorithm Avg SCMD (logits + CM) (full method) 90.1 ± 0.0 Vanilla KD [1] 89.0 ± 0. " }, { "figure_ref": [], "heading": "Empirical Validation of Our Theoretical Analysis", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "As illustrated in Table 2 (Bottom Section), we employed various selection strategies for the samples.\n• \"selection based on KL\" refers to sample selection based on the KL-divergence between the predicted distributions of the student and teacher models.\n• \"selection based on distill loss\" implies that samples are chosen according to the distill loss, as defined in Eq 4.\n• \"no selection\" represents the baseline scenario where the entire training dataset is used without any hard-tolearn sample selection.\n• \"selection based on CE loss\" denotes our proposed selection strategy.\nIt is evident that any selection strategy yields better results than the \"no selection\" baseline. Our proposed \"selection based on CE loss\" approach is the most successful on the PACS dataset, outperforming \"selection based on KL\" by 0.5%, \"selection based on distill loss\" by 0.3%, and the no selection strategy by 0.7%. It is worth noting that the \"distill loss\" (Eq 4) includes the cross-entropy loss, which could explain why its performance is similar to \"selection based on CE loss\", albeit slightly lower." }, { "figure_ref": [], "heading": "Prompt Avg", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "this is an art of a {} 88.9 ± 0.4 this is a sketch of a {} 89.6 ± 0.2 this is a cartoon of a {} 88.7 ± 0.4 a photo of a {} 89.9 ± 0.2 this is a photo of a {} 89.9 ± 0.4 a {} 88.8 ± 0.2\nTable 3: The performance of SCMD was tested on PACS with different prompts, using the same hyperparameters for three trials. The teacher model was Clip ViT-B/32 and the student model was RN50.\nThese results provide empirical support to our theoretical proposition: \"Other selection strategies such as selecting based on KLdivergence or distillation loss despite might following a similar goal, do not strictly match our theoretical discussion which will likely lead to an error bound between s 1 and s 2 .\" Therefore, with the support of the theoretical discussion, we argue that the most effective hard-to-learn selection mechanism is to be based on crossentropy loss.\n7 Discussion and Limitations" }, { "figure_ref": [], "heading": "Impact of Prompts Variations", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "In order to reduce bias in the feature extraction process with CLIP, we use a template that does not contain any human-derived insights, which is: \"this is a photo of a {}\". This template anchors the feature generation process in a way that is not dependent on any particular domain, thus avoiding the impact of any human preconceptions.\nOur experiments show that the prompt \"photo\" was the most effective for optimizing performance. We also found that slight changes to the prompt, such as \"a photo of a {}\" and \"this is a photo of a {}\", had little effect on the success of the distillation process. This demonstrates the resilience of feature distillation to minor changes in the prompt structure. 
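To illustrate how the generic template above feeds into the cross-modality loss of Eq. 3, the sketch below builds the class text embeddings with the OpenAI `clip` package and aligns a linearly projected student feature with them through a KL term against the teacher's soft distribution. The projection layer, the scale γ, and the temperature values are illustrative placeholders rather than our released implementation.

```python
import clip
import torch
import torch.nn.functional as F

clip_model, _ = clip.load("ViT-B/32", device="cpu")

@torch.no_grad()
def class_text_embeddings(class_names):
    """Encode 'this is a photo of a {class}' prompts with CLIP's text encoder."""
    tokens = clip.tokenize([f"this is a photo of a {c}" for c in class_names])
    emb = clip_model.encode_text(tokens).float()
    return emb / emb.norm(dim=-1, keepdim=True)           # unit-normalised, shape (C, d)

def cross_modality_loss(student_feat, projector, text_emb, teacher_probs,
                        gamma=100.0, temperature=2.0):
    """KL(teacher || softmax(gamma * cos_sim(P(student_feat), text_emb) / T)), cf. Eq. 3."""
    proj = projector(student_feat)                        # linear map into CLIP's multimodal space
    proj = proj / proj.norm(dim=-1, keepdim=True)
    logits = gamma * proj @ text_emb.t()                  # scaled pairwise cosine similarity
    log_student = F.log_softmax(logits / temperature, dim=-1)
    return F.kl_div(log_student, teacher_probs, reduction="batchmean")
```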
We observed suboptimal zero-shot performance from the CLIP model when we distilled it into ResNet50 using the TerraIncognita dataset, as shown in Table 1. Despite this, our approach still provided a benefit in the form of selective learning, as SCMDno-KD outperformed ERM. This implies that preconditioning the CLIP model with fine-tuning before distillation may be a useful strategy to improve performance metrics for tasks like these." }, { "figure_ref": [], "heading": "Experiments on various student models", "publication_ref": [ "b69", "b73", "b7", "b73", "b73", "b73", "b35", "b4", "b75", "b9" ], "table_ref": [ "tab_6" ], "text": "We investigate the effect of different teacher models and model sizes by exploring the applicability of our proposed method using the PACS dataset [70]. We use ResNet152 and ResNet18 [74] and follow the same experimental setup and hyperparameter search protocol as in our previous experiments.\nTable 5 shows that SCMD outperforms Vanilla KD, even when different CLIP models are used as the teacher, such as RN101. In this case, SCMD achieved an improvement of approximately 0.6% compared to vanilla KD, which highlights the versatility of our approach and its effectiveness in both cross-architecture and homologous-architecture distillation scenarios.\nOur approach yields a noteworthy improvement of 3.4% over the ERM [8] technique and 0.8% over Vanilla KD when distilled into ResNet152 [74]. Even with a smaller model such as ResNet18 [74], our method still shows strong performance compared to other DG methods, with a marginal improvement of only 0.2% over Vanilla KD. This slight difference may be due to the large capacity gap between ResNet18 [74] and CLIP [36]. ViT-B/32 RN-18 81.5 ± 0.0 RSC [5] 82.8 ± 0.4 IRM [76] 81.1 ± 0.3 MMD [10] 81. \nAlgorithm" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we present Selective Cross-Modality Distillation (SCMD) for Domain Generalization, a novel approach that builds upon the existing knowledge distillation framework. Our method is designed to supplement existing DG techniques. We introduce a cross-modality module that leverages the robust cross-modal alignment capabilities of the CLIP model. The selection mechanism at the core of SCMD is supported by a thorough theoretical analysis and empirically validated through extensive experiments that demonstrate the efficacy of our proposed method." }, { "figure_ref": [], "heading": "A Theoretical Evidence for Selection Strategy", "publication_ref": [], "table_ref": [], "text": "To be consistent with notation, we let (X, Y) denote the standard dataset, and (x i , y i ) as one of the samples. We let P denote a distribution and P denote the distribution of distributions. We let f (•; θ) denote the student model, ϕ denote the teacher model, and r(•) denote the risk. For the convenience of notations, we allow r(•) to be parameterized by a distribution or by a dataset. Lemma A.1. Given assumptions A1 such that there is a gold standard labeling function for source and target domains.\nFor two arbitrary distributions P ′ and P, r(P ′ ) ≤ r(P) + tv(P ′ , P)\nwhere tv denotes the total variation.\nProof. 
Recall that we are assuming the same labelling function, let σ and σ ′ be the density functions of P and P ′ r(P ′ ) = r(P ′ ) + r(P) -r(P) ≤ r(P)+ | r(P ′ ) -r(P)\n| ≤ r(P) + | σ(x) -σ ′ (x) || f (x; θ) -y | dx ≤ r(P) + tv(P ′ , P) Lemma A.2.\nGiven assumption that samples are independent and identically distributed, hypothesis space Θ and any δ > 0, with probability at least 1 -δ, we have\nr P ′ ) ≤ r (X, Y) P + tv(P ′ , P) + ξ(n (X,Y) P , Θ, δ)\nwhere we let n (X,Y) P denote the number of sample sizes in the finite dataset (X, Y) P , ξ is a vanilla term that connects the number of samples and hypothesis space with generalization error bound.\nProof. Recall that we are assuming that samples are independent and identically distributed, we have This is an direct application of Theorem 1.1 and generalization error bound.\nThe above results demonstrates that empirical robustness is determined by three factors: the divergence between training and test distributions, the measurable empirical error on the training distribution, and a technical term influenced by sample size and hypothesis space. Therefore, the critical term that will bound the robustness performance is how the training distribution deviates from the testing distribution. This intuitively give us the idea that training with the distributions that are the most similar to the test distribution will benefit the model most.\nThe above results apply to arbitrary distributions P ∼ P. However, this does not necessarily encode the characteristics of the cases we are studying: some samples are hard for the model to learn.\nTo address this, we consider datasets generated by multiple distributions, some of which present more challenging learning scenarios. We represent these as a set P , consisting of m distributions, i.e., P = {P 1 , P 2 , . . . , P m }. Each data point is considered as sampled from these distributions. For the convenience of discussion, we use tv(P ′ , P ) to denote the average divergence between the distributions within the set. tv(P ′ , P ) := m i tv(P ′ , P i )/m, ∀P i ∈ P . Finally, we use s() to denote the distribution selection mechanism, and we compare two selection mechanisms: selecting the hard-to-learn samples (denoted as s 1 ) and selecting random samples (denoted as s 2 ). Lemma A.3. P is continuous and has a finite expected value; for the two selection mechanism that are formally defined as tv(s 1 (P ), P ) = sup P∈P tv(P, P ), E P tv(s 2 (P ), P ) = 0 for a fixed testing dataset P ′ , with the assumption that tv(P, P ′ ) = tv(P, P) + tv(P, P ′ ), ∀P ∈ P we have E P tv(s 1 (P ), P ′ ) ≤ E P tv(s 2 (P ), P ′ )" }, { "figure_ref": [], "heading": "Preprint", "publication_ref": [ "b5", "b61", "b62", "b63" ], "table_ref": [], "text": "Proof. Based on our definition of s 1 and s 2 , E P tv(s 2 (P ), P ′ ) = E P tv(P, P ′ ) And based on our assumption that tv(P, P ′ ) = tv(P, P) + tv(P, P ′ ), we have E P tv(s 1 (P ), P ′ ) -tv(s 2 (P ), P ′ ) = E P inf P∈P tv(P, P ′ ) -E P tv(P, P ′ ) ≤ 0 Our result compares the upper-bounded differences between the two training distribution selection strategies, and our results suggest that selecting the hard-to-learn samples will lead to a tighter generalization error bound.\nAnother important factor to note is that, given assumption A1 and Theorem 0.1, the selection strategy applicable to our theoretical discussion (i.e. tv(s 1 (P ), P ) = sup P∈P tv(P, P )) is only when selecting the hard-to-learn samples according to the label of the samples (thus, cross-entropy loss). 
Other selection strategies such as selecting based on KL-divergence or distillation loss despite might following a similar goal, do not strictly match our theoretical discussion, which will likely lead to an error bound in between s 1 and s 2 . Therefore, with the support of the theoretical discussion, we argue that the most effective hard-to-learn selection mechanism is to be based on cross-entropy loss.\nAnother possible question is that Assumption tv(P, P ′ ) = tv(P, P) + tv(P, P ′ ) might appear strong. In fact, the proof will hold with any assumptions that describe the concept that the more different one distribution is from the average of the training set, the more it will benefit the testing distribution. In the typical domain generalization setting, where there are no guaranteed connections between training and testing distributions, we believe this is one of the practical assumptions we can consider, also widely used in practical context by other domain generalization literature [6,62,63,64] " }, { "figure_ref": [], "heading": "B Method", "publication_ref": [ "b6" ], "table_ref": [], "text": "In the preceding equation, X denotes the batch of samples, x i an individual sample, and y i its true label. The set S consists of selected samples. The function H(f (x i ; θ), y i ) computes the cross-entropy loss for the i-th sample, while I contains indices of samples in the batch with a cross-entropy loss exceeding the threshold τ . where L logits = D KL (p t ||p s ) and x i and y i are the selected samples and their corresponding labels (7) C Compare with other KD methods" }, { "figure_ref": [], "heading": "Algorithm MA Avg", "publication_ref": [ "b76", "b77", "b78", "b0" ], "table_ref": [], "text": "FitNet [77] Yes 88.4 ± 0.2 BSS [78] Yes 89.3 ± 0.1 RKD [79] Yes 87.4 ± 0.2 Vanilla [1] Yes 89.0 ± 0.3 SCMD Yes 90.1 ± 0.0 Table 6: SCMD vs. other KD on PACS. \"MA\": Moving Average At the core of SCMD is the knowledge distillation process, which is essential to its design. To evaluate the effectiveness of SCMD, we conducted comparative experiments with other knowledge distillation methods on the PACS dataset. The results in Table 6 demonstrate that SCMD outperforms these contemporary techniques." }, { "figure_ref": [], "heading": "D Full Results", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "The full details of the results presented in Table 1 of the main paper are displayed in Table 7." }, { "figure_ref": [], "heading": "D.1 hyperparameters search space", "publication_ref": [ "b6", "b6", "b29" ], "table_ref": [], "text": "We adhere to the experimental setup described in the DomainBed [7] paper. The specifics of our setup are outlined below:\n1. Data Split: We partition datasets into 80% training and 20% validation sets. Model selections are based on training domain validation performances, and we report on the corresponding test domain. 2. hyperparameters: Although many hyperparameters follow [7], deviations are documented in Table 8. 3. Batch & Decay: We adjust our batch size and weight decay following the guidelines of [30]. 4. Dropout: The ResNet dropout rate is set to 0 to mitigate excessive randomness. 5. Learning Rate: We abandon the rate of 1 × 10 -5 because it converges too slowly, and instead focus on rates of 3 × 10 -5 and 5 × 10 -5 .\nTable 9 presents the search space specific to our algorithm's hyperparameters. While we consistently set λ 1 , the balance factor for Cross-entropy, to 1, we perform random sweeps for the remaining weight factors." 
} ]
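Putting Eq. 4 and Algorithm 1 together, one SCMD training step might look like the compact sketch below; `select_hard_samples` and `cross_modality_loss` refer to the illustrative helpers sketched earlier, `student.backbone`/`student.classifier` are assumed attribute names, and the loss weights are placeholders for values found by the hyperparameter sweep.

```python
import torch.nn.functional as F

def scmd_training_step(student, projector, teacher_probs, x, y, text_emb, rho,
                       full_batch=False, lambda1=1.0, lambda2=1.0, lambda3=1.0,
                       temperature=2.0):
    """One SCMD step combining Eq. 4: cross-entropy + logits distillation + cross-modality
    alignment, computed on the hardest rho-fraction of the batch unless full_batch is set
    (Algorithm 1 switches to full batches for the last k% of training). teacher_probs
    stands for CLIP's soft target distribution over the class prompts."""
    feats = student.backbone(x)             # assumed attribute names for the feature
    logits = student.classifier(feats)      # extractor and the classification head

    if not full_batch:                      # hard-to-learn sample selection (Eq. 2)
        idx = select_hard_samples(logits, y, rho)
        feats, logits, y, teacher_probs = feats[idx], logits[idx], y[idx], teacher_probs[idx]

    ce = F.cross_entropy(logits, y)                                          # H(f(x), y)
    kd = F.kl_div(F.log_softmax(logits / temperature, dim=-1), teacher_probs,
                  reduction="batchmean")                                     # L_logits (simplified)
    cm = cross_modality_loss(feats, projector, text_emb, teacher_probs)      # L_CM (Eq. 3)
    return lambda1 * ce + lambda2 * kd + lambda3 * cm
```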
Domain Generalization (DG), a crucial research area, seeks to train models across multiple domains and test them on unseen ones. In this paper, we introduce a novel approach, namely, Selective Cross-Modality Distillation for Domain Generalization (SCMD). SCMD leverages the capabilities of large vision-language models, specifically the CLIP model, to train a more efficient model, ensuring it acquires robust generalization capabilities across unseen domains. Our primary contribution is a unique selection framework strategically designed to identify hard-to-learn samples for distillation. In parallel, we introduce a novel cross-modality module. This module seamlessly combines the projected features of the student model with the text embeddings from CLIP, ensuring the alignment of similarity distributions. We assess SCMD's performance on various benchmarks, where it empowers a ResNet50 to deliver state-of-the-art performance, surpassing existing domain generalization methods. Furthermore, we provide a theoretical analysis of our selection strategy, offering deeper insight into its effectiveness and potential in the field of DG.
CHOOSING WISELY AND LEARNING DEEPLY: SELECTIVE CROSS-MODALITY DISTILLATION VIA CLIP FOR DOMAIN GENERALIZATION
[ { "figure_caption": "ξ(n (X,Y) P , Θ, δ) = 2R(L) + (log 1/δ)/2n where R(L) stands for Rademacher complexity and L = {l θ | θ ∈ Θ} where l θ is the loss function corresponding to the student model f (•; θ) r(P ′ ) = r(P) + tv(P ′ , P) ≤ r (X, Y) P + tv(P ′ , P) + ξ(n (X,Y) P , Θ, δ)", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "B. 1 AlgorithmAlgorithm 111Selective Cross-Modality Distillation Input: Dataset (X , Y) of size n; Percentile of hard-to-learn samples per batch ρ; Percentile of full-batch training κ; Batch size η; Maximum number of iterations T ; Feature Projector P ; pretrained teacher model ϕ; Student model θ (randomly initialized) Output: Trained student model θ while t ≤ T do while t ≤ (1 -κ)T do Identify top ρ percentile samples with highest Cross-entropy loss based on Eq 2 For selected samples, compute student features and project to CLIP's multi-modal space via P . Distill knowledge from the teacher model ϕ to the student model θ using Eq 4 end while Distill knowledge from ϕ to θ across the entire batch and calculate the final loss with Eq 4 end while return optimized student model θ B.2 Selection Mechanism S = x i : x i ∈ X, i ∈ I where I = i : H(f (x i ), y i ) ≥ τ", "figure_data": "", "figure_id": "fig_1", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Preprint B. 3 6 )36Cross-Modality Module L CM = D KL (p t ||(p s ) ′ ) where p t is the soft target distribution of CLIP and (p s ) ′ = σ(γ • P (e(x i ; θ e )) • ϕ text ; T = t) (In this equation, γ is a scale factor, which adjusts the magnitude of the projected student feature. e represents the backbone of the student model. A linear projection P is applied to the student feature e(x i ; θ e ), and ϕ text represents the text embedding of CLIP. σ is the softmax function parameterized by the distillation temperature T . B.4 SCMD θSCMD = arg min θ (xi,yi)∈(X,Y)λ 1 H(f (x i ; θ), y i ) +λ 2 L logits + λ 3 L CM", "figure_data": "", "figure_id": "fig_2", "figure_label": "36", "figure_type": "figure" }, { "figure_caption": "Performance benchmarking on 5 datasets of the DomainBed benchmark. Gray background shows our proposed method. Experiments report the performance based on training-domain validation accuracy follow[7]. 'Ens/MA' stands for Ensemble/Moving Average. 
(best in bold and second underlined)", "figure_data": "AlgorithmEns/MAVLCSPACSOffHomeTerraIncDNetAvgTeacher (CLIP with ViT-B/32)No78.4 ± 0.0 94.7 ± 0.1 79.6 ± 0.1 19.0 ± 0.1 54.0 ± 0.0 65.1ERM [8]No77.5 ± 0.4 85.5 ± 0.2 66.5 ± 0.3 46.1 ± 1.8 40.9 ± 0.1 63.3CORAL [65]No78.8 ± 0.6 86.2 ± 0.3 68.7 ± 0.3 47.6 ± 1.0 41.5 ± 0.1 64.6VREx [66]No78.3 ± 0.2 84.9 ± 0.6 66.4 ± 0.6 46.4 ± 0.6 33.6 ± 2.9 61.9RSC [5]No77.1 ± 0.5 85.2 ± 0.9 65.5 ± 0.9 46.6 ± 1.0 38.9 ± 0.5 62.7ERM + SWAD [30]Yes79.1 ± 0.1 88.1 ± 0.1 70.6 ± 0.2 50.0 ± 0.3 46.5 ± 0.1 66.9CORAL + SWAD [30]Yes78.9 ± 0.1 88.3 ± 0.1 71.3 ± 0.1 51.0 ± 0.1 46.8 ± 0.0 67.3AdaClust [67]No78.9 ± 0.6 87.0 ± 0.3 67.7 ± 0.548.1± 0.143.3 ± 0.5 64.9MIRO + SWAD [35]Yes79.6 ± 0.2 88.4 ± 0.1 72.4 ± 0.1 52.9 ± 0.2 47.0 ± 0.0 68.1EoA [31]Yes79.188.672.552.347.468.0Model ratatouille (Greedy)[68]Yes78.7 ± 0.2 90.5 ± 0.2 73.4 ± 0.3 49.2 ± 0.9 47.7 ± 0.0 67.9Model ratatouille (Uniform)[68]Yes78.389.873.552.047.768.3SCMD (ours)Yes80.9 ± 0.2 90.1 ± 0.0 74.8 ± 0.1 51.3 ± 0.2 48.4 ± 0.0 69.1", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance evaluations: (Top) Impact of the cross-modality module in SCMD on the PACS dataset. (Bottom) SCMD performance with different strategies for selecting hard-to-learn samples on the PACS dataset. (best in bold)", "figure_data": "3", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "provides further details on the ablation studies.", "figure_data": "Preprint7.2 Analysis on TerraInc Performance", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Impact of KD on TerraIncognita. 'MA': Moving Average. 'SCMD_no_KD': variant when KD not used.", "figure_data": "AlgorithmMA Terra AvgERMNo46.1 ± 1.8SCMDYes 51.3 ± 0.2SCMD-no-KD Yes 53.1 ± 0.5", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Evaluation of SCMD's performance across various student and CLIP model architectures on the PACS dataset", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" } ]
Jixuan Leng; Yijiang Li; Haohan Wang
[ { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b0", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Shai Ben-David; John Blitzer; Koby Crammer; Alex Kulesza; Fernando Pereira; Jennifer Wortman Vaughan", "journal": "Machine learning", "ref_id": "b1", "title": "A theory of learning from different domains", "year": "2010" }, { "authors": "Shai Ben-David; John Blitzer; Koby Crammer; Fernando Pereira", "journal": "MIT Press", "ref_id": "b2", "title": "Analysis of representations for domain adaptation", "year": "2006" }, { "authors": "Wang Lu; Jindong Wang; Haoliang Li; Yiqiang Chen; Xing Xie", "journal": "", "ref_id": "b3", "title": "Domain-invariant feature exploration for domain generalization", "year": "2022" }, { "authors": "Zeyi Huang; Haohan Wang; Eric P Xing; Dong Huang", "journal": "Springer", "ref_id": "b4", "title": "Self-challenging improves cross-domain generalization", "year": "2020" }, { "authors": "Zeyi Huang; Haohan Wang; Dong Huang; Yong ; Jae Lee; Eric P Xing", "journal": "", "ref_id": "b5", "title": "The two dimensions of worst-case training and the integrated effect for out-of-domain generalization", "year": "2022" }, { "authors": "Ishaan Gulrajani; David Lopez-Paz", "journal": "", "ref_id": "b6", "title": "In search of lost domain generalization", "year": "2021" }, { "authors": "Vladimir N Vapnik", "journal": "Wiley-Interscience", "ref_id": "b7", "title": "Statistical Learning Theory", "year": "1998" }, { "authors": "Krikamol Muandet; David Balduzzi; Bernhard Schölkopf", "journal": "", "ref_id": "b8", "title": "Domain generalization via invariant feature representation", "year": "2013-06-21" }, { "authors": "Haoliang Li; Sinno Jialin Pan; Shiqi Wang; Alex C Kot", "journal": "IEEE Computer Society", "ref_id": "b9", "title": "Domain generalization with adversarial feature learning", "year": "2018" }, { "authors": "Ya Li; Xinmei Tian; Mingming Gong; Yajing Liu; Tongliang Liu; Kun Zhang; Dacheng Tao", "journal": "", "ref_id": "b10", "title": "Deep domain generalization via conditional invariant adversarial networks", "year": "2018" }, { "authors": "Haohan Wang; Aaksha Meghawat; Louis-Philippe Morency; Eric P Xing", "journal": "IEEE", "ref_id": "b11", "title": "Select-additive learning: Improving generalization in multimodal sentiment analysis", "year": "2017" }, { "authors": "Kei Akuzawa; Yusuke Iwasawa; Yutaka Matsuo", "journal": "Springer", "ref_id": "b12", "title": "Adversarial invariant feature learning with accuracy constraint for domain generalization", "year": "2019" }, { "authors": "Yu Ding; Lei Wang; Bin Liang; Shuming Liang; Yang Wang; Fang Chen", "journal": "", "ref_id": "b13", "title": "Domain generalization by learning and removing domain-specific features", "year": "2022" }, { "authors": "Beining Han; Chongyi Zheng; Harris Chan; Keiran Paster; Michael R Zhang; Jimmy Ba", "journal": "", "ref_id": "b14", "title": "Learning domain invariant representations in goal-conditioned block mdps", "year": "2021-12-06" }, { "authors": "Ruoyu Wang; Mingyang Yi; Zhitang Chen; Shengyu Zhu", "journal": "IEEE", "ref_id": "b15", "title": "Out-of-distribution generalization with causal invariant transformations", "year": "2022" }, { "authors": "Rang Meng; Xianfeng Li; Weijie Chen; Shicai Yang; Jie Song; Xinchao Wang; Lei Zhang; Mingli Song; Di Xie; Shiliang Pu", "journal": "Springer", "ref_id": "b16", "title": "Attention diversification for domain generalization", "year": "2022" }, { "authors": "Kyungmoon 
Lee; Sungyeon Kim; Suha Kwak", "journal": "Springer", "ref_id": "b17", "title": "Cross-domain ensemble distillation for domain generalization", "year": "2022" }, { "authors": "Haohan Songwei Ge; Amir Wang; Eric Alavi; Ziv Xing; -Joseph Bar", "journal": "Journal of Computational Biology", "ref_id": "b18", "title": "Supervised adversarial alignment of single-cell rna-seq data", "year": "2021" }, { "authors": "Shiv Shankar; Vihari Piratla; Soumen Chakrabarti; Siddhartha Chaudhuri; Preethi Jyothi; Sunita Sarawagi", "journal": "", "ref_id": "b19", "title": "Generalizing across domains via cross-gradient training", "year": "2018-05-03" }, { "authors": "Xiangyu Yue; Yang Zhang; Sicheng Zhao; Alberto L Sangiovanni-Vincentelli; Kurt Keutzer; Boqing Gong", "journal": "IEEE", "ref_id": "b20", "title": "Domain randomization and pyramid consistency: Simulation-to-real generalization without accessing target domain data", "year": "2019-11-02" }, { "authors": "Rui Gong; Wen Li; Yuhua Chen; Luc Van Gool", "journal": "Computer Vision Foundation / IEEE", "ref_id": "b21", "title": "DLOW: domain flow for adaptation and generalization", "year": "2019" }, { "authors": "Kaiyang Zhou; Yongxin Yang; Timothy M Hospedales; Tao Xiang", "journal": "AAAI Press", "ref_id": "b22", "title": "Deep domain-adversarial image generation for domain generalisation", "year": "2020" }, { "authors": "Jiaxing Huang; Dayan Guan; Aoran Xiao; Shijian Lu", "journal": "Computer Vision Foundation / IEEE", "ref_id": "b23", "title": "FSDR: frequency space domain randomization for domain generalization", "year": "2021" }, { "authors": "Zhuo Wang; Zezheng Wang; Zitong Yu; Weihong Deng; Jiahong Li; Tingting Gao; Zhongyuan Wang", "journal": "", "ref_id": "b24", "title": "Domain generalization via shuffled style assembly for face anti-spoofing", "year": "2022" }, { "authors": "Jindong Wang; Cuiling Lan; Chang Liu; Yidong Ouyang; Tao Qin; Wang Lu; Yiqiang Chen; Wenjun Zeng; Philip Yu", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b25", "title": "Generalizing to unseen domains: A survey on domain generalization", "year": "2022" }, { "authors": "W Muhammad Ghifary; Mengjie Bastiaan Kleijn; David Zhang; Balduzzi", "journal": "IEEE Computer Society", "ref_id": "b26", "title": "Domain generalization for object recognition with multi-task autoencoders", "year": "2015" }, { "authors": "Yaroslav Ganin; Victor S Lempitsky", "journal": "", "ref_id": "b27", "title": "Unsupervised domain adaptation by backpropagation", "year": "2015-07-11" }, { "authors": "Da Li; Yongxin Yang; Yi-Zhe Song; Timothy M Hospedales", "journal": "AAAI Press", "ref_id": "b28", "title": "Learning to generalize: Meta-learning for domain generalization", "year": "2018" }, { "authors": "Junbum Cha; Sanghyuk Chun; Kyungjae Lee; Han-Cheol Cho; Seunghyun Park; Yunsung Lee; Sungrae Park", "journal": "", "ref_id": "b29", "title": "SWAD: domain generalization by seeking flat minima", "year": "2021-12-06" }, { "authors": "Devansh Arpit; Huan Wang; Yingbo Zhou; Caiming Xiong", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b30", "title": "Ensemble of averages: Improving model selection and boosting performance in domain generalization", "year": "2022" }, { "authors": "Xin Zhang; Shixiang Shane Gu; Yutaka Matsuo; Yusuke Iwasawa", "journal": "", "ref_id": "b31", "title": "Domain prompt learning for efficiently adapting clip to unseen domains", "year": "2021" }, { "authors": "Kaiyang Zhou; Jingkang Yang; Chen Change Loy; Ziwei Liu", 
"journal": "International Journal of Computer Vision", "ref_id": "b32", "title": "Learning to prompt for vision-language models", "year": "2022" }, { "authors": "Ziyue Li; Kan Ren; Xinyang Jiang; Bo Li; Haipeng Zhang; Dongsheng Li", "journal": "", "ref_id": "b33", "title": "Domain generalization using pretrained models without fine-tuning", "year": "2022" }, { "authors": "Junbum Cha; Kyungjae Lee; Sungrae Park; Sanghyuk Chun", "journal": "Springer", "ref_id": "b34", "title": "Domain generalization by mutual-information regularization with pre-trained models", "year": "2022" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "PMLR", "ref_id": "b35", "title": "Learning transferable visual models from natural language supervision", "year": "2021-07" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc V Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig", "journal": "PMLR", "ref_id": "b36", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021-07" }, { "authors": "Jinyu Yang; Jiali Duan; Son Tran; Yi Xu; Sampath Chanda; Liqun Chen; Belinda Zeng; Trishul Chilimbi; Junzhou Huang", "journal": "", "ref_id": "b37", "title": "Vision-language pre-training with triple contrastive learning", "year": "2022" }, { "authors": "Jianwei Yang; Chunyuan Li; Pengchuan Zhang; Bin Xiao; Ce Liu; Lu Yuan; Jianfeng Gao", "journal": "", "ref_id": "b38", "title": "Unified contrastive learning in image-text-label space", "year": "2022" }, { "authors": "Lewei Yao; Runhui Huang; Lu Hou; Guansong Lu; Minzhe Niu; Hang Xu; Xiaodan Liang; Zhenguo Li; Xin Jiang; Chunjing Xu", "journal": "", "ref_id": "b39", "title": "FILIP: fine-grained interactive language-image pre-training", "year": "2022" }, { "authors": "Haoxuan You; Luowei Zhou; Bin Xiao; Noel Codella; Yu Cheng; Ruochen Xu; Shih-Fu Chang; Lu Yuan", "journal": "Springer", "ref_id": "b40", "title": "Learning visual representation from modality-shared contrastive language-image pre-training", "year": "2022" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Fei-Fei Li", "journal": "IEEE Computer Society", "ref_id": "b41", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009-06-25" }, { "authors": "Yu Cheng; Duo Wang; Pan Zhou; Tao Zhang", "journal": "", "ref_id": "b42", "title": "A survey of model compression and acceleration for deep neural networks", "year": "2017" }, { "authors": "Chuanqi Tan; Fuchun Sun; Tao Kong; Wenchang Zhang; Chao Yang; Chunfang Liu", "journal": "Springer", "ref_id": "b43", "title": "A survey on deep transfer learning", "year": "2018" }, { "authors": "Adriana Romero; Nicolas Ballas; Samira Ebrahimi Kahou; Antoine Chassang; Carlo Gatta; Yoshua Bengio", "journal": "", "ref_id": "b44", "title": "Fitnets: Hints for thin deep nets", "year": "2015" }, { "authors": "Yoshua Bengio; Aaron Courville; Pascal Vincent", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b45", "title": "Representation learning: A review and new perspectives", "year": "2013" }, { "authors": "Sergey Zagoruyko; Nikos Komodakis", "journal": "", "ref_id": "b46", "title": "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer", "year": "2017" }, { "authors": "Jangho Kim; Seonguk Park; Nojun 
Kwak", "journal": "", "ref_id": "b47", "title": "Paraphrasing complex network: Network compression via factor transfer", "year": "2018-12-03" }, { "authors": "Byeongho Heo; Jeesoo Kim; Sangdoo Yun; Hyojin Park; Nojun Kwak; Jin Young Choi", "journal": "IEEE", "ref_id": "b48", "title": "A comprehensive overhaul of feature distillation", "year": "2019-11-02" }, { "authors": "Ying Zhang; Tao Xiang; Timothy M Hospedales; Huchuan Lu", "journal": "IEEE Computer Society", "ref_id": "b49", "title": "Deep mutual learning", "year": "2018" }, { "authors": "Jangho Kim; Seonguk Park; Nojun Kwak", "journal": "", "ref_id": "b50", "title": "Paraphrasing complex network: Network compression via factor transfer", "year": "2018-12-03" }, { "authors": "Hanting Chen; Yunhe Wang; Chang Xu; Chao Xu; Dacheng Tao", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b51", "title": "Learning student networks via feature embedding", "year": "2020" }, { "authors": "Peyman Passban; Yimeng Wu; Mehdi Rezagholizadeh; Qun Liu", "journal": "AAAI Press", "ref_id": "b52", "title": "ALP-KD: attention-based layer projection for knowledge distillation", "year": "2021" }, { "authors": "Xiaobo Wang; Tianyu Fu; Shengcai Liao; Shuo Wang; Zhen Lei; Tao Mei", "journal": "Springer", "ref_id": "b53", "title": "Exclusivity-consistency regularized knowledge distillation for face recognition", "year": "2020" }, { "authors": "Zehao Huang; Naiyan Wang", "journal": "", "ref_id": "b54", "title": "Like what you like: Knowledge distill via neuron selectivity transfer", "year": "2017" }, { "authors": "Yuntao Chen; Naiyan Wang; Zhaoxiang Zhang", "journal": "AAAI Press", "ref_id": "b55", "title": "Darkrank: Accelerating deep metric learning via cross sample similarities transfer", "year": "2018" }, { "authors": "Nikolaos Passalis; Maria Tzelepi; Anastasios Tefas", "journal": "IEEE", "ref_id": "b56", "title": "Heterogeneous knowledge distillation using information flow modeling", "year": "2020" }, { "authors": "Nikolaos Passalis; Maria Tzelepi; Anastasios Tefas", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b57", "title": "Probabilistic knowledge transfer for lightweight deep representation learning", "year": "2020" }, { "authors": "Kunran Xu; Lai Rui; Yishi Li; Lin Gu", "journal": "Springer", "ref_id": "b58", "title": "Feature normalized knowledge distillation for image classification", "year": "2020" }, { "authors": "Junjie Liu; Dongchao Wen; Hongxing Gao; Wei Tao; Tse-Wei Chen; Kinya Osa; Masami Kato", "journal": "", "ref_id": "b59", "title": "Knowledge representing: efficient, sparse representation of prior knowledge for knowledge distillation", "year": "2019" }, { "authors": "Yufan Liu; Jiajiong Cao; Bing Li; Weiming Hu; Jingting Ding; Liang Li", "journal": "", "ref_id": "b60", "title": "Cross-architecture knowledge distillation", "year": "2022" }, { "authors": "Jonathon Byrd; Zachary Chase Lipton", "journal": "PMLR", "ref_id": "b61", "title": "What is the effect of importance weighting in deep learning?", "year": "2019-06-15" }, { "authors": "Haw-Shiuan Chang; Erik G Learned-Miller; Andrew Mccallum", "journal": "", "ref_id": "b62", "title": "Active bias: Training more accurate neural networks by emphasizing high variance samples", "year": "2017" }, { "authors": "Angelos Katharopoulos; François Fleuret", "journal": "PMLR", "ref_id": "b63", "title": "Not all samples are created equal: Deep learning with importance sampling", "year": "2018" }, { "authors": "Baochen Sun; Kate 
Saenko", "journal": "Springer", "ref_id": "b64", "title": "Deep coral: Correlation alignment for deep domain adaptation", "year": "2016" }, { "authors": "David Krueger; Ethan Caballero; Jörn-Henrik Jacobsen; Amy Zhang; Jonathan Binas; Dinghuai Zhang; Rémi Le Priol; Aaron C Courville", "journal": "PMLR", "ref_id": "b65", "title": "Out-of-distribution generalization via risk extrapolation (rex)", "year": "2021-07" }, { "authors": "Xavier Thomas; Dhruv Mahajan; Alex Pentland; Abhimanyu Dubey", "journal": "", "ref_id": "b66", "title": "Adaptive methods for aggregated domain generalization", "year": "2021" }, { "authors": "Alexandre Ramé; Kartik Ahuja; Jianyu Zhang; Matthieu Cord; Léon Bottou; David Lopez-Paz", "journal": "", "ref_id": "b67", "title": "Recycling diverse models for out-of-distribution generalization", "year": "2022" }, { "authors": "Chen Fang; Ye Xu; Daniel N Rockmore", "journal": "IEEE Computer Society", "ref_id": "b68", "title": "Unbiased metric learning: On the utilization of multiple datasets and web images for softening bias", "year": "2013" }, { "authors": "Da Li; Yongxin Yang; Yi-Zhe Song; Timothy M Hospedales", "journal": "IEEE Computer Society", "ref_id": "b69", "title": "Deeper, broader and artier domain generalization", "year": "2017" }, { "authors": "Hemanth Venkateswara; Jose Eusebio; Shayok Chakraborty; Sethuraman Panchanathan", "journal": "IEEE Computer Society", "ref_id": "b70", "title": "Deep hashing network for unsupervised domain adaptation", "year": "2017" }, { "authors": "Sara Beery; Grant Van Horn; Pietro Perona", "journal": "", "ref_id": "b71", "title": "Recognition in terra incognita", "year": "2018" }, { "authors": "Xingchao Peng; Qinxun Bai; Xide Xia; Zijun Huang; Kate Saenko; Bo Wang", "journal": "IEEE", "ref_id": "b72", "title": "Moment matching for multisource domain adaptation", "year": "2019-11-02" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "IEEE Computer Society", "ref_id": "b73", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Nanyang Ye; Kaican Li; Lanqing Hong; Haoyue Bai; Yiting Chen; Fengwei Zhou; Zhenguo Li", "journal": "", "ref_id": "b74", "title": "Ood-bench: Benchmarking and understanding out-of-distribution generalization datasets and algorithms", "year": "2021" }, { "authors": "Martin Arjovsky; Léon Bottou; Ishaan Gulrajani; David Lopez-Paz", "journal": "", "ref_id": "b75", "title": "Invariant risk minimization", "year": "2019" }, { "authors": "Adriana Romero; Nicolas Ballas; Samira Ebrahimi Kahou; Antoine Chassang; Carlo Gatta; Yoshua Bengio", "journal": "", "ref_id": "b76", "title": "Fitnets: Hints for thin deep nets", "year": "2015" }, { "authors": "Byeongho Heo; Minsik Lee; Sangdoo Yun; Jin Young Choi", "journal": "AAAI Press", "ref_id": "b77", "title": "Knowledge distillation with adversarial samples supporting decision boundary", "year": "2019-01-27" }, { "authors": "Wonpyo Park; Dongju Kim; Yan Lu; Minsu Cho", "journal": "Computer Vision Foundation / IEEE", "ref_id": "b78", "title": "Relational knowledge distillation", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 211.43, 619.18, 329.24, 52.91 ], "formula_id": "formula_0", "formula_text": "θKD = arg min θ (xi,yi)∈(X,Y) L(ϕ(x i ), f (x i ; θ)) +λ 1 H(f (x i ; θ), y) +λ 2 R(ϕ, f, p, q, x i , y i ) (1)" }, { "formula_coordinates": [ 4, 190.06, 293.08, 350.61, 9.68 ], "formula_id": "formula_1", "formula_text": "S = x i : x i ∈ X, i ∈ I where I = i : H(f (x i ), y i ) ≥ τ(2)" }, { "formula_coordinates": [ 4, 258.57, 604.59, 94.86, 11.88 ], "formula_id": "formula_2", "formula_text": "L CM = D KL (p t ||(p s ) ′ )" }, { "formula_coordinates": [ 4, 224.52, 622.36, 316.15, 25.01 ], "formula_id": "formula_3", "formula_text": "(p s ) ′ = σ(γ • P (e(x i ; θ e )) • ϕ text ; T = t)(3)" }, { "formula_coordinates": [ 5, 207.47, 164.95, 197.88, 39.17 ], "formula_id": "formula_4", "formula_text": "θSCMD = arg min θ (xi,yi)∈(X,Y) λ 1 H(f (x i ; θ), y i ) +λ 2 L logits + λ 3 L CM" }, { "formula_coordinates": [ 5, 213.8, 349.9, 184.4, 27.27 ], "formula_id": "formula_5", "formula_text": "r (X, Y) = 1 n (xi,yi)∈(X,Y) L(f (x i ; θ), y i )," }, { "formula_coordinates": [ 5, 234.72, 421.54, 142.56, 9.99 ], "formula_id": "formula_6", "formula_text": "r P = E (xi,yi)∼P L(f (x i ; θ), y i )," }, { "formula_coordinates": [ 5, 159.61, 483.51, 48.2, 9.68 ], "formula_id": "formula_7", "formula_text": "y i = f (x i )." }, { "formula_coordinates": [ 5, 196.23, 681.37, 219.54, 12.66 ], "formula_id": "formula_8", "formula_text": "r P ′ ) ≤ r (X, Y) P + tv(P ′ , P) + ξ(n (X,Y) P , Θ, δ)" }, { "formula_coordinates": [ 9, 201.39, 354.41, 39.85, 8.06 ], "formula_id": "formula_9", "formula_text": "Algorithm" }, { "formula_coordinates": [ 15, 72, 225.39, 351.77, 84.65 ], "formula_id": "formula_10", "formula_text": "| ≤ r(P) + | σ(x) -σ ′ (x) || f (x; θ) -y | dx ≤ r(P) + tv(P ′ , P) Lemma A.2." }, { "formula_coordinates": [ 15, 196.23, 326.54, 219.54, 12.66 ], "formula_id": "formula_11", "formula_text": "r P ′ ) ≤ r (X, Y) P + tv(P ′ , P) + ξ(n (X,Y) P , Θ, δ)" } ]
2023-11-26
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_2" ], "heading": "Introduction", "publication_ref": [ "b17", "b62", "b2", "b4", "b9", "b59", "b14", "b36", "b52", "b53", "b57", "b42", "b50", "b13", "b26", "b41", "b57", "b45", "b33", "b67", "b48", "b36", "b52", "b60", "b61", "b37", "b36" ], "table_ref": [], "text": "Vision Transformers (ViTs) significantly improve visual recognition tasks, including image classification [18,63], self-supervised learning [3,5,10,60], object detection [15,37], and semantic segmentation [53,54,58]. One crucial module that contributes significantly to the performance improvement is the multi-head self-attention (MHSA), which enables network designing with the long-range dependency modeling [43,51], global receptive field, higher flexibil- Conceptual comparisons between the self-attention and our proposed Group-Mix Attention (GMA). In (a) and (b), we showcase with 7×7 single-dimensional tokens. Unlike the selfattention that computes correlations between pairs of individual tokens, GMA creates proxies of token groups (e.g., nine adjacent tokens) via group aggregators, and then computes the group-togroup correlations via proxies. In (c) and (d), we show the concrete computation of GMA with seven four-dimensional tokens, so that N=7 and d=4. To compute the correlations between two highlighted groups that each consist of three tokens, we aggregate them into two proxies for further multiplication. The group aggregation can be effectively implemented via sliding-window-based operators.\nity [14,27] and stronger robustness [42,58]. Typically, the term \"attention\" (i.e., the Q-K-V attention) means linearly re-combining Value with the correlations between the Query and Key, which are usually computed between pairs of individual tokens. However, it's empirically found that there is a major limitation in Q-K-V self-attention, which is shown in Figure 1: the attention map only describes the correlations between each individual token pairs at one single granularity (Figure 1(a)), and multiplying the attention map with the Value only linearly re-combines the individual tokens. This framework obviously does not consider the correlations among different token groups (i.e., neighborhoods) at various granularities. For one specific example, self-attention does not correlate the nine tokens at the top-left corner as a whole to those groups at the bottom-right. This limitation, though obvious, has been unintentionally neglected because the Q-K-V computation seems to be capable enough of modeling the mappings from input to output, as any entry in the output attends to each individual entry in the input. Figure 2. Performance of GroupMixFormer compared to the state-of-the-art models. We evaluate GroupMixFormer on standard benchmarks, including classification on ImageNet-1K [46] without extra data in (a), object detection on COCO [34] in (b), and semantic segmentation on ADE20K [68] in (c). The computational complexity is denoted as the geometry area. GroupMixFormer performs favorably against ViT and CNN models including DeiT [49], Swin [37], PVT [53], CoaT [61], Focal [62], ConvNeXt [38], etc.\nIn this study, we propose a more comprehensive modeling approach, referred to as Group-Mix Attention (GMA), to alleviate the aforementioned limitations of the widely used Q-K-V self-attention mechanism. GMA splits the tokens into uniform and distinct segments and substitutes some individual tokens with group proxies generated via group aggregators, as shown in Figure 1 (b). 
Afterward, we compute the attention map with the Query and Key (where some tokens have been replaced by group proxies) and use it to re-combine both the group proxies together with individual tokens in Value. The proposed GMA has some appealing advantages: (1) GMA is capable of modeling correlations among not only individual tokens but also groups of tokens. Different kinds of attentions are mixed to obtain a better understanding of the tokens from a comprehensive aspect. The token-to-token, token-to-group, and group-to-group correlations are simultaneously modeled within each single layer for higher representational capabilities. (2) GMA is efficient and easy to implement. The group-to-group correlation is computed via aggregating the groups into proxy tokens and then computing the correlation between proxies (as shown in Figure 3). Such a process can be efficiently implemented with sliding-window-based operations, e.g., pooling and convolution.\nBuilding on GMA, we develop a hierarchical vision transformer, GroupMixFormer, which can serve as visual backbones for various tasks. We evaluate GroupMixFormers on standard visual recognition tasks, including image classification, object detection, and semantic segmentation, and performed comparisons with advanced models as shown in Figure 2. The results demonstrate the effectiveness of our designs. For example, a small GroupMixFormer instance (with 22.4M parameters) achieves 83.4% Top-1 accuracy on ImageNet-1K, comparable to the much larger Swin-B [37] (88M parameters). Additionally, GroupMixFormer also performs favorably against state-of-the-art ViTs and CNNs on object detection and semantic segmentation. On the ADE20K dataset, GroupMixFormer-B achieves 51.2% mIoU with a backbone size of 46M. Extensive experiments also demonstrate that effectively modeling the correlations among tokens and diverse groups is crucial for the success of GMA. Such a design paradigm can also be readily adopted into other ViT architectures as an advanced replacement for traditional self-attention." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Vision Transformer", "publication_ref": [ "b17", "b50", "b22", "b25", "b21", "b48", "b4", "b9", "b48", "b32", "b43", "b36", "b52", "b5", "b7", "b65" ], "table_ref": [], "text": "Vision Transformer (ViT) [18] first introduces the Transformers into computer vision. Unlike CNN-based architectures, ViT utilizes sequentially-connected Transformer encoders [51] on the visual token sequence. The multihead self-attention (MHSA) mechanism employed in ViTs captures global dependencies effectively, giving them an edge over CNN neural networks [23,26] in both supervised [22,49] and self-supervised scenarios [5,10]. To advance the general performance of ViTs, a series of researches have been conducted, including data-efficient training [49], token re-designing and selection [33,44], pyramid structures [37,53], modulation on self-attention mechanism [6,8,66], etc. Most of these works adopt the original Q-K-V computation, which is found to be effective in processing visual information. In this work, we aim to further advance the general performance of ViTs by introducing Group-Mix Attention (GMA). Unlike prior arts, GMA is capable of modeling the correlations among not only individual tokens but also groups of tokens within each single Transformer encoder layer, thus leading to comprehensive representational capabilities." 
}, { "figure_ref": [], "heading": "Comprehensive Modeling of Self-Attention", "publication_ref": [ "b35", "b36", "b61", "b16", "b6", "b29", "b30", "b44", "b47", "b51", "b53", "b55", "b60" ], "table_ref": [], "text": "To enhance the representational abilities of self-attention, several approaches have been explored from different perspectives as shown in the following. (1) Introducing locality has been demonstrated effective, as exemplified by Swin Transformers. [36,37] and Focal Transformer [62], which conduct attention computation within local windows. (2) Computing correlations with pre-defined patterns can enhance the capability of self-attention, as demonstrated by the CSWin Transformer [17] and Pixelfly-Mixer [7], both of which attempt to compute attention with pre-defined and carefully-designed patterns to realize more comprehensive modeling. (3) Other network architectures [30,31,45,48,52,54,56,61] have also been investigated for the modeling of more comprehensive visual patterns. In this work, we focus on the limitations caused by token-to-token correlations at one single granularity and propose an advanced attention mechanism (i.e., GMA) that constructs a more comprehensive prototype of self-attention, which clearly distinguishes our method from previous approaches." }, { "figure_ref": [], "heading": "GroupMix Attention and GroupMixFormer", "publication_ref": [], "table_ref": [], "text": "We introduce the motivation behind the high-level idea in Sec. 3.1, elaborate on the structural designs in Sec. 3.2, and describe the architectural configurations in Sec. 3.3." }, { "figure_ref": [], "heading": "Motivation: from Individual to Groups", "publication_ref": [ "b36", "b52" ], "table_ref": [], "text": "We discuss the limitations of self-attention starting from its vanilla formulation. Let X ∈ R N×d be the input tokens, where N is the token number and d is the dimension. The output of vanilla self-attention is, Y = Softmax(XX T )X .\n(1)\nNote that we ignore the normalization factor 1 √ d for brevity. Intuitively, by the definition of matrix multiplication, XX T calculates the similarity/correlation between each two of the tokens. The output of the softmax function A ∈ R N×N is called an attention map. The multiplication AX means linearly re-combining the tokens according to the attention map at each location.\nWe note a limitation of this form. There may exist certain patterns (i.e., group patterns) that require treating some specific tokens as a group with diverse granularities. However, self-attention lacks an explicit mechanism for modeling such patterns, as it only considers correlations between pairs of individual tokens at a single granularity (i.e., individual patterns). In this paper, we seek to utilize both individual patterns and group patterns for comprehensive modeling. Unlike prior approaches that model distinct patterns across multiple stages (typically four stages in a Transformer backbone), our approach introduces a novel method of encoding this modeling process within each individual layer at each stage. Specifically, for group patterns, we seek to correlate some neighborhoods of tokens to the other neighborhoods. This paper proposes to achieve this by generating group proxies in Query, Key, and Value, and performing the Q-K-V computation with proxies, which is described in Sec. 3.2. 
We experimentally found that explicitly modeling the correlations among groups with diverse sizes and individual tokens significantly improves the performance of not only the proposed GroupMixFormer but also other ViTs with different attention modules (e.g., Swin Transformer [37] and PVT [53], as shown in Tab. 9), demonstrating that upgrading the fundamental component can benefit multiple ViTs." }, { "figure_ref": [ "fig_2" ], "heading": "GMA: Mixing Groups for Better Attention", "publication_ref": [ "b0", "b46", "b60", "b18", "b54", "b16", "b0", "b46", "b60", "b1" ], "table_ref": [], "text": "We introduce GMA to model the group patterns as aforementioned. In GMA, we generate the group proxies by replacing some entries in the Query, Key, and Value with aggregations of some whole groups, which can be efficiently implemented with sliding-window-based operations Agg(•), e.g., maxpooling, convolution and etc. Specifically, the Q/K/V entries are uniformly divided into n segments and we perform aggregation on some segments. Without loss of generality, we use X i (i ∈ [1, • • • , n]) to denote one segment (X may represent Q, K, or V) and the aggregations as Agg i (X i ). Note that the aggregator may be different for each segment. To perform attention computation, we concatenate the aggrega-\ntions Agg i (X i ), i ∈ [1, • • • , n] to produce X ′ .\nIn this way, we obtain group proxies Q ′ , K ′ , and V ′ . Afterward, we perform attention computation as introduced in [1,47,61] on the group proxies to generate the output.\nDuring the aggregation process, we maintain the feature resolution. Therefore, without reducing the spatial resolution, GMA brings fine-grained features for attention computation, which outperforms those with decreased feature sizes [19,55]. In this paper, we use depth-wise convolutions with various kernel sizes to implement aggregators Agg(•), though we find other implementations also work (as shown in Tab. 6). As the inputs of attention are now group proxies, we achieve correlating K×K tokens simultaneously (K denotes the kernel size of Agg(•), which may be different for each segment) instead of individual tokens, which is more sufficient and comprehensive for modeling correlations.\nThe idea of using sliding-window-based operations to aggregate groups into proxies, though simple, is the key to the mechanism of mixing groups of different sizes and individual tokens at various granularities, as we use a different kernel size of aggregator for each segment. Such a process can be efficiently implemented via splitting segments, feeding them through aggregators implemented with different kernel sizes, and concatenating the outputs. Moreover, inspired by [17], we also employ an identity mapping on one segment instead of an aggregator to maintain the network's abilities in modeling individual token correlations. Therefore, we can model correlations among both groups and tokens while computing the attention map. Multiplying the attention map with the Value can be viewed as re-combining the corresponding groups together with individual tokens accordingly. In each GMA block, we split Q, K, and V into five segments and use aggregators with different kernel sizes to generate group proxies on four of them, so that we can conduct attention computation on mixtures of individual tokens and group proxies of different granularities. The branches whose outputs are fed into the attention computation are referred to as the pre-attention branches. 
To construct diverse connections, the rightmost branch utilizes aggregation but without attention, which is termed the non-attention branch. A linear mapping layer is adopted to fuse the outputs from the attention and non-attention branch. For clear illustration, we use Agg 1 , Agg 2 , and Agg 3 in the pre-attention branch to denote the aggregators with kernel sizes of 3, 5, and 7, respectively, and use Agg 0 for the aggregator in the non-attention branch.\nSpecifically, following the implementation of selfattention [1,47,61], we also use three learnable linear projections to generate Q, K, and V. Afterward, we split Q/K/V uniformly into five segments, each of which participates in different computations. As shown in Figure 3 (the left part), a branch corresponds to an aforementioned segment, and the four branches whose outputs are fed into the attention computation are referred to as the pre-attention branches. In three of the pre-attention branches, we use various implementations (e.g., min-pooling, avg-pooling, max-pooling, depth-wise convolution) as the aggregator Agg(•) with different kernel sizes, which are set as 3,5,7, respectively. The results in Tab. 6 indicate that each of these implementations achieves favorable performance, which shows that aggregation is a crucial step for attention advancement while its implementation can be flexible. We adopt the depth-wise convolutions, whose results are slightly better, in our paper. We further diversify the structures by using no aggregator in the last pre-attention branch, making it an identity mapping. Apart from such a branch with attention but no aggregator, we construct another branch with an aggregator but no attention, which is referred to as the non-attention branch. Finally, the outputs are mixed by a token ensemble layer, which is simply implemented by a linear projection with normalization [2] and activation." }, { "figure_ref": [ "fig_2" ], "heading": "Architectural Configurations", "publication_ref": [ "b36", "b52", "b1", "b17", "b36", "b48", "b52", "b61", "b36", "b48", "b52" ], "table_ref": [], "text": "Building on the proposed Group-Mix Attention, we introduce a series of vision Transformers named GroupMix-Former, as shown in Figure 3. We adopt a hierarchical [37,53] topology with four stages. The first 4× patch embedding layer embeds images into tokens, which is implemented with two sequential 3×3 convolutional layers, each with a stride of 2 and another two 3×3 layers with a stride of 1. At the beginning of each last three stages, we use a 2× patch embedding, which is also implemented with a 3×3 convolution. Within each stage, we construct several encoder blocks. Apart from a GMA block introduced in the last subsection, an encoder block also contains a Feed-Forward Network (FFN), Layer Normalization [2] and identity shortcuts, following the common practice in [18,37,49,53,62]. For image classification, the final output tokens are fed into the classifier after global average pooling (GAP); for dense prediction tasks (e.g., object detection and semantic segmentation), the task-specific heads can utilize the pyramid features output by the four stages. We do not adopt positional encoding in our model since we have naturally broken the permutation invariance with the GMA aggregators.\nWe instantiate four models with different architectural configurations. The architectural hyper-parameters include the number of encoder blocks in each stage L, the embedded dimension D, and the MLP ratio R, as shown in Tab. 1. 
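To make the assembly explicit, the sketch below shows how one encoder block and one stage would be built from these hyper-parameters. Here gma_attention and gma_factory are placeholders for the GMA block of Sec. 3.2 (its pseudocode is given in Algorithm 1 of the appendix), and the pre-norm placement and GELU activation are assumptions following common ViT practice rather than details stated here.

import torch.nn as nn

class EncoderBlock(nn.Module):
    # One GroupMixFormer encoder block: GMA and an FFN, each preceded by LayerNorm
    # and wrapped with an identity shortcut.
    def __init__(self, dim, mlp_ratio, gma_attention):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = gma_attention            # placeholder for the GMA block of Sec. 3.2
        self.norm2 = nn.LayerNorm(dim)
        hidden = int(dim * mlp_ratio)
        self.ffn = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):                    # x: (B, N, dim)
        x = x + self.attn(self.norm1(x))
        x = x + self.ffn(self.norm2(x))
        return x

def make_stage(dim, mlp_ratio, depth, gma_factory):
    # Stack L = depth encoder blocks with the stage-specific D and R of Tab. 1.
    return nn.Sequential(*[EncoderBlock(dim, mlp_ratio, gma_factory(dim)) for _ in range(depth)])

A full model would chain four such stages with the patch-embedding layers described above and apply global average pooling plus a linear classifier on the last stage's tokens.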
Following the prior works [37,49,53], our models scale up from the mobile-scale GroupMixFormer-M (5.7 M) to the large-scale GroupMixFormer-L (70.3 M).\nTable 1. Architectural configurations of GroupMixFormer models. We use D, R, and L to denote the dimension of tokens, the expansion ratio of FFN, and the number of encoder blocks. We use M/T/S/B/L (mobile/tiny/small/base/large) to label models with different scales. \nH 4 × W 4 × D 1 D 1 = 40 R 1 = 4, L 1 = 3 D 1 = 80 R 1 = 4, L 1 = 4 D 1 = 80 R 1 = 4, L 1 = 2 D 1 = 200 R 1 = 2, L 1 = 8 D 1 = 240 R 1 = 4, L 1 = 8 stage 2 H 8 × W 8 × D 2 D 2 = 80 R 2 = 4, L 2 = 3 D 2 = 160 R 2 = 4, L 2 = 4 D 2 = 160 R 2 = 4, L 2 = 4 D 2 = 240 R 2 = 2, L 2 = 8 D 2 = 320 R 2 = 4, L 2 = 10 stage 3 H 16 × W 16 × D 3 D 3 = 160 R 3 = 4, L 3 = 12 D 3 = 200 R 3 = 4, L 3 = 12 D 3 = 320 R 3 = 4, L 3 = 12 D 3 = 320 R 3 = 4, L 3 = 12 D 3 = 360 R 3 = 2, L 3 = 30 stage 4 H 32 × W 32 × D 4 D 4 = 160 R 4 = 4, L 4 = 4 D 4 = 240 R 4 = 4, L 4 = 4 D 4 = 320 R 4 = 4, L 4 = 4 D 4 = 480 R 4 = 4, L 4 = 8 D 4 = 480 R 4 = 2, L 4 = 10" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b45", "b33", "b67" ], "table_ref": [], "text": "In this section, we evaluate our GroupMixFormer on standard visual recognition benchmarks including ImageNet-1K [46], MS-COCO [34], and ADE20k [68]. We present the implementation details for each scenario, quantitative comparisons to state-of-the-art vision backbones, and ablation studies in the following." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b48", "b63", "b64", "b36", "b38", "b24", "b38", "b23", "b34", "b8", "b38", "b56", "b28", "b36", "b52", "b12" ], "table_ref": [], "text": "We evaluate the image classification performance of Group-MixFormer on the ImageNet-1K dataset. We follow [49,64,65] to augment data and use the training recipe in [37]. We train GroupMixFormer for 300 epochs using an initial learning rate of 10 -3 with a 20-epoch linear warm-up. AdamW optimizer [39] is utilized with a weight decay of 0.05 and a cosine learning rate schedule. The stochastic depth drop rates [25] are set to 0.0, 0.1, 0.2, 0.4, and 0.5 for GroupMixFormer-M/T/S/B/L, respectively. For higher resolutions (e.g., 384 2 or 448 2 ), we finetune the models in another 30 epochs with the learning rate initialized as 2 × 10 -6 and a linear warm-up for 5 epochs. The finetuning process uses AdamW [39] with a weight decay of 10 -8 for optimization.\nFor object detection and instance segmentation, COCO 2017 dataset is utilized. Specifically, we employ GroupMix-Former as the backbones of Mask R-CNN [24] for object detection and segmentation, and RetinaNet [35] for detection only. All the backbones are initialized via the corresponding ImageNet pretrained models. We follow the training schedules in [9]: the initial learning rate is set to 10 -4 with a linear warm-up for 500 iterations and gradually decreases to 10 -5 and 10 -6 at the 24-th and 33-th epochs, respectively. We use AdamW [39] for both Mask R-CNN and RetinaNet, but the weight decay is 0.05 for the former and 10 -4 for the latter. Except for COCO, we also evaluate the semantic segmentation performance on ADE20k with UperNet [57] and Semantic FPN [29]. We follow [37,53] to use the public toolkit [13] for training and evaluations. The Semantic FPN is trained for 80k iterations, while the UperNet is trained for 160k iterations, both with an AdamW optimizer." 
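As a reference point, the ImageNet classification schedule above maps onto a fairly standard optimizer setup. The sketch below approximates the 20-epoch linear warm-up followed by cosine annealing using plain PyTorch utilities; the batch size, data augmentation, stochastic depth, and the exact scheduler implementation used by the authors are omitted or assumed.

import math
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

def build_classification_optimizer(model, epochs=300, warmup_epochs=20,
                                   base_lr=1e-3, weight_decay=0.05):
    # AdamW with weight decay 0.05 and an initial learning rate of 1e-3, as described above.
    optimizer = AdamW(model.parameters(), lr=base_lr, weight_decay=weight_decay)

    def lr_lambda(epoch):
        # Linear warm-up for the first 20 epochs, then cosine annealing towards zero.
        if epoch < warmup_epochs:
            return (epoch + 1) / warmup_epochs
        progress = (epoch - warmup_epochs) / max(1, epochs - warmup_epochs)
        return 0.5 * (1.0 + math.cos(math.pi * progress))

    scheduler = LambdaLR(optimizer, lr_lambda=lr_lambda)
    return optimizer, scheduler

scheduler.step() is called once per epoch; for the higher-resolution fine-tuning stage, the text instead uses 30 epochs with a 5-epoch warm-up, an initial learning rate of 2e-6, and a weight decay of 1e-8.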
}, { "figure_ref": [], "heading": "Comparisons with State-of-the-art Models", "publication_ref": [ "b27", "b19", "b16", "b37", "b36" ], "table_ref": [], "text": "Image Classification. We compare the proposed GroupMix-Former with the state-of-the-art models from the literature in Tab. 2, where all the reported results use only ImageNet-1k for training. For fair comparisons, we do not use any extra augmentations, like token-labeling [28], knowledge distillation, SAM [20], etc. We observe that GroupMix-Former consistently achieves higher Top-1 accuracies than the ViT and CNN models under similar model sizes and computational complexity constraints. Specifically, tested with a resolution of 224 2 , GroupMixFormer-S yields a top-1 accuracy of 83.4% with only 22.4M parameters, outperforming the second best ViT (CSWin-T [17]) by 0.7% and the best CNN (ConvNext-T [38]) by 1.3%. Meanwhile, GroupMixFormer-B trained with 224 × 244 images even achieves a similar accuracy with Swin-B [37], though the size of GroupMixFormer-B is only half as that of Swin-B. Moreover, GroupMixFormer shows satisfying scalability towards higher resolution. For example, finetuning with a resolution of 384 2 further improves the performance of GroupMixFormer-S to 85.0%; with around 70M parameters, our GroupMixFormer-L achieves 85.0% with a resolution of 224 2 and 86.2% with 384 2 . The results underscore the advantages of comprehensively incorporating both token-totoken and group-to-group correlations in modeling visual patterns. Additionally, attention responses from different aggregators are presented in the appendix to support the notion that there exist some patterns so that some tokens should be handled as a whole for classification.\nMoreover, we empirically observed that implementing depth-wise convolutions as the aggregators in GMA does lead to a slowdown in the inference speed. The throughput is reported in the appendix. However, this could be improved with more efficient aggregators (e.g., avg-pooling) and implementing engineering optimizations, such as \"torch.compile\". We will explore the optimization of the model's real-world speed in future research.\nObject Detection. Tab. 3 shows the object detection results on COCO with Mask R-CNN and RetinaNet detectors. With Mask R-CNN, GroupMixFormer achieves higher average precision under similar model parameters. Specifically, GroupMixFormer-T performs 1.0% higher (i.e., 47.5% v.s. " }, { "figure_ref": [ "fig_2" ], "heading": "Ablation Studies", "publication_ref": [ "b0", "b4", "b5", "b8", "b33", "b23", "b34", "b45", "b8", "b67", "b56", "b28", "b45", "b12", "b22", "b52", "b11", "b36", "b48", "b52", "b4", "b6", "b8", "b36", "b52" ], "table_ref": [], "text": "In this sub-section, we conduct ablation studies to analyze the key designs of GroupMixFormer. (1) We first analyze the necessity of the aggregators by changing the structural designs of GMA. (2) We experiment with various implementations of aggregators to see if other sliding-window-based operations, except for convolution, also work. (3) We validate that the performance gains of GroupMixFormer do not stem from the macrostructures. ( 4) The optimal configurations of the kernel sizes have been explored. (5) We conduct experiments to verify that GMA is not merely a trivial combination of convolution and self-attention. 
(6) We plug GMA Blocks into the other popular ViT architectures to verify if the superior performance of GroupMixFormer is merely due to the architectural designs (e.g., overlapping embedding layers and numbers of blocks within each stage). For image classification, we train GroupMixFormer-T for 300 epochs on ImageNet-1k (224 2 ) and test with the validation set. For object detection and semantic segmentation, we train Mask R-CNN with the 1× schedule [9] on COCO.\nGroup aggregators are necessary. Tab. 5 shows the results of ablating the aggregators. We first construct a GroupMixFormer-T baseline by replacing all of the five branches in GMA Blocks with identity mappings, so that the block degrades into a regular self-attention module. In the first group of experiments, we restore the aggregators in the non-attention branch (Agg 0 ) or the three pre-attention branches (Agg 1 , Agg 2 and Agg 3 ). Every model is trained Table 3. Object detection and instance segmentation on COCO 2017 [34] with Mask R-CNN [24] and RetinaNet [35]. All the models are pre-trained on ImageNet-1K [46]. 'P' represents the parameter number, and 'MS' denotes multi-scale training. The 3x schedule strictly follows [9]. Table 4. Semantic segmentation on ADE20k [68] with Uper-Net [57] and Semantic FPN [29]. All the models are pre-trained on ImageNet-1K [46] and finetuned with task-specific heads. We follow the standard training and evaluation processes in [13] for fair comparisons.\nBackbone Semantic FPN UperNet #Param(M) mIoU(%)\n#Param(M) mIoU(%) ResNet18 [23] 15.5 32.9 --PVT-Tiny [53] 17.0 35.7 --XCiT-T12/16 [ from scratch with the same configurations as described in Sec. 4.1. It could be observed that the aggregators are all critical, as they improve the top-1 accuracy by 0.4% and 1.0%, respectively. Moreover, the second group of experiments in Tab. 5 shows that using aggregators in all of the three pre-attention branches yields better performance than using any single one. Similar experimental results are observed in object detection and semantic segmentation as well. Using all the aggregators improves the baseline performance by a certain margin (e.g., +0.7% AP b and +0.5% AP m ). These results indicate that modeling correlations in a more comprehensive manner is able to provide fine-grained visual representations to benefit dense prediction scenarios.\nWe then analyze the impact of various kernel sizes of preattention aggregators on performance. Without altering the non-attention branch, we replace all of the pre-attention aggregators with either Agg 1 (3×3 convolution), Agg 2 (5×5) or Agg 3 (7×7). The second set of results in Tab. 5 indicates that the utilization of any group aggregators enhances classification and dense prediction performance, with a diverse combination of 3×3, 5×5, and 7×7 yielding the most optimal results. Specifically, GroupMixFormer-T equipped with diverse aggregators outperforms the baseline by +1.6% classification accuracy, +1.5% AP b in object detection, and +1.0% AP m in semantic segmentation, which suggests that modeling the correlations among groups of diverse sizes is the key to performance boost.\nDepthwise Convolutions are effective aggregators. Note that the implementations of aggregators Agg(•) could be various. Tab. 6 shows our results regarding the effects of different aggregator implementations (e.g., depthwise convolution [12], max-pooling, or average-pooling). 
It's empirically observed that the aggregators implemented by depthwise convolution achieve the slightly better performance (82.5% Top-1 accuracy on classification, 42.5% AP b for detection, Table 5. Ablation studies on the group aggregators in GMA Block. We use Agg 1 , Agg 2 , and Agg 3 to denote aggregators (in the pre-attention branch) with kernel sizes of 3, 5, and 7, respectively, and Agg 0 to denote the aggregator (in the non-attention branch) as shown in Figure 3. We report the Top-1 accuracy on ImageNet-1k together with AP m and AP b on COCO. and 39.7% AP m for instance segmentation with Mask R-CNN). Compared with the max-pooling and min-pooling operations, convolutional aggregators may take advantage of involving more learnable parameters for computing correlations, thus achieving better performances.\nPerformance gains are not derived from macrostructures. Compared with the representative works [37,49,53], our GroupMixFormer is deeper and has different implementations of patch embedding. In order to justify that the performance gains are not simply due to a better combination of architectural hyper-parameters (including the dimensions of tokens, expansion ratios, and layer depths as introduced in Tab. 1), we replace the GMA Blocks in GroupMixFormer-T with the Swin-attention or PVT-attention. The results in Tab. 7 show that simply replacing the GMA causes a significant performance drop, which justifies that the performance gain is due to the advanced attention mechanism instead of the architecture.\nOptimal configurations on the kernel sizes of aggrega-tors. To find the optimal configuration, we undertake two approaches: (1) enlarging the kernel size, and (2) altering the kernel configurations in varying orders. The first approach entails increasing the kernel sizes from (3,5,7) to (5,7,9). For the second approach, we deploy aggregators with larger kernels in the shallow layers and smaller kernels in the deeper layers, as well as in a reversed configuration. However, as demonstrated in Tab. 8, neither of these modifications proved as effective as the configuration we ultimately adopted.\nGMA is not merely a trivial combination of convolution and self-attention. We conduct further experiments to validate that our proposed GroupMixFormer is essentially different from a simple combination of convolution and selfattention. Specifically, we remove all the group aggregators from GroupMixFormer-T and insert a group of convolutional layers organized in the same manner (i.e., a combination of parallel identity mapping, 3×3, 5×5 and 7×7 layers) before the whole self-attention module. The accuracy drops by 1.0% in the Top-1 accuracy (81.5% v.s. 82.5%).\nAggregator is an advanced universal building block that could be applied to the other ViTs. We may also incorporate aggregators into representative ViTs (e.g., Swin [37] and PVT [53]) by simply inserting the them into their original attention modules to process their Query, Key, and Value. The results in Tab. 9 show that such a strategy generally boosts ViTs by a clear margin. For example, PVT-Small with aggregators achieves 80.6% Top-1 accuracy, which is 0.8% higher than its original result. It indicates that the proposed aggregators advance ViTs by modeling the group correlations and thus leading to a comprehensive understanding of the tokens." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b36" ], "table_ref": [], "text": "In this paper, we proposed an advanced attention mechanism, named Group-Mix Attention (GMA). 
In contrast to the popular multi-head self-attention (MHSA) that only models the correlations among individual tokens, the proposed GMA utilizes the group aggregators to simultaneously capture the token-to-token, token-to-group, and group-to-group correlations. We proposed GroupMixFormer based on GMA and instantiated a series of practical visual backbones with different sizes. Extensive experiments on the standard visual recognition benchmarks (including image classification, object detection, and semantic segmentation) have validated the effectiveness of the proposed GMA and GroupMixFormer.\nAlgorithm 1 PyTorch-style Pseudocode of GMA Block. and Swin [37]). Besides, with fewer parameters (68.6M v.s. 86.7M), our GroupMixFormer-T obtains a comparable performance with Focal-T (around 51.5% AP b ). Our GroupMixFormer-S achieves new state-of-the-art performance with an AP b of 51.9%." }, { "figure_ref": [ "fig_4" ], "heading": "D. Attention Visualization", "publication_ref": [], "table_ref": [], "text": "We present attention response maps in Figure 4. We show input images in (a), and the attention response maps from the ensemble layer in (b). Besides, the response maps of the outputs from the pre-attention branches and non-attention branch are shown in (c) to (g), respectively. We observe that applying self-attention on individual tokens sometimes fails to attend to the object, as shown in (c). In such a case, calculating the correlations among the group proxies, which are generated by the aggregators, may help. For example, as shown in the third row, calculating correlations among the groups, which are processed by aggregators with kernel sizes of 3 and 7, succeed in focusing on the dog, while modeling the token-to-token correlations in (c) focuses more on the background. These results indicate that there exist some patterns so that some tokens should be handled as a whole to capture the object features. In GMA, the representations captured by different aggregators are combined. It validates that comprehensively modeling the token-to-token, token-togroup, and group-to-group correlations leads to better vision recognition." }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "This appendix includes detailed illustrations of the algorithm, training configurations and additional experiments. In Algorithm 1, we present the PyTorch-style pseudocode of GMA Block for easy implementation. In Appendix A, we detail the attention computation and elaborate on the training configurations for image classification, object detection, and instance/semantic segmentation. Besides, in Appendix C and Appendix D, we present additional experiments and visualizations to further validate GroupMixFormer's effectiveness, respectively." }, { "figure_ref": [], "heading": "A. Implementation Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1. Attention Computation", "publication_ref": [ "b0", "b1", "b2", "b3", "b0", "b46", "b60" ], "table_ref": [], "text": "We detail the attention computation adopted by Group-MixFormer in this appendix. The Q/K/V entries are first uniformly divided into five segments where we perform group aggregation on four segments. We use , 3, 4]) to denote the segments divided from Q/K/V entries, respectively. To produce the group proxies Q ′ , K ′ , and V ′ , we first employ the aggregation operation on the segments as Agg i (X q i ), Agg i (X k i ) and Agg i (X v i ). 
Then we concatenate all the four (i ∈ [1,2,3,4]) aggregated features to output group proxies Q ′ , K ′ , and V ′ . Afterward, we perform attention computation as introduced in [1,47,61] on the group proxies to generate the final output Att." }, { "figure_ref": [], "heading": "A.2. Image Classification", "publication_ref": [ "b45", "b48", "b64", "b63", "b66", "b36", "b38", "b24", "b38" ], "table_ref": [], "text": "The standard ImageNet-1K dataset [46] contains about 1.3 million training samples and 50K validation samples from 1000 categories. We experiment with input resolutions of 224 × 224, 384 × 384, or 448 × 448. We follow [49] for data augmentation, including Mixup [65], CutMix [64], random erasing [67], etc. We use the same training recipes as [37].\nFor training with 224 × 224, all GroupMixFormer instances are trained for 300 epochs with a batch size of 1024. The initial learning rate is set to 10 -3 with a linear warm-up for 20 epochs and then cosine annealing towards zero. We adopt the AdamW optimizer [39] with a weight decay coefficient of 0.05. The drop-path rates [25] are set to 0.0, 0.1, 0.2, 0.4 and 0.5 for GroupMixFormer-M/T/S/B/L, respectively. Besides, for higher resolutions (i.e., 384 × 384 and 448 × 448), we finetune the 224 × 224-pretrained models for another 30 epochs with an initial learning rate of 2 × 10 -6 and a linear warm-up for 5 epochs and then cosine annealing. For finetuning, we use AdamW optimizer [39] with a weight decay coefficient of 1.0 × 10 -8 ." }, { "figure_ref": [], "heading": "A.3. Object Detection and Instance segmentation", "publication_ref": [ "b33", "b23", "b34", "b45", "b36", "b8", "b38", "b33", "b23", "b3" ], "table_ref": [], "text": "For object detection, we experiment on COCO 2017 [34] with Mask R-CNN [24] and RetinaNet [35]. All models are initialized with the weights pretrained on ImageNet-1K [46]. The detectors are finetuned on COCO train2017 (118k images) and evaluated on COCO val2017 (5k images). For data augmentation, we adopt multi-scale training as a common practice [37]. We also follow the standard 3× (36-epoch) training schedules provided in [9]. We use AdamW [39] with a weight decay coefficient of 0.05 for Mask R-CNN and 10 -4 for RetinaNet.\nFor instance segmentation, we benchmark GroupMix-Former models on COCO 2017 [34] with Mask R-CNN [24] with the same configurations as described above.\nMoreover, we present additional results with Cascade Mask R-CNN [4] in this supplementary material. We use the same training configurations as Mask R-CNN." }, { "figure_ref": [], "heading": "A.4. Semantic Segmentation", "publication_ref": [ "b67", "b56", "b28", "b36", "b52" ], "table_ref": [], "text": "For semantic segmentation, we experiment on ADE20k [68] with UperNet [57] and Semantic FPN [29]). ADE20K contains ∼20k, ∼2k, and ∼3k images for training, validation, and testing, respectively, from 150 categories. Following common practices [37,53], we randomly resize and crop the image to 512 × 512 for training, and rescale the shorter side to 512 pixels for testing. We use AdamW with a weight decay coefficient of 10 -4 for Semantic FPN and 0.01 for UperNet. The Semantic FPN is trained for 80k iterations while the UperNet is trained for 160k iterations. The learning rate is initialized as 6 × 10 -5 , warmed up linearly in 1500 iterations, and then decayed following the polynomial decay schedule with a power of 0.9." }, { "figure_ref": [], "heading": "B. 
Speed Analysis", "publication_ref": [], "table_ref": [], "text": "We empirically found that implementing the aggregators in GMA with DW-Conv indeed slows down the inference speed. For instance, as shown in Tab. 10, when tested on a single V100 GPU, our throughput (596 images/s) is lower than that of prevalent backbones (e.g., Swin-T with 755 images/s and CSWin-T with 701 images/s). However, our model outperforms the others by large margins in recognition performance. Besides, it is noteworthy that, with accuracy maintained, the speed of GroupMixFormer could be further improved by implementing the aggregators with more efficient operators (e.g., +15 images/s with AvgPool, as shown in Tab. 10)." }, { "figure_ref": [], "heading": "C. Additional results with Cascade Mask R-CNN", "publication_ref": [ "b3" ], "table_ref": [], "text": "To further verify the effectiveness of our proposed model, we equip GroupMixFormer with a more powerful object detector, i.e., Cascade Mask R-CNN [4]. Detailed results are reported in Tab. 11, where GroupMixFormer compares favorably with prevalent backbones (e.g., Swin [37]). Besides, with fewer parameters (68.6M v.s. 86.7M), our GroupMixFormer-T obtains comparable performance with Focal-T (around 51.5% AP b ). Our GroupMixFormer-S achieves new state-of-the-art performance with an AP b of 51.9%." } ]
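The throughput numbers discussed in Appendix B (Tab. 10) can be approximated with a simple single-GPU timing loop. The sketch below is only indicative: the batch size, warm-up schedule, and the torchvision stand-in model are assumptions, not the exact protocol used for Tab. 10.

```python
import time
import torch
from torchvision.models import resnet50  # stand-in backbone; replace with GroupMixFormer

@torch.no_grad()
def throughput(model, batch_size=64, resolution=224, warmup=10, iters=50):
    """Rough images/s measurement on a single device (fp32)."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(batch_size, 3, resolution, resolution, device=device)
    for _ in range(warmup):                    # warm up kernels and autotuning
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    return batch_size * iters / (time.time() - start)

print(f"{throughput(resnet50()):.0f} images/s")
```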
Vision Transformers (ViTs) have been shown to enhance visual recognition through modeling long-range dependencies with multi-head self-attention (MHSA), which is typically formulated as Query-Key-Value computation. However, the attention map generated from the Query and Key captures only token-to-token correlations at a single granularity. In this paper, we argue that self-attention should have a more comprehensive mechanism to capture correlations among tokens and groups (i.e., multiple adjacent tokens) for higher representational capacity. To address this, we propose Group-Mix Attention (GMA) as an advanced replacement for traditional self-attention, which can simultaneously capture token-to-token, token-to-group, and group-to-group correlations with various group sizes. To this end, GMA uniformly splits the Query, Key, and Value into segments and performs different group aggregations to generate group proxies. The attention map is computed based on the mixtures of tokens and group proxies and is used to re-combine the tokens and groups in the Value. Based on GMA, we introduce a powerful backbone, namely GroupMixFormer, which achieves state-of-the-art performance in image classification, object detection, and semantic segmentation with fewer parameters than existing models. For instance, GroupMixFormer-L (with 70.3M parameters and 384 × 384 input) attains 86.2% Top-1 accuracy on ImageNet-1K without external data, while GroupMixFormer-B (with 45.8M parameters) attains 51.2% mIoU on ADE20K.
Advancing Vision Transformers with Group-Mix Attention
[ { "figure_caption": "Figure 1 .1Figure1. Conceptual comparisons between the self-attention and our proposed Group-Mix Attention (GMA). In (a) and (b), we showcase with 7×7 single-dimensional tokens. Unlike the selfattention that computes correlations between pairs of individual tokens, GMA creates proxies of token groups (e.g., nine adjacent tokens) via group aggregators, and then computes the group-togroup correlations via proxies. In (c) and (d), we show the concrete computation of GMA with seven four-dimensional tokens, so that N=7 and d=4. To compute the correlations between two highlighted groups that each consist of three tokens, we aggregate them into two proxies for further multiplication. The group aggregation can be effectively implemented via sliding-window-based operators.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Detection (Mask R-CNN) (c) Segmentation (UperNet)", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Structural designs of Group-Mix Attention Block and architecture of GroupMixFormer.In each GMA block, we split Q, K, and V into five segments and use aggregators with different kernel sizes to generate group proxies on four of them, so that we can conduct attention computation on mixtures of individual tokens and group proxies of different granularities. The branches whose outputs are fed into the attention computation are referred to as the pre-attention branches. To construct diverse connections, the rightmost branch utilizes aggregation but without attention, which is termed the non-attention branch. A linear mapping layer is adopted to fuse the outputs from the attention and non-attention branch. For clear illustration, we use Agg 1 , Agg 2 , and Agg 3 in the pre-attention branch to denote the aggregators with kernel sizes of 3, 5, and 7, respectively, and use Agg 0 for the aggregator in the non-attention branch.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "5 #5# x: the input token with shape of (B, N, D), B is batch size, N=H*W, D is dimension # qkv_mapping(): linear mapping (in=D, out=D*3) to generate Q, K, V # att(): efficient multi-head Q-K-V computation # token_ensemble(): linear mapping (in=out=D) to combine the outputs from the attention and non-attention branches # act: activation function, implemented by HardSwish # norm: normalization function, implemented by LayerNorm # The aggregator is implemented by a depth-wise convolution (channels=groups=D//5) following a linear mapping def GMA(x): B,N,D=x.shape split_dim = D//Generate Q/K/V qkv = qkv_mapping(x).reshape(B, N, 3, D).permute(2, 0, 1, 3).reshape(3*B, N, D) qkv = qkv.transpose(1, 2).view(3*B, D, H, W) qkv = qkv.split([split_dim]*5, dim=1) # Now qkv[i] is the i-th branch with shape of (3*B, split_dim, H, W) qkv_pre_att_0 = act(norm(qkv[0])) # Generate group proxies via different aggregators qkv_pre_att_1 = act(norm(aggregator_pre_att_3x3(qkv[1]))) qkv_pre_att_2 = act(norm(aggregator_pre_att_5x5(qkv[2]))) qkv_pre_att_3 = act(norm(aggregator_pre_att_7x7(qkv[3]))) # Non-attention branch qkv_non_att = qkv[4].reshape(3, B, split_dim, H, W).permute(1, 0, 2, 3, 4).reshape(B, 3*split_dim, H, W) x_non_att = act(norm(aggregator_non_att_3x3(qkv_non_att)).reshape(B, split_dim, H, W)) # Efficient multi-head Q-K-V self-Attention. 
We ignore the number of heads for brevity # Its input is (3*B, D*4/5, H, W), output is (B, D*4/5, H, W) qkv_input = torch.cat([qkv_pre_att_0, qkv_pre_att_1, qkv_pre_att_2, qkv_pre_att_3], dim=1) x_att = att(qkv_input) # combine the outputs from attention and the non-attention branch x = torch.cat([x_att, x_non_att], dim=1) # the shape becomes (B, D, H, W) x = x.reshape(B, D, N).permute(0, 2, 1) # the shape becomes (B, N, D) x = token_ensemble(x) return x", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Attention visualizations on GroupMixFormer-S. The model attends to the pixels marked as red more than the others. Input images are shown in (a). In (c) to (f), we show the attention response maps from different aggregators in the pre-attention branches. In (g), we show the response maps from the aggregators in the non-attention branch. The combined response maps (outputs from the token ensemble layer) are shown in (b).", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "ImageNet-1k validation accuracy. The GFLOPs are measured with the specific resolution. Models with a comparable number of parameters are grouped together. the effectiveness of the Group-Mix mechanism, which is supposed to be able to capture the fine-grained features to facilitate dense predictions.", "figure_data": "MethodType #Params.(M) Input #GFLOPs Top-1 (%)ShuffleNet v2-50 [40] Mobile-Former [11] MobileViT-S [41] GroupMixFormer-MCNN Trans Trans Trans2.3 4.6 5.6 5.7224 2 224 2 256 2 224 22.3 1.2 1.8 1.477.2 72.8 78.4 79.4Semantic Segmentation. Tab. 3 also shows the seman-tic segmentation results on COCO with Mask-RCNN. Our GroupMixFormer-T impressively achieves an AP m of 42.4%,GroupMixFormer-M GroupMixFormer-M ResNet18 [23]Trans Trans CNN5.7 5.7 11.7384 2 448 2 224 24.0 5.4 1.881.5 81.8 69.80.6% higher than Coat Mini and 1.7% higher than PVT-Large. Besides, GroupMixFormer-B performs 1.1% betterEffNet-B4 [21] PVT-Tiny [53] PVTv2-B1 [54] P2T-Tiny [56]CNN Trans Trans Trans19.0 13.2 13.1 11.6224 2 224 2 224 2 224 24.2 1.9 2.1 1.882.9 75.1 78.7 79.8than Uniformer-B (i.e., 45.9% v.s. 44.8%). On ADE20K, we use UperNet and Semantic FPN and report the results in Tab. 4. Similarly, we observe that GroupMixFormers consis-CoaT Mini [61] BiFormer-T [69] GroupMixFormer-TTrans Trans Trans10.0 13.1 11.0224 2 224 2 224 26.8 2.2 3.781.0 81.4 82.5tently achieve favorable performance compared to the exist-ing backbones. For example, GroupMixFormer-T, thoughGroupMixFormer-T GroupMixFormer-T ResNet50 [23] ResNeXt50-32x4d [59]Trans Trans CNN CNN11.0 11.0 25.6 25.0384 2 448 2 224 2 224 210.9 14.9 4.1 4.384.1 84.3 76.5 77.6much smaller, performs 2.0% better than XCiT-S12/8 (i.e., 46.2% v.s. 44.2%, 14.1 M v.s. 30.4 M) with Semantic FPN. Notably, GroupMixFormer-T outperforms XCiT-M24/16 byConvNeXt-T [38] PVT-Small [53] PVTv2-B2 [54]CNN Trans Trans29.0 24.5 25.4224 2 224 2 224 24.5 3.8 4.082.1 79.8 82.00.3%, though the latter is 6.4× as big as GroupMixFormer-T (i.e., 46.2% v.s. 45.9%, 14.1 M v.s. 90.8 M). Similarly,Swin-T [37] CoaT Small [61] Focal-Tiny [62]Trans Trans Trans29.0 22.0 29.1224 2 224 2 224 24.5 12.6 4.981.3 82.1 82.2with UperNet, GroupMixFormers perform much better than the other bigger models, showing a clearly better trade-offP2T-Small [56]Trans24.1224 23.782.4between performance and efficiency. 
Such significant im-CSWin-T [17]Trans23.0224 24.382.7MViTv2-T [32]Trans24.0224 24.782.3DaViT-T [16]Trans28.3224 24.582.8XCiT-S12/16 [1]Trans26.0224 24.882.0GroupMixFormer-STrans22.4224 25.283.4GroupMixFormer-STrans22.4384 215.285.0ResNet101 [23]CNN44.7224 27.977.4ResNeXt101-32x4d [59] CNN44.2224 28.078.8ConvNeXt-S [38]CNN50.0224 28.783.1ConvNeXt-B [38]CNN89.0224 215.483.8ConvNeXt-L [38]CNN198.0224 234.484.3PVT-Large [53]Trans61.4224 29.881.7PVTv2-B3 [54]Trans45.2224 26.983.2Swin-B [37]Trans88.0224 215.483.5Swin-B [37]Trans88.0384 247.084.5CSWin-B [17]Trans78.0224 215.084.2MViTv2-B [32]Trans78.0224 215.084.2DaViT-B [16]Trans87.9224 215.584.6MaxViT-S [50]Trans69.0224 211.784.5CoaTLite Medium [61]Trans45.0384 228.784.5Focal-Small [62]Trans51.1224 29.183.5Focal-Base [62]Trans89.8224 216.083.8P2T-Large [56]Trans54.5224 29.883.9XCiT-M24/8 [1]Trans84.0224 263.983.7GroupMixFormer-BTrans45.8224 217.684.7GroupMixFormer-BTrans45.8384 251.685.8GroupMixFormer-LTrans70.3224 236.185.0GroupMixFormer-LTrans70.3384 2106.286.246.5%) than the second-best model, which is CoaT Mini,while maintaining a smaller model size of 30.8 M. Besides,our GroupMixFormer-B achieves an AP b of 51.5%, surpass-ing all the comparable models. With RetinaNet, Group-MixFormer also shows superiority: GroupMixFormer-T per-forms 0.5% better than Swin-B (i.e., 46.3% v.s. 45.8%)though ours is much smaller (i.e., 20.2 M v.s. 98.0 M);", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "ImageNet-1k classification, COCO det and instance seg (1x with Mask RCNN) performances on various aggregators.", "figure_data": "MethodAgg 0Agg 1 (3 × 3)Agg 2 (5 × 5)Agg 3 (7 × 7)#Params.(M)#GFLOPsTop-1 (%)AP bAP mGroupMixFormer-T10.53.480.943.939.6GroupMixFormer-T10.83.581.9 (+1.0)44.640.4GroupMixFormer-T10.73.581.3 (+0.4)44.140.3GroupMixFormer-T10.83.682.2 (+1.3)44.640.3GroupMixFormer-T10.83.682.3 (+1.4)44.640.4GroupMixFormer-T10.93.682.3 (+1.4)44.840.3GroupMixFormer-T11.03.782.5 (+1.6)45.440.6MethodImplementationTop-1 (%)AP bAP mGroupMixFormer-TMinPool82.342.439.7GroupMixFormer-TMaxPool82.242.339.7GroupMixFormer-TAvgPool82.242.339.6GroupMixFormer-TDWConv82.542.539.8", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "ImageNet-1k validation of replacing GMA with other attention modules on GroupMixFormer-T.", "figure_data": "Attention Type#Params.(M)#GFLOPsTop-1 (%)GMA10.53.482.5Swin-attention10.83.579.9 (-2.6)PVT-attention16.33.379.1 (-3.4)Table 8. 
Explorations on optimal kernel configurations withGroupMixFormer-TStrategy#Params.(M)#GFLOPSTop-1 Acc (%)kernel sizes = [5,7,9]11.23.982.0large kernel to small kernel10.83.782.2small kernel to large kernel11.03.782.0Ours (GroupMixFormer-T)11.03.782.5", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "ImageNet-1k validation accuracy of incorporating aggregators to other ViT architectures.", "figure_data": "StructuresAggregators#Params.(M)#GFLOPsTop-1 (%)Swin-T4.581.3Swin-T28.84.881.8 (+0.5)PVT-Small3.879.8PVT-Small25.24.080.6 (+0.8)", "figure_id": "tab_6", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Comparisons on inference speed with different models.", "figure_data": "MethodSwin-T PVT-S CSWin-T GroupMixFormer-S GroupMixFormer-S(AvgPool)Throughput (images/s) 755820701596611#Param.(M)29.024.523.022.422.1#FLOPs.(G)4.53.84.35.25.0Performance (%)81.379.882.783.483.0", "figure_id": "tab_7", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Object detection and instance segmentation performance on COCO 2017[34] with Cascade Mask R-CNN[4].", "figure_data": "BackboneCascade Mask R-CNN 3× + MS #P (M) AP b AP b 50 AP b 75 AP m AP m 50AP m 75ResNet50 [23]82.046.364.350.5---PVTv2-b2-Linear [54]80.150.969.555.244.066.847.7PVTv2-b2 [54]82.951.169.855.344.467.248.1Swin-T [37]85.650.268.854.743.566.146.9Focal-T [61]86.751.570.655.9---GroupMixFormer-T (ours)68.651.570.255.744.467.548.2GroupMixFormer-S (ours)80.051.970.756.145.168.348.4", "figure_id": "tab_8", "figure_label": "11", "figure_type": "table" } ]
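For convenience, the GMA block pseudocode in Algorithm 1 above can also be packaged as a self-contained PyTorch module. The version below is a simplified sketch rather than the authors' implementation: it uses a single attention head, a GroupNorm(1, ·) stand-in for the channel-wise LayerNorm, and aggregators built from a depthwise convolution followed by a pointwise linear mapping, as described in the caption.

```python
import torch
import torch.nn as nn


class Aggregator(nn.Module):
    """Depthwise conv + pointwise mapping, with normalization and activation (a sketch)."""
    def __init__(self, in_ch, out_ch, kernel):
        super().__init__()
        self.dw = nn.Conv2d(in_ch, in_ch, kernel, padding=kernel // 2, groups=in_ch)
        self.pw = nn.Conv2d(in_ch, out_ch, 1)
        self.norm = nn.GroupNorm(1, out_ch)        # stand-in for channel-wise LayerNorm
        self.act = nn.Hardswish()

    def forward(self, x):
        return self.act(self.norm(self.pw(self.dw(x))))


class GMABlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        assert dim % 5 == 0
        self.dim, self.seg = dim, dim // 5
        self.qkv = nn.Linear(dim, 3 * dim)
        self.pre_aggs = nn.ModuleList(
            [Aggregator(self.seg, self.seg, k) for k in (3, 5, 7)])
        self.non_att_agg = Aggregator(3 * self.seg, self.seg, 3)
        self.ensemble = nn.Linear(dim, dim)

    def forward(self, x, H, W):
        B, N, D = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, D).permute(2, 0, 3, 1).reshape(3 * B, D, H, W)
        segs = qkv.split(self.seg, dim=1)

        # Pre-attention branches: individual tokens + group proxies (kernels 3/5/7).
        branches = [segs[0]] + [agg(s) for agg, s in zip(self.pre_aggs, segs[1:4])]
        mixed = torch.cat(branches, dim=1)                        # (3B, 4*seg, H, W)

        # Single-head Q-K-V attention on the token/proxy mixture (for brevity).
        scale = (4 * self.seg) ** -0.5
        q, k, v = (t.transpose(1, 2) for t in
                   mixed.reshape(3, B, 4 * self.seg, N).unbind(0))
        attn = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)   # (B, N, N)
        x_att = (attn @ v).transpose(1, 2).reshape(B, 4 * self.seg, H, W)

        # Non-attention branch on the remaining segment of Q, K and V.
        non_att_in = (segs[4].reshape(3, B, self.seg, H, W)
                      .transpose(0, 1).reshape(B, 3 * self.seg, H, W))
        x_non = self.non_att_agg(non_att_in)

        out = torch.cat([x_att, x_non], dim=1).reshape(B, D, N).transpose(1, 2)
        return self.ensemble(out)                                 # token ensemble layer


blk = GMABlock(dim=80)
tokens = torch.randn(2, 14 * 14, 80)
print(blk(tokens, 14, 14).shape)                                  # torch.Size([2, 196, 80])
```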
Chongjian Ge; Xiaohan Ding; Zhan Tong; Li Yuan; Jiangliu Wang; Yibing Song; Ping Luo
[ { "authors": "Alaaeldin Ali; Hugo Touvron; Mathilde Caron; Piotr Bojanowski; Matthijs Douze; Armand Joulin; Ivan Laptev; Natalia Neverova; Gabriel Synnaeve; Jakob Verbeek", "journal": "", "ref_id": "b0", "title": "Xcit: Cross-covariance image transformers", "year": "2021" }, { "authors": "Jimmy Lei Ba; Jamie Ryan Kiros; Geoffrey E Hinton", "journal": "", "ref_id": "b1", "title": "Layer normalization", "year": "2016" }, { "authors": "Hangbo Bao; Li Dong; Furu Wei", "journal": "", "ref_id": "b2", "title": "Beit: Bert pre-training of image transformers", "year": "2021" }, { "authors": "Zhaowei Cai; Nuno Vasconcelos", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b3", "title": "Cascade r-cnn: high quality object detection and instance segmentation", "year": "2019" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b4", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Beidi Chen; Tri Dao; Kaizhao Liang; Jiaming Yang; Zhao Song; Atri Rudra; Christopher Re", "journal": "", "ref_id": "b5", "title": "Pixelated butterfly: Simple and efficient sparse training for neural network models", "year": "2021" }, { "authors": "Beidi Chen; Tri Dao; Kaizhao Liang; Jiaming Yang; Zhao Song; Atri Rudra; Christopher Re", "journal": "", "ref_id": "b6", "title": "Pixelated butterfly: Simple and efficient sparse training for neural network models", "year": "2021" }, { "authors": "Beidi Chen; Tri Dao; Eric Winsor; Zhao Song; Atri Rudra; Christopher Ré", "journal": "", "ref_id": "b7", "title": "Scatterbrain: Unifying sparse and low-rank attention approximation", "year": "2021" }, { "authors": "Kai Chen; Jiaqi Wang; Jiangmiao Pang; Yuhang Cao; Yu Xiong; Xiaoxiao Li; Shuyang Sun; Wansen Feng; Ziwei Liu; Jiarui Xu; Zheng Zhang; Dazhi Cheng; Chenchen Zhu; Tianheng Cheng; Qijie Zhao; Buyu Li; Xin Lu; Rui Zhu; Yue Wu; Jifeng Dai; Jingdong Wang; Jianping Shi; Wanli Ouyang; Chen Change Loy; Dahua Lin", "journal": "", "ref_id": "b8", "title": "MMDetection: Open mmlab detection toolbox and benchmark", "year": "2019" }, { "authors": "Xinlei Chen; Saining Xie; Kaiming He", "journal": "", "ref_id": "b9", "title": "An empirical study of training self-supervised vision transformers", "year": "2021" }, { "authors": "Yinpeng Chen; Xiyang Dai; Dongdong Chen; Mengchen Liu; Xiaoyi Dong; Lu Yuan; Zicheng Liu", "journal": "", "ref_id": "b10", "title": "Mobile-former: Bridging mobilenet and transformer", "year": "2022" }, { "authors": "François Chollet", "journal": "", "ref_id": "b11", "title": "Xception: Deep learning with depthwise separable convolutions", "year": "2017" }, { "authors": "", "journal": "MMSegmentation Contributors", "ref_id": "b12", "title": "MMSegmentation: Openmmlab semantic segmentation toolbox and benchmark", "year": "2020" }, { "authors": "Jean-Baptiste Cordonnier; Andreas Loukas; Martin Jaggi", "journal": "", "ref_id": "b13", "title": "On the relationship between self-attention and convolutional layers", "year": "2019" }, { "authors": "Xiyang Dai; Yinpeng Chen; Bin Xiao; Dongdong Chen; Mengchen Liu; Lu Yuan; Lei Zhang", "journal": "", "ref_id": "b14", "title": "Dynamic head: Unifying object detection heads with attentions", "year": "2021" }, { "authors": "Mingyu Ding; Bin Xiao; Noel Codella; Ping Luo; Jingdong Wang; Lu Yuan", "journal": "", "ref_id": "b15", "title": "Davit: Dual attention vision transformers", "year": "2022" }, 
{ "authors": "Xiaoyi Dong; Jianmin Bao; Dongdong Chen; Weiming Zhang; Nenghai Yu; Lu Yuan; Dong Chen; Baining Guo", "journal": "", "ref_id": "b16", "title": "Cswin transformer: A general vision transformer backbone with cross-shaped windows", "year": "2021" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b17", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Bo Haoqi Fan; Karttikeya Xiong; Yanghao Mangalam; Zhicheng Li; Jitendra Yan; Christoph Malik; Feichtenhofer", "journal": "", "ref_id": "b18", "title": "Multiscale vision transformers", "year": "2021" }, { "authors": "Pierre Foret; Ariel Kleiner; Hossein Mobahi; Behnam Neyshabur", "journal": "", "ref_id": "b19", "title": "Sharpness-aware minimization for efficiently improving generalization", "year": "2020" }, { "authors": "Ido Freeman; Anton Lutz Roese-Koerner; Kummert", "journal": "", "ref_id": "b20", "title": "Effnet: An efficient structure for convolutional neural networks", "year": "2018" }, { "authors": "Benjamin Graham; Alaaeldin El-Nouby; Hugo Touvron; Pierre Stock; Armand Joulin; Hervé Jégou; Matthijs Douze", "journal": "", "ref_id": "b21", "title": "Levit: a vision transformer in convnet's clothing for faster inference", "year": "2021" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b22", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Kaiming He; Georgia Gkioxari; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b23", "title": "Mask r-cnn", "year": "2017" }, { "authors": "Gao Huang; Yu Sun; Zhuang Liu; Daniel Sedra; Kilian Q Weinberger", "journal": "", "ref_id": "b24", "title": "Deep networks with stochastic depth", "year": "2016" }, { "authors": "Gao Huang; Zhuang Liu; Laurens Van Der Maaten; Kilian Q Weinberger", "journal": "", "ref_id": "b25", "title": "Densely connected convolutional networks", "year": "2017" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; V Quoc; Yunhsuan Le; Zhen Sung; Tom Li; Duerig", "journal": "", "ref_id": "b26", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "Zi-Hang Jiang; Qibin Hou; Li Yuan; Daquan Zhou; Yujun Shi; Xiaojie Jin; Anran Wang; Jiashi Feng", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "All tokens matter: Token labeling for training better vision transformers", "year": "2021" }, { "authors": "Alexander Kirillov; Ross Girshick; Kaiming He; Piotr Dollár", "journal": "", "ref_id": "b28", "title": "Panoptic feature pyramid networks", "year": "2019" }, { "authors": "Youngwan Lee; Jonghee Kim; Jeff Willette; Sung Ju Hwang", "journal": "", "ref_id": "b29", "title": "Mpvit: Multi-path vision transformer for dense prediction", "year": "2021" }, { "authors": "Kunchang Li; Yali Wang; Junhao Zhang; Peng Gao; Guanglu Song; Yu Liu; Hongsheng Li; Yu Qiao", "journal": "", "ref_id": "b30", "title": "Uniformer: Unifying convolution and self-attention for visual recognition", "year": "2022" }, { "authors": "Yanghao Li; Chao-Yuan Wu; Haoqi Fan; Karttikeya Mangalam; Bo Xiong; Jitendra Malik; Christoph Feichtenhofer", "journal": "", "ref_id": "b31", "title": "Mvitv2: Improved multiscale vision transformers for 
classification and detection", "year": "2022" }, { "authors": "Youwei Liang; Chongjian Ge; Zhan Tong; Yibing Song; Jue Wang; Pengtao Xie", "journal": "", "ref_id": "b32", "title": "Not all patches are what you need: Expediting vision transformers via token reorganizations", "year": "2022" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "", "ref_id": "b33", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Tsung-Yi Lin; Priya Goyal; Ross Girshick; Kaiming He; Piotr Dollár", "journal": "", "ref_id": "b34", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "Ze Liu; Han Hu; Yutong Lin; Zhuliang Yao; Zhenda Xie; Yixuan Wei; Jia Ning; Yue Cao; Zheng Zhang; Li Dong", "journal": "", "ref_id": "b35", "title": "Swin transformer v2: Scaling up capacity and resolution", "year": "2021" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b36", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Zhuang Liu; Hanzi Mao; Chao-Yuan Wu; Christoph Feichtenhofer; Trevor Darrell; Saining Xie", "journal": "", "ref_id": "b37", "title": "A convnet for the 2020s", "year": "2022" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b38", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Ningning Ma; Xiangyu Zhang; Hai-Tao Zheng; Jian Sun", "journal": "", "ref_id": "b39", "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "year": "2018" }, { "authors": "Sachin Mehta; Mohammad Rastegari", "journal": "", "ref_id": "b40", "title": "Mobilevit: lightweight, general-purpose, and mobile-friendly vision transformer", "year": "2021" }, { "authors": "Sayak Paul; Pin-Yu Chen", "journal": "", "ref_id": "b41", "title": "Vision transformers are robust learners", "year": "2021" }, { "authors": "Maithra Raghu; Thomas Unterthiner; Simon Kornblith; Chiyuan Zhang; Alexey Dosovitskiy", "journal": "", "ref_id": "b42", "title": "Do vision transformers see like convolutional neural networks", "year": "2021" }, { "authors": "Yongming Rao; Wenliang Zhao; Benlin Liu; Jiwen Lu; Jie Zhou; Cho-Jui Hsieh", "journal": "", "ref_id": "b43", "title": "Dynamicvit: Efficient vision transformers with dynamic token sparsification", "year": "2021" }, { "authors": "Daquan Sucheng Ren; Shengfeng Zhou; Jiashi He; Xinchao Feng; Wang", "journal": "", "ref_id": "b44", "title": "Shunted self-attention via multi-scale token aggregation", "year": "2022" }, { "authors": "Olga Russakovsky; Jia Deng; Hao Su; Jonathan Krause; Sanjeev Satheesh; Sean Ma; Zhiheng Huang; Andrej Karpathy; Aditya Khosla; Michael Bernstein", "journal": "International Journal of Computer Vision", "ref_id": "b45", "title": "Imagenet large scale visual recognition challenge", "year": "2015" }, { "authors": "Zhuoran Shen; Mingyuan Zhang; Haiyu Zhao; Shuai Yi; Hongsheng Li", "journal": "", "ref_id": "b46", "title": "Efficient attention: Attention with linear complexities", "year": "2021" }, { "authors": "Chenyang Si; Weihao Yu; Pan Zhou; Yichen Zhou; Xinchao Wang; Shuicheng Yan", "journal": "", "ref_id": "b47", "title": "Inception transformer", "year": "2022" }, { "authors": "Hugo Touvron; Matthieu Cord; Matthijs Douze; Francisco Massa; Alexandre Sablayrolles; Hervé Jégou", "journal": "", "ref_id": 
"b48", "title": "Training data-efficient image transformers & distillation through attention", "year": "2021" }, { "authors": "Zhengzhong Tu; Hossein Talebi; Han Zhang; Feng Yang; Peyman Milanfar; Alan Bovik; Yinxiao Li", "journal": "", "ref_id": "b49", "title": "Maxvit: Multiaxis vision transformer", "year": "2022" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b50", "title": "Attention is all you need", "year": "2017" }, { "authors": "L Wang; L Yao; B Chen; D Lin; X Cai; He; Liu", "journal": "", "ref_id": "b51", "title": "Crossformer: A versatile vision transformer hinging on crossscale attention", "year": "2018" }, { "authors": "Wenhai Wang; Enze Xie; Xiang Li; Deng-Ping Fan; Kaitao Song; Ding Liang; Tong Lu; Ping Luo; Ling Shao", "journal": "", "ref_id": "b52", "title": "Pyramid vision transformer: A versatile backbone for dense prediction without convolutions", "year": "2021" }, { "authors": "Wenhai Wang; Enze Xie; Xiang Li; Deng-Ping Fan; Kaitao Song; Ding Liang; Tong Lu; Ping Luo; Ling Shao", "journal": "Computational Visual Media", "ref_id": "b53", "title": "Pvtv2: Improved baselines with pyramid vision transformer", "year": "2022" }, { "authors": "Haiping Wu; Bin Xiao; Noel Codella; Mengchen Liu; Xiyang Dai; Lu Yuan; Lei Zhang", "journal": "", "ref_id": "b54", "title": "Cvt: Introducing convolutions to vision transformers", "year": "2021" }, { "authors": "Yu-Huan Wu; Yun Liu; Xin Zhan; Ming-Ming Cheng", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b55", "title": "P2t: Pyramid pooling transformer for scene understanding", "year": "2022" }, { "authors": "Tete Xiao; Yingcheng Liu; Bolei Zhou; Yuning Jiang; Jian Sun", "journal": "", "ref_id": "b56", "title": "Unified perceptual parsing for scene understanding", "year": "2018" }, { "authors": "Enze Xie; Wenhai Wang; Zhiding Yu; Anima Anandkumar; Jose M Alvarez; Ping Luo", "journal": "", "ref_id": "b57", "title": "Segformer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "Saining Xie; Ross Girshick; Piotr Dollár; Zhuowen Tu; Kaiming He", "journal": "", "ref_id": "b58", "title": "Aggregated residual transformations for deep neural networks", "year": "2017" }, { "authors": "Zhenda Xie; Yutong Lin; Zhuliang Yao; Zheng Zhang; Qi Dai; Yue Cao; Han Hu", "journal": "", "ref_id": "b59", "title": "Self-supervised learning with swin transformers", "year": "2021" }, { "authors": "Weijian Xu; Yifan Xu; Tyler Chang; Zhuowen Tu", "journal": "", "ref_id": "b60", "title": "Coscale conv-attentional image transformers", "year": "2021" }, { "authors": "Jianwei Yang; Chunyuan Li; Pengchuan Zhang; Xiyang Dai; Bin Xiao; Lu Yuan; Jianfeng Gao", "journal": "", "ref_id": "b61", "title": "Focal self-attention for local-global interactions in vision transformers", "year": "2021" }, { "authors": "Li Yuan; Qibin Hou; Zihang Jiang; Jiashi Feng; Shuicheng Yan", "journal": "", "ref_id": "b62", "title": "Volo: Vision outlooker for visual recognition", "year": "2021" }, { "authors": "Sangdoo Yun; Dongyoon Han; Seong Joon Oh; Sanghyuk Chun; Junsuk Choe; Youngjoon Yoo", "journal": "", "ref_id": "b63", "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "year": "2019" }, { "authors": "Hongyi Zhang; Moustapha Cisse; David Yann N Dauphin; Lopez-Paz", "journal": "", "ref_id": "b64", "title": "mixup: Beyond empirical 
risk minimization", "year": "2017" }, { "authors": "Pengchuan Zhang; Xiyang Dai; Jianwei Yang; Bin Xiao; Lu Yuan; Lei Zhang; Jianfeng Gao", "journal": "", "ref_id": "b65", "title": "Multi-scale vision longformer: A new vision transformer for high-resolution image encoding", "year": "2021" }, { "authors": "Zhun Zhong; Liang Zheng; Guoliang Kang; Shaozi Li; Yi Yang", "journal": "", "ref_id": "b66", "title": "Random erasing data augmentation", "year": "2020" }, { "authors": "Bolei Zhou; Hang Zhao; Xavier Puig; Tete Xiao; Sanja Fidler; Adela Barriuso; Antonio Torralba", "journal": "International Journal of Computer Vision", "ref_id": "b67", "title": "Semantic understanding of scenes through the ade20k dataset", "year": "2019" }, { "authors": "Lei Zhu; Xinjiang Wang; Zhanghan Ke; Wayne Zhang; Rynson Wh Lau", "journal": "", "ref_id": "b68", "title": "Biformer: Vision transformer with bi-level routing attention", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 308.86, 341.8, 172.56, 11.67 ], "formula_id": "formula_0", "formula_text": "tions Agg i (X i ), i ∈ [1, • • • , n] to produce X ′ ." }, { "formula_coordinates": [ 5, 67.52, 121.43, 455.67, 63.87 ], "formula_id": "formula_1", "formula_text": "H 4 × W 4 × D 1 D 1 = 40 R 1 = 4, L 1 = 3 D 1 = 80 R 1 = 4, L 1 = 4 D 1 = 80 R 1 = 4, L 1 = 2 D 1 = 200 R 1 = 2, L 1 = 8 D 1 = 240 R 1 = 4, L 1 = 8 stage 2 H 8 × W 8 × D 2 D 2 = 80 R 2 = 4, L 2 = 3 D 2 = 160 R 2 = 4, L 2 = 4 D 2 = 160 R 2 = 4, L 2 = 4 D 2 = 240 R 2 = 2, L 2 = 8 D 2 = 320 R 2 = 4, L 2 = 10 stage 3 H 16 × W 16 × D 3 D 3 = 160 R 3 = 4, L 3 = 12 D 3 = 200 R 3 = 4, L 3 = 12 D 3 = 320 R 3 = 4, L 3 = 12 D 3 = 320 R 3 = 4, L 3 = 12 D 3 = 360 R 3 = 2, L 3 = 30 stage 4 H 32 × W 32 × D 4 D 4 = 160 R 4 = 4, L 4 = 4 D 4 = 240 R 4 = 4, L 4 = 4 D 4 = 320 R 4 = 4, L 4 = 4 D 4 = 480 R 4 = 4, L 4 = 8 D 4 = 480 R 4 = 2, L 4 = 10" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b15", "b30", "b2", "b21", "b20", "b33", "b25", "b11", "b5", "b28", "b28", "b27", "b35", "b18", "b14", "b10", "b5", "b34", "b31", "b13", "b36", "b6", "b21" ], "table_ref": [], "text": "Observing physical phenomena often requires a careful design of experiments [5] to gain useful insights and discover new knowledge. Such a design can be impractical due to the combinatorially large and convoluted space of choices one has to consider. Bayesian optimization (BO) [16,31] has emerged as a sample-efficient agent to optimize over large spaces through iteratively querying potentially unknown, expensive to evaluate objectives (i.e., black-box) that often involve noisy measurements. BO has been successful across various disciplines including tuning machine learning [3,22], robotics [21], online learning [34] reinforcement learning [26], selection of chemical compounds [12] and in the design of new materials [6].\nFormally BO seeks to find the global optimum in the following problem\nx ⋆ = arg max\nx∈X f (x),(1)\nwhere f : X → R is a function on the compact subset X ⊆ R d . Usually no formulation or information over f (•) is given, but we rather capture our beliefs about its behavior through a prior distribution which is progressively updated as new data is acquired. BO acts iteratively with the first step involving the selection of a surrogate * Corresponding Author. Email: zikaix@liverpool.ac.uk model that approximates the correlation of the observed measurements with the parameter space. The next step requires the design of mechanisms to determine new points to query the objective in each iteration, i.e., designing an acquisition function. This normally involves calculating the expected informativeness of learning f (x) for every point and in practice acts as a trade-off between exploration versus exploitation in the sampling behavior of the optimizer. As new measurements are collected, the surrogate model is updated accordingly and this two-step procedure is repeated until convergence. Gaussian processes (GP) [29] are typically used as surrogate models in most BO instantiations mainly due to their efficiency and simplicity, i.e., having an amenable analytical form of the posterior distribution. They form a collection of random variables where the joint distribution of any finite subset is still a Gaussian distribution. In practice, a zero-mean GP prior is often employed and some common choices for a covariance kernel are the square exponential kernel and Matérn kernel [29].\nMost recently there has been an increasing interest over how to exploit external knowledge in BO towards accelerating convergence [28]. Many of these contributions primarily focus on employing external knowledge in a form of prior distribution added to the model towards guiding the optimization to more fruitful regions [36,19,15]. In most scientific tasks however, external knowledge cannot always be fairly translated in a form of prior distribution. More importantly, such strategies hinder the induction of prior bias in the problem and may ultimately shift the search into unexciting regions. 
Another series of works attempts to instead introduce domain-specific knowledge or rules in a form of constraints [11].\nIn practice, it may make more sense to introduce other types of external knowledge to the problem, such as structural information about the optimization space or about the relationship between the input parameters and the target objective, e.g., linear or non-linear, etc. This type of domain knowledge can be more amenable for physical science problems such as materials science for example where an optimal linear mixture of chemical compositions is sought [6]. In such domains especially, due to the inherent extreme imbalance of optimality conditions, most surrogate models resort to smoothing over the optimum or over-predicting near its location which can often result to a local-minima confinement [35,32]. While other choices of surrogate models have been used in BO such as Random forests [14] and Bayesian neural networks [37], these have empirically shown to predominate exploration and lead to poor performance [7].\nInstead, in this work we propose a simple, yet novel approach that can capture such domain-specific knowledge in BO by adaptively augmenting the acquisition function with a suitably chosen predictive model trained iteratively over previously sampled points in an online learning fashion [22]. Instead of using a predictive model as a surrogate model, we integrate its prediction power in the acquisition function as a corrective term, allowing the benefits of both GP and predictive model to be utilised. The proposed approach allows for more flexibility in the choice of acquisition functions and appears to improve upon standard BO on a materials design simulation task when suitable prognostic models are chosen. Extensive empirical results and ablation analyses demonstrate that our method maintains a competitive and robust performance overall.\nThe rest of the paper is organised as follows. Section 2 presents recent works related to external knowledge exploitation in BO, while Section 3 outlines the details of the proposed methodology. The performance and robustness of our algorithm is evaluated and discussed in Section 4 and Section 5 concludes our work and highlights some future directions." }, { "figure_ref": [], "heading": "Relation to existing methods", "publication_ref": [ "b22", "b16", "b12", "b2", "b35", "b18", "b27", "b14", "b29", "b6", "b13", "b19", "b33", "b37", "b24", "b32" ], "table_ref": [], "text": "External knowledge injection in BO has recently become fashionable with a number of interrelated approaches having been proposed over the last few years. A common issue in BO in general is the so-called \"cold start\" problem, i.e., the initial, usually randomly chosen points, fail to adequately capture the landscape of the optimizing objective. Most recently, a new branch of research explored the challenging task of utilizing knowledge from prior BO \"campaigns\", e.g., meta-learning [23], to help warm-start the optimization as well as injecting external information [17]. Transfer learning has been successfully applied to chemical reaction optimization [13] to bias the search space by weighting the acquisition function of the current campaign with past predictions.\nA major line of works propose BO frameworks that incorporate external knowledge in a form of prior beliefs from a fixed set of distributions [3]. 
A number of variants have been proposed that can accommodate generative models coupled with user defined priors into pseudo-posteriors [36] where the prior distribution is modeled using the \"positive observations\" for more efficient sampling. Other works incorporate prior user beliefs with observed data and compute the posterior distribution through repeated Thompson sampling [19]. New sampling points are approximated using a linear combination of posterior samplings. In [28] prior beliefs are used to highlight high-probable regions in terms of optimality through the probability integral transform method. Finally, the work of [15] adaptively integrates prior beliefs in the acquisition function as a decayed multiplicative term towards improved sampling, maintaining at the same time standard acquisition function convergence guarantees. The use of external knowledge as a GP prior provides a means to correct GP predictions; however the customized mean function tends to dominate as the optimization progresses in practice [30] and has shown to negatively affect the performance overall [7].\nWhile the accommodation of expert beliefs either as a surrogate or in a form of acquisition function has been well studied recently, structural aspects of the optimization have been at the forefront of knowledge injected BO. In particular such methods employ structural priors, other than the GP kernel to model how the objective function is expected to behave [14]. Such priors can either model the monotonicity of the objective [20] or its non-stationarity [34]. Another series of works attempts to alleviate the issue of overex-ploring the boundaries of the search space using multi-task Gaussian process [38]. The work of [25] proposes a cylindrical kernel that expands the center of the search space and shrinks the edges, while [33] propose adding derivative signs to the edges of the search space to steer BO towards the center. Nevertheless, most of the aforementioned methods address specific only structural aspects tailored to the problem at hand that are not directly generalizable. In this work we instead propose a generalized strategy for structural knowledge injection where a suitable predictive model is used to augment the acquisition function and enrich the search with structural properties of the problem at hand towards more effective exploitation in the optimization." }, { "figure_ref": [ "fig_1" ], "heading": "Enriching acquisition functions with domain knowledge", "publication_ref": [ "b13" ], "table_ref": [], "text": "In this section we propose a straightforward approach to inject general type of external knowledge in BO by augmenting the acquisition function through a tunable predictive model acting as an assistive surrogate to enrich the approximation power of the Gaussian process model. Let D = {(xi, yi)} n i=1 be an observation dataset and α(x, D) be an acquisition function, i.e., a criterion used to obtain new candidate samples to evaluate across the various iterations of BO. A commonly used acquisition function is upper confidence bound (UCB) defined as\nx ⋆ = arg max x∈X µ(x) + κσ(x),(2)\nwhere µ(x) is the posterior prediction, σ(x) is the uncertainty and the total objective represents a trade-off between exploration versus exploitation in the search. We now enrich Eq. 
( 2) with the output of a predictive model ξ(x, D) such as a random forest or any other tree-based model [14] at x as\nx ⋆ = arg max x∈X α(x, D) + γξ(x, D).(3)\nThat is, the assistive predictive model is being trained sequentially on new sample points as these arrive independently from the BO procedure in a self-supervised regime. The addition of the prediction output itself works as a correction to the sampling space without affecting the parameters of the original Gaussian process. This can be a suitably chosen deterministic model capturing various structural information of the problem at hand towards a better-informed sampling. Figure 1 further exemplifies the effects of augmenting the acquisition in Eq. ( 3) by showing the different approximations of the Ackley function by a GP and random forest regressor, as well as their combined approximation power. Evidently, the latter panel demonstrates a better approximation power by the new acquisition and thus a richer sampling strategy. While this iterative training process can incur a running-time offset, especially as the number of samples increases, this practically becomes negligible in real-world tasks where measuring physical phenomena is a time-consuming process. We further discuss and exemplify this later in the experiments section where we demonstrate that the proposed approach inhibits no delay in a materials design optimization problem.\nDepending on the choice of acquisition function, a weighting parameter γ needs to be chosen to ensure the predictive model term is up to scale, without however affecting the prediction output of the Gaussian process. For UCB, we select γ = 1 as the predictive model output naturally acts as a correction to the posterior mean of the GP, being at a similar scale. For EI and POI, which reflect the possible improvement on the best score, a suitably designed scaling factor is used to normalise the predictive contribution and bring the two terms in the acquisition function to scale as follows\nγ = P α(xinit, D) P ξ(xinit, D) ,(4)\nwhere xinit stands for the initial set of points used to seed the optimizer. This can be selected either randomly or through a careful design of experiments.\nIn practice, the utility of the predictive model term tends to be negligible during the initial steps of the optimization due to lack of training data. Therefore, we employ a monotonically increasing weight for the correction term that deemphasizes its contribution in the initial stages, allowing a more GP-dominated sampling to take effect, while boosting its contribution further down the optimization process as soon as enough training data is made available. This can be achieved by introducing the following quadratically increasing weight function normalized to be in [0,1]\nh(i) = min 1, 4i 2 i 2 max ,(5)\nwhere i denotes the optimization iteration number and imax is the predefined total number of iterations. We therefore adjust the proposed acquisition function as follows\ne α(x, D, i) = α(x, D) + γh(i)ξ(x, D).(6)\nFurthermore, we have empirically observed that depending on the problem at hand, the predictive model in the acquisition function can dominate the search over the GP model and get stuck into local optima affecting overall performance. To address this issue we further propose an early stopping strategy so that the predictive model can be dropped when such phenomena are observed, allowing the optimizer to carry on only using standard GP prediction in a warm-start fashion. 
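A compact sketch of the enriched acquisition in Eqs. (2)-(6) is given below, assuming a scikit-learn Gaussian process surrogate, a random-forest corrective model ξ, and γ = 1 as used for UCB; the candidate-pool maximization and the hyperparameter values are illustrative simplifications rather than the exact optimization routine.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def h(i, i_max):
    """Quadratically increasing weight of Eq. (5), clipped to [0, 1]."""
    return min(1.0, 4.0 * i ** 2 / i_max ** 2)

def enriched_ucb(X_cand, gp, corrective, i, i_max, kappa=2.6, gamma=1.0):
    """Eq. (6) with UCB as the base acquisition: alpha(x) + gamma * h(i) * xi(x)."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    return mu + kappa * sigma + gamma * h(i, i_max) * corrective.predict(X_cand)

# Minimal usage with toy observations (X, y); both models are refit as new data arrives.
rng = np.random.default_rng(0)
X, y = rng.uniform(-1.0, 1.0, size=(20, 3)), rng.normal(size=20)
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
rf = RandomForestRegressor(n_estimators=20, max_depth=5, random_state=0).fit(X, y)

X_cand = rng.uniform(-1.0, 1.0, size=(2000, 3))      # random candidate pool
x_next = X_cand[np.argmax(enriched_ucb(X_cand, gp, rf, i=10, i_max=100))]
```

For EI and POI, γ would instead be rescaled from the ratio of the two terms evaluated on the initial points, as in Eq. (4).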
We particularly monitor such cases by examining the closeness of the different sampling points between consecutive BO iterations. The following condition describes our proposed early stopping criterion ∥xi -xi+1∥ xi+1 -\n1 i P i k=1 x k < ϵ.(7)\nAlgorithm 1 summarizes the main steps of our proposed algorithm, termed throughout the rest of the paper as DKIBO." }, { "figure_ref": [], "heading": "Experimental comparisons", "publication_ref": [ "b6", "b0", "b38", "b5" ], "table_ref": [], "text": "We present a set of experiments to demonstrate the practical utility of the proposed method on a wide range of problems, including the i ← i + 1; 17: end while physical-world task of searching for new materials. We empirically compare performances across various methods and demonstrate the robustness of the proposed method. Section 4.1 details the various experimental settings along with the selected comparison methods used for benchmarking. Sections 4.2, 4.3, 4.4 and 4.5 present results for four experimental settings, namely an analytical function optimization task [7], a hyperparameter optimization task [1], a robotic swimming simulation task [39] and a materials mixture design problem [6]. Finally, Section 4.6 details some ablation studies that further highlight the robustness of the proposed methodology." }, { "figure_ref": [], "heading": "Experimental setup", "publication_ref": [ "b6", "b0", "b38", "b5", "b7", "b23", "b26", "b17", "b39", "b17" ], "table_ref": [], "text": "We empirically test the performance of the proposed algorithm across the following tasks:\n• Synthetic functions Synthetic benchmark functions (as used in [7]) of varying dimensionality with multiple local optima used to test precision, convergence speed and robustness of BO methods. Simple regret and cumulative mean regret are used to measure the performance of the optimization result and convergence speed, respectively. The maximum number of iterations is set to 100 and the median value of 50 repeated trials is reported. • BBO challenge Black-box optimization challenge (NeuIPS 2020) [1] is a hyperparameter tuning challenge over a large number of machine learning models such as SVM, random forest, etc. The score is computed by the bayesmark package using the minimum value achieved by an algorithm normalized by the expected minimum and maximum objective function values f , following the equation: si = 100 * fiix -f i fiix -f iii . • Robotic swimming task Robotic swimming simulation environment MuJoCo [39] that is used for testing reinforcement learning tasks, where the goal is to swim as fast as possible. Here we adapt the task to a black-box optimization one by linearly mapping the observation space matrix to the action space as a 16-dimensional optimization problem where the target is the average reward. The maximum number of iterations is set to 100 and the median value of 50 repeated trials is reported.\n• Photocatalytic hydrogen production We replicate the materials design problem addressed in [6] to maximize photocatalytic hydrogen evolution rate (HER) out of mixture of different materials.\nHere, we employ a rather cost-efficient approach where new HER measurements are interpolated through a multi-layered auto-ML ensemble model of neural networks, namely Autogluon [8]. Section 4.5 provides a detailed description of this task. 
The maximum number of iterations is set to 100 and the median value of 50 repeated trials is reported.\nWe experiment with the following optimizers:\n• Standard Bayesian optimization (SBO) Standard BO algorithm [24] employing a scikit-learn [27] GP with Matern kernel (ν = 2.5) optimized with L-BFGS-B algorithm [18] by the SciPy package [40], warm-started with 5 initial points. Throughout our analyses, we use preset hyperparameters for each of the compared methods. For all GP-based models, including the proposed, a Matérn kernel (ν = 2.5) with added white noise, optimized with L-BFGS-B algorithm [18] and warm-started with 5 initial points is used. Random seed for each trial is set to the trial number. The default acquisition function is UCB with κ set to 2.6. The benchmark results and code can he found in https://github.com/XieZikai/DKIBO." }, { "figure_ref": [], "heading": "Synthetic functions", "publication_ref": [ "b6" ], "table_ref": [ "tab_2", "tab_3", "tab_3" ], "text": "In this section we experiment with a set of synthetic set functions [7] of varying landscapes and dimensionalities to test the performance of the proposed method on different objectives with potentially large number of local optima. Table 1 presents the simple regret which measures the optimality performance of the different algorithms while Table 2 shows the cumulative mean regret which evaluates the convergence speed. The median value across all repeats along with standard deviation is reported in both tables. Here we enrich the DKIBO acquisition functions with a random forest regression model using 20 estimators with maximum depth of 5 tree splits and an early stopping hyperparameter set to 0.05.\nThe proposed method appears to perform best in terms of simple regret while maintains a competitive performance in terms of cumulative mean regret across of a set of diverse synthetic functions, as shown in Table 2. Compared to standard BO, DKIBO offers a consistent improvement on both measures." }, { "figure_ref": [], "heading": "BBO challenge", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "Table 3 reports the BBO score for each method. For standard BO and the proposed method we report performances over three different acquisition functions. In this experiment we enrich the DKIBO acquisition functions with a random forest regression model using 20 estimators with maximum depth of 5 tree splits and an early stopping hyperparameter set to 0.05. Evidently, the enriched acquisition function of DKIBO improves upon the standard ones which further highlights the generalizability of the proposed methodology. Overall, SMAC outperforms but DKIBO (using EI) appears to achieve the second best score, demonstrating a very competitive performance." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Robotic swimming task", "publication_ref": [ "b38" ], "table_ref": [ "tab_5" ], "text": "We now test our algorithm on the MuJoCo [39] swimmer environment, which is a physical simulation of a 3D robotic swimmer with three segments and two articulation joints (rotors) to connect two of the segments to form a linear chain. The goal is to move forward as fast as possible by applying torque on the rotors using fluids friction. The environment contains a 2-dimensional action space for the rotors and 8-dimensional observation space. 
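The recasting described next amounts to scoring a flattened linear observation-to-action mapping by the reward it collects. A minimal sketch of such a wrapper is given below; the gymnasium MuJoCo Swimmer-v4 interface and per-step reward averaging are assumptions about details not specified here.

```python
import numpy as np
import gymnasium as gym

def average_reward(theta, horizon=1000, seed=0):
    """Score a 16-d vector as a (2, 8) linear map from observations to joint torques."""
    W = np.asarray(theta, dtype=np.float64).reshape(2, 8)
    env = gym.make("Swimmer-v4")
    obs, _ = env.reset(seed=seed)
    total, steps = 0.0, 0
    for _ in range(horizon):
        action = np.clip(W @ obs, -1.0, 1.0)          # keep torques within the action bounds
        obs, reward, terminated, truncated, _ = env.step(action)
        total, steps = total + reward, steps + 1
        if terminated or truncated:
            break
    env.close()
    return total / max(steps, 1)                      # the black-box objective for BO

print(average_reward(np.random.default_rng(0).normal(size=16)))
```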
Since the environment is specifically designed for reinforcement learning tasks rather than black-box optimization, we wrap the problem using a (2, 8) matrix as a linear mapping weight from observation space to action space so that the problem transforms to a 16-dimensional black-box optimization problem where the target is to maximize the average reward.\nFigure 2 demonstrates DKIBO's competitive performance against other compared methods. Here we again use a random forest regression as predictive model using 20 estimators with maximum depth of 5 tree splits and an early stopping hyperparameter set to 0.05. It is evident that DKIBO maintains very competitive performance in terms of convergence and simple regret, indicating that the optimization process generally benefits from the augmented knowledge. Notably, a plateauing effect on simple regret can be observed in the last few stages of the optimization in the right-most panel of Figure 2 which could be owed to a possible local optima confinement. Table 4 further details DKIBO's performance in terms of simple regret and cumulative mean regret." }, { "figure_ref": [ "fig_3" ], "heading": "Photocatalytic hydrogen production optimization", "publication_ref": [ "b5", "b40", "b5", "b7", "b5" ], "table_ref": [ "tab_6" ], "text": "In this section we take on a materials design problem [6] where the goal is to find an optimal composition of materials that maximizes hydrogen production through photocatalysis [41]. In this experiment 10 different materials on various concentrations were used increasing the search space (full simplex) to 98,423,325 possible combinations. Due to the increasing demand for tractable optimizers for costly realworld tasks, BO has emerged as a competitive method in the physical sciences community. Here, we recast the problem in a simulated fashion and take advantage of the underlying linear relationship between the input and output parameters (structural knowledge). We specifically enrich our acquisition function with a linear regression model trained by a least-squares error and simulate the HER measurements by fitting existing lab measurements reported in [6] using Autogluon [8], a strong ensemble model that can approximate new measurements at new sampled points. While the simulated measurements are only approximate, our goal here is to successfully utilise underlying structural knowledge as a proof-of-concept.\nFigure 3 demonstrates the dominance of BO-based algorithms over other baseline algorithms in terms of HER and simple regret. Furthermore, the right-most panel highlights that the enrichment of DKIBO with linear regression significantly boosts performance in early stages and validates the utility of external knowledge injection in real-world problems, which is consistent with our observation in experiment 4.4. The left-most panel accordingly shows that the sample points suggested by the proposed approach are consistently better than standard BO (HER measurements are slightly different compared to the ones reported in [6] due to being interpolated by Autogluon; also, no batching is used here). At late stages however, the predictive model appears to induce fluctuations to the overall performance, which highlights the need for further improved adaptation strategies in future works. It is noteworthy that employing a random forest predictive model actually degrades DKIBO's performance, further highlighting the importance of injecting suitable knowledge for the task at hand. 
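For completeness, the simulated objective described above can be sketched as follows; the synthetic stand-in data, column names, and AutoGluon settings are placeholders for the published measurements of [6] rather than the exact setup.

```python
import numpy as np
import pandas as pd
from autogluon.tabular import TabularPredictor

# Hypothetical stand-in for the lab measurements: ten ingredient amounts and a target "HER".
rng = np.random.default_rng(0)
ingredients = [f"component_{j}" for j in range(10)]
X = rng.uniform(0.0, 5.0, size=(200, 10))
train_df = pd.DataFrame(X, columns=ingredients)
train_df["HER"] = X @ rng.uniform(0.0, 1.0, size=10) + rng.normal(0.0, 0.1, size=200)

predictor = TabularPredictor(label="HER", verbosity=0).fit(train_df)

def simulated_her(x):
    """Interpolated HER value at a candidate mixture x (length-10 vector)."""
    return float(predictor.predict(pd.DataFrame([list(x)], columns=ingredients)).iloc[0])

print(simulated_her(rng.uniform(0.0, 5.0, size=10)))
```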
Table 5 further highlights the dominance of the proposed method in terms of simple regret and cumulative mean regret." }, { "figure_ref": [], "heading": "Ablation studies", "publication_ref": [], "table_ref": [], "text": "In this section we perform a series of ablation studies and analyses to further investigate the performance and robustness of the proposed methodology." }, { "figure_ref": [], "heading": "Exploration of scaling effect in acquisition", "publication_ref": [ "b8" ], "table_ref": [ "tab_7" ], "text": "We firstly explore the robustness of the proposed acquisition augmentation against scaling effects on the exploration aspects of the optimization. We specifically focus on the UCB acquisition function here mainly because its hyperparameter κ directly controls the exploration scale. We empirically show that adding the predictive model term in the acquisition offers a consistent improvement on the model's overall performance regardless of the choice of the κ hyperparameter in Eq. ( 2) in most cases. We vary κ in {1.3, 2.6, 5.1} and compare DKIBO with standard BO performance in Table 6 across a variety of settings.\nTo further demonstrate the flexibility of our proposed methodology we report performance using another predictive model, namely gradient boosted regression tree (GB) [9] based on 20 estimators and a maximum depth of 3. Due to space limitations we do not report performance using a linear predictive model here, as it empirically showed to only perform well for the photocatalysis experiment due to its underlying problem structure, and the best overall score was always achieved for κ = 2.6. Results demonstrate a consistent win of our methodology using a random forest predictive model (RF), with the gradient boosted (GB) showing a competitive performance across a variety of search spaces. This importantly highlights the ability of the added corrective term to learn to \"rebalance\" the explorationexploitation trade-off under different parametrizations of the acquisition and emphasizes the potential to generalize to various predictive models depending on the problem at hand." }, { "figure_ref": [ "fig_4" ], "heading": "Comparison to a linear mean function GP", "publication_ref": [ "b29", "b6" ], "table_ref": [], "text": "We further validate the utility of the added linear predictive model in the photocatalysis scenario by comparing against a simple GP surrogate with a linear mean function, highlighting major differences between the two approaches. A comparison in terms of both pure HER and simple regret exposes local optima confinement problems for the GP with a linear mean function which appears to dominate long-term predictions throughout the optimization process; a phenomenon also confirmed by [30]. We empirically confirm this here by employing an early stopping approach (ES) where the linear mean term is swapped with a zero-mean GP throughout the rest of the optimization based on the criterion of Eq. (7). HER and simple regret performance is shown in Figure 4 for the two linear mean variants and DKIBO. Evidently, a platauing effect is observed after iteration number 25 for the linear mean term BO in contrast to its respective ES variant, where a rapid performance boost follows after swapping to a zero-mean GP in that iteration. DKIBO on the other hand appears to perform best overall and maintain the correct balance between exploration versus exploitation in the acquisition term between UCB and the linear model." 
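The switch away from the linear term in the ES variant, like the dropping of DKIBO's corrective term, is triggered by the criterion of Eq. (7); a minimal sketch of that check (with ϵ = 0.05 as in Algorithm 1) follows.

```python
import numpy as np

def drop_corrective_term(history, x_new, eps=0.05):
    """Eq. (7): the step between consecutive suggestions, normalized by the new point's
    distance from the running mean of earlier suggestions, falls below eps."""
    x_prev = history[-1]
    denom = np.linalg.norm(x_new - np.mean(history, axis=0))
    if denom == 0.0:
        return True                       # degenerate case: the search has collapsed
    return np.linalg.norm(x_prev - x_new) / denom < eps

# Once this returns True, gamma is set to zero and the optimizer continues with the
# plain GP acquisition, i.e., the warm-start behaviour described above.
history = [np.array([0.10, 0.20]), np.array([0.30, 0.10]), np.array([0.31, 0.11])]
print(drop_corrective_term(history, np.array([0.312, 0.111])))   # True
```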
}, { "figure_ref": [ "fig_5" ], "heading": "Corrective term usage analysis", "publication_ref": [ "b6", "b24" ], "table_ref": [], "text": "Finally, we report results on the average usage duration of DKIBO's adaptive corrective term to further investigate the utility of DKIBO's early stopping strategy. Figure 5 shows at which iteration the corrective term is being dropped by the early-stopping strategy proposed in Eq. (7). Evidently, in most cases the corrective term appears to be useful throughout the whole run, while in some cases appears to be dropped in early stages. Nevertheless, even an early stage use only appears to significantly boost performance acting as a warm-start for the optimization. Outlier and percentile [25,75] information is also reported." }, { "figure_ref": [], "heading": "Conclusions and future works", "publication_ref": [], "table_ref": [], "text": "In this work we present a novel Bayesian optimization method to inject domain-specific knowledge about the structure of the search space in physical-world problems. We realize this by enriching the acquisition function with a predictive model that adaptively corrects the selection of new sample points in a form of penalty. The proposed approach is simple to implement, generalizable across a wide range of surrogate-based optimization methods and performs competitively on various different settings. Future directions of this work include the development of meta-models that will adaptively control the selection of the predictive model and early stopping criteria in an auto-ML fashion and testing our algorithm on other materials tasks." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors acknowledge financial support from the Leverhulme Trust via the Leverhulme Research Centre for Functional Materials Design. Zikai gratefully thanks the China Scholarship Council for a PhD studentship." } ]
In this paper, we propose DKIBO, a Bayesian optimization (BO) algorithm that accommodates domain knowledge to tune exploration in the search space. Bayesian optimization has recently emerged as a sample-efficient optimizer for many intractable scientific problems. While various existing BO frameworks allow the input of prior beliefs to accelerate the search by narrowing down the space, incorporating such knowledge is not always straightforward and can often introduce bias and lead to poor performance. Here, we propose a simple approach to incorporate structural knowledge in the acquisition function by utilizing an additional deterministic surrogate model to enrich the approximation power of the Gaussian process. This model is suitably chosen according to structural information of the problem at hand and acts as a corrective term towards better-informed sampling. We empirically demonstrate the practical utility of the proposed method by successfully injecting domain knowledge in a materials design task. We further validate our method's performance in different experimental settings and through ablation analyses.
Domain Knowledge Injection in Bayesian Search for New Materials
[ { "figure_caption": "Figure 1 :1Figure 1: Approximation of the Ackley function by two different surrogate models and their combination. Red points represent datapoints upon which the models are trained.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Average reward (left) and simple regret (right) performance comparison on the robotic swimming task. Solid lines show the median values while shaded areas represent the [25, 75] percentile area.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: HER (left) and simple regret (right) performance comparison on the photocatalysis optimization task. Solid lines show the median values while shaded areas represent the [25, 75] percentile area.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: HER (left) and simple regret (right) performance comparison on the photocatalysis optimization ablation study between DKIBO and BO using linear mean GP and an early stopping strategy (ES). Solid lines show the median values while shaded areas represent the [25, 75] percentile area.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Box plots for average usage duration of DKIBO's adaptive corrective term in various problems. Outlier and percentile[25,75] information is also reported.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 Domain Knowledge Injected Bayesian Optimization Require: Acquisition function e α, maximum iteration number imax, prediction model ξ(x, D), ϵ = 0.05. 1: Initialize observation dataset by random sampling D ← {(xinit, yinit)}; 2: if α(•) is UCB then Probe x ⋆ to get y ⋆ and D ← D + {(x ⋆ , y ⋆ )};", "figure_data": "3:γ ← 1;4: else5:γ ←P Pα(x iiii ,D) ξ(x iiii ,D) ;6: end if7: i ← 0;8: while i < imax do9:Fit the predictive model ξ(x, D) with D;10:Fit the Gaussian process GP ∼ N (0, KMatérn) with D;11: 12: 13:α(x, i) and get new observation point x ⋆ ; Maximize e if ∥xi-xi+1∥ ∥xi+1-1 i P i i=1 x i∥ < ϵ then14:γ ← 0;15:end if16:", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Simple regret performance on synthetic functions with various dimensionalities d. 
Best performances appear boldfaced.", "figure_data": "DKIBOSBOTPERSOTSMACColvilled=531.61±20.8739.52±22.55383.28±353.20 813.56±652.85157.90± 269.58825.70±636.69Michalewiczd=103.57 ± 0.613.94 ± 0.594.16 ± 0.444.46 ± 0.412.41±0.873.74 ± 0.50Ackleyd=20.13±0.280.31±0.505.76±1.798.02±2.512.71±2.232.65 ± 5.47Branind=26.75×10 -5 ±1.49×10 -46.67×10 -5 ±1.17×10 -40.17±0.240.26±0.340.018±0.280.073±0.65Eggholderd=21.12±4.301.92±3.5513.79±29.3683.76±54.518.77±50.6835.86±236.81Goldstein priced=22.86 ± 4.803.04±4.302.92±8.4712.12±18.630.63±26.7733.85±74.01Hartmannd=60.012±0.241.67×10 -3 ±0.0610.77±0.281.33±0.400.14±0.130.33±0.41Rosenbrockd=20.10±0.190.12 ± 0.182.38±4.253.75±10.630.95±4.094.82±9.72Six hump cameld=23.14×10 -4 ±4.42×10 -31.26×10 -3 ±0.0130.049±0.0740.14±0.166.06×10 -3 ±0.094 8.24×10 -3 ±0.23StyblinskiTangd=27.14×10 -4 ± 9.04×10 -47.50×10 -4 ± 1.13×10 -30.99±4.342.76±4.300.21±4.491.49±9.4DKIBOSBOTPERSOTSMACColvilled=52851.95±2753.50 3003.76±2964.77 2908.38±2447.89 3134.14± 2224.16 2466.44±2227.98 3923.27±3044.18Michalewiczd=104.47±1.094.60±0.944.65±0.714.79±0.543.57±1.574.570.96Ackleyd=22.96 ± 2.743.06±2.679.27±3.9111.38±3.796.37±4.208.01±6.27Branind=21.42±1.421.33±1.331.69±1.492.13±1.811.92±1.822.00±1.77Eggholderd=273.57±71.2170.75±68.0177.25±60.08180.71±106.84118.94±101.52177.84±211.39Goldstein priced=2312.02±307.81263.4±267.2262.7±275.9200.97±183.86235.69±225.09248.62±207.54Hartmannd=60.70±0.630.71±0.681.41±0.691.73±0.600.86±0.700.95±0.62Rosenbrockd=2257.91 ± 257.75285.44 ± 285.34358.14±354.28745.21±737.69488.60±486.10316.92±308.64Six hump cameld=20.55±0.550.55±0.540.59±0.520.80±0.630.62±0.600.63±0.57StyblinskiTangd=24.78±4.785.83±5.837.96±6.6510.11±7.056.54±6.3212.01±9.36", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Cumulative mean regret performance on synthetic functions with various dimensionalities d. Best performances appear boldfaced.", "figure_data": "SMACOTTPERSSBODKIBOUCBEIPOIUCBEIPOIBBO score94.1686.90 92.26 83.09 89.87 91.83 87.43 92.86 93.10 87.99", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "BBO scores for different optimization algorithms. Larger scores are better and best performance appears boldfaced.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Simple regret and cumulative mean regret performances on the robotic swimming experiment. Best performance appears boldfaced.", "figure_data": "CMRSimple RegretCMRSimple RegretDKIBO142.81±94.4547.29±60.53DKIBO5.57±3.571.95±2.14SBO155.35± 103.2142.62±44.31SBO7.76±5.122.72±2.62TPE163.51±88.9175.38±55.04TPE10.94±0.5010.54±0.28OT191.33±107.92129.26±79.13OT10.83±1.1410.27±0.91RS218.62±66.08175.68±47.69RS11.22±0.4110.89±0.24SMAC194.85±95.70118.64±59.58SMAC10.71±0.4410.29±0.15", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Simple regret and cumulative mean regret performances on the simulated photocatalysis experiment. Best performance appears boldfaced.", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "01×10 -4 2.86×10 -4 1.77×10 -4 6.63×10 -5 6.73×10-5 7.58×10 -5 1.00×10 -4 4.91×10 -5 7.72×10 -5 Median regret performance on all tasks for various κ choices in UCB and predictive model choices. All experiments are conducted in 50 trials of 100 iterations. Due to space limitations standard deviation information is not reported here. 
Best performances for each κ choice appear boldfaced.", "figure_data": "κ = 5.1κ = 2.6κ = 1.3SBODKIBORFDKIBOGBSBODKIBORFDKIBOGBSBODKIBORFDKIBOGBColvilled=579.5961.6672.7939.5231.6140.3224.5231.0024.90Michalewiczd=1020.12.331.631.91.531.672.151.972.05Ackleyd=22.061.530.660.2800.100.200.130.130.11Branind=2 3.Eggholderd=27.667.544.873.041.111.230.480.270.03GoldsteinPriced=26.025.645.243.042.853.581.141.641.84Hartmann6d=60.510.240.0750.00170.0120.00240.180.128.75×10 -4Rosenbrockd=20.370.340.160.110.100.0720.100.0670.055SixHumpCameld=20.0340.0330.0290.00123.10×10 -4 3.02×10 -48.03×10 -57.20×10 -55.84×10 -5StyblinskiTangd=20.00180.00170.00127.54×10 -47.11×10 -4 6.79×10 -40.00105.73×10 -43.93×10 -4Robotic swimming102.3777.0874.1843.0047.6763.6535.8157.9648.71", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
Zikai Xie; Xenophon Evangelopoulos; Joseph C R Thacker; Andrew I Cooper
[ { "authors": " Neurips", "journal": "", "ref_id": "b0", "title": "Black box optimization challenge starter kit", "year": "2020" }, { "authors": "Jason Ansel; Shoaib Kamil; Kalyan Veeramachaneni; Jonathan Ragan-Kelley; Jeffrey Bosboom; Una-May O' Reilly; Saman Amarasinghe", "journal": "", "ref_id": "b1", "title": "Opentuner: An extensible framework for program autotuning", "year": "2014" }, { "authors": "James Bergstra; Rémi Bardenet; Yoshua Bengio; Balázs Kégl", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b2", "title": "Algorithms for hyper-parameter optimization", "year": "2011" }, { "authors": "James Bergstra; Brent Komer; Chris Eliasmith; Dan Yamins; David D Cox", "journal": "Computational Science & Discovery", "ref_id": "b3", "title": "Hyperopt: a python library for model selection and hyperparameter optimization", "year": "2015" }, { "authors": "George Ep Box; William H Hunter; Stuart Hunter", "journal": "John Wiley and sons", "ref_id": "b4", "title": "Statistics for Experimenters: Design, Innovation, and Discovery", "year": "2005-05" }, { "authors": "Benjamin Burger; Vladimir V Phillip M Maffettone; Catherine M Gusev; Yang Aitchison; Xiaoyan Bai; Xiaobo Wang; Li; Buyi Ben M Alston; Rob Li; Clowes", "journal": "Nature", "ref_id": "b5", "title": "A mobile robotic chemist", "year": "2020" }, { "authors": "George De Ath; Jonathan E Fieldsend; Richard M Everson", "journal": "", "ref_id": "b6", "title": "What do you mean? the role of the mean function in bayesian optimisation", "year": "2020" }, { "authors": "Nick Erickson; Jonas Mueller; Alexander Shirkov; Hang Zhang; Pedro Larroy; Mu Li; Alexander Smola", "journal": "", "ref_id": "b7", "title": "Autogluon-tabular: Robust and accurate automl for structured data", "year": "2020" }, { "authors": " Jerome H Friedman", "journal": "Annals of statistics", "ref_id": "b8", "title": "Greedy function approximation: a gradient boosting machine", "year": "2001" }, { "authors": "Tim Head; Gilles Louppe Mechcoder; Iaroslav Shcherbatyi", "journal": "Zenodo", "ref_id": "b9", "title": "scikit-optimize/scikit-optimize: v0.5.2", "year": "2018" }, { "authors": "José Miguel Hernández-Lobato; Michael Gelbart; Matthew Hoffman; Ryan Adams; Zoubin Ghahramani", "journal": "PMLR", "ref_id": "b10", "title": "Predictive entropy search for bayesian optimization with unknown constraints", "year": "2015" }, { "authors": "José Miguel Hernández-Lobato; James Requeima; Alán Edward O Pyzer-Knapp; Aspuru-Guzik", "journal": "PMLR", "ref_id": "b11", "title": "Parallel and distributed thompson sampling for large-scale accelerated exploration of chemical space", "year": "2017" }, { "authors": "Riley Hickman; Jurgis Ruža; Loïc Roch; Hermann Tribukait; Alberto García-Durán", "journal": "", "ref_id": "b12", "title": "Equipping data-driven experiment planning for self-driving laboratories with semantic memory: case studies of transfer learning in chemical reaction optimization", "year": "2022" }, { "authors": "Frank Hutter; Kevin Holger H Hoos; Leyton-Brown", "journal": "Springer", "ref_id": "b13", "title": "Sequential model-based optimization for general algorithm configuration", "year": "2011" }, { "authors": "Carl Hvarfner; Danny Stoll; Artur Souza; Marius Lindauer; Frank Hutter; Luigi Nardi", "journal": "", "ref_id": "b14", "title": "πBO: Augmenting acquisition functions with user beliefs for bayesian optimization", "year": "2022" }, { "authors": "Matthias Donald R Jones; William J Schonlau; Welch", "journal": "Journal of Global Optimization", 
"ref_id": "b15", "title": "Efficient global optimization of expensive black-box functions", "year": "1998" }, { "authors": "Tinu Theckel; Joy ; Santu Rana; Sunil Gupta; Svetha Venkatesh", "journal": "Expert Systems with Applications", "ref_id": "b16", "title": "A flexible transfer learning framework for bayesian optimization with convergence guarantee", "year": "2019" }, { "authors": "Jungtaek Kim; Seungjin Choi", "journal": "Springer", "ref_id": "b17", "title": "On local optimizers of acquisition functions in bayesian optimization", "year": "2020" }, { "authors": "Cheng Li; Sunil Gupta; Santu Rana; Vu Nguyen; Antonio Robles-Kelly; Svetha Venkatesh", "journal": "", "ref_id": "b18", "title": "Incorporating expert prior knowledge into experimental design via posterior sampling", "year": "2020" }, { "authors": "Cheng Li; Santu Rana; Sunil Gupta; Vu Nguyen; Svetha Venkatesh; Alessandra Sutti; David Rubin; Teo Slezak; Murray Height; Mazher Mohammed", "journal": "", "ref_id": "b19", "title": "Accelerating experimental design by incorporating experimenter hunches", "year": "2019" }, { "authors": "Ruben Martinez-Cantin; Nando De Freitas; Arnaud Doucet; José A Castellanos", "journal": "Robotics: Science and Systems", "ref_id": "b20", "title": "Active policy learning for robot planning and exploration under uncertainty", "year": "2007" }, { "authors": "Sebastian Vu Nguyen; Michael Schulze; Osborne", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Bayesian optimization for iterative learning", "year": "2020" }, { "authors": "Shuteng Niu; Yongxin Liu; Jian Wang; Houbing Song", "journal": "IEEE Transactions on Artificial Intelligence", "ref_id": "b22", "title": "A decade survey of transfer learning (2010-2020)", "year": "2020" }, { "authors": "Fernando Nogueira", "journal": "", "ref_id": "b23", "title": "Bayesian Optimization: Open source constrained global optimization tool for Python", "year": "2014" }, { "authors": "Changyong Oh; Efstratios Gavves; Max Welling", "journal": "PMLR", "ref_id": "b24", "title": "Bock: Bayesian optimization with cylindrical kernels", "year": "2018" }, { "authors": "Jack Parker-Holder; Vu Nguyen; Stephen J Roberts", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b25", "title": "Provably efficient online hyperparameter optimization with population-based bandits", "year": "2020" }, { "authors": "Fabian Pedregosa; Gaël Varoquaux; Alexandre Gramfort; Vincent Michel; Bertrand Thirion; Olivier Grisel; Mathieu Blondel; Peter Prettenhofer; Ron Weiss; Vincent Dubourg", "journal": "the Journal of machine Learning research", "ref_id": "b26", "title": "Scikit-learn: Machine learning in python", "year": "2011" }, { "authors": "Anil Ramachandran; Sunil Gupta; Santu Rana; Cheng Li; Svetha Venkatesh", "journal": "Knowledge-Based Systems", "ref_id": "b27", "title": "Incorporating expert prior in bayesian optimisation via space warping", "year": "2020" }, { "authors": "Carl Edward; Rasmussen Christopher; K I Williams", "journal": "", "ref_id": "b28", "title": "Gaussian processes in machine learning", "year": "2004" }, { "authors": "Stephen Roberts; Michael Osborne; Mark Ebden; Steven Reece; Neale Gibson; Suzanne Aigrain", "journal": "Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences", "ref_id": "b29", "title": "Gaussian processes for time-series modelling", "year": "1984" }, { "authors": "Bobak Shahriari; Kevin Swersky; Ziyu Wang; Ryan P Adams; Nando De Freitas", 
"journal": "Proceedings of the IEEE", "ref_id": "b30", "title": "Taking the human out of the loop: A review of bayesian optimization", "year": "2015" }, { "authors": "Zekun Alexander E Siemenn; Qianxiao Ren; Tonio Li; Buonassisi", "journal": "", "ref_id": "b31", "title": "Fast bayesian optimization of needle-in-a-haystack problems using zooming memory-based initialization", "year": "2022" }, { "authors": "Eero Siivola; Aki Vehtari; Jarno Vanhatalo; Javier González; Michael Riis Andersen", "journal": "IEEE", "ref_id": "b32", "title": "Correcting boundary over-exploration deficiencies in bayesian optimization with virtual derivative sign observations", "year": "2018" }, { "authors": "Jasper Snoek; Hugo Larochelle; Ryan P Adams", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b33", "title": "Practical bayesian optimization of machine learning algorithms", "year": "2012" }, { "authors": "Jasper Snoek; Oren Rippel; Kevin Swersky; Ryan Kiros; Nadathur Satish; Narayanan Sundaram; Mostofa Patwary; Mr Prabhat; Ryan Adams", "journal": "PMLR", "ref_id": "b34", "title": "Scalable bayesian optimization using deep neural networks", "year": "2015" }, { "authors": "Artur Souza; Luigi Nardi; Leonardo B Oliveira; Kunle Olukotun; Marius Lindauer; Frank Hutter", "journal": "Springer", "ref_id": "b35", "title": "Bayesian optimization with a prior for the optimum", "year": "2021" }, { "authors": "Jost Tobias Springenberg; Aaron Klein; Stefan Falkner; Frank Hutter", "journal": "Curran Associates, Inc", "ref_id": "b36", "title": "Bayesian optimization with robust bayesian neural networks", "year": "2016" }, { "authors": "Kevin Swersky; Jasper Snoek; Ryan P Adams", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b37", "title": "Multi-task bayesian optimization", "year": "2013" }, { "authors": "Emanuel Todorov; Tom Erez; Yuval Tassa", "journal": "IEEE", "ref_id": "b38", "title": "Mujoco: A physics engine for model-based control", "year": "2012" }, { "authors": "Pauli Virtanen; Ralf Gommers; Travis E Oliphant; Matt Haberland; Tyler Reddy; David Cournapeau; Evgeni Burovski; Pearu Peterson; Warren Weckesser; Jonathan Bright", "journal": "Nature methods", "ref_id": "b39", "title": "Scipy 1.0: fundamental algorithms for scientific computing in python", "year": "2020" }, { "authors": "Zheng Wang; Can Li; Kazunari Domen", "journal": "Chem. Soc. Rev", "ref_id": "b40", "title": "Recent developments in heterogeneous photocatalysts for solar-driven overall water splitting", "year": "2019" } ]
[ { "formula_coordinates": [ 1, 156.84, 601.05, 130.35, 13.47 ], "formula_id": "formula_0", "formula_text": "x∈X f (x),(1)" }, { "formula_coordinates": [ 2, 372.07, 356.24, 178.13, 15.75 ], "formula_id": "formula_1", "formula_text": "x ⋆ = arg max x∈X µ(x) + κσ(x),(2)" }, { "formula_coordinates": [ 2, 361.02, 444.99, 189.18, 15.75 ], "formula_id": "formula_2", "formula_text": "x ⋆ = arg max x∈X α(x, D) + γξ(x, D).(3)" }, { "formula_coordinates": [ 3, 126.05, 249.49, 161.14, 33.71 ], "formula_id": "formula_3", "formula_text": "γ = P α(xinit, D) P ξ(xinit, D) ,(4)" }, { "formula_coordinates": [ 3, 120.75, 423.99, 166.44, 22.52 ], "formula_id": "formula_4", "formula_text": "h(i) = min 1, 4i 2 i 2 max ,(5)" }, { "formula_coordinates": [ 3, 93.04, 496.39, 194.15, 21.82 ], "formula_id": "formula_5", "formula_text": "e α(x, D, i) = α(x, D) + γh(i)ξ(x, D).(6)" }, { "formula_coordinates": [ 3, 149.78, 639.16, 137.42, 24.49 ], "formula_id": "formula_6", "formula_text": "1 i P i k=1 x k < ϵ.(7)" } ]
2023-11-26
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b15", "b4", "b0" ], "table_ref": [], "text": "Biometric recognition systems play important roles in different domains due to their accuracy and quick processing time. Biometric authentication is based on an individual's distinct physiological or behavioral traits, such as voice prints, iris patterns, or fingerprints [17]. It reduces the risk of unauthorized access and burdens like the requirement of memorizing passwords, fraudulent authentication, and password stealing, as most of the biometric traits are difficult to forge or replicate, providing a higher level of security compared to passwords, which can be easily guessed or stolen unless the password is strong. Therefore, such authentication systems are becoming increasingly common in situations where users need to quickly authenticate themselves, such as border crossings, airports, cell-phone based authentication, and in healthcare, by simply providing their biometric data [16]. This eliminates the need for complex login procedures or repetitive authentication steps.\nOne of the most popular and extensively utilized biometric characteristics for authentication is fingerprints. Each fingerprint is unique and remains consistent over time. To enhance security and accuracy, multiple fingerprints, such as the fingerprint slap, are used for a user instead of a single fingerprint. The first step in using a slap fingerprint to authenticate a user is to separate or segment each finger in the slap [2]. There are several publicly and commercially available slap fingerprints segmenters, such as NIST NFSEG [10] and Neurotechnology Verifinger Segmenter [5].\nWe have developed a deep learning-based fingerprint segmenter called CRFSEG [15] which outperforms other fingerprint segmenters in terms of precise fingerprint detection and fingerprint matching accuracy. However, it is important to note that these fingerprint segmenters have mainly been developed and used for contact-based slap fingerprint images. While contact-based slap fingerprint authentication offers significant advantages, it also has multiple limitations, including the requirement for specialized hardware or infrastructure for implementation.\nMost fingerprint authentication systems require physical contact between the fingers and fingerprint-capturing devices to capture fingerprint images. However, such contact can result in problems like blurred images due to dust on the capturing surface of the device and the potential for contamination from previously captured fingerprint information. Furthermore, fingerprint data can raise privacy concerns, and there is a possibility of false positives or false negatives during the matching process. In addition to these limitations, contactbased fingerprint-capturing processes raise health concerns, particularly during pandemics, as they require direct contact with capturing devices [18]. However, ongoing advancements in fingerprint-based biometric technology continue to address these challenges, making biometrics a promising and increasingly adopted authentication solution. One promising area is the use of contactless fingerprinting, which eliminates the need for additional specialized hardware or infrastructure. The term fingerphoto refers to a contactless fingerprint image that may be taken with any camera, including a simple mobile phone camera. 
A fingerphoto is created by capturing an image of human fingers with a basic smartphone camera and typically includes multiple fingers [14]. In Figure 1, a sample fingerphoto is shown.\nContactless fingerprint-based authentication systems offer several advantages over traditional contact-based fingerprint authentication systems: Figure 1: A sample fingerphoto image was taken with a simple mobile phone camera. Fingerphotos are typically acquired by capturing an image of human fingers using a standard smartphone camera, and they frequently include multiple fingers within the frame. i) Improved security against spoofing: One of the most important advantages of using contactless fingerprint biometric systems is that they often incorporate additional antispoofing measures to enhance security. Advanced sensors and algorithms can detect and differentiate between live fingers and spoofing attempts using fake or synthetic fingerprints, reducing the risk of unauthorized access.\nii) Hygiene and cleanliness: By eliminating the requirement of physical contact between the user's finger and the fingerprint-capturing sensor, contactless fingerprint authentication reduces the risk of spreading germs, viruses, and bacteria, making it a more hygienic option, especially in high-traffic areas or shared devices.\niii) Convenience: Contactless fingerprint authentication is often faster and more userfriendly than contact-based methods. Users can simply place their finger toward the sensors/cameras or the camera near the user's hand, eliminating the need for precise alignment or direct physical contact. Most contactless fingerprint sensors like Idemia MorphoWave, Fujitsu PalmSecure can be used from a distance, which can be helpful for people with disabilities or who have difficulty bending over. This improves the overall user experience and can lead to increased user adoption and satisfaction.\niv) Versatility and ease of integration: Contactless fingerprint biometric systems can be integrated into a wide range of devices and environments. They can work with existing touchless technologies, such as proximity sensors or facial recognition, to provide multi-modal authentication options. This versatility allows for seamless integration into various applications, including access control systems, smartphones, and payment terminals.\nDespite the advantages offered by contactless fingerprint biometrics, it is worth noting that they also have some limitations. These include potential susceptibility to environmental factors such as lighting conditions, distance captures, and uneven focus, as well as the need for appropriate sensor technology. The implementation of multispectral imaging technology exemplifies a suitable approach to address certain challenges associated with contactless fingerprint biometrics. Multispectral fingerprint sensors excel in overcoming environmental obstacles by capturing data across multiple wavelengths of light enabling them to mitigate issues arising from factors such as lighting conditions and other environmental variables. Advancements in sensor technology and algorithmic improvements continue to help overcome these challenges and make contactless fingerprint biometrics a promising and viable authentication solution [1].\nMost contactless fingerprint authentication systems utilize fingerphotos, which are contactless fingerprint images that are captured from multiple fingers. 
Segmenting all fingertips from fingerphotos is an active area of research in contactless biometric authentication systems. Segmentation plays a crucial role in fingerprint matching, as observed in the existing literature [13].\nIn this paper, we describe our novel contactless fingerprint segmentation system developed by enhancing our existing contact-based fingerprint segmentation model, CRFSEG (Clarkson Rotated Fingerprint Segmentation Model) [15], by updating its architecture and training it on a contactless dataset. Our novel contactless segmentation model CRFSEG-v2 demonstrates higher accuracy when evaluated on our in-house contactless fingerprint dataset consisting of 23,650 slaps, which were annotated by human experts. We describe a novel model for segmentation of contactless fingerphotos. Its architecture is based on our prior fingerprint segmentation system [15]. Special attention was given to optimizing the deep learning architecture for this purpose. The paper presents the following novel contributions:\n• Built an in-house contactless dataset that contains 23,650 fingerphotos (94,600 single fingers).\n• Annotated all fingerphoto images1 manually to establish a ground-truth baseline for the accuracy assessment of fingerprint segmentation systems.\n• Updated and retrained for contactless fingerphotos our previously developed age-invariant deep learning-based slap segmentation model, that can handle arbitrarily orientated fingerprints.\n• Assessed the performance of the contactless model named CRFSEG-v2 (Clarkson Rotated Fingerprint Segmentation Model) using the following metrics, MAE, EAP, and accuracy in fingerprint matching." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b11" ], "table_ref": [], "text": "Researchers have utilized various segmentation methods to achieve precise contactless fingerprint segmentation. However, the skin-mask finger segmentation technique relies on the assumption that the skin color of the fingers is relatively uniform. This assumption may not hold true in all cases, such as when the fingers are dirty or have been exposed to sunlight. This type of segmentation is sensitive to environmental factors such as lighting conditions and variations in skin color. The presence of objects or background elements that resemble skin tones may affect the accuracy of the segmentation results. Additionally, the technique may not include all finger pixels in the selection, and the separation of fingers connected via the thumb relies on evaluating local minima and maxima, which may not always be accurate. Malhotra et al. introduced a method for segmenting the distal phalange in contactless fingerprint images by combining a saliency map and a skin-color map [12]. They adopted a different approach by employing a random decision forest matcher and feature extraction with a deep scattering network. Their dataset included 1216 contact-based and 8512 contactless images from 152 fingers, resulting in an EER ranging from 2.11% to 5.23%. This process involves extracting a binary mask that represents the finger region in a captured finger selfie. It combines region covariance-based saliency and skin color measurements for effective segmentation. 
The steps include extracting visual features, constructing covariance matrices, computing dissimilarities between regions, generating saliency maps, converting to the CMYK (Cyan (C), Magenta (M), Yellow (Y), and Black (K)) color model, fusing saliency and skin color maps, and applying thresholds to obtain the final segmented mask. Although the algorithm produces impressive results, it requires extensive hyperparameter tuning and continues to struggle to accurately distinguish fingerprints in the presence of noisy backgrounds or under excessively bright lighting conditions.\nTo address the challenge of accurately segmenting fingerprints under difficult illumination conditions or noisy backgrounds, Grosz et al. proposed an auto-encoder-based segmentation approach [4]. To perform 500 PPI deformation and scale correction on contactless fingerprints, they utilized a spatial transformer. Their dataset included three parts: one with 8,512 contactless and 1,216 contact-based fingerprints from 152 fingers, another containing 2,000 contactless and 4,000 contact-based fingerprints from 1,000 fingers, and a ZJU dataset comprising 9,888 contactless and 9,888 contact-based fingerprints from 824 fingers. These datasets served as the basis for evaluating their methodology. Their approach achieved impressive EERs of 1.20%, 0.72%, 0.30%, and 0.62% on these datasets, respectively. A U-Net segmentation network is employed to segment the distal phalange of contactless fingerprint photos. The segmentation network is trained using manually marked segmentation masks from a dataset. The segmentation algorithm takes unsegmented images as input and outputs a segmentation mask. This mask is then used to crop the distal phalange of the fingerprint while the background is removed. Image enhancements, including histogram equalization and gray-level inversion, are applied to improve the ridge-valley structure.\nNone of the aforementioned studies report accuracy measures such as mean absolute error (MAE), error in angle prediction (EAP), or labeling accuracy for their fingerphoto segmentation techniques. Our work addresses this gap by evaluating our novel contactless segmentation system using a large dataset of images. We not only report the accuracy of the segmenter but also assess its impact on contactless fingerprint-matching accuracy." }, { "figure_ref": [], "heading": "Research Methods", "publication_ref": [], "table_ref": [], "text": "In this section, we provide a detailed examination of the method used for collecting fingerphotos from adult subjects, data annotation, data augmentation, and ground truth labeling. Then, we describe the proposed neural network architecture of CRFSEG-v2. Finally, we discuss the metrics used to evaluate different slap segmentation algorithms." }, { "figure_ref": [], "heading": "Slap Dataset", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss contactless datasets including fingerphoto collection, annotation, augmentation, and ground truth creation." }, { "figure_ref": [], "heading": "Contactless Fingerphoto Collection", "publication_ref": [], "table_ref": [], "text": "Contactless fingerphoto collection is a crucial component of contactless fingerprint biometrics research. Our literature review revealed a shortage of contactless fingerprint images, particularly finger photos containing multiple fingers. To address this gap, we conducted a user study and collected fingerphotos multiple times.
The collection process involved acquiring fingerprint images without any physical contact with fingerprint capture devices. In this subsection, we discuss the different techniques and considerations involved in contactless fingerphoto collection.\n• Camera Selection: The selection of a camera is of utmost importance in contactless finger photo collection. Opting for high-resolution cameras with excellent color reproduction and minimal noise levels is ideal for capturing clear and detailed finger images. However, when developing a contactless authentication system that operates in realworld scenarios, relying solely on high-quality images is not recommended. Therefore, we utilized multiple mobile devices, including the Google Pixel 2, iPhone 7, X, Samsung S6, S7, and S9 to capture fingerphotos.\n• Illumination Conditions: The lighting conditions during the capture of contactless fingerphotos are crucial. It is important to have sufficient and uniform illumination to minimize shadows, reflections, and image noise. We took precautions to avoid overexposure or underexposure, as these can affect the visibility of fingerprint details.\n• Finger Alignment and Placement: Providing clear instructions and visual cues can help users achieve optimal finger placement. However, for real-world applications, fingerprint alignment and placement can vary. Therefore, we did not instruct users on how to position their fingers specifically within the camera's field of view. As a result, we obtained fingerprints with diverse alignment and placement.\n• Motion Blur Reduction: Movement during image capture can cause motion blur, which can degrade the quality of finger images. To minimize motion blur and improve image sharpness, users were instructed to place their hands on a table or stable surface.\n• Backgrounds: Selecting an appropriate background is crucial for improving the contrast between the finger and the background, which facilitates accurate extraction of the finger region. However, it is not always possible for users to have an ideal background in real-world applications. Therefore, we employed various backgrounds when capturing fingerphotos.\n• Privacy and Data Protection: Collecting contactless fingerphotos involves capturing personal biometric data. It is crucial to comply with privacy regulations and ensure the secure storage and handling of the collected data. To protect the privacy and confidentiality of individuals' biometric information, we employed anonymization techniques, and with appropriate permits from the Institutional Review Board (IRB).\nThis dataset comprises a total of 2150 fingerphotos. A comprehensive and diverse image dataset should encompass a broad range of scenarios, including different poses, illumination conditions, sizes, brightness levels, and fingerprint positions. Such datasets are valuable for developing a robust deep-learning-based fingerprint segmentation model and are compiled to thoroughly evaluate the model's performance in real-world scenarios. To introduce such variation into our dataset, we employed data augmentation techniques. Through augmentation, we obtained an additional 21,500 augmented images, thereby expanding the dataset to a total 23,650 images." }, { "figure_ref": [], "heading": "Fingerphoto Annotation and Augmentation", "publication_ref": [], "table_ref": [], "text": "Data annotation plays a crucial role in developing a well-structured and representative dataset, which is essential for building a reliable deep learning-based image segmentation model. 
The annotation process involves delineating a bounding box around each fingertip in a fingerphoto and assigning a corresponding label to each fingertip. The fingertip labels used include Left-Index, Left-Middle, Left-Ring, Left-Little, Left-Thumb, Right-Index, Right-Middle, Right-Ring, Right-Little, and Right-Thumb. This annotated data is then utilized to train a deep learning model, enabling it to recognize and classify distinct objects or classes within a fingerphoto.\nThe creation of a ground truth dataset for evaluating contactless fingerprint segmentation models typically involves laborious manual annotation, especially when dealing with a large number of images. To deal with this process, we initially annotated 200 images and fine-tuned our previously developed contact-based fingerprint segmentation model using this annotated dataset. Subsequently, we employed the fine-tuned CRFSEG-v2 segmentation model to automatically segment the fingerphotos. However, a manual review of the CRFSEG-v2 results was still necessary to ensure accuracy and alignment. This involved visually examining the images, rectifying errors, and aligning the fingerprints. Although the CRFSEG-v2 model demonstrated proficiency in generating bounding boxes around fingertips, it occasionally mislabeled them. We rectified these misclassifications through manual inspection of all the fingerphotos. This automated segmentation approach resulted in a 65-70% reduction in the time required for annotation. We further augmented the contactless dataset by rotating all the fingerphotos at various angles (-90°to 90°) to create a diverse set of slap images containing rotated fingerprints. The details of the dataset are shown in the Table 1.\nTable 1: We have 23,650 finger photos. Out of these, 2,150 were collected by Clarkson University. For image collection, we used different types of mobile phones such as Samsung S20, iPhone 7, iPhone X, and Google Pixel. Subsequently, we annotated the images manually. To create rotating slap images, we utilized the 2,150 finger photos that were annotated by humans to create more images using an augmentation technique. This resulted in the generation of 21,500 more augmented images by rotating all the finger photos at different angles (-90 " }, { "figure_ref": [ "fig_0" ], "heading": "Deep Learning Architecture for Contactless fingerphoto Segmentation", "publication_ref": [], "table_ref": [], "text": "In prior work, we developed a two-stage Faster R-CNN architecture for segmenting contactbased slap fingerprints [15]. We utilized a similar two-stage faster architecture for segmenting contactless slap fingerphotos. However, contact-based and contactless fingerprint images possess distinct characteristics in terms of quality, structure, size, and other factors. Contact-based images tend to exhibit higher quality as they are captured using physical sensors that make direct contact with the finger. Moreover, these images benefit from stable lighting conditions. In contrast, contactless images often suffer from challenges such as blurriness, noise, diverse backgrounds, and variations in illumination, as they are captured from a distance using different types of sensors. Consequently, we needed to make small modifications to the architecture to accommodate the lower quality and address other difficulties inherent in contactless images. This modification involves the use of different backbone networks, loss functions, increasing the kernel size, and adjusting the learning rate and number of epochs. 
The contactless fingerphoto deep learning architecture, see Figure 2, comprises three key structural components: the box head, the oriented region proposal network, and the backbone network." }, { "figure_ref": [], "heading": "Backbone Network", "publication_ref": [], "table_ref": [], "text": "The CRFSEG-v2 backbone network, featuring a ResNet-100 with FPN architecture, is designed to extract feature maps of different sizes from contactless fingerphotos. This network incorporates both stem blocks and bottleneck blocks. To optimize calculations, extract key features, and expand channel capabilities, bottleneck blocks with three convolution layers of varying kernel sizes (1×1, 3×3, and 1×1) are employed [7]. Within the stem block, the focus is on reducing the input image size, achieved through 2D convolution layers, ReLU activation, and max-pooling layers, to minimize computational cost while retaining all essential information. The oriented region proposal network (O-RPN) then takes in the multiscale, semantically rich feature maps generated by the backbone network." }, { "figure_ref": [ "fig_0" ], "heading": "Oriented Region Proposal Network (O-RPN)", "publication_ref": [], "table_ref": [], "text": "After being obtained from the backbone network, the input image feature maps are employed by the Oriented Region Proposal Network (O-RPN) to generate proposals for oriented regions of interest (ROIs), which are most likely to encompass the objects of interest. This is achieved by identifying areas with a high probability of containing the target objects. The proposal of ROIs by the O-RPN is a crucial step, as it helps in generating accurate rotated bounding boxes for both axis-aligned and rotated objects. This is in contrast to the conventional region proposal network (RPN) used in the Faster R-CNN architecture, where only axis-aligned regions are proposed. The proposal of oriented regions is achieved by using a sliding-window approach with a 3×3 kernel on the feature maps. Consequently, the system generates anchor boxes with diverse aspect ratios, scales, and orientations. If we have k_a different orientations, k_s different scales, and k_r different aspect ratios, then K = k_a × k_s × k_r anchor boxes are generated for each position in the feature map. Most of these anchor boxes might not contain target objects. Subsequently, convolution layers and parallel output layers called localizing and classifying layers are employed to differentiate anchor boxes with target objects from others. The classifying layer assigns foreground or background labels according to the intersection-over-union (IoU) score with the ground-truth boxes, while the localization layer learns the offsets (x, y, w, h, θ) for the foreground boxes. The term \"foreground\" is used to denote regions of interest (proposals) that exhibit a substantial overlap with the bounding boxes of the ground-truth objects in the image. In simpler terms, these are areas most likely to contain a prominent object. The classifying layer assigns a \"foreground\" label to these regions. Conversely, regions of interest that lack a significant overlap with any ground-truth bounding box are designated as \"background\". The probability of an object being present in these areas is lower. The classifying layer assigns a \"background\" label to these regions.\nRegarding the regression offsets, the localizing layer generates (K×5) parameterized encodings, while the classifying layer produces (K×5) parameterized scores for region classification.
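As a concrete illustration of this anchor enumeration, the short Python sketch below generates the K = k_a × k_s × k_r oriented anchors at a single feature-map location, using the seven orientations, three scales, and three aspect ratios listed next for O-RPN training. The (cx, cy, w, h, θ) parameterization and helper names are assumptions made for illustration, not the model's actual anchor generator.

```python
# Illustrative enumeration of the K = k_a * k_s * k_r oriented anchors generated
# at a single feature-map location, using the orientations, scales, and aspect
# ratios listed for O-RPN training. Anchors are parameterized as
# (cx, cy, w, h, theta); this is a sketch, not the network's anchor generator.
import math
from itertools import product

ORIENTATIONS = [-math.pi/4, -math.pi/6, -math.pi/12, 0.0,
                math.pi/12, math.pi/6, math.pi/4]           # k_a = 7
SCALES = [128, 256, 512]                                     # k_s = 3
ASPECT_RATIOS = [(1, 1), (1, 2), (2, 1)]                     # k_r = 3

def anchors_at(cx, cy):
    """Return the K = 7 * 3 * 3 = 63 oriented anchors centred at (cx, cy)."""
    anchors = []
    for theta, scale, (rw, rh) in product(ORIENTATIONS, SCALES, ASPECT_RATIOS):
        # Keep the anchor area close to scale^2 while respecting the aspect ratio.
        unit = scale / math.sqrt(rw * rh)
        anchors.append((cx, cy, unit * rw, unit * rh, theta))
    return anchors

print(len(anchors_at(0, 0)))  # 63 anchors per feature-map position
```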
The anchor generation strategy is illustrated in Figure 2.\nFor training the O-RPN, seven different orientations (-π/4, -π/6, -π/12, 0, π/12, π/6, π/4), three aspect ratios (1:1, 1:2, 2:1), and three scales (128, 256, and 512) are used to generate anchors. Overall, the selection of seven orientations, three aspect ratios, and three scales involves a trade-off between maintaining computational efficiency during training and inference and ensuring the model's capacity to capture the diversity of objects in images [8]. Then, all the anchors are divided into three categories:\n• positive anchors: the intersection-over-union (IoU) overlap between these anchors and the ground-truth boxes exceeds 0.7, indicating a strong alignment with the target objects;\n• negative anchors: these anchors have an IoU overlap smaller than 0.3 with the ground-truth boxes, indicating a significant mismatch with the target objects;\n• neutral anchors: these anchors have an IoU overlap between 0.3 and 0.7 with the ground-truth boxes. They are removed from the anchor set and not used during subsequent processing.\nIn contrast to Faster R-CNN, where horizontal anchor boxes are used, our approach utilizes oriented anchor boxes and oriented ground-truth boxes. The loss function employed to train the O-RPN is defined by the following equation:\n$L_{o\text{-}rpn} = L_{cls}(p, u) + \lambda u L_{reg}(t, t^{*})$ (1)\nHere, $L_{cls}$ is the classification loss, p is the predicted probability across the foreground and background classes given by the softmax function, and u represents the class label of an anchor, where u = 1 for foreground (containing a fingerprint) and u = 0 for background; $t = (t_x, t_y, t_h, t_w, t_\theta)$ denotes the predicted regression offsets of an anchor calculated by the network, and $t^{*} = (t^{*}_x, t^{*}_y, t^{*}_h, t^{*}_w, t^{*}_\theta)$ represents the ground truth. λ is a balancing parameter that controls the trade-off between the classification loss and the regression loss. The regression loss is enabled only for foreground anchors (u = 1); no regression is performed for the background. The classification loss is defined as the cross-entropy loss between the ground-truth label u and the predicted probability p:\n$L_{cls}(p, u) = -u \cdot \log(p) - (1 - u) \cdot \log(1 - p)$ (2)\nThe tuples t and $t^{*}$ are calculated as follows:\n$t_x = (x - x_a)/w_a$, $t_y = (y - y_a)/h_a$, $t_w = \log(w/w_a)$, $t_h = \log(h/h_a)$, $t_\theta = \theta - \theta_a$ (3)\n$t^{*}_x = (x^{*} - x_a)/w_a$, $t^{*}_y = (y^{*} - y_a)/h_a$, $t^{*}_w = \log(w^{*}/w_a)$, $t^{*}_h = \log(h^{*}/h_a)$, $t^{*}_\theta = \theta^{*} - \theta_a$ (4)\nwhere x, $x_a$, and $x^{*}$ denote the predicted box, the anchor box, and the ground-truth box, respectively; similarly for y, h, w, and θ. The smooth-L1 loss is adopted for bounding box regression as follows:\n$L_{reg}(t, t^{*}) = \sum_{i \in \{x, y, w, h, \theta\}} u \cdot \mathrm{smooth}_{L1}(t^{*}_{i} - t_{i})$ (5)\n$\mathrm{smooth}_{L1}(x) = \begin{cases} 0.5x^{2} & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}$ (6)
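To make the offset encoding and regression loss concrete, here is a small NumPy sketch of Eqs. (3)-(6). The (x, y, w, h, θ) tuple layout, the helper names, and the example values are illustrative assumptions, not the exact training code.

```python
# Sketch of the rotated-box offset encoding of Eqs. (3)-(4) and the
# smooth-L1 regression loss of Eqs. (5)-(6). Boxes are (x, y, w, h, theta);
# this layout is an assumption for illustration only.
import numpy as np

def encode_offsets(box, anchor):
    x, y, w, h, t = box
    xa, ya, wa, ha, ta = anchor
    return np.array([(x - xa) / wa,        # t_x
                     (y - ya) / ha,        # t_y
                     np.log(w / wa),       # t_w
                     np.log(h / ha),       # t_h
                     t - ta])              # t_theta

def smooth_l1(d):
    d = np.abs(d)
    return np.where(d < 1.0, 0.5 * d**2, d - 0.5)

def l_reg(t_pred, t_gt, u):
    # u = 1 for foreground anchors, 0 for background (no regression loss).
    return u * np.sum(smooth_l1(t_gt - t_pred))

# Example: regression loss for a single foreground anchor.
anchor = (50.0, 50.0, 128.0, 128.0, 0.0)
t_pred = encode_offsets((54.0, 47.0, 120.0, 140.0, 0.10), anchor)
t_gt   = encode_offsets((55.0, 46.0, 118.0, 142.0, 0.12), anchor)
print(l_reg(t_pred, t_gt, u=1))
```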
This process enables the model to accurately identify and classify objects within the input image, and also to refine the location of the proposed bounding boxes." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "We used Mean Absolute Error, Error in Angle Prediction, fingerprint labeling accuracy, and matching score to assess the effectiveness of the CRFSEG-v2 model on our contactless fingerphoto dataset." }, { "figure_ref": [ "fig_1" ], "heading": "Mean Absolute Error", "publication_ref": [], "table_ref": [], "text": "The Mean Absolute Error is a metric applied to evaluate the performance of fingerprint segmentation models in accurately segmenting fingerprints within a specified geometric tolerance compared to human-annotated ground-truth data. Fingerprint segmentation models need to strike a balance between two failure modes: over-segmentation and under-segmentation. Over-segmentation occurs when the predicted bounding box is smaller than the ground-truth fingerprint area, resulting in the loss of valuable fingerprint details and degrading matching performance. Under-segmentation, on the other hand, involves extending the predicted bounding box beyond the actual fingerprint area, potentially capturing the ridge-valley structure of adjacent fingerprints and introducing noise that may impact matching performance. The MAE metric helps quantify the extent of over-segmentation or under-segmentation produced by the model. To calculate the Mean Absolute Error, we measure the distance in pixels between each side of the predicted bounding box and the corresponding side of the annotated ground-truth bounding box. A successful segmentation refers to finding a bounding box around a fingerprint within a certain geometric tolerance of the human-annotated ground-truth bounding box.\nFigure 3 illustrates in more detail how the MAE is calculated for a fingerprint. A detected bounding box is considered to have a positive error if any of its sides encompasses more data than the corresponding side of the ground-truth bounding box. An example of a positive error is shown on the right side of the rightmost fingerprint. A detected bounding box is considered to have a negative error if any of its sides encloses less data than the corresponding side of the ground-truth bounding box. An example of a negative error is shown on the top side of the leftmost fingerprint.\nFinally, Equation 7 is applied independently to calculate the MAE for each side:\n$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N} \lvert X^{error}_{i} \rvert$ (7)\nwhere N represents the total number of fingerprints within the dataset under evaluation and X signifies the Euclidean distance error on any given side of the bounding box (top, bottom, left, or right)." }, { "figure_ref": [], "heading": "Error in Angle Prediction", "publication_ref": [], "table_ref": [], "text": "Error in Angle Prediction (EAP) is employed to assess the ability of various fingerprint segmentation systems to accurately predict the orientation of fingerprints. It measures the deviation between the predicted angle of a fingerphoto and the ground-truth angle of the same fingerphoto, calculated using Equation 8:\n$\mathrm{EAP} = \frac{1}{N}\sum_{i=1}^{N} \lvert \theta_{i} - \theta^{*}_{i} \rvert$ (8)\nwhere N is the total number of fingerprints in the test dataset, $\theta_{i}$ is a ground-truth angle, and $\theta^{*}_{i}$ is the corresponding angle predicted by a fingerprint segmentation model.
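The following is a minimal Python sketch of how the per-side MAE of Eq. (7) and the EAP of Eq. (8) could be computed from predicted and ground-truth annotations. The (left, top, right, bottom) box layout, degree units, and example numbers are illustrative assumptions rather than the evaluation scripts used here.

```python
# Sketch of the per-side MAE (Eq. 7) and EAP (Eq. 8) computations from predicted
# and ground-truth annotations. Each record is assumed to carry the four box
# sides in pixels plus an orientation angle in degrees; names are illustrative.
import numpy as np

def side_mae(pred_boxes, gt_boxes):
    """pred_boxes, gt_boxes: arrays of shape (N, 4) ordered (left, top, right, bottom)."""
    abs_err = np.abs(np.asarray(pred_boxes, float) - np.asarray(gt_boxes, float))
    left, top, right, bottom = abs_err.mean(axis=0)
    return {"left": left, "top": top, "right": right, "bottom": bottom}

def eap(pred_angles, gt_angles):
    """Mean absolute difference between predicted and ground-truth angles (degrees)."""
    return float(np.mean(np.abs(np.asarray(pred_angles) - np.asarray(gt_angles))))

# Example with two fingerprints.
pred = [[100, 50, 300, 420], [340, 60, 540, 430]]
gt   = [[ 95, 55, 310, 400], [345, 58, 535, 445]]
print(side_mae(pred, gt))              # per-side MAE in pixels
print(eap([3.0, -7.5], [1.0, -5.0]))   # EAP in degrees
```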
}, { "figure_ref": [], "heading": "Fingerprint classification accuracy", "publication_ref": [ "b18", "b18" ], "table_ref": [], "text": "Hamming loss is employed to assess the performance of multi-class classifiers [19]. A pair of sequences, one comprising the ground-truth label and the other containing the predicted label, is evaluated using hamming loss, which quantifies the number of positions where the corresponding symbols differ. Equation 9 is used to calculate the hamming loss [19].\nHamming loss = 1 N N i=1 |Y i ∆Z i | |L| (9\n)\nwhere N is total number of samples in dataset, and L is number of labels. Z i is the predicted value for the i-th label of a given sample, and Y i is the corresponding ground true value. ∆ stands for the symmetric difference between two sets of predicted and ground truth values. The accuracy of a multi-class classifier is related to Hamming loss [6, 11], which can be computed using Equation 10. Accuracy = 1 -Hamming loss (10)" }, { "figure_ref": [], "heading": "Fingerprint Matching", "publication_ref": [], "table_ref": [], "text": "Fingerprint matching is a crucial aspect of slap segmentation systems, where the performance of the algorithms in correctly segmenting fingerprints within a specific tolerance is evaluated.\nTo assess fingerprint matching, we rely on the true accept rate (TAR) and false accept rate (FAR). The TAR represents the percentage of instances in which a biometric recognition system accurately verifies an authorized individual, calculated using Equation 11:\nT AR = Correct accepted fingerprints Total number of mated matching attempts × 100%(11)\nThe false accept rate (FAR) measures the percentage of instances in which a biometric recognition system mistakenly verifies an unauthorized user. It is computed using Equation 12." }, { "figure_ref": [], "heading": "F AR", "publication_ref": [], "table_ref": [], "text": "= Wrongly accepted fingerprints Total number of non-mated matching attempts × 100% ( 12)" }, { "figure_ref": [], "heading": "Training", "publication_ref": [], "table_ref": [], "text": "The CRFSEG-v2 model was developed using Detectron2, a framework created by Facebook AI Research (FAIR) [22], which supports advanced deep learning-based object detection algorithms. We utilized the Faster R-CNN algorithm and customized the Detectron2 code to implement the oriented regional proposal network (ORPN), added new layers for handling rotated bounding boxes, to meet our specific requirements for accurate slap fingerprint segmentation.\nIn our experiments, we started with a pre-trained Faster R-CNN model trained on the MS-COCO dataset, which has 81 output classes. However, since our task involved classifying ten fingerprints from two hands, we adjusted the output layers to reduce the number of classes from 81 to 10. The model was then fine-tuned using our unique slap image dataset. Training followed an end-to-end strategy, where we calculated loss values by comparing the predicted results against the ground truth. The training was conducted on a Linux-operated machine equipped with a 20-core Intel(R) Xeon(R) E5-2690 v2 @ 3.00GHz CPU, 64 GB RAM, and a NVIDIA GeForce 1080 Ti 12-GB GPU.\nWe performed a total of 40,000 training iterations for the fingerprint segmentation model. The learning rates started at 10 -4 and were decreased by a ratio of 0.1 at specific intervals (4000, 8000, 12000, 18000, and 25000, 32000 iterations). The weight decay was set to 0.0005, and the momentum was set to 0.7. 
Throughout the experiments, we employed multi-scale training, eliminating the need for scaling the input before feeding it into the neural network. The contactless fingerprint dataset, consisting of 23,650 finger photos, is divided using an 80:10:10 train/validate/test split ratio. A 10-fold cross-validation technique is employed to construct and assess the model, and the outcomes are presented in the results section." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "This section presents a comprehensive analysis of our findings. We employed four distinct metrics, namely Mean Absolute Error, Error in Angle Prediction, fingerprint labeling accuracy, and fingerprint matching accuracy, to assess the performance of our novel CRFSEG-v2 segmentation model that handles contactless fingerprint images." }, { "figure_ref": [ "fig_4" ], "heading": "Mean Absolute Error of CRFSEG-v2 on our Contactless dataset", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "MAE measures the preciseness of the bounding boxes around fingerprints generated by slap segmentation algorithms. We used Equation 7 to calculate the MAE of our contactless fingerphoto segmentation model on our novel dataset.\nTable 2 presents the Mean Absolute Error and its corresponding standard deviation for the segmentation model. The MAEs obtained for the different sides of the predicted bounding boxes are as follows: 26.09 for the left side, 27.33 for the right side, 20.23 for the top side, and 52.92 for the bottom side. It is worth noting that all these values are below the NIST-defined tolerance threshold of 64 pixels. Furthermore, the achieved MAEs are comparable to those obtained by the contact-based fingerprint segmentation system, indicating the effectiveness of the proposed approach in achieving accurate segmentation results. Figure 4 showcases the histograms of the Mean Absolute Error for all sides of the bounding boxes generated by our contactless segmentation model. These histograms are used to analyze and evaluate the MAE results. They were generated by subtracting the coordinate positions of the corresponding sides of the ground-truth bounding boxes from the coordinate positions of the corresponding sides of the detected bounding boxes obtained from the segmentation model. The histograms provide evidence of improved performance in accurately segmenting contactless fingerphotos." }, { "figure_ref": [], "heading": "Error in Angle Prediction", "publication_ref": [], "table_ref": [], "text": "The Error in Angle Prediction (EAP) refers to the discrepancy between the angles predicted by the segmentation algorithms and the corresponding ground-truth angles of fingerphotos. It directly reflects the precision of the bounding boxes detected by the segmentation model. A smaller deviation between the predicted angles and the ground-truth angles signifies more accurate segmentation outcomes. The calculation of EAP is performed using Equation 8.\nFor our contactless segmentation system, the resulting EAP is 5.92 degrees." }, { "figure_ref": [], "heading": "Label Prediction Accuracy of the contactless segmentation model", "publication_ref": [], "table_ref": [], "text": "The fingerprint label prediction accuracy of the contactless segmentation model was calculated using the Hamming loss metric, which is widely used for evaluating multi-label classifiers [9].
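As a small illustration, the snippet below computes the Hamming-loss-based label accuracy of Eqs. (9)-(10) for predicted versus ground-truth finger labels; for single-label predictions, scikit-learn's hamming_loss reduces to the fraction of mislabeled fingertips. The label strings are examples only, not output from the actual dataset.

```python
# Short sketch of the Hamming-loss-based label accuracy of Eqs. (9)-(10),
# applied to predicted vs. ground-truth finger labels. For single-label
# predictions, hamming_loss equals the fraction of mislabeled fingertips.
# The label strings below are examples, not real dataset output.
from sklearn.metrics import hamming_loss

y_true = ["Left-Index", "Left-Middle", "Left-Ring", "Left-Little"]
y_pred = ["Left-Index", "Left-Middle", "Left-Little", "Left-Little"]

loss = hamming_loss(y_true, y_pred)   # fraction of mislabeled fingertips
accuracy = 1.0 - loss                 # Eq. (10)
print(f"Hamming loss: {loss:.3f}, label accuracy: {accuracy:.3f}")
```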
Our segmentation model demonstrated a remarkable label accuracy of 97.46% on our contactless dataset, underscoring its superior accuracy in predicting fingerprint labels. The model's ability to accurately predict labels solely based on the characteristics of the fingerprint images adds to its versatility and potential for broader application." }, { "figure_ref": [], "heading": "Fingerprint Matching", "publication_ref": [], "table_ref": [], "text": "The main objective of a fingerprint segmentation algorithm is to improve the matching performance. In our research, we specifically focus on accurately segmenting fingerphotos To calculate the matching accuracy, we generated two sets of segmented fingerprint images from the contactless fingerphoto dataset. One set of segmented fingerprints is generated using the information of human-annotated (ground truth) bounding boxes and another set is generated using the information of CRFSEG-v2 model (segmentation model) generated bounding box information. We evaluated all possible genuine comparisons for each fingerprint in the segmented fingerprint set, while randomly selecting 100 non-mated fingerprints to create an imposter distribution.\nIn Table 3, we present the matching performance for segmented fingerprints using both the ground-truth bounding box information and the segmentation model. Our segmentation model achieved a fingerprint-matching accuracy of 88.88%, whereas the ground-truth accuracy was 90.36%. However, It is crucial to remember that the fingerprint matching accuracy for both the ground-truth data and the model's output data falls short compared to the contact-based fingerprint model, which achieves an accuracy of about 99%. Several factors contribute to the lower matching results, which we thoroughly discuss in the discussion section. Additionally, we provide suggestions for future research to overcome these challenges and achieve better matching performance. In total, we conducted 6,208,760 comparisons across all fingers in our experiment to estimate the matching accuracy. " }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "This study focuses on the development of a highly accurate slap segmentation system specifically designed for contactless fingerphoto images. The system utilizes deep learning techniques, particularly convolutional networks trained using an end-to-end approach to address challenges such as rotation and noise commonly encountered in fingerphotos. One advantage of our system is its ability to be fine-tuned using additional datasets, which enhances its performance and generalization across a wide, diverse range of fingerphoto images. To the best of our knowledge, commercial slap segmentation systems lack the flexibility to be easily fine-tuned on diverse fingerphoto datasets.\nThe mean absolute errors (MAEs), error in angle prediction (EAP), and labeling accuracy achieved by our segmentation model are high and comparable to the accuracy levels achieved by high-precision segmentation systems developed and tested using contact-based slap images. However, we observed lower accuracy in the matching performance of our system. It is crucial to remember that the matching performance heavily relies on the precise segmentation of fingerphotos, accurate labeling of fingertips/fingerprints, and the quality of the fingerphotos. 
Our segmentation model demonstrates the ability to effectively segment fingerphotos even in cases where the image quality is poor, thereby achieving label accuracy that is nearly on par with human-level performance. However, when these segmented images were subjected to different commercial matching software, we observed a decrease in fingerprint-matching accuracy. Through manual examination, we determined that our segmentation system accurately segments fingertips and assigns accurate labels to them. We believe the reduced fingerprint matching accuracy can be attributed to the limitations of the fingerprint matching software." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we have examined the potential of contactless fingerprint authentication systems as a promising alternative to traditional contact-based methods. By leveraging deep learning techniques, we have developed and evaluated a novel segmentation model for precise localization and extraction of contactless fingerprints. The results obtained from our real-world dataset demonstrate the effectiveness and reliability of our novel segmentation and extraction method. This work centers around the development of a segmentation model that leverages deep learning techniques, specifically tailored for the segmentation of contactless fingerprints.\nOur novel CRFSEG-v2 model has achieved notable results, including an average Mean Absolute Error (MAE) of 30 pixels, an Error in Angle Prediction (EAP) of 5.92 degrees, a Labeling Accuracy of 97.46%, and a VeriFinger matching accuracy of 88.87%. Additionally, we have curated an extensive in-house contactless dataset, comprising 23,650 finger photos. While our research showcases promising results, there are still challenges to address in the development of contactless fingerprint authentication systems. Future work should focus on addressing issues like variability in image quality, occlusion, and lighting conditions to further improve the robustness and generalization of the proposed system. With continued research and advancements in deep learning techniques, we can expect even more sophisticated and reliable contactless fingerprint authentication systems to emerge. These developments will undoubtedly contribute to enhanced security and an improved user experience across a broad range of applications, making fingerprint authentication an indispensable part of our digital lives.\nHere are some potential future steps that researchers can consider to address the matching failure and hence improve performance:\n• GAN-based approaches for enhancing low-quality images: Developing a generative adversarial network (GAN) can help learn the mapping between low-quality fingerprint images and their corresponding high-quality representations. By training the GAN on a dataset containing pairs of low-quality and high-quality fingerprint images, it can learn to enhance the low-quality images, thereby improving the matching accuracy.\n• Feature extraction and matching algorithms: Robust feature extraction algorithms can be employed to capture relevant fingerprint information, even in the presence of variations in image quality. These algorithms should be designed to handle distortions caused by factors such as blurriness, missing parts, or poor lighting conditions. 
Similarly, matching algorithms should be capable of accurately comparing and aligning fingerprint features, even when dealing with imperfect images.\n• Multiple capture and fusion: Instead of relying on a single fingerprint image, multiple captures of the fingerprint can be obtained and then fused together. This approach helps mitigate issues like missing parts or blurriness in individual images. Fusion techniques such as averaging, weighted averaging, or feature-level fusion can be employed to combine the information from multiple images and improve the overall matching accuracy." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This material is based upon work supported by the Center for Identification Technology Research and the National Science Foundation under Grant Number 1650503." } ]
Fingerprints are widely recognized as one of the most distinctive and reliable characteristics of human identity. Most modern fingerprint authentication systems rely on contact-based fingerprints, which require fingerprint scanners or fingerprint sensors to capture fingerprints during the authentication process. Various types of fingerprint sensors, such as optical, capacitive, and ultrasonic sensors, employ distinct techniques to gather and analyze fingerprint data. This dependency on specific hardware creates a barrier to the broader adoption of fingerprint-based biometric systems and hinders the widespread use of fingerprint authentication in many applications and scenarios. Border control, healthcare systems, educational institutions, financial transactions, and airport security face challenges when fingerprint sensors are not universally available. To mitigate the dependence on additional hardware, the use of contactless fingerprints has emerged as an alternative. Developing precise fingerprint segmentation methods, accurate fingerprint extraction tools, and reliable fingerprint matchers is crucial for the successful implementation of a robust contactless fingerprint authentication system. This paper focuses on the development of a deep learning-based segmentation tool for contactless fingerprint localization and segmentation. Our system leverages deep learning techniques to achieve high segmentation accuracy and reliable extraction of fingerprints from contactless fingerprint images. In our evaluation, our segmentation method demonstrated an average mean absolute error (MAE) of 30 pixels, an error in angle prediction (EAP) of 5.92 degrees, and a labeling accuracy of 97.46%. These results demonstrate the effectiveness of our novel contactless fingerprint segmentation and extraction tools.
Deep Learning-Based Approaches for Contactless Fingerprints Segmentation and Extraction
[ { "figure_caption": "Figure 2 :2Figure 2: The complete architecture for CRFSEG-v2 includes several processing stages and is intended for precise fingerprint segmentation. For the feature maps, we need to provide our input contactless fingerprint image to a pre-trained CNN model on the ImageNet dataset. To generate oriented anchors, O-RPN needs to run on all levels of feature maps. By selecting spatial features from O-RPN and from the output of the backbone network, ROI pooling layers generate fixed-length feature vectors. These fixed-length feature vectors are then passed through the fully connected layers. There are two parallel branches that receive the output of the fully connected layers, referred to as the Softmax Classifier and Oriented Bounding Box Regressor. The softmax layers contain a softmax layer for multiclass classification, and the regressors contain bounding box regression.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: An example of calculating the positive and negative errors between the predicted and ground truth bounding boxes involves using Euclidean distance. To determine the pixel error for each side of the predicted bounding boxes, the distances between the endpoints of a side and the corresponding endpoints of the ground-truth bounding box are computed in pixels. For instance, when calculating the pixel error for the side AD, perpendicular lines AC and DF are drawn with respect to the line AD. The perpendicular line AC intersects the corresponding ground-truth line at point C, while the perpendicular line DF intersects the extension of the corresponding ground-truth line at point F. Subsequently, the Euclidean distances from point A to point C and from point D to point F are calculated. The average of these two Euclidean distances represents the pixel error for the side AD. This process is repeated for all four sides of the bounding box. Finally, Equation 7 is individually applied to the pixel errors of each side, resulting in the calculation of the Mean Absolute Error for that specific side.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "92 • , accompanied by a standard deviation of 11.98 • . The EAP values obtained from the segmentation model are comparably lower and exhibit a similar trend to those of the contact-based slap fingerprint segmentation, indicating superior performance.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(a) The Mean Absolute Error (MAE) in pixels for the left side of the fingerprints, as predicted by the contactless segmentation system. (b) The Mean Absolute Error (MAE) in pixels for the right side of the fingerprints, as predicted by the contactless segmentation system. (c) The Mean Absolute Error (MAE) in pixels for the top side of the fingerprints, as predicted by the contactless segmentation system. 
(d) The Mean Absolute Error (MAE) in pixels for the bottom side of the fingerprints, as predicted by the contactless segmentation system.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The MAE histograms obtained from contactless fingerphoto segmentation algorithm using Equation 7 are presented in this figure.Figure 4a, Figure 4b, Figure 4c, and Figure 4d illustrate the MAE values for the left, right, top, and bottom sides of the bounding box, respectively, as predicted by the contactless fingerphoto segmentation system.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 4: The MAE histograms obtained from contactless fingerphoto segmentation algorithm using Equation 7 are presented in this figure.Figure 4a, Figure 4b, Figure 4c, and Figure 4d illustrate the MAE values for the left, right, top, and bottom sides of the bounding box, respectively, as predicted by the contactless fingerphoto segmentation system.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 55Figure 5 displays the histogram of the EAP generated by the contactless segmentation model. The narrower spread of the histogram, as indicated by the smaller standard deviation and area, demonstrates the superior performance of the segmentation model in angle prediction. The boxplot graph in Figure 6 displays the EAP values obtained from the contactless segmentation model, illustrating the minimum, lower quartile, median, upper quartile, and maximum values. The line extending across the box corresponds to the median value of the EAP. Upon examining the boxplot graph, it becomes evident that the absolute median value of the EAP generated by the contactless segmentation model is notably lower. This indicates that the model demonstrates enhanced accuracy and precision in predicting the angle of fingerphotos, even when they are excessively rotated. The boxplot graph illustrates the constrained variability of the EAP values, further affirming the model's ability to maintain consistent performance regardless of the degree of rotation.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The histogram of the error in fingerphoto angle prediction by the contactless segmentation model on the fingerphoto dataset. This error is calculated by subtracting the angles predicted by the models from the ground-truth angles of the fingerphoto images. Low standard deviation values indicate better results.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Boxplots of the error in angle prediction (EAP) of the contactless segmentation model on the contactless fingerphoto dataset. The boxplot statistical analysis of the mean with ±10 • (standard errors) for the EAP values of the model indicates that the algorithm used in this model is invariant to the rotation of fingerphotos.", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Table 3 :3Figure 7 is the Receiver Operating Characteristics (ROC) curve that shows the matching scores of our segmentation model CRFSEG-v2 along with ground truth on the contactless dataset. 
The ROC curve is a graphical plot that represents the tradeoff between the true positive rate (TPR) and the false positive rate (FPR) at various threshold values.", "figure_data": "", "figure_id": "fig_9", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The Receiver Operating Characteristics (ROC) for the fingerprint matching performance of Ground Truth, a newly developed fingerphoto segmentation model in the contactless dataset.", "figure_data": "", "figure_id": "fig_10", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "to 90 degrees). This augmentation is intended to help the model become invariant to different types of rotations of finger photos.", "figure_data": "DatasetTotal fingerphotos Lefthand fingerphotos Righthand fingerphotosBonafide215011181032Augmented215001118010320Total236501229811352", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The Mean Absolute Error (MAE) and its standard deviation were computed to assess the performance of the contactless segmentation system on our contactless fingerphoto dataset. The MAE was determined by averaging the absolute differences between each side of the detected bounding box and the corresponding side of the ground-truth bounding box, measured in pixels. A lower MAE value indicates better performance in accurately segmenting the fingerphotos.", "figure_data": "DatasetSideMAE(Std. dev.)Left26.09 (65.36)ContactlessRight Top27.33 (64.29) 20.23 (52.97)Bottom52.92 (90.93)", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
M G Sarwar Murshed; Syed Konain; Sandip Purnapatra; Daqing Hou; Faraz Hussain
[ { "authors": "Dragana Bartolić; Dragosav Mutavdžić; Jens Michael Carstensen; Slavica Stanković; Milica Nikolić; Saša Krstović; Ksenija Radotić", "journal": "Scientific Reports", "ref_id": "b0", "title": "Fluorescence spectroscopy and multispectral imaging for fingerprinting of aflatoxin-b1 contaminated (zea mays l.) seeds: A preliminary study", "year": "2022" }, { "authors": "Samuel Cadd; Meez Islam; Peter Manson; Stephen Bleay", "journal": "Science & Justice", "ref_id": "b1", "title": "Fingerprint composition and aging: A literature review", "year": "2015" }, { "authors": "Sugandha Chakraverti; Pankaj Agarwal; Himansu Sekhar Pattanayak; Sanjay Pratap Singh Chauhan; Ashish Kumar Chakraverti; Manoj Kumar", "journal": "", "ref_id": "b2", "title": "De-noising the image using dbst-lcm-clahe: A deep learning approach", "year": "2023" }, { "authors": "Joshua J Steven A Grosz; Eryun Engelsma; Anil K Liu; Jain", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b3", "title": "C2cl: Contact to contactless fingerprint matching", "year": "2021" }, { "authors": "A Steven; Anil K Grosz; Jain", "journal": "IEEE Transactions on Biometrics, Behavior, and Identity Science", "ref_id": "b4", "title": "Afr-net: Attention-driven fingerprint recognition network", "year": "2023" }, { "authors": "Sooji Ha; Daniel J Marchetto; Sameer Dharur; Omar I Asensio", "journal": "Patterns", "ref_id": "b5", "title": "Topic classification of electric vehicle consumer experiences with transformer-based deep learning", "year": "2021" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b6", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Jun Jia; Guangtao Zhai; Jiahe Zhang; Zhongpai Gao; Zehao Zhu; Xiongkuo Min; Xiaokang Yang; Guodong Guo", "journal": "IEEE Internet of Things Journal", "ref_id": "b7", "title": "Embdn: An efficient multiclass barcode detection network for complicated environments", "year": "2019" }, { "authors": "Sameh Khamis; Larry S Davis", "journal": "", "ref_id": "b8", "title": "Walking and talking: A bilinear approach to multilabel action recognition", "year": "2015" }, { "authors": "Kenneth Ko", "journal": "", "ref_id": "b9", "title": "Users guide to export controlled distribution of nist biometric image software", "year": "2007" }, { "authors": "Nagarajan Oluwasanmi O Koyejo; Natarajan; Inderjit S Pradeep K Ravikumar; Dhillon", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b10", "title": "Consistent multilabel classification", "year": "2015" }, { "authors": "Aakarsh Malhotra; Anush Sankaran; Mayank Vatsa; Richa Singh", "journal": "IEEE Transactions on Biometrics, Behavior, and Identity Science", "ref_id": "b11", "title": "On matching finger-selfies using deep scattering networks", "year": "2020" }, { "authors": "Davide Maltoni; Dario Maio; Anil K Jain; Jianjiang Feng", "journal": "", "ref_id": "b12", "title": "Fingerprint sensing. 
Handbook of Fingerprint Recognition", "year": "2022" }, { "authors": "Emanuela Marasco; Anudeep Vurity", "journal": "", "ref_id": "b13", "title": "Fingerphoto presentation attack detection: Generalization in smartphones", "year": "2021" }, { "authors": "Keivan Mg Murshed; Stephanie Bahmani; Faraz Schuckers; Hussain", "journal": "", "ref_id": "b14", "title": "Deep ageinvariant fingerprint segmentation system", "year": "2023" }, { "authors": "J Tempestt; Damon L Neal; Woodard", "journal": "Journal of Pattern Recognition Research", "ref_id": "b15", "title": "Surveying biometric authentication for mobile device security", "year": "2016" }, { "authors": "Martijn Oostdijk; Arnout Van Velzen; Joost Van Dijk; Arnout Terpstra", "journal": "Identity", "ref_id": "b16", "title": "State-ofthe-art in biometrics for multi-factor authentication in a federative context", "year": "2016" }, { "authors": "Jannis Priesnitz; Rolf Huesmann; Christian Rathgeb; Nicolas Buchmann; Christoph Busch", "journal": "Sensors", "ref_id": "b17", "title": "Mobile contactless fingerprint recognition: implementation, performance and usability aspects", "year": "2022" }, { "authors": "Grigorios Tsoumakas; Ioannis Manousos; Katakis ", "journal": "Int. J. Data Warehous. Min", "ref_id": "b18", "title": "Multi-label classification: An overview", "year": "2007" }, { "authors": "G Ci Watson; E Fiumara; S Tabassi; P Cheng; Flanagan; Salamon", "journal": "", "ref_id": "b19", "title": "Fingerprint vendor technology evaluation", "year": "2014" }, { "authors": "Peter Wild; Franz Daubner; Harald Penz; Gustavo Fernández Domínguez", "journal": "", "ref_id": "b20", "title": "Comparative test of smartphone finger photo vs. touch-based cross-sensor fingerprint recognition", "year": "2019" }, { "authors": "Yuxin Wu; Alexander Kirillov; Francisco Massa; Wan-Yen Lo; Ross Girshick", "journal": "", "ref_id": "b21", "title": "Detectron2", "year": "2019" } ]
[ { "formula_coordinates": [ 10, 220.95, 582.58, 319.05, 13.72 ], "formula_id": "formula_0", "formula_text": "L o-rpn = L cls (p, u) + λuL reg (t, t * ) (1)" }, { "formula_coordinates": [ 11, 192.99, 102, 347.02, 11.7 ], "formula_id": "formula_1", "formula_text": "L cls (p, u) = -u • log(p) -(1 -u) • log(1 -p) (2)" }, { "formula_coordinates": [ 11, 212.24, 153.79, 327.76, 111.16 ], "formula_id": "formula_2", "formula_text": "t x = (x -x a )/w a , t y = (y -y a )/h a , t w = log(w/w a ), t h = log(h/h a ), t θ = θ -θ a (3) t * x = (x * -x a )/w a , t * y = (y * -y a )/h a , t * w = log(w * /w a ), t * h = log(h * /h a ), t * θ = θ * -θ a (4)" }, { "formula_coordinates": [ 11, 202.02, 310.96, 337.98, 65.43 ], "formula_id": "formula_3", "formula_text": "L reg (t, t * ) = i∈x,y,w,h,θ u.smooth L1 (t * i -t i ) (5) smooth L1 (x) = 0.5x 2 if |x| <1 |x| -0.5 otherwise(6)" }, { "formula_coordinates": [ 13, 244.26, 143.17, 295.74, 31.85 ], "formula_id": "formula_4", "formula_text": "MAE = 1 N N i=0 |X error i | (7)" }, { "formula_coordinates": [ 13, 252.3, 336.64, 287.7, 31.85 ], "formula_id": "formula_5", "formula_text": "EAP = 1 N N i=0 |θ -θ * | (8)" }, { "formula_coordinates": [ 13, 226.26, 516.4, 308.75, 31.85 ], "formula_id": "formula_6", "formula_text": "Hamming loss = 1 N N i=1 |Y i ∆Z i | |L| (9" }, { "formula_coordinates": [ 13, 535.01, 526.44, 4.99, 10.68 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 14, 81.96, 128.37, 458.04, 26.97 ], "formula_id": "formula_8", "formula_text": "T AR = Correct accepted fingerprints Total number of mated matching attempts × 100%(11)" } ]
2023-11-26
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b15", "b21", "b6", "b10", "b34", "b17", "b35", "b18", "b27", "b28", "b29", "b29", "b28", "b0", "b1" ], "table_ref": [], "text": "3D reconstruction and animation of clothed human avatars is a fundamental problem in computer vision research with various applications in robotics, graphics, augmented reality, virtual reality, and human digitization. In all these applications, the human reconstruction models are Figure 1. With geometric cues, our method is able to greatly improve geometric smoothness (Depth Error↓). In addition, the integration of physical priors into the rendering process also greatly improves the image rendering quality (PSNR↑). In particular, when the input image carries noise, our method still achieves good results and does not suffer from significant performance degradation as the baseline method.\nexpected to achieve high-quality human geometry and appearance performance. Thus, they require various hardware, such as 4D scanners, depth sensors, or RGB cameras. Among them, the captured images and videos by RGB cameras are the most readily available and used. However, they provide the insufficient supervision information, making this setup the most challenging for high-fidelity digital human reconstruction.\nTraditional works in human body modeling use explicit mesh to represent the human geometry and store the appearance in 2D texture maps, but they require dense camera arrays [16,22] and controlled lighting conditions [7,11]. These devices are extremely bulky and expensive for daily use. In contrast to these works, PIFu [35], StereoPIFu [18] and PIFuHD [36] propose to regress neural implicit function using pixel-level image features and are able to reconstruct high-resolution clothed human results. ARCH [19] extends a series of PIFu methods to regress animatable clothed human avatars in a canonical pose from monocular images. These methods adopt neural implicit function to represent the human's implicit geometry and achieve impressive results, but fail to reconstruct dynamic clothed humans from a sparse set of multi-view videos. In addition, they also require the corresponding ground-truth geometry of color images to train the model, which is expensive to acquire and limits their generalization.\nRecently, neural radiance fields (NeRF) [28], as an effective scene representation, achieve impressive novel view synthesis results with a differentiable renderer. Many methods [29,30] use NeRF as an implicit representation to remove the need for ground-truth geometry and reconstruct humans from sparse multi-view videos with only image supervision. Animatable NeRF [30] uses a parametric human body model as a strong geometry prior to canonical NeRF model, and achieves impressive visual fidelity on novel view and pose synthesis results. NARF [29] proposes neural deformable fields to model the dynamic human and represent pose-controllable changes of articulated objects. However, these existing methods usually suffer from two problems: (1) the NeRF-based reconstruction results tend to be noisy due to the lack of proper geometric regularization on radiance fields. This is more common in sparseview dynamic human reconstructions and often results in erroneous artifacts in the form of color blocks. (2) As existing approaches train their canonicalization networks on inputs in observation space, they are more prone to overfitting to color. 
The rendering quality depends heavily on the color quality of the observed image, leading to severe performance degradation when the raw image is significantly noisy, as shown in Fig. 2.\nMotivated by recent advances in the field of human-specific monocular geometry prediction, we leverage the estimated geometric cues as a regularization technique in dynamic human reconstruction. The predicted pseudo ground truths provide globally consistent and smooth geometric constraints as a complement to the photometric consistency RGB loss during optimization. We observe that the geometry-based regularization technique leads to significant improvement in geometry reconstruction quality, as shown in Fig. 1. This is due to the fact that depth and normal constraints can mitigate the geometry ambiguity and achieve smoother reconstructions with finer details, as required by image-based view synthesis.\nApart from incorporating these geometric cues, we investigate the strong physics background of neural radiance rendering and carefully select two physical priors. Based on these physical priors, we construct additional optimization objective functions to guide the neural reconstruction process. The key idea is that good rendering results should be highly robust to the view direction, and the estimated density along each ray should reach its maximum value at the ray-surface points, i.e., the intersection points of the camera ray and the human surface. We leverage this idea as proxy guidance to help our model learn a view-invariant representation and reduce the inherent ambiguities of the density estimated along rays. By integrating geometric cues and physical priors into the human reconstruction process, we can successfully constrain both geometry and appearance to boost the fidelity of neural radiance fields even in a highly sparse multi-view setting.\nIn summary, we make the following contributions:\n• We propose the HumanRecon approach, which leverages geometric cues and physical priors to improve geometry reconstruction and photorealistic rendering.\n• To enforce surface constraints on the learned geometry, we use depth and surface normals, predicted by a human-specific geometry estimator, as additional supervision signals during volume rendering together with the RGB image reconstruction loss.\n• We investigate the strong physics background of neural radiance rendering and carefully select several physical priors as proxy guidance, which help our model alleviate overfitting to image color and reduce the inherent ambiguities of the density estimated along rays.\n• We conduct extensive experiments on several challenging datasets, and show the effectiveness and interpretability of our method with quantitative and qualitative results." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b0", "b25", "b23", "b40", "b34", "b12", "b13", "b3", "b4", "b14", "b18", "b41", "b18", "b14", "b27", "b2", "b8", "b16", "b38", "b24", "b28", "b29", "b30", "b31", "b29", "b30", "b31", "b28", "b1", "b33", "b9", "b7", "b9" ], "table_ref": [], "text": "3D human reconstruction. The 3D shape of a human body is typically represented by parametric mesh models such as SCAPE [1] or SMPL [26]. SMPL recovers the human mesh by deforming a template mesh with the linear blend skinning (LBS) algorithm. Template-based 3D human models [24,41] make it easy to control human pose, but they have difficulty modeling large garment deformations.
Unlike mesh-based methods, PIFu [35] learns an implicit surface function using pixel-level image features and generalizes well to clothing-type variation. Geo-PIFu [13] adds a coarse human shape proxy, and Dong et al. [14] use a depth denoising process to regularize the global shape. Recent methods [4,5,15,19,42] use both mesh-based statistical models and deep implicit functions and achieve impressive performance. For example, ARCH [19] and ARCH++ [15] combine implicit function learning with a deformation field to achieve animatable reconstruction of clothed humans. Such methods often require high-quality 3D geometry or dense multi-view images with depth for training, while our goal is to train the model on sparse-view, image-level RGB supervision alone.\nNeural radiance fields for a dynamic human body. Our work builds on a recent 3D scene representation, neural radiance fields (NeRF) [28]. NeRF and its extensions [3,9,17,39] are capable of rendering novel views of static scenes with high quality due to their powerful expressiveness. Recent works [25, 29-32] add a deformation field to the original NeRF to enable it to model a dynamic human body. Some works [30,31] use the human pose and skinning weights from SMPL as prior knowledge to facilitate the learning of the deformation field. NeuralBody [32] uses the same set of latent codes anchored at a deformable mesh to encode the pose-deformation observations. Further, NARF [29] resorts to the human skeleton to learn pose-dependent changes from images with pose annotations. These approaches mainly focus on how to better learn a deformation field and animate human pose, while they do not fully utilize geometric cues or physical priors to enhance the rendering capabilities of neural human reconstruction models.\nIncorporating priors into neural scene representations. Recent studies employ depth priors [2], sparse point clouds [34], semantic labels [10], and the Manhattan-world assumption [8,10] for novel view synthesis with neural scene representations. These methods are beneficial for static indoor 3D scenes such as walls and floors, but they struggle to handle dynamic human bodies. Motivated by recent advances in the field of human-specific geometry prediction, we use the estimated human depth and normals as geometric cues for dynamic human reconstruction. Based on our observations of the neural rendering process, we also integrate physical priors into the human reconstruction model and further demonstrate their effectiveness." }, { "figure_ref": [ "fig_0" ], "heading": "Method", "publication_ref": [ "b29" ], "table_ref": [], "text": "Our goal is to reconstruct a clothed human model that can be used to render free-view human videos while utilizing geometric cues and physical priors to guide the rendering process. To this end, we start by briefly reviewing Animatable Neural Radiance Fields (Ani-NeRF) [30] as our baseline model in Section 3.1. Next, we introduce the predicted geometric cues we investigate, and formulate their loss functions to integrate pseudo ground-truth labels from a pretrained network into our baseline model in Section 3.2. Finally, to alleviate the overfitting to image colors and reduce density ambiguity along rays, we leverage several beneficial physical priors into a prior loss committee as the proxy guidance for neural human reconstruction in Section 3.3. An overview of our proposed method is presented in Fig. 2." }, { "figure_ref": [], "heading": "Revisiting Animatable Neural Radiance Fields", "publication_ref": [ "b25", "b29", "b27", "b27", "b26", "b29" ], "table_ref": [], "text": "Ani-NeRF first represents the dynamic human body in the video with a posed space and a canonical space. The posed space is defined for each video frame, and the deformation field T maps each posed-space point x to the canonical space. Then, Ani-NeRF takes the spatial location x and viewing direction d as input to a shared canonical human network and outputs a view-invariant volume density σ and a view-dependent emitted radiance c.\nDeformation field. To be more specific about the deformation field T, we first define the posed space for each video frame and the canonical space shared by all frames. For a posed-space point $x_p$ at frame i, we have rigid transformations of the body joints, $G_i^k \in SE(3)$, which are obtained from the fitted SMPL model [26]. Based on these transformations, the spatial point $x_p$ can be mapped to the canonical space by $x_c = \left(\sum_{k=1}^{K} w_i^k(x_p)\, G_i^k\right)^{-1} x_p$, where $w_i(x) = \{w_i^k(x)\}_{k=1}^{K}$ collects the blend weights of the K parts. Note that a neural blend weight field $w_{can}$ in the canonical space must also be learned to animate the human NeRF. More details on this part can be found in [30].\nCanonical human network. The canonical human network [28] represents a static scene as a continuous volumetric representation learned by a neural network. Specifically, given the density network $F_\sigma$, the density and color of the human body at frame i can be defined as\n$(\sigma_i, c_i) = F_\sigma\big(\gamma_x(T_i(x)),\, \gamma_d(d)\big),$ (1)\nwhere the positional encoding γ [28] transforms the sampled coordinate and view direction into their Fourier features, which facilitate the learning of high-frequency details. Given a pixel, a ray $r(\tau) = o + \tau d$ is cast from the camera origin o through the pixel along direction d. Along this ray, we sample J discrete points $\{x_j = r(\tau_j)\}_{j=1}^{J}$ and estimate their densities and colors $\{\sigma_j, c_j\}_{j=1}^{J}$. The predicted color of this pixel can be rendered by a numerical quadrature approximation [27]:\n$\hat{C}(r) = \sum_{j=1}^{J} \Gamma_j \left(1 - \exp(-\sigma_j(\tau_{j+1} - \tau_j))\right) c_j,$ (2)\nwith $\Gamma_j = \exp\left(-\sum_{j' < j} \sigma_{j'} (\tau_{j'+1} - \tau_{j'})\right)$,\nwhere $\{\tau_j\}_{j=1}^{J}$ are the discrete sampling points on the ray r and $\Gamma_j$ can be interpreted as the accumulated transmittance along the ray. Finally, Ani-NeRF is trained to minimize the following photometric loss and blend weight consistency loss:\n$L_{photo} = \sum_{r \in R} \|\hat{C}(r) - C(r)\|_2,$ (3)\nand\n$L_{weight} = \sum_{x \in X_i} \|w_i(x) - w_{can}(T_i(x))\|_1,$ (4)\nwhere R represents a batch of training rays passing through image pixels and $X_i$ is the set of 3D points sampled within the 3D human bounding box at frame i. $L_{photo}$ enforces the rendered pixel color $\hat{C}(r)$ to be consistent with the observed pixel color C(r), while $L_{weight}$ minimizes the difference between the blend weight fields. $L_{base} = L_{photo} + L_{weight}$ is the overall optimization objective of Ani-NeRF. For more details, we refer readers to [30]." }, { "figure_ref": [], "heading": "Pseudo-supervision from Geometric Cues", "publication_ref": [ "b20", "b20", "b5" ], "table_ref": [], "text": "As a powerful representation, Ani-NeRF produces impressive photorealistic renderings of unseen viewpoints. In this work we use the geometric cues of estimated depth and normals, together with physical priors, to guide the optimization of neural implicit human models.
More specifically, for a batch of rays, we render their predicted color and expected termination depth, and calculate the gradient of the volume density as the surface normal. Given the geometric properties predicted by a pretrained network [21], we supervise the rendered geometry by comparing it with the corresponding estimates. In addition, we apply carefully designed training strategies (i.e., view noise and density maximization) derived from physical priors to ensure that the rendered color is robust to the view direction and to reduce the inherent ambiguities of the density estimated along rays.\nYet, the baseline approach relies only on RGB color supervision without considering explicit geometric constraints, leading to inherent geometric ambiguities in sparse-view videos. To overcome this limitation, we unify Ani-NeRF's powerful implicit rendering capabilities with explicit 3D reconstruction methods by using explicit geometry supervision.\nPredicted depth cues. One common geometric cue is the depth map, which captures the geometric structure of shapes in 3D space and can be easily obtained with an off-the-shelf depth estimator. Specifically, we use a pretrained human depth estimator [21] to predict a depth map D, which is faithful to the input real image. Our goal is to use the predicted depth cues as a regularization technique, thereby improving existing neural implicit methods. A key observation is that the volume rendering scheme can generate a depth map for each image in a similar manner to rendering RGB pixels. Formally, the expected termination depth of each ray r can be produced by modifying the volumetric rendering equation (i.e., Eq. 2) as\n$\hat{D}(r) = \sum_{j=1}^{J} \Gamma_j \left(1 - \exp(-\sigma_j(\tau_{j+1} - \tau_j))\right) \tau_j,$ (5)\nwhere $\{\tau_j\}_{j=1}^{J}$ are the discrete sampling points on the ray r. A straightforward strategy is to use the depth-based pseudo-label D to directly supervise the depth $\hat{D}$ estimated by Ani-NeRF via the squared distance. However, $\hat{D}$ and D are absolute depth estimates produced by different networks, which hinders direct depth supervision in general scenes. To alleviate this, we treat depth information as a relative cue and enforce consistency between our rendered expected depth and the predicted depth by\n$L_{depth} = \sum_{r \in R} \left\| \frac{\hat{D}(r)}{\mathrm{Mean}_{r \in R}(\hat{D}(r))} - \frac{D(r)}{\mathrm{Mean}_{r \in R}(D(r))} \right\|_2,$ (6)\nwhere R represents a batch of training rays r passing through image pixels and the two depth distributions are each scaled by their own mean value.\nPredicted normal cues. The surface normal is another geometric cue we use. Unlike depth cues, which provide global structure information and reduce shape ambiguity, normal cues are local and capture smooth geometry with fine details. Similar to the depth cues, we use the same model to predict a normal map N for each RGB image, which provides a constraint on the normals calculated from the gradient of the volume density σ with respect to the 3D position x [6]:\n$\hat{N}(x) = -\frac{\nabla \sigma(x)}{\|\nabla \sigma(x)\|}.$ (7)\nTo regularize the consistency of the computed density-gradient normals $\hat{N}$ with the pseudo ground-truth normals N from a pretrained network, we enforce a cosine embedding loss on them:\n$L_{normal} = 1 - \cos(\hat{N}(x), N(x)).$ (8)\nNormal cues are local and capture smooth geometric features, while depth cues provide relative information with strong global constraints. We hence expect that the two geometric constraints are complementary and enhance each other.
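To make the two geometric losses concrete, the following is a minimal PyTorch sketch of the volume-rendering quadrature (Eqs. 2 and 5) and the depth and normal supervision terms (Eqs. 6 and 8). It is an illustration rather than the authors' implementation: it assumes the per-ray sample densities, colors, sample distances, density gradients, and the monocular depth/normal predictions are already available as tensors, and it averages the losses over the ray batch.

```python
import torch
import torch.nn.functional as F

def render_color_depth(sigma, color, tau):
    """Discrete volume rendering (Eqs. 2 and 5).
    sigma: (R, J) densities, color: (R, J, 3), tau: (R, J+1) sample distances per ray."""
    delta = tau[:, 1:] - tau[:, :-1]                        # (R, J) interval lengths
    alpha = 1.0 - torch.exp(-sigma * delta)                 # per-sample opacity
    trans = torch.exp(-torch.cumsum(
        F.pad(sigma * delta, (1, 0))[:, :-1], dim=-1))      # accumulated transmittance Gamma_j
    weights = trans * alpha                                 # (R, J)
    rgb = (weights.unsqueeze(-1) * color).sum(dim=1)        # Eq. (2)
    depth = (weights * tau[:, :-1]).sum(dim=1)              # Eq. (5)
    return rgb, depth, weights

def depth_loss(depth_pred, depth_prior, eps=1e-6):
    # Eq. (6): compare mean-normalized (relative) depths over the ray batch.
    d_hat = depth_pred / (depth_pred.mean() + eps)
    d_ref = depth_prior / (depth_prior.mean() + eps)
    return ((d_hat - d_ref) ** 2).mean()

def normal_loss(grad_sigma, normal_prior):
    # Eqs. (7)-(8): density-gradient normals vs. predicted normals, cosine embedding loss.
    n_hat = F.normalize(-grad_sigma, dim=-1)
    n_ref = F.normalize(normal_prior, dim=-1)
    return (1.0 - (n_hat * n_ref).sum(dim=-1)).mean()
```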
}, { "figure_ref": [], "heading": "Physical Priors", "publication_ref": [], "table_ref": [], "text": "NeRF-based 3D human reconstruction is the task of using a set of input images and their camera poses to reconstruct a dynamic human representation capable of rendering unseen views. Without sufficient multi-view supervision (i.e. sparse-view video), Ani-NeRF is prone to overfitting on existing data and struggles to accurately learn the density distributions along rays. In this section, we present several beneficial physical priors and leverage them into a prior loss committee to build a view-consistent representation and reduce the inherent ambiguities of density.\nPerturbation on view direction. To alleviate the overfitting on multi-view videos, we augment the raw input data to provide necessary constraints on volume rendering. Specifically, we treat available RGB images with view direction as original data and consider these images with their perturbational view directions as the augmentation data. To generate perturbational view directions (i.e. noisy views), we use Gaussian distribution (e.g. N (0, 1)) as perturbation and add it into the raw view direction. Through this way, we propagate the image information under the available views to their surrounding (noisy) views to better utilize them, thus enabling to build a view-invariant representation. This works well for clean and noise-free images, which standard NeRF takes as input. Further, our view perturbation strategy can be directly applied on noisy raw input images and is highly robust to the zero-mean distribution of raw noise, which is shown in Tab. 2.\nMaximizing density at ray-surface points. The densities predicted by canonical network describe the geometry of scene and are used as weights in the weighted sum of color values of the sampled points along rays. For each ray that passes through an image pixel, we expect those colors on the ray-surface points to have the largest contribution for rendered color. Following this intuition, our model should take the maximum density values at these points. Given depth D(r) found via Eq.( 5), we can further determine the ray-surface points by\ns(r) = o + D(r)d, r ∈ R,(9)\nwhere R represents a batch of training rays r passing through image pixels. Then, we achieve our goal of density maximization on boundary surface by the following loss:\nL surface (r) = (1 -σ(s(r))) 2 , r ∈ R.(10)\nwhere σ(s(r) is density values predicted by the network F σ at ray-surface points s(r). In this way, we can encourage network model to prevent the formation of spurious and invisible surfaces in the scene. The overall loss function is\nL overall = L base +λ depth L depth +λ normal L normal +λ surface L surface ,(11)\nwhere λ depth , λ normal , and λ surface are weight parameters." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets and metrics", "publication_ref": [ "b19", "b31", "b29", "b11" ], "table_ref": [], "text": "Datasets and evaluation metrics.\nWe choose H36M [20] as our primary benchmark dataset. For completeness, we also evaluate our approach on the ZJU-MoCap dataset [32] and present a quantitative comparison to [30] in supplementary. We benchmark our proposed HumanRecon on two tasks: generalization to novel views and poses. For image synthesis, we quantify model performance using three metrics: mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). 
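For completeness, the image-quality metrics just mentioned can be computed as in the sketch below. This is a generic reference implementation (assuming rendered and ground-truth images normalized to [0, 1]) rather than the exact evaluation script used here, with SSIM taken from scikit-image.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse(pred, gt):
    # Mean squared error between rendered and ground-truth images in [0, 1].
    return float(np.mean((pred - gt) ** 2))

def psnr(pred, gt, max_val=1.0):
    # Peak signal-to-noise ratio derived from MSE.
    err = mse(pred, gt)
    return float(10.0 * np.log10((max_val ** 2) / max(err, 1e-12)))

def ssim(pred, gt):
    # Structural similarity on multi-channel (H, W, 3) images.
    return float(structural_similarity(pred, gt, channel_axis=-1, data_range=1.0))
```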
For geometry reconstruction, since there is no available ground truth geometry, we evaluate our approach using depth smoothness (i.e. depth error) [12]. The detailed description of benchmark datasets, implementation details and the more experimental results are presented in the supplementary." }, { "figure_ref": [ "fig_2" ], "heading": "Ablation studies", "publication_ref": [ "b29", "b30", "b29" ], "table_ref": [], "text": "We conduct ablation studies to evaluate the effectiveness of each component of our proposed method on one subject \"S9\" of the H36M dataset. We choose Ani-NeRF [30] as a default baseline model, and Ani-SDF [31] as an additional one.\nThe effectiveness of each component of our proposed method. As shown in Tab. 1 and Fig. 3, we can see that both geometric constraints and physical priors can significantly improve the performance of baseline. In detail, depth cues significantly boosts reconstruction quality in terms of all evaluation metrics, especially in depth error (i.e. Dep. Err. Table 3. Comparison between our HumanRecon and other traditional image-based data augmentation methods (e.g. horizontal flips and center cropping). We also show some perturbations on the input data as naive data augmentation, which includes add noise on raw input image (Raw rgb + N (0, 0.05), camera origin (oray + N (0, 0.5))), ray direction (dray + N (0, 0.5)). N/A means that the model does not learn any geometric shape . in Tab. 1). This shows that our approach not only obtains more faithful visual rendering results in the human reconstruction, but also better represents the underlying geometry as indicated by the lower depth error. Some improvement is also achieved when normal cues are used. With both depth and normal cues, our method achieves a better performance compared to using either one. This is also consistent with our previous analysis that global depth and local normal are complementary to each other. Different from the inherent geometric constraints above, our physical priors do not benefit the geometric results, but rather achieve more improvements in pixel-level image rendering. The main reason is that the additional usage of physical priors can help our method to capture finer details and alleviate overfitting on only image supervision during rendering process.\nThe impact of perturbation on view direction. We investigate the impact of perturbation on view direction and Comparison with traditional data augmentation methods. We compare HumanRecon with traditional data augmentation methods: horizontal flips, center cropping, and adding Gaussian noise on input image. From the experimental results of the Tab. 3, it can be seen that none of the traditional data augmentation methods can be used directly for the neural reconstruction task of dynamic human. On the contrary, our proposed method from geometric constraint and physical prior is suitable for modeling dynamic human bodies and achieves significant improvements on the original baseline.\nOverfitting problem and generalization to noisy raw data. As shown in Tab. 4, as our baseline, Ani-NeRF achieves the best when the number of views is 2, and faces an overfitting problem when the number of views increases to 3. However, our approach solves the overfitting problem. The available datasets are often collected in standard scenarios with high quality. To illustrate that our method can handle more realistic scenes, we artificially generate the noisy data by adding the zero-mean noise distribution to the raw input image. 
As shown in Tab. 4, our method outperforms the baseline with large margins on both noisy and noiseless data. This shows that the proposed HumanRecon is highly robust to zero-mean noise distribution and is more practical in real scenarios compared to the baseline. Compatibility with other baseline methods. Finally, to illustrate the compatibility of our approach, we choose Ani-SDF [30] as our baseline model. Ani-SDF models the human geometry with a signed distance field, which naturally regularizes the learned geometry and achieves the high-quality reconstruction of human bodies. Despite its power, our approach still improves rendering results (e.g. +0.25 for PSNR) on subject \"S6\" and geometry quality (e.g. +0.013 for Dep. Err.) on subject \"S9\", which is shown in Tab. 5." }, { "figure_ref": [], "heading": "Comparison with other existing methods", "publication_ref": [ "b37", "b39", "b28", "b29", "b31" ], "table_ref": [], "text": "Novel view synthesis on H36M dataset. Tab. 6 summaries the quantitative comparison between our method with [38], [40], [29], [30] on novel view synthesis of seen poses. Our proposed geometric clues and physical priors contribute to the performance of the baseline method and achieve the best performance compared to other existing methods. Compared with them, our method is able to accurately control the viewpoint and render the human body, which is shown in Fig. 4. This suggests that the appropriate geometry properties and physical priors are beneficial for the visual fidelity of novel views on seen poses. Due to space limitations, we present experimental results about novel view synthesis of unseen pose and another ZJU-MoCap dataset [32] in the supplementary." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b29", "b31", "b32", "b39", "b37" ], "table_ref": [], "text": "In this paper, we systematically explore how geometric cues and physical priors are incorporated into neural radiance fields for animatable 3D human synthesis. We show Table 6. Quantitative evaluation of our method against other existing methods. Our proposed geometric cues and physical priors contribute to the performance of the baseline method and achieve the best performance compared to other existing methods. Baseline means that we re-train Ani-NeRF as our baseline model and report their results; \"Baseline+Dep.\" and \"Baseline+Nor.\" represent that we integrate depth and normal cues into our baseline model, respectively; \"Baseline+Den.\" and \"Baseline+Vie.\" imply that we integrate density maximization and view perturbation into our baseline model, respectively.\nFigure 4. Novel view synthesis results of our HumanRecon and other methods, including Ani-NeRF [30], NeuralBody [32], D-NeRF [33], NHR [40], and NT [38]. Existing methods produce distorted rendering results and have difficult to control colors. Compared with existing methods, our method produces darker colors closer to ground truth (especially on the first row of the result) and accurately renders the target view with finer details (especially on the second row of the result). Zoom in for details.\nthat such easy-to-obtain geometric cues provide reliable yet explicit supervision for implicit neural rendering process. As a complement, physical priors enable the network to learn view-invariant implicit representation and reduce den-sity ambiguity along rays. 
Experimental results show that the proposed approach can reconstruct accurate and smooth geometry while maintaining the high-quality rendering results on public datasets." }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [ "b19", "b29", "b31" ], "table_ref": [], "text": "In the supplementary material, we provide the dataset description and implementation details. To show the effectiveness and generalization of our approach, we report the more qualitative and quantitative results of geometry reconstruction and image synthesis.\n6. Experiment 6.1. Dataset H36M [20] dataset captures multi-view human performers with four synchronized cameras and collects human poses with a marker-based motion capture system. We select representative actions and conduct extensive experiments on subjects \"s1\", \"s5\", \"s6\", \"s7\", \"s8\", and \"s9\". Among four cameras, three cameras are used for training and the remaining one is for testing. For more details settings, we refer to as [30].\nZJU-MoCap [32] dataset records multi-view human movements with twelve cameras and obtains human poses with the marker-based motion capture system. For the evaluation of our method on ZJU-MoCap, we choose three representative sequences datasets and conduct experiments on subjects \"313\", \"315\", and \"386\". The four cameras are used for training and others are used for testing." }, { "figure_ref": [], "heading": "Implementation details", "publication_ref": [ "b29", "b27", "b29", "b22" ], "table_ref": [], "text": "Following [30], we adopt the single-level NeRF as our radiance field F σ and sample 64 points for each camera ray. Position encoding [28] is used to encode the spatial point and the view direction. We use 6 frequencies for encoding spatial position and 4 frequencies for encoding view direction. The latent code dimension of both appearance code and blend weight field code is 128.\nTraining. We refer to [30] for the training setting. Our method adopts a two-stage training pipeline. First, the parameters of the canonical human network, blend weight field, appearance code, and blend weight field code are learned over the input data. Second, under novel poses, neural blend weight fields are learned, while the remaining parameters are fixed. We use Adam optimizer [23] to train our model with a learning rate of 5e -5 . We run all the experiments on one 2080 Ti GPU. We set the parameter of λ depth to 1 for L depth . For other parameters λ normal and λ normal , smaller values are often more appropriate according to our empirical experiments. Note that the accuracy can be further improved when the optimal setting is used." }, { "figure_ref": [ "fig_4" ], "heading": "Experimental results", "publication_ref": [ "b29", "b31", "b32", "b37", "b39", "b29", "b29", "b29", "b30", "b29", "b30", "b29", "b31", "b32", "b39", "b37" ], "table_ref": [], "text": "Novel view synthesis on H36M dataset. Novel view synthesis results on seen pose have been discussed in the main paper. Next, we investigate our proposed Hu-manRecon for novel view synthesis on unseen poses. Tab. 7 presents a quantitative comparison to the baseline. Compared with the baseline, the proposed HumanRecon achieves higher image fidelity and better consistency across novel views. This indicates that our approach can be generalized to unseen poses and suited for representing animatable human model. As shown in Fig. 
5, our method produces higher visual quality with fewer artifacts than existing methods [30,32,33,38,40], which also indicates a better correspondence across frames.\nNovel view synthesis on the ZJU-MoCap dataset. We present additional quantitative results on ZJU-MoCap dataset. Compared with the baseline model, our approach obtains a better performance on seen and unseen poses in terms of MSE, PSNR, and SSIM, which is shown in Tab. 9. For example, the proposed HumanRecon achieves 0.87% improvement for PSNR on subject \"313\" with seen poses than Ani-NeRF [30]. We argue this performance benefits from the additional prior knowledge of geometry and physics in our method while enhancing the learning for neural implicit representation.\nAblation on subject \"s1\" of the H36M dataset. We further conduct some ablation studies to verify the effectiveness of each component of our method on subject \"s1\" of the H36M dataset. Experimental results are shown in Tab. 8. From the results of Tab. 8, we can observe that geometric cues achieve 1.06% improvement for PSNR and 0.16 improvement for Dep. Err. than the baseline. With this module, our method has explicit geometry supervision and reduces the geometrically ambiguity, therefore enhancing the rendering capability of the human reconstruction model. Similar performance can be found in the physical priors module. With both modules, our method reaches the best performance by successfully constraining both geometry and appearance. 9. Quantitative results of models on ZJU-MoCap seen poses and unseen poses. Compared with Ani-NeRF [30], our approach achieves a better performance of novel view synthesis on seen and unseen poses.\nComparison between models trained with different numbers of video frames on the subject \"S9\". In this subsection, we present the comparison between models trained with different numbers of video frames. For this setup, we do not modify our method and all hyperparameters are the same as before. For comparison, we train the baseline and our model with 1, 100, 200, and 300 video frames and test the models on the same pose sequence. We present the qualitative results in Fig. 6. Due to both geometrical and physical constraints, our method better capture clothing color, arms, and face details which the baseline suffers from. The stable performance is achieved for our model at 100 video frames, while the baseline model achieves stable performance with 200 video frames.\nGeometry reconstruction results on different baseline models. We adopt Ani-NeRF [30] and Ani-SDF [31] as our baseline model and present the reconstruction results of our method in comparison with the two baseline models. As shown in Fig. 7, the overall results of Ani-SDF based methods are better than that of Ani-NeRF based method. Specifically, compared with Ani-NeRF [30] and Ani-SDF [31], our approach not only reconstructs smoother geometry results with high-quality but also captures more geometry details (e.g. arm details). 5. Novel view synthesis results of our HumanRecon and other methods on subject \"s7\" of H36M dataset, including Ani-NeRF [30], NeuralBody [32], D-NeRF [33], NHR [40], and NT [38]. Compared with existing method, our approach achieves higher visual quality with fewer artifacts while preserving the details on the faces and arms of the subjects. Zoom in for details." 
}, { "figure_ref": [], "heading": "Ground Truth", "publication_ref": [], "table_ref": [], "text": "1 frame 100 frames 200 frames 300 frames \nBaseline Ours" }, { "figure_ref": [], "heading": "Ani-NeRF Real", "publication_ref": [ "b29", "b30" ], "table_ref": [], "text": "Ours(Ani-NeRF) Ani-SDF Ours(Ani-SDF)\nFigure 7. Geometry reconstruction results on different baseline models. compared with Ani-NeRF [30] and Ani-SDF [31], our approach not only reconstructs smoother geometry results with high-quality but also captures more geometry details (e.g. arm details). Zoom in for details." } ]
Recent methods for dynamic human reconstruction have attained promising reconstruction results. Most of these methods rely only on RGB color supervision without considering explicit geometric constraints. This leads to existing human reconstruction techniques being more prone to overfitting to color and causes geometrically inherent ambiguities, especially in the sparse multi-view setup. Motivated by recent advances in the field of monocular geometry prediction, we consider the geometric constraints of estimated depth and normals in the learning of neural implicit representation for dynamic human reconstruction. As a geometric regularization, this provides reliable yet explicit supervision information, and improves reconstruction quality. We also exploit several beneficial physical priors, such as adding noise into view direction and maximizing the density on the human surface. These priors ensure the color rendered along rays to be robust to view direction and reduce the inherent ambiguities of density estimated along rays. Experimental results demonstrate that depth and normal cues, predicted by human-specific monocular estimators, can provide effective supervision signals and render more accurate images. Finally, we also show that the proposed physical priors significantly reduce overfitting and improve the overall quality of novel view synthesis.
HumanRecon: Neural Reconstruction of Dynamic Human Using Geometric Cues and Physical Priors
[ { "figure_caption": "Figure 2 .2Figure2. The overview of our method. In this work we use the geometric cues of estimated depth and normals, and physical priors to guide the optimization of neural implicit human models. More specifically, for a batch of rays, we render their predicted color and expected termination depth, and calculate the gradient of volume density as surface normal. Given the geometry properties predicted by pretrained network[21], we supervise the rendered geometry results by comparing them with the corresponding estimation. In addition, we apply carefully-designed training strategies (i.e. view noise and density maximization) from physical priors to ensure the rendered color robust for view direction and reduce the inherent ambiguities of density estimated along rays.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Geometry reconstruction results on subject \"S9\" of H36M dataset. Compared with Ani-NeRF, our approach reconstructs smoother geometry results with high-quality. Zoom in for details.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "FigureFigure5. Novel view synthesis results of our HumanRecon and other methods on subject \"s7\" of H36M dataset, including Ani-NeRF[30], NeuralBody[32], D-NeRF[33], NHR[40], and NT[38]. Compared with existing method, our approach achieves higher visual quality with fewer artifacts while preserving the details on the faces and arms of the subjects. Zoom in for details.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Comparison between models trained with different numbers of video frames on the subject \"S9\" of H36M dataset.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Ablation of geometric cues and physical priors on the subject \"S9\" of the H36M dataset. Both geometric cues and physical priors significantly improve human reconstruction quality in terms of all evaluation metrics. With geometry cues, our approach can better reduce geometrically inherent ambiguities and represent the underlying geometry as indicated by the lower depth error. With physical priors, finer details can be captured and the rendering results become better. Using both leads to the best performance. \"Dmax\" represents density maximization. Due to space limitations, we present the qualitative and quantitative results of other subjects on novel view synthesis in supplementary.", "figure_data": "Geo. CuesPhy. PriorsImageGeometryBaselineDepth NormalVnoise DmaxMSE↓PSNR↑ SSIM↑Dep. Err.↓√ √ √ √√ √√ √0.00354 0.00337 0.00340 0.0032624.77 24.94 24.94 25.050.907 0.907 0.909 0.9100.229 0.205 0.229 0.203√ √ √√ √√ √0.00249 0.00339 0.0024726.17 24.99 26.190.909 0.912 0.9140.232 0.228 0.232√√√√√0.0024426.250.9160.204MethodModeMSE↓PSNR↑ SSIM↑BaselineN/A0.0040124.900.908Train & Test 0.0025726.240.915γ(d) + N (0.2, 0.5)Train0.00243 26.350.916Train & Test 0.0026326.020.908γ(d) + N (0.0, 0.5)Train0.0026126.060.913d + N (1.0, 0.1)Train & Test 0.00255 Train 0.0025126.13 26.150.912 0.912d + N (1.0, 0.5)Train & Test 0.00256 Train 0.0025026.12 26.170.911 0.912d ray + N (0.0, 0.5)Train & Test 0.0150018.550.715Table 2. 
Results of models trained with different view direc-tion perturbation strategies on the subject \"S9\" of H36M dataset.", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results of models trained with different views and noisy input images. N.V represents the number of views. report the experimental results in Tab. 2. We can draw the following conclusions from Tab. 2. First, adding perturbations (noise) on the view direction can greatly improve the performance of the baseline, which is shown in the first and second blocks of Tab. 2. Second, comparing the second and third blocks of Tab. 2, adding noise before or after the position embedding makes no significant difference, and they are beneficial for improving the model performance. Third, similar to Dropout[37], the perturbation strategy can be removed during the test phase to achieve a better result. This can be observed by comparing two modes (Train & Test and Train) in Tab. 2. This strategy is not sensitive to hyperparameters such as noise intensity. The last point to note is that we only change the view direction as the input to the color prediction network, not camera position (o ray ) and ray direction (d ray ), otherwise it will bring serious model degradation, as shown in the last block of the Tab. 2.", "figure_data": "OverfittingNoisy InputImageGeometryImageGeometryMethodN.VMSE↓PSNR↑ SSIM↑Dep. Err.↓SettingMethodMSEPSNR SSIMDep. Err.10.0044923.930.8730.731Baseline0.00354 24.77 0.9070.229Baseline2 30.00308 0.0035425.35 24.770.904 0.9070.456 0.229Raw Input+Phy. Priors 0.00247 26.19 0.910 +Geo. Cues 0.00244 26.25 0.9120.232 0.20410.0039424.490.8860.644Baseline0.00380 24.49 0.9040.309Ours2 30.00319 0.0024425.25 25.250.903 0.9160.420 0.204Input + N (0, 0.05)+Phy. Priors 0.00247 26.13 0.909 +Geo. Cues 0.00245 26.18 0.9110.232 0.206", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results of models generalized to other baseline.", "figure_data": "S6MethodMSE↓PSNR↑ SSIM↑ Dep. Err.↓Ani-SDF [30] 0.0038224.350.8950.112HumanRecon 0.0035624.610.8960.112S9MethodMSE↓PSNR↑ SSIM↑ Dep. Err.↓Ani-SDF [30] 0.0026426.070.9180.214HumanRecon 0.0025426.110.9170.201", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results of novel pose synthesis on subjects \"S1\", \"S5\", and \"S9\" of H36M dataset. Compared with the baseline, the proposed HumanRecon achieves higher image fidelity and smoother geometry results on novel poses.", "figure_data": "S1MethodMSE↓PSNR↑ SSIM↑ Dep. Smo.↓Baseline0.0070621.250.8670.0553HumanRecon 0.0055222.580.8760.0490S5MethodMSE↓PSNR↑ SSIM↑ Dep. Smo.↓Baseline0.0076621.210.8590.118HumanRecon 0.0054722.690.8740.110S9MethodMSE↓PSNR↑ SSIM↑ Dep. Smo.↓Baseline0.0044523.560.8780.122HumanRecon 0.0038324.220.8780.117", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Ablation of geometric cues and physical priors on the subject \"s1\" of the H36M dataset. The proposed geometric cues and physical priors are beneficial for the reconstruction quality and rendering results of our model. \"Dmax\" represents density maximization. Bold indicates the highest result, and underline indicates the second highest result.", "figure_data": "Geo. CuesPhy. PriorsImageGeometryBaseline √ √ √ √Depth Normal √ √ √ √Vnoise DmaxMSE(↓) PSNR(↑) SSIM(↑) 0.00545 22.77 0.896 0.00457 23.50 0.906 0.00447 23.60 0.903 0.00422 23.83 0.907Dep. 
Err.(↓) 0.136 0.120 0.134 0.120√ √ √ √√√√ √ √√ √ √0.00344 0.00468 0.00341 0.0033424.72 23.43 24.79 24.840.906 0.901 0.907 0.9090.135 0.134 0.134 0.119313315386SettingMethodMSE(↓) PSNR(↑) SSIM(↑)MSE(↓) PSNR(↑) SSIM(↑)MSE(↓) PSNR(↑) SSIM(↑)Ani-NeRF [30]0.0025126.280.9320.0068721.780.8480.0022626.680.884Seen poseHumanRecon0.0021027.150.9370.0066522.010.8530.0021726.850.887Ani-NeRF [30]0.0058622.590.8850.017117.850.7900.0029725.450.881Unseen poseHumanRecon0.0056422.830.8870.015618.290.8000.0029625.520.882Table", "figure_id": "tab_6", "figure_label": "8", "figure_type": "table" } ]
Junhui Yin; Wei Yin; Hao Chen; Xuqian Ren; Zhanyu Ma; Jun Guo; Yifan Liu
[ { "authors": "Dragomir Anguelov; Praveen Srinivasan; Daphne Koller; Sebastian Thrun; Jim Rodgers; James Davis", "journal": "", "ref_id": "b0", "title": "Scape: shape completion and animation of people", "year": "2005" }, { "authors": "Dejan Azinović; Ricardo Martin-Brualla; Dan B Goldman; Matthias Nießner; Justus Thies", "journal": "", "ref_id": "b1", "title": "Neural rgb-d surface reconstruction", "year": "2022" }, { "authors": "Jonathan T Barron; Ben Mildenhall; Matthew Tancik; Peter Hedman; Ricardo Martin-Brualla; Pratul P Srinivasan", "journal": "", "ref_id": "b2", "title": "Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields", "year": "2021" }, { "authors": "Bharat Lal Bhatnagar; Cristian Sminchisescu; Christian Theobalt; Gerard Pons-Moll", "journal": "Springer", "ref_id": "b3", "title": "Combining implicit function learning and parametric models for 3d human reconstruction", "year": "2020" }, { "authors": "Bharat Lal Bhatnagar; Cristian Sminchisescu; Christian Theobalt; Gerard Pons-Moll", "journal": "Adv. Neural Inform. Process. Syst", "ref_id": "b4", "title": "Loopreg: Self-supervised learning of implicit surface correspondences, pose and shape for 3d human mesh registration", "year": "2020" }, { "authors": "Mark Boss; Raphael Braun; Varun Jampani; Jonathan T Barron; Ce Liu; Hendrik Lensch", "journal": "", "ref_id": "b5", "title": "Nerd: Neural reflectance decomposition from image collections", "year": "2021" }, { "authors": "Alvaro Collet; Ming Chuang; Pat Sweeney; Don Gillett; Dennis Evseev; David Calabrese; Hugues Hoppe; Adam Kirk; Steve Sullivan", "journal": "ACM Trans. Graph", "ref_id": "b6", "title": "High-quality streamable free-viewpoint video", "year": "2015" }, { "authors": "M James; Alan L Coughlan; Yuille", "journal": "IEEE", "ref_id": "b7", "title": "Manhattan world: Compass direction from a single image by bayesian inference", "year": "1999" }, { "authors": "Sara Fridovich-Keil; Alex Yu; Matthew Tancik; Qinhong Chen; Benjamin Recht; Angjoo Kanazawa", "journal": "", "ref_id": "b8", "title": "Plenoxels: Radiance fields without neural networks", "year": "2022" }, { "authors": "Haoyu Guo; Sida Peng; Haotong Lin; Qianqian Wang; Guofeng Zhang; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b9", "title": "Neural 3d scene reconstruction with the manhattan-world assumption", "year": "2022" }, { "authors": "Kaiwen Guo; Peter Lincoln; Philip Davidson; Jay Busch; Xueming Yu; Matt Whalen; Geoff Harvey; Sergio Orts-Escolano; Rohit Pandey; Jason Dourgarian", "journal": "ACM Trans. Graph", "ref_id": "b10", "title": "The relightables: Volumetric performance capture of humans with realistic relighting", "year": "2019" }, { "authors": "Di Yuan-Chen Guo; Linchao Kang; Yu Bao; Song-Hai He; Zhang", "journal": "", "ref_id": "b11", "title": "Nerfren: Neural radiance fields with reflections", "year": "2022" }, { "authors": "Tong He; John Collomosse; Jin Hailin; Stefano Soatto", "journal": "Adv. Neural Inform. Process. Syst", "ref_id": "b12", "title": "Geo-pifu: Geometry and pixel aligned implicit functions for single-view human reconstruction", "year": "2020" }, { "authors": "Tong He; John Collomosse; Jin Hailin; Stefano Soatto", "journal": "Adv. Neural Inform. Process. 
Syst", "ref_id": "b13", "title": "Geometry-aware two-scale pifu representation for human reconstruction", "year": "2022" }, { "authors": "Tong He; Yuanlu Xu; Shunsuke Saito; Stefano Soatto; Tony Tung", "journal": "", "ref_id": "b14", "title": "Arch++: Animation-ready clothed human reconstruction revisited", "year": "2021" }, { "authors": "Peter Hedman; Julien Philip ; Frahm; George Drettakis; Gabriel Brostow", "journal": "", "ref_id": "b15", "title": "Deep blending for free-viewpoint image-based rendering", "year": "2018" }, { "authors": "Peter Hedman; P Pratul; Ben Srinivasan; Jonathan T Mildenhall; Paul Barron; Debevec", "journal": "", "ref_id": "b16", "title": "Baking neural radiance fields for real-time view synthesis", "year": "2021" }, { "authors": "Yang Hong; Juyong Zhang; Boyi Jiang; Yudong Guo; Ligang Liu; Hujun Bao", "journal": "", "ref_id": "b17", "title": "Stereopifu: Depth aware clothed human digitization via stereo vision", "year": "2021" }, { "authors": "Zeng Huang; Yuanlu Xu; Christoph Lassner; Hao Li; Tony Tung", "journal": "", "ref_id": "b18", "title": "Arch: Animatable reconstruction of clothed humans", "year": "2020" }, { "authors": "Catalin Ionescu; Dragos Papava; Vlad Olaru; Cristian Sminchisescu", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b19", "title": "Human3. 6m: Large scale datasets and predictive methods for 3d human sensing in natural environments", "year": "2013" }, { "authors": "Yasamin Jafarian; Hyun Soo; Park ", "journal": "", "ref_id": "b20", "title": "Learning high fidelity depths of dressed humans by watching social media dance videos", "year": "2021" }, { "authors": "Hanbyul Joo; Tomas Simon; Yaser Sheikh", "journal": "", "ref_id": "b21", "title": "Total capture: A 3d deformation model for tracking faces, hands, and bodies", "year": "2018" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b22", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Nikos Kolotouros; Georgios Pavlakos; Michael J Black; Kostas Daniilidis", "journal": "", "ref_id": "b23", "title": "Learning to reconstruct 3d human pose and shape via model-fitting in the loop", "year": "2019" }, { "authors": "Lingjie Liu; Marc Habermann; Viktor Rudnev; Kripasindhu Sarkar; Jiatao Gu; Christian Theobalt", "journal": "ACM Trans. Graph", "ref_id": "b24", "title": "Neural actor: Neural free-view synthesis of human actors with pose control", "year": "2021" }, { "authors": "Matthew Loper; Naureen Mahmood; Javier Romero; Gerard Pons-Moll; Michael J Black", "journal": "ACM Trans. 
Graph", "ref_id": "b25", "title": "Smpl: A skinned multiperson linear model", "year": "2015" }, { "authors": "Nelson Max", "journal": "IEEE Transactions on Visualization and Computer Graphics", "ref_id": "b26", "title": "Optical models for direct volume rendering", "year": "1995" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Communications of the ACM", "ref_id": "b27", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2021" }, { "authors": "Atsuhiro Noguchi; Xiao Sun; Stephen Lin; Tatsuya Harada", "journal": "", "ref_id": "b28", "title": "Neural articulated radiance field", "year": "2021" }, { "authors": "Sida Peng; Junting Dong; Qianqian Wang; Shangzhan Zhang; Qing Shuai; Xiaowei Zhou; Hujun Bao", "journal": "", "ref_id": "b29", "title": "Animatable neural radiance fields for modeling dynamic human bodies", "year": "2021" }, { "authors": "Sida Peng; Shangzhan Zhang; Zhen Xu; Chen Geng; Boyi Jiang; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b30", "title": "Animatable neural implicit surfaces for creating avatars from videos", "year": "2022" }, { "authors": "Sida Peng; Yuanqing Zhang; Yinghao Xu; Qianqian Wang; Qing Shuai; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b31", "title": "Neural body: Implicit neural representations with structured latent codes for novel view synthesis of dynamic humans", "year": "2021" }, { "authors": "Albert Pumarola; Enric Corona; Gerard Pons-Moll; Francesc Moreno-Noguer", "journal": "", "ref_id": "b32", "title": "D-nerf: Neural radiance fields for dynamic scenes", "year": "2021" }, { "authors": "Barbara Roessle; Jonathan T Barron; Ben Mildenhall; Matthias Pratul P Srinivasan; Nießner", "journal": "", "ref_id": "b33", "title": "Dense depth priors for neural radiance fields from sparse input views", "year": "2022" }, { "authors": "Shunsuke Saito; Zeng Huang; Ryota Natsume; Shigeo Morishima; Angjoo Kanazawa; Hao Li", "journal": "", "ref_id": "b34", "title": "Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization", "year": "2019" }, { "authors": "Shunsuke Saito; Tomas Simon; Jason Saragih; Hanbyul Joo", "journal": "", "ref_id": "b35", "title": "Pifuhd: Multi-level pixel-aligned implicit function for high-resolution 3d human digitization", "year": "2020" }, { "authors": "Nitish Srivastava; Geoffrey Hinton; Alex Krizhevsky; Ilya Sutskever; Ruslan Salakhutdinov", "journal": "The journal of machine learning research", "ref_id": "b36", "title": "Dropout: a simple way to prevent neural networks from overfitting", "year": "2014" }, { "authors": "Justus Thies; Michael Zollhöfer; Matthias Nießner", "journal": "ACM Trans. 
Graph", "ref_id": "b37", "title": "Deferred neural rendering: Image synthesis using neural textures", "year": "2019" }, { "authors": "Qianqian Wang; Zhicheng Wang; Kyle Genova; P Pratul; Howard Srinivasan; Jonathan T Zhou; Ricardo Barron; Noah Martin-Brualla; Thomas Snavely; Funkhouser", "journal": "", "ref_id": "b38", "title": "Ibrnet: Learning multi-view image-based rendering", "year": "2021" }, { "authors": "Minye Wu; Yuehao Wang; Qiang Hu; Jingyi Yu", "journal": "", "ref_id": "b39", "title": "Multiview neural human rendering", "year": "2020" }, { "authors": "Hongwen Zhang; Yating Tian; Xinchi Zhou; Wanli Ouyang; Yebin Liu; Limin Wang; Zhenan Sun", "journal": "", "ref_id": "b40", "title": "Pymaf: 3d human pose and shape regression with pyramidal mesh alignment feedback loop", "year": "2021" }, { "authors": "Zerong Zheng; Tao Yu; Yebin Liu; Qionghai Dai", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b41", "title": "Pamir: Parametric model-conditioned implicit representation for image-based human reconstruction", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 308.86, 96.8, 236.25, 33.86 ], "formula_id": "formula_0", "formula_text": "x c = K k=1 w k i (x)G k i -1 x p . w i (x) = {w k i (x)} K k=1" }, { "formula_coordinates": [ 3, 359.52, 236.92, 185.59, 9.83 ], "formula_id": "formula_1", "formula_text": "(σ i , c i ) = F σ (γ x (T i (x)) , γ d (x)),(1)" }, { "formula_coordinates": [ 3, 335.82, 376.34, 209.3, 53.84 ], "formula_id": "formula_2", "formula_text": "Ĉ(r) = J j=1 Γ j (1 -exp(-σ k (τ j+1 -τ j )))c k , (2) with Γ k = exp(-j ′ <j (τ j ′ +1 -τ j ′ ))" }, { "formula_coordinates": [ 3, 366.23, 500.15, 178.89, 22.74 ], "formula_id": "formula_3", "formula_text": "L photo = r∈R ∥ Ĉ(r) -C(r)∥ 2 ,(3)" }, { "formula_coordinates": [ 3, 347.72, 539.88, 197.39, 22.02 ], "formula_id": "formula_4", "formula_text": "L weight = x∈Xi ∥w i (x) -w can (T i (x))∥ 1 ,(4)" }, { "formula_coordinates": [ 4, 75.85, 684.66, 206.64, 30.32 ], "formula_id": "formula_5", "formula_text": "D(r) = J j=1 Γ j (1 -exp(-σ k (τ j+1 -τ j )))τ k . (5" }, { "formula_coordinates": [ 4, 282.49, 695.39, 3.87, 8.64 ], "formula_id": "formula_6", "formula_text": ")" }, { "formula_coordinates": [ 4, 313.84, 549.05, 227.4, 29.32 ], "formula_id": "formula_7", "formula_text": "L depth = r∈R ∥ D(r) Mean r∈R ( D(r)) - D(r) Mean r∈R (D(r)) ∥ 2 , (6" }, { "formula_coordinates": [ 4, 541.24, 558.52, 3.87, 8.64 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 5, 127.76, 96.12, 154.73, 22.31 ], "formula_id": "formula_9", "formula_text": "N(x) = - ∇σ(x) ∥∇σ(x)∥ . (7" }, { "formula_coordinates": [ 5, 282.49, 103.17, 3.87, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 5, 105.43, 186.23, 177.06, 12.22 ], "formula_id": "formula_11", "formula_text": "L normal = 1 -cos( N(x), N(x)). (8" }, { "formula_coordinates": [ 5, 282.49, 188.96, 3.87, 8.64 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 5, 375.17, 319.9, 169.94, 11.32 ], "formula_id": "formula_13", "formula_text": "s(r) = o + D(r)d, r ∈ R,(9)" }, { "formula_coordinates": [ 5, 354.86, 388.72, 190.25, 11.88 ], "formula_id": "formula_14", "formula_text": "L surface (r) = (1 -σ(s(r))) 2 , r ∈ R.(10)" }, { "formula_coordinates": [ 5, 308.86, 476.53, 236.8, 20.91 ], "formula_id": "formula_15", "formula_text": "L overall = L base +λ depth L depth +λ normal L normal +λ surface L surface ,(11)" } ]
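The formula list above contains the geometric-cue terms of HumanRecon: a mean-normalized depth loss and a normal derived from the density gradient with a cosine consistency loss. Below is a minimal PyTorch sketch of these two terms; the tensor shapes, the eps stabilizer, and the per-ray-batch mean normalization are our assumptions.

```python
import torch
import torch.nn.functional as F

def depth_consistency_loss(rendered_depth, estimated_depth, eps=1e-8):
    # Mean-normalized depth comparison: dividing both depths by their means over
    # the sampled rays removes the unknown global scale of the monocular estimator.
    r = rendered_depth / (rendered_depth.mean() + eps)
    e = estimated_depth / (estimated_depth.mean() + eps)
    return ((r - e) ** 2).mean()

def normal_consistency_loss(density_grad, estimated_normal, eps=1e-8):
    # Surface normal as the negative, normalized density gradient, supervised by
    # a cosine loss against the monocular normal prediction.
    n = -density_grad / (density_grad.norm(dim=-1, keepdim=True) + eps)
    return (1.0 - F.cosine_similarity(n, estimated_normal, dim=-1)).mean()
```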
2023-11-30
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7" ], "table_ref": [], "text": "Synthetic Aperture Radar (SAR) is an active microwave remote sensing imaging system whose all-day and all-weather working capacity makes it one of the most effective tools in ocean applications. As an important ocean task, SAR ship classification has long been a popular research topic and has greatly benefited from the development of deep neural networks in the past years [1,2].\nEffective feature extraction plays an important role in image classification tasks. However, due to acquisition and annotation costs, it is difficult to collect large-scale labeled remote sensing data, which inevitably limits the richness of the learned embeddings and thus restricts the improvement of model performance. In recent years, traditional handcrafted features have been introduced to alleviate this dilemma. Huang et al. [3] directly combined Gabor-based MS-CLBP, patch-based MS-CLBP, and BOVW-based features in a multi-feature learning framework. Zhang et al. [4] proposed a polarization fusion network with geometric feature embedding to enhance the ship representation. Zheng et al. [5] fed raw and handcrafted images into two identical backbone networks and performed channel fusion during supervised training. As illustrated in Fig. 1 (a), the above methods mostly focus on the direct concatenation of high-dimensional features, which not only tends to create redundancy but also fails to capture the interaction between features.\nInspired by the success of contrastive learning (CL) methods such as MoCo [6], BYOL [7], and SimSiam [8], which are capable of learning discriminative features between multiple representations, we utilize CL for the first time to learn complementary information between handcrafted and deep features. As shown in Fig. 1 (b), handcrafted knowledge can be transferred with the update of model parameters.\nUnlike supervised models, the self-supervised pre-training model DCPNet focuses on the connections between samples rather than between samples and labels, which not only enables the full utilization of unlabeled data but also realizes the reuse of handcrafted knowledge by migrating the model to downstream classification tasks.\nSpecifically, the main contributions of this paper are as follows:\n• A novel dual-stream contrastive predictive network (DCPNet) with a false negative sample elimination (FNSE) module is proposed as a pre-training network, thus obtaining an encoder with good generalization performance for SAR ship image classification.\n• In DCPNet, a handcrafted feature branch is designed to guide the transfer of the complementary information generated during the prediction task.\n• A cluster consistency loss is introduced on top of the instance-level contrastive loss, which ensures the separability between samples and the compactness within different categories." }, { "figure_ref": [], "heading": "METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "Consider the labeled SAR ship dataset X = {(x_i, y_i)}_{i=1}^{B}, where x_i is the image and y_i is the class label; the unlabeled dataset U = {u_i}_{i=1}^{B} is fed to the network in the pre-training stage. Ultimately, our goal is to obtain the online encoder that is updated end-to-end. An overview of DCPNet is shown in Fig.
2, which consists of an early image augmentation stage and the core dual-stream contrastive predictive training stage. The specific design principles and training strategies of each part are described in detail below." }, { "figure_ref": [], "heading": "Image Augmentation Stage", "publication_ref": [ "b8", "b9", "b10", "b11", "b12" ], "table_ref": [], "text": "In this stage, three augmentation processes are designed to obtain different views of the training samples, including the handcrafted feature extraction Aug_h(•) and two deep learning image transformations with different strengths, i.e., weak augmentation Aug_w(•) and strong augmentation Aug_s(•).\nAug_h(•) is designed to introduce physically interpretable information. Many experiments have demonstrated that injecting handcrafted features can complement deep features because they focus on specific physical information [9,10,11,12].\nBesides, the two transformations Aug_w(•) and Aug_s(•) are introduced to simulate different working conditions. Considering the properties of SAR images, we remove invalid operations from the settings of existing research [13]. Specifically, Aug_w(•), which includes simple random cropping, horizontal flips, etc., is regarded as an anchor that provides the benchmark to guide the generation of pseudo-labels, while Aug_s(•), which includes Gaussian blur, color jitter, etc., is used to enrich the feature bank and thus enhance the generalization of our model." }, { "figure_ref": [], "heading": "Dual-stream Contrastive Predictive Training Stage", "publication_ref": [ "b13", "b14", "b15" ], "table_ref": [], "text": "CL essentially guides model training by establishing pretext tasks. For example, SimCLR constructs positive and negative pairs through augmentations within a mini-batch. MoCo builds a dictionary look-up task by designing a dynamic dictionary with a queue and a moving-averaged encoder. Different from these methods, DCPNet adopts three new pretext tasks for classification from the perspective of feature fusion, including the handcrafted prediction task, the image similarity comparison task, and the image cluster consistency task.\nHandcrafted feature prediction task. Considering that handcrafted features are far away from deep features in the embedding space, forcing a direct comparison of the two features at the instance level will distract feature extraction and make it difficult for the model to converge.\nTo address this problem, the prediction head G(• | Ψ) is connected behind the encoder F(• | Θ_w) to realize the knowledge transfer of handcrafted features within the model by calculating a similarity loss between the output z_w of the prediction head and the output x_h of the encoder. This loss, based on the mean squared error of normalized features, is defined as follows:\nL_{hand} = 2 - 2 \cdot \frac{\langle G(x_w \mid \Psi), x_h \rangle}{\lVert G(x_w \mid \Psi) \rVert_2 \cdot \lVert x_h \rVert_2} \quad (1)\nInstance-level image similarity comparison task. For each sample, different augmentations of the same image should be brought \"nearby\" in the embedding space since they likely contain similar semantic content or correlated features. Following the InfoNCE loss [14], strong augmentations are utilized to generate a memory bank for storing features. Taking x_w^i as an anchor, (x_w^i, x_s^i) and (x_w^i, {k_j}_{j=1}^{K'}) are defined as a positive pair and a multi-negative pair, respectively.
In particular, {k_j}_{j=1}^{K'} represents the memory bank containing K' embeddings after eliminating false negative samples through pseudo-labels, which will be described later. Finally, the instance-level contrastive loss is defined as follows:\nL_{inst} = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{s(x_w^i, x_s^i)/\tau}}{e^{s(x_w^i, x_s^i)/\tau} + \sum_{j=1}^{K'} e^{s(x_w^i, k_j)/\tau}} \quad (2)\nwhere s(u, v) is the similarity function, i.e., the inner product s(u, v) = u^T v, and τ is the temperature factor.\nCluster-level image cluster consistency task. The cluster-level constraint is as important as the instance-level constraint. The same batch of images should have similar category distributions under different augmentations. Our framework projects multi-view features into an M-dimensional space (M is the number of ship categories) through a classifier and utilizes a consistency loss to promote the compactness within classes in the feature space. Specifically, we use c_p^i, c_q^i ∈ R^{N×1} to denote the distribution representation of cluster i under two augmentation strategies p and q, where, corresponding to the above, p, q ∈ {A_weak, A_strong, A_handcrafted}. The loss is defined as follows:\nL_{clust} = -\frac{1}{M} \sum_{i=1}^{M} \log \frac{e^{s(c_p^i, c_q^i)/\tau}}{e^{s(c_p^i, c_q^i)/\tau} + \sum_{j=1}^{M} \mathbb{1}_{[i \neq j]} e^{s(c_p^i, c_q^j)/\tau}} \quad (3)\nThe overall loss is then:\nL_{overall} = \alpha L_{hand} + \beta L_{inst} + \gamma L_{clust} \quad (4)\nwhere the loss coefficients α, β, γ satisfy α + β + γ = 1.\nFalse negative sample elimination. Generally, negative pairs are formed by sampling views from different images without labels, which may ignore the semantic information within them and thus results in unreliable negative samples in the memory bank [15]. Therefore, pseudo-labels are utilized to weaken the impact of false negative samples on model training. Defining c as the degree of confidence, the pseudo-label probability vector can be expressed as:\nP_i^{elim} = c \cdot P_i^{w} + (1 - c) \cdot P_i^{s} \quad (5)\nAccording to FixMatch [16], we use the confidence to keep only highly reliable pseudo-labels for elimination and discard the low-confidence ones. The new queue after eliminating false negative samples based on sample i is as follows:\n\{k_j\}_{j=1}^{K'} = \{k_j^{orig}\}_{j=1}^{K} \cdot \left( mask_p \cap \mathbb{1}_{\hat{P}_i \neq \hat{P}_j} \right) \quad (6)\nwhere {k_j^{orig}}_{j=1}^{K} is the original negative sample queue, and mask_p represents the filter strategy that retains pseudo-labels with a confidence level above the set threshold." }, { "figure_ref": [], "heading": "EXPERIMENT", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experiment Preparation", "publication_ref": [], "table_ref": [], "text": "Dataset. 1) OpenSARShip: The OpenSARShip dataset is collected by the Sentinel-1A satellite and contains three ship types: bulk carriers, container ships, and tankers. " }, { "figure_ref": [], "heading": "Comparison with other self-supervised methods", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Table 1 shows the evaluation accuracy of DCPNet and existing CL frameworks on the SAR ship classification task. It can be seen that when the number of pre-training epochs is set to 200 and the fine-tuning method is used, DCPNet has a larger gain than all the CL frameworks except when using \"ftall\". The reason is that when the test accuracy of \"KNN-way\" is close to that of supervised models, the extracted features are distinguishable enough to be classified through \"ft1./ft2.\". So, compared with BYOL and SimSiam, fine-tuning with several layers is more effective.
Secondly, the results at 20 epochs emphasize that, with only a few pre-training epochs, DCPNet can already approach the CL models on OpenSARShip with \"KNN-way\" and on FUSAR-Ship with \"fine-tuning\". Thirdly, when using the KNN method, although it is less effective than fine-tuning, our model still achieves the best results on both datasets when using ResNet-18 as the backbone." }, { "figure_ref": [], "heading": "Comparison with supervised learning baseline", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "To further prove the superiority of our framework, the DCPNet framework is compared with baseline supervised models. As shown in Table 2, through our pre-training framework and supervised training with labeled data, the accuracy of benchmark neural networks such as ResNet-18 on downstream tasks increases by 1.51% and 6.98%, respectively. The two evaluation methods respectively prove that DCPNet can not only train an encoder that learns the prior physical knowledge of handcrafted features without causing redundancy, but also obtain a feature set with better generalization and discrimination. Both the encoder and the feature set demonstrate the excellent performance of DCPNet on downstream tasks." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "This paper proposes a new dual-stream contrastive predictive network (DCPNet) based on the fusion of deep and handcrafted features for SAR ship classification. The two main contrastive pretext tasks, along with the cluster-level task, are designed to learn the inherent general features of images under different augmentations and to achieve the collaboration of model parameter updating and handcrafted knowledge transfer. Furthermore, this framework enhances the separability between classes by adding the cluster loss. In addition, confidence-based pseudo-labels are used to filter the memory bank, which improves the effectiveness of the negative samples and corrects the embedding space. Through two-stage comparative experiments, it is concluded that the performance of the proposed pre-training framework DCPNet in SAR ship classification tasks is significantly higher than that of existing CL methods, and the classification accuracy of the supervised benchmark models is also effectively improved. The proposed DCPNet only achieves knowledge transfer for a single handcrafted feature, so further research is needed on how to aggregate information from multiple handcrafted features." }, { "figure_ref": [], "heading": "A. ABLATION EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "Three ablation studies are conducted to investigate the effectiveness of each component of the proposed DCPNet, specifically the handcrafted prediction task, the image cluster consistency task, and the false negative sample elimination module. In addition, we provide a specific analysis of the experimental results as follows." }, { "figure_ref": [], "heading": "A.1. Ablation Study on the Handcrafted Prediction Task", "publication_ref": [], "table_ref": [ "tab_2", "tab_2", "tab_3", "tab_3" ], "text": "In the ablation study on the traditional handcrafted feature prediction task, we choose the HOG feature and arrange two sets of experiments: Experiment 1 explores the contrastive mechanism between handcrafted and deep features to match the distribution of both in the embedding space, and Experiment 2 further validates the effectiveness of handcrafted feature knowledge.
As shown in Table 3, with the addition of handcrafted features, the accuracy of the direct contrast mechanism drops severely, while that of the feature prediction mechanism is greatly improved.\nTable 4 shows the accuracy results with and without the prediction task. From Table 4, it can be seen that the utilization of HOG knowledge proves its effectiveness with a huge improvement in accuracy, i.e., 1.96% on OpenSARShip and 6.84% on FUSAR-Ship." }, { "figure_ref": [], "heading": "A.2. Ablation Study on the Image Cluster Consistency Task", "publication_ref": [], "table_ref": [ "tab_4", "tab_4", "tab_5", "tab_5" ], "text": "Table 5 shows the accuracy results with and without the cluster-level constraint. From Table 5, the accuracy is improved by 1.10% and 3.37% on OpenSARShip and FUSAR-Ship, respectively, because this task guides the model to learn more fine-grained, distinguishable cluster information. Table 6 shows the accuracy results when retaining the false negative samples and when eliminating them. From Table 6, an accuracy gain of 3.00% is achieved on OpenSARShip and 4.55% on FUSAR-Ship. These results demonstrate that correct negative samples lead the model to converge better, since the proposed method is based on the comparison of sample pairs." } ]
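To make the training objectives of DCPNet concrete, here is a minimal PyTorch sketch of Eqs. (1)-(4) from the methodology: the handcrafted prediction loss, the instance-level InfoNCE loss with a memory bank, the cluster consistency loss, and their weighted sum. The temperature, the loss weights, and the L2 normalization inside the instance loss (the paper defines s(u, v) there as a plain inner product) are our assumptions.

```python
import torch
import torch.nn.functional as F

def hand_loss(pred, hand_feat):
    # Eq. (1): 2 - 2 * cos(G(x_w | Psi), x_h), a BYOL-style similarity between the
    # prediction-head output and the handcrafted-branch embedding.
    pred = F.normalize(pred, dim=-1)
    hand_feat = F.normalize(hand_feat, dim=-1)
    return (2.0 - 2.0 * (pred * hand_feat).sum(dim=-1)).mean()

def instance_loss(z_w, z_s, memory_bank, tau=0.1):
    # Eq. (2): InfoNCE with the weak view as anchor, the strong view as positive,
    # and the (filtered) memory bank as negatives.
    z_w, z_s = F.normalize(z_w, dim=-1), F.normalize(z_s, dim=-1)
    bank = F.normalize(memory_bank, dim=-1)
    pos = (z_w * z_s).sum(dim=-1, keepdim=True) / tau          # (N, 1)
    neg = z_w @ bank.t() / tau                                  # (N, K')
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(z_w.size(0), dtype=torch.long, device=z_w.device)
    return F.cross_entropy(logits, labels)

def cluster_loss(c_p, c_q, tau=0.1):
    # Eq. (3): columns of the (N x M) class-probability matrices act as cluster
    # representations; matching clusters across two views are positives.
    c_p = F.normalize(c_p.t(), dim=-1)   # (M, N)
    c_q = F.normalize(c_q.t(), dim=-1)
    logits = c_p @ c_q.t() / tau         # (M, M)
    labels = torch.arange(c_p.size(0), device=c_p.device)
    return F.cross_entropy(logits, labels)

def overall_loss(l_hand, l_inst, l_clust, alpha=0.3, beta=0.4, gamma=0.3):
    # Eq. (4): weighted sum with alpha + beta + gamma = 1; these weights are
    # placeholders, the excerpt does not fix their values.
    return alpha * l_hand + beta * l_inst + gamma * l_clust
```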
Most existing synthetic aperture radar (SAR) ship classification technologies heavily rely on correctly labeled data, ignoring the discriminative features of unlabeled SAR ship images. Even though researchers try to enrich CNN-based features by introducing traditional handcrafted features, existing methods easily cause information redundancy and fail to capture the interaction between the two kinds of features. To address these issues, we propose a novel dual-stream contrastive predictive network (DCPNet), which consists of two asymmetric task designs and a false negative sample elimination module. The first task constructs positive sample pairs, guiding the core encoder to learn more general representations. The second task encourages the adaptive capture of the correspondence between deep features and handcrafted features, achieving knowledge transfer within the model and effectively alleviating the redundancy caused by feature fusion. To increase the separability between clusters, we also design a cluster-level task. The experimental results on the OpenSARShip and FUSAR-Ship datasets demonstrate the improvement in the classification accuracy of supervised models and confirm the capability of DCPNet to learn effective representations.
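The false negative sample elimination module mentioned in this abstract filters the memory bank with confidence-weighted pseudo-labels (Eqs. (5)-(6) in the methodology). A hedged PyTorch sketch is shown below; the confidence weight, the threshold value, and the bookkeeping of pseudo-labels stored alongside each memory-bank entry are our assumptions.

```python
import torch

def false_negative_mask(probs_weak, probs_strong, bank_labels, c=0.5, threshold=0.95):
    # Eq. (5): fuse the class probabilities of the weak and strong views into one
    # pseudo-label distribution with confidence weight c.
    probs = c * probs_weak + (1.0 - c) * probs_strong              # (N, M)
    conf, pseudo = probs.max(dim=1)                                # (N,), (N,)

    # Eq. (6), sketched: for anchors with a reliable pseudo-label (confidence
    # above the threshold), keep a bank entry as a negative only if it carries a
    # different pseudo-label; low-confidence anchors keep the full queue.
    reliable = (conf > threshold).unsqueeze(1)                     # (N, 1)
    different = pseudo.unsqueeze(1) != bank_labels.unsqueeze(0)    # (N, K)
    return different | ~reliable                                   # boolean mask of valid negatives
```

The returned per-anchor mask would then be applied to the memory bank before computing the instance-level contrastive loss, so that likely same-class entries never act as negatives.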
DUAL-STREAM CONTRASTIVE PREDICTIVE NETWORK WITH JOINT HANDCRAFTED FEATURE VIEW FOR SAR SHIP CLASSIFICATION
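The Image Augmentation Stage described in the methodology builds three views per sample: a weak view, a strong view, and a handcrafted (HOG) view. The sketch below shows one possible realization with torchvision and scikit-image; the crop size, jitter strengths, blur kernel, and HOG cell/block settings are placeholders of ours, since the paper only names the operation families.

```python
import numpy as np
from skimage.feature import hog
from torchvision import transforms

# Weak view: anchor used to guide pseudo-label generation (random crop + flip).
weak_aug = transforms.Compose([
    transforms.RandomResizedCrop(128, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Strong view: enriches the feature bank (blur + photometric jitter).
strong_aug = transforms.Compose([
    transforms.RandomResizedCrop(128, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4),
    transforms.GaussianBlur(kernel_size=9),
    transforms.ToTensor(),
])

def handcrafted_view(gray_image: np.ndarray) -> np.ndarray:
    # Handcrafted view: HOG is the descriptor studied in the paper's ablation;
    # the orientation/cell/block settings here are common defaults, not values
    # taken from the paper.
    return hog(gray_image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
```

In pre-training, the weak view acts as the anchor that drives pseudo-label generation, while the strong views populate the memory bank and the HOG view feeds the handcrafted prediction branch.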
[ { "figure_caption": "Fig. 1 .1Fig. 1. Conceptual illustration of different modes about leveraging the handcrafted feature. Method (a) requires labeled data throughout, whereas method (b) is only required for downstream task fine-tuning.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Method overview. Memory bank is updated once after each epoch by collecting the features generating from the encoder F (• | Θ s ). Pseudo-labels are updated after each epoch according to the confidence scores of both weak and strong view of samples x ′ w , x ′ s before using it to select the unreliable negative samples. The three losses L hand , L inst and L clust are calculated in three separate tasks.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": ") where s (u, v) represents the L 2 normalized cosine similarity sim (u, v) = u T v/∥u∥∥v∥, and 1 [i̸ =j] represents distribution representation that does not belong to the same ship category. Finally, three training objectives are minimized to train the core encoder F (• | Θ w ). All of them simultaneously improve the quality of feature representations and classifiers.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Conceptual illustration of space distribution guided by three augmentation strategies.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Evaluation accuracy(%) of applying two evaluation method \"Fine-tuning\" and \"KNN-way\" for DCPNet and other advanced self-supervised methods. The classification head in Ft1. and Ftall. is a layer of MLP, and in Ft2. is a projector head with two linear layers, a BN layer and a ReLU layer. Ft1. and Ft2. only update classification head during training process, while Ftall. updates all parameters. Besides, the bold black numbers indicate the highest accuracy achieved under different evaluation methods", "figure_data": "2) FUSAR-", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Fine-tuning accuracy(%) of DCPNet and state-ofthe-art supervised learning baseline.", "figure_data": "MethodTrain ep. OpenSARShip FUSAR-ShipResNet-1810071.70±0.9479.71±0.73ResNet-3410071.84±1.2380.73±0.41ResNet-5010072.15±1.2080.96±0.47DCPNet(Best)10073.66±1.0187.94±0.763.2. 
Experiment Results", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Contrastive mechanism between handcrafted and deep features", "figure_data": "DatasetsContrastive mechanismAccuracy(%)FUSAR-ShipDirectly Contrast Features Prediction(Ours) 87.94±0.76 64.90±2.31", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation study on the Handcrafted Prediction Task", "figure_data": "DatasetsHandcrafted TaskAccuracy(%)OpenSARShip71.70±0.90 73.66±1.01FUSAR-Ship81.10±0.56 87.94±0.76improvement in accuracy, i.e., 1.96% on OpenSARShip and6.84% on FUSAR-Ship.", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study on the Cluster Consistency Task", "figure_data": "Datasetscluster TaskAccuracy(%)OpenSARShip72.56±0.22 73.66±1.01FUSAR-Ship84.57±0.59 87.94±0.76", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation study on the False Negative Sample Elimination(FSNE) Module", "figure_data": "DatasetsFSNEAccuracy(%)OpenSARShip70.66±0.42 73.66±1.01FUSAR-Ship83.39±0.26 87.94±0.76", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
Xianting Feng; Hao Zheng; Zhigang Hu; Liu Yang; Meiguang Zheng
[ { "authors": "Katie Rainey; John D Reeder; Alexander G Corelli", "journal": "SPIE", "ref_id": "b0", "title": "Convolution neural networks for ship type recognition", "year": "2016" }, { "authors": "Liang Zeng; Qingtao Zhu; Danwei Lu; Tao Zhang; Hongmiao Wang; Junjun Yin; Jian Yang", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b1", "title": "Dualpolarized sar ship grained classification based on cnn with hybrid channel feature loss", "year": "2022" }, { "authors": "Longhui Huang; Wei Li; Chen Chen; Fan Zhang; Haitao Lang", "journal": "Multimedia Tools and Applications", "ref_id": "b2", "title": "Multiple features learning for ship classification in optical imagery", "year": "2018" }, { "authors": "Tianwen Zhang; Xiaoling Zhang", "journal": "Pattern Recognition", "ref_id": "b3", "title": "A polarization fusion network with geometric feature embedding for sar ship classification", "year": "2022" }, { "authors": "Zhigang Hao Zheng; Liu Hu; Aikun Yang; Meiguang Xu; Ce Zheng; Keqin Zhang; Li", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b4", "title": "Multifeature collaborative fusion network with deep supervision for sar ship classification", "year": "2023" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b5", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Jean-Bastien Grill; Florian Strub; Florent Altché; Corentin Tallec; Pierre Richemond; Elena Buchatskaya; Carl Doersch; Bernardo Avila Pires; Zhaohan Guo; Mohammad Gheshlaghi Azar; Bilal Piot; Remi Munos; Michal Valko", "journal": "Curran Associates, Inc", "ref_id": "b6", "title": "Bootstrap your own latent -a new approach to selfsupervised learning", "year": "2020" }, { "authors": "Xinlei Chen; Kaiming He", "journal": "", "ref_id": "b7", "title": "Exploring simple siamese representation learning", "year": "2021" }, { "authors": "Tianwen Zhang; Xiaoling Zhang; Xiao Ke; Chang Liu; Xiaowo Xu; Xu Zhan; Chen Wang; Israr Ahmad; Yue Zhou; Dece Pan", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b8", "title": "Hog-shipclsnet: A novel deep learning network with hog feature fusion for sar ship classification", "year": "2021" }, { "authors": "Zhang Liu Yun; Zhang Lifeng; Shujun", "journal": "Procedia Engineering", "ref_id": "b9", "title": "A hand gesture recognition method based on multi-feature fusion and template matching", "year": "2012" }, { "authors": " Sérgio F Chevtchenko; F Rafaella; Valmir Vale; Filipe R Macario; Cordeiro", "journal": "Applied Soft Computing", "ref_id": "b10", "title": "A convolutional neural network with feature fusion for real-time hand posture recognition", "year": "2018" }, { "authors": "Xiyue Hou; Wei Ao; Qian Song; Jian Lai; Haipeng Wang; Feng Xu", "journal": "Science China Information Sciences", "ref_id": "b11", "title": "Fusar-ship: Building a highresolution sar-ais matchup dataset of gaofen-3 for ship detection and recognition", "year": "2020" }, { "authors": "Tete Xiao; Xiaolong Wang; Alexei A Efros; Trevor Darrell", "journal": "", "ref_id": "b12", "title": "What should not be contrastive in contrastive learning", "year": "2020" }, { "authors": "Aaron Van Den Oord; Yazhe Li; Oriol Vinyals", "journal": "", "ref_id": "b13", "title": "Representation learning with contrastive predictive coding", "year": "2018" }, { "authors": "Tri Huynh; Simon Kornblith; Michael Matthew R Walter; Maryam Maire; Khademi", "journal": "", 
"ref_id": "b14", "title": "Boosting contrastive self-supervised learning with false negative cancellation", "year": "2022" }, { "authors": "Kihyuk Sohn; David Berthelot; Nicholas Carlini; Zizhao Zhang; Han Zhang; Colin A Raffel; Ekin Dogus Cubuk; Alexey Kurakin; Chun-Liang Li", "journal": "Advances in neural information processing systems", "ref_id": "b15", "title": "Fixmatch: Simplifying semi-supervised learning with consistency and confidence", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 219.09, 615.72, 76.6, 14.11 ], "formula_id": "formula_0", "formula_text": "X = {(x i , y i )} B i=1" }, { "formula_coordinates": [ 3, 95.01, 216.5, 203.2, 24.72 ], "formula_id": "formula_1", "formula_text": "L hand = 2 -2 • ⟨G (x w | Ψ) , x h ⟩ ∥G (x w | Ψ)∥ 2 • ∥x h ∥ 2(1)" }, { "formula_coordinates": [ 3, 62.47, 433.75, 235.74, 30.57 ], "formula_id": "formula_2", "formula_text": "L inst = - 1 N N i=1 log e s(x i w ,x i s )/τ e s(x i w ,x i s )/τ + K ′ j=1 e s(x i w ,k j )/τ (2)" }, { "formula_coordinates": [ 3, 55.66, 645.46, 238.81, 41.69 ], "formula_id": "formula_3", "formula_text": "L clust = - 1 M M i=1 log e s(c i p ,c i q )/τ e s(c i p ,c i q )/τ + M j=1 1 [i̸ =j] e s(c i p ,c j q )/τ(3" }, { "formula_coordinates": [ 3, 358.78, 134.93, 196.34, 9.81 ], "formula_id": "formula_4", "formula_text": "L overall = αL hand + βL inst + γL clust (4" }, { "formula_coordinates": [ 3, 555.12, 135.25, 3.87, 8.64 ], "formula_id": "formula_5", "formula_text": ")" }, { "formula_coordinates": [ 3, 375.29, 269.1, 183.7, 12.69 ], "formula_id": "formula_6", "formula_text": "P elim i = c • P w i + (1 -c) • P s i(5)" }, { "formula_coordinates": [ 3, 356.13, 343.49, 202.86, 22.08 ], "formula_id": "formula_7", "formula_text": "k j K ′ j=1 = k j orig K j=1 • mask p ∩ 1 Pi̸ = Pj(6)" }, { "formula_coordinates": [ 3, 363.85, 371.31, 40.92, 22.08 ], "formula_id": "formula_8", "formula_text": "k j orig K j=1" } ]
2024-03-15
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b14", "b15", "b19", "b11", "b11", "b18", "b70", "b0", "b12", "b47", "b59", "b60", "b66", "b67", "b50" ], "table_ref": [], "text": "Insects are the most diverse and abundant eukaryotic organisms on the planet. They inhabit all terrestrial and aquatic habitats and play a significant role within their community, habitat, and ecosystem as contributors to nutrient cycling, maintenance of plant and animal communities, disease cycling, and overall ecosystem health. Therefore, in the agricultural revolution, the detection and identification of insects play a key role in ensuring healthy crop growth and high-quality production. Prior methods [2, 3, 6, 32, 55, 66] often fine-tuned pre-trained ImageNet models on insect data for specific insect-related tasks, e.g., Insect Classification [2, 6, 13, 66] and Insect Detection [66]. However, these methods remained limited since the models pre-trained on ImageNet [12,15,16,20,50,52] could not model the micro features of insects, e.g., the tiny textures and details of insects, as ImageNet [12] is a generic object dataset.\nRecent foundation models [7-9, 17-19, 40, 43, 69, 70] pre-trained on large-scale datasets have revolutionized vision models with solid performance on downstream applications. These models are designed to model general or specific properties of images or videos that can later be generalized to downstream tasks and unseen data. The capability of the foundation model is often implemented with self-supervised or prompt-engineering training on large-scale datasets [12,19,49,71]. However, the current insect datasets [1,2,6,13,28,48,60,61,66-68] are insufficient to establish a foundation model of insects due to their scale and diversity. Indeed, the most recent work presents an insect recognition dataset containing over 75,000 images of 102 species [66]. Although the dataset includes many species, compared to the insects in the natural environment with over 5.5 million species [45,51], the current work still lacks the diversity of insects. Furthermore, to our knowledge, the current insect dataset [66] does not provide the corresponding insect descriptions, limiting the ability to learn the foundation models.\n* Co-first authors\nFigure 1. Our Proposed Patch-wise Relevant Attention. Given masked insect images and separated image patches, our model can discriminate these patches that have small differences via relevant scores computed between masked images and image patches.\nTable 1. Comparison with existing datasets related to insects. Our proposed dataset has hierarchical labels with 6 main hierarchical levels, i.e., Subphylum, Class, Order, Family, Genus, and Species, and large numbers of species and samples. Moreover, the proposed dataset contains hierarchical descriptions for each insect and auxiliary taxonomic levels, i.e., Subclass, Suborder, Subfamily, etc."
Meanwhile, self-supervised contrastive or distillation learning approaches, e.g., MoCo [9,10,17], DINO [7,40], MAE [18], etc., learned the vision model by various pretext tasks and have shown its scaling ability and generalizes well to various downstream tasks. However, most of these previous foundation models represent the general information of natural images without specific knowledge. When deploying in the insect domains, they cannot capture the micro-features of insects, i.e., key features or appearance to distinguish the species, since the texture and details of insects are often small and diverse compared to generic objects. Meanwhile, fine-grained discrimination between insect images is crucial in insect foundation models due to the high diversity of species. Therefore, to successfully develop the insect foundation model, the learning approach needs to understand and be able to model the micro-features of in-sects. Based on this observation, we present a novel pre-text task to enhance the recognition ability of the model between small features of the insect, as illustrated in Fig. 1." }, { "figure_ref": [], "heading": "Contributions of this Work:", "publication_ref": [], "table_ref": [], "text": "To contribute to the development of the Insect Foundation Model in precision agriculture, we introduce a novel large-scale insect dataset, i.e., Insect-1M, and a new Insect Foundation Model, i.e., Insect-Foundation, that can transfer to various downstream insectrelated applications, e.g., insect detection, insect classification, insect vision-language understanding. Our contributions can be summarized as follows. First, we present a new rich and large-volume insect dataset, i.e., Insect-1M, that consists of 1 million images of insects with dense identifications of taxonomy hierarchy from the abstract level of taxonomy, e.g., Class, Order, to the detailed level of taxonomy, e.g., Genus, Species. In addition, each insect contains a detailed description that describes the details and features of insects. To the best of our knowledge, our proposed Insect-1M dataset is 13× larger than the prior published IP102 dataset [66]. Second, to model the micro features of insects, we introduce a new self-supervised contrastive learning paradigm with a novel Patch-wise Relevant Attention mechanism to model the feature correlations of insect details. Third, to increase the modeling capability of the Insect Foundation Model in learning insect details, we introduce a new Description Consistency loss to learn the detailed features of insects via the textual description. Finally, through our intensive experiments on the Insect Classification and Insect Detection benchmarks [66], we show the effectiveness of our approach in insect modeling and our superior performance compared to the prior methods." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b59", "b66", "b67", "b4", "b58", "b7", "b16", "b9", "b6", "b39", "b17", "b68", "b10", "b53", "b42", "b18", "b1", "b69", "b72", "b30", "b63" ], "table_ref": [], "text": "Insect Datasets. There are prior studies releasing insect datasets on a small scale for recognition problems. [60] presented a dataset consisting of 20 species with 10 samples for each species. Then, [67] introduced an insect dataset including 1, 440 samples of 24 species. 
Several subsequent studies have larger datasets for deep learning, e.g., [68] proposed an insect dataset of 4, 500 images with 40 different species for insect classification, and [28] proposed an insect dataset with over 5, 000 samples for insect recognition and localization. PestNet [25] and AgriPest [63] were introduced for the small pest detection task. Recently, [66] has presented IP102 as a large-scale dataset containing over 75K samples of insects with 102 species for classification and detection tasks. Meanwhile, [59] proposed a large-scale dataset including over 723K samples of Arthropoda phylum with 2, 752 species. Although prior efforts promoted the development of vision and machine intelligence in precision agriculture, no dataset has a large volume of samples and diverse species for insect-related foundation model training. Therefore, this work introduces a novel dataset that not only contains a large number of samples, i.e. 1M images, but also has hierarchical labels from the high to the low taxonomy level, including class, order, family, genus, and species. Table 1 compares our proposed dataset with the prior ones. In comparison with prior datasets, the number of images in our proposed Insect-1M dataset is 13× higher than the prior IP102 dataset, and the number of species is 335× higher than IP102 [66]. To preserve the rights of datasets and authors of images, instead of publishing images, we only provide labels and links to download images. Self-supervised Pre-training. Self-supervised pre-training has become a popular strategy for solving visual recognition problems, including classification, localization, segmentation, video recognition, tracking, and many other problems [18, 33-38, 53, 54, 56-58]. SimCLR [8] learned the visual representation of images via a contrastive learning framework using different data augmentation operations. MoCo [17] introduced momentum updating for the encoder while learning the image representation via contrastive learning. The MoCo framework was later used to improve the Sim-CLR approach without requiring a large training batch size [9]. MoCo-V3 [10] improved prior Momentum Contrastive frameworks by eliminating the memory queue to stabilize the training when the batch size is large. DINO [7] proposed a self-supervised learning approach using knowledge distillation with no labels. Later, it was extended to DINO-V2 [40] by stabilizing self-supervised learning when scaling the size of models and data. BEiT [5] proposed a masked image modeling task and used discrete visual tokens from the original image as prediction targets. MAE [18] and SimMIM [69] directly used a decoder to reconstruct pixel values from masked regions. Jigsaw-ViT [11] presented a pre-training task for transformer models by solving the shuffled patches of images. This learning strategy was also applied on the temporal dimension to improve the robustness of video modeling [54]. Micron-BERT [36] studied the micro-changing in facial videos by learning to detect the minor differences in an image that has swapping regions between two frames. Joint Vision-Language Pre-training. Recent work introduced joint vision-language pre-training. CLIP [43], and ALIGN [19] addressed that dual-encoder models pretrained on image-text pairs in contrastive objectives can learn strong representations of image and text for crossmodal alignment and zero-shot image recognition problems. 
LiT [72] and BASIC [42] proposed zero-shot transfer learning approaches by teaching the text model to learn the representation of the pre-trained image model via contrastive losses with large-scale data. SimVLM [65], OFA [62], and BLIP [22] trained an encoder-decoder model with language generative losses and achieved high performance in the vision-language benchmarks. CoCa [70] utilized contrastive learning and generative image captioning for global representation learning and fine-grained image-text alignment. Later work [73] used sigmoid loss to compute the image-text similarity for batch size scaling. LexLIP [31] projected images into a lexicon space for image-text sparse matching. Meanwhile, EQSIM [64] computed the similarity by the image-text equivariant changing. " }, { "figure_ref": [ "fig_0" ], "heading": "The Proposed Insect 1M Dataset", "publication_ref": [], "table_ref": [], "text": "To contribute to establishing the insect foundation model, the large-scale dataset of insects with diverse species is essential. Therefore, we collect a new insect dataset with dense labels of a hierarchical taxonomy. In particular, our Insect-1M dataset contains 1 million insect images with dense hierarchical labels with six main taxonomies, i.e., Subphylum, Class * , Order, Family, Genus, and Species. The samples are in the Phylum Arthropoda and can be divided into 4 Subphylums, which are Chelicerata, Crustacea, Hexapoda, and Myriapoda as shown in Fig. 2. Compared to prior datasets, our Insect-1M has more hierarchical levels with large numbers of species and samples as in Table 1." }, { "figure_ref": [ "fig_0" ], "heading": "Data Collection Protocol", "publication_ref": [], "table_ref": [], "text": "We utilize insect information containing insect data with images and taxonomies collected by naturalists and entomologists. Each insect sample has a corresponding image and its taxonomic label. From the taxonomic label, we crawl the identification description of the corresponding taxonomy. Notice that the taxonomic labels are hierarchical. The description is written from high-level descriptions, e.g., Subphylum and Class, to low-level descriptions, e.g., Species. Fig. 2 shows an example of an insect description." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Data Preprocessing and Statistic", "publication_ref": [], "table_ref": [], "text": "Data Preprocessing. The raw data is stored in over 1 million HTML files with predefined HTML structures. Then, we parse the data structures to collect the insect images and their labels. More than 2 million raw images and their corresponding labels have been collected. However, the raw data collected consists of a lot of noise, e.g., incorrect identification of insects, corrupted images, and non-insect images. Therefore, to filter these outliers, our entomology experts must verify the images and their labels, i.e., insect identification. Finally, our collected Insect-1M dataset consists of * In this paper, we use the term \"Class\" as a biological taxonomic level.\n1, 017, 036 clean images with dense labels of 34, 212 different insect species. Data Statistic Fig. 3 shows the sample distributions of the Subphylums and their Classes. It is shown that the Class Insecta has the majority of samples. Fig. 3 also illustrates the distribution of the Orders in the major Classes. For each major Class, the data distribution of Orders is well-balanced." 
}, { "figure_ref": [], "heading": "The Proposed Insect Foundation Model", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "Limitations of Prior Foundation Training Approaches", "publication_ref": [ "b17", "b17", "b10", "b10", "b8", "b10", "b17" ], "table_ref": [], "text": "Limitations One of the issues in the visual insect understanding problem is the visual representation and discrimination of the small and undistinguished features of the insects. While MAE [18] reconstructs an image from a masked image for visual representation learning, it focuses Figure 4. Comparisons of Self-supervised Methods. MAE [18] fails to reconstruct the details of the insect since it learns general information about the image. Micron-BERT [36] hardly distinguishes the insect and background. Jigsaw-ViT [11] cannot correct shuffled patches due to confusion between the background and the object. Meanwhile, our approach can find separated patches belonging to the insect by scoring each patch. Best viewed in color. on the context inside the image individually without realizing the small details to discriminate between the insects. Meanwhile, Jigsaw solving methods [11,39] correct the position of image patches to enhance the model robustness to the image structure. This strategy needs more mechanisms to focus on the small details of the image. Micron-BERT [36] highlights the small changes in the image by swapping the regions between two images with similar contexts. However, the small changes in the insect image still preserve the signature features representing the insect. Thus, it makes the model collapse in detecting the small features of insects. Therefore, to address these limitations, we introduce a new approach that learns to recognize the tiny features in the insect images. These features are distinguished from the background by discriminating the minor differences between patches of images individually. Fig. 4 compares prior self-supervised methods [11,18,36] with our approach.\nFig. 5 illustrates our insect foundation model. The model is designed to capture the small differences in insect features, i.e., textures or limbs, via our new self-supervised pre-text task. Moreover, the model is pre-trained to learn the fine-grained alignment between the insect description and its visual features. Formally, given an input image I, we divide I into non-overlapping patches. Then, a subset of patches P s is sampled, and the remaining patches are put into a pool of image patches P pool . The sampling is processed randomly in a uniform distribution. An image encoder is used to map I p into latent vectors. Given an insect description T of the image, a text encoder is presented to extract information from T . A text decoder and joint imagetext contrastive learning module are introduced to map the description into the image. Finally, a Patch-wise Relevant Attention module is proposed for self-supervised learning to enhance the discrimination robustness of the model." }, { "figure_ref": [], "heading": "Input Modeling", "publication_ref": [], "table_ref": [], "text": "An input image I ∈ R H×W ×3 is divided into nonoverlapping patches P = {p i s } N P i=1 where H, W are the height and width of the input image, N P = HW/(s p ) 2 is the number of patches. Each patch p i s has a resolution of s p × s p . The non-overlapping patches P are then randomly sampled into a subset of patches P s ⊂ P and put the other patches into a pool of image patches P pool . 
}, { "figure_ref": [], "heading": "Image Encoder", "publication_ref": [], "table_ref": [], "text": "Each patch $p_s^i \in P_s$ is projected into a latent vector $x_s^i \in \mathbb{R}^d$, where d is the dimension of the latent vectors. The subset of patches $P_s$ can be represented as follows:\n$X_s = \mathrm{concat}[x_s^i]_{i=1}^{N_{P_s}} \in \mathbb{R}^{N_{P_s} \times d}, \quad x_s^i = \alpha_p(p_s^i) + e_p(i)$  (1)\nwhere $\alpha_p$ and $e_p$ are the projection embedding and position embedding.\nLet an image encoder $E_{image}(X_s)$ be a stack of $L_e$ transformer blocks, where each block contains multi-head self-attention (MSA) and a multi-layer perceptron (MLP):\n$X'_l = X_{l-1} + \mathrm{MSA}(\mathrm{LN}(X_{l-1})), \quad X_l = X'_l + \mathrm{MLP}(\mathrm{LN}(X'_l)), \quad X_0 = X_s, \; 1 \le l \le L_e$  (2)\nwhere LN is layer normalization. Then, given $X_s$, the output latent vector $Z_s$ is represented as follows:\n$Z_s = E_{image}(X_s), \quad Z_s \in \mathbb{R}^{N_{P_s} \times d}$  (3)" }, { "figure_ref": [ "fig_4" ], "heading": "Insect Micro-feature Self-supervised Learning", "publication_ref": [ "b69" ], "table_ref": [], "text": "The recognition of insects relies on insect textures, eyes, or limbs, which are tiny and hard to detect. To make the model robust to the small features of insect images, we propose a self-supervised learning strategy that spots these small features via the small differences in the images. Notice that insects can be distinguished by detecting and discriminating the critical features in each part of those insects. To enhance this ability of the model, a pretext task is presented. In particular, after extracting global information from a masked image of the insect, the vision model learns to find the remaining patches of the image by comparing image patches of different insect species. Thanks to our learning mechanism, the model learns the key features representing each insect and discriminates the small features between different species. As illustrated in Fig. 6, given a subset of patches $P_s$ from the image I and a pool of image patches $P_{pool}$, we train the model to find the patches $p_t \in P_{pool}$ that originally belong to the image I. Then, given the latent vectors $Z_s$ of $P_s$, a patch-wise relevant attention score (PRS) is computed between $Z_s$ and each patch $p \in P_{pool}$. The score can be defined as:\n$\mathrm{PRS} = f(Z_s, p) \in [0, 1]$  (4)\nThe higher the score, the higher the probability that $p \in P$. Attention Pooling. To compute the relevance between the latent vectors $Z_s$ from the image I and a patch $p \in P_{pool}$, the latent vectors $Z_s$ should be aggregated to represent the holistic information of I. Inspired by [70], we compute the global information of I via attention pooling. Given a placeholder contextual token $z'_{ct}$ as a query $Q_{ct}$ and the latent vectors $Z_s$ as a key $K_Z$ and a value $V_Z$, we compute an attention map between $Q_{ct}$ and $K_Z$. Then, a contextual token $z_{ct}$ representing the global information of I is computed via the attention map and the value $V_Z$. The attention pooling (Fig. 7) can be formulated as Eqn. (5). From Eqn. (6), we expand the score function into a self-supervised loss function $\mathcal{L}_{rel}$ as follows:\n$\mathcal{L}_{rel} = -y \log(H(z_{ct}, z_p)) - (1 - y) \log(1 - H(z_{ct}, z_p))$  (7)\nwhere y = 1 if $p \in P$ and y = 0 otherwise."
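As a concrete illustration of the attention pooling and patch-wise relevance score, here is a hedged PyTorch sketch; the single-head pooling and the dot-product-plus-sigmoid head used for $H(\cdot,\cdot)$ are assumptions made for clarity, since Eqns. (5)-(6) are not reproduced in this excerpt.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchRelevanceHead(nn.Module):
    """Sketch: pool the patch tokens Z_s into a contextual token z_ct, then score a
    candidate pool patch with a relevance probability in [0, 1]."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.query = nn.Parameter(torch.zeros(1, 1, dim))      # placeholder contextual token z'_ct
        self.attn_pool = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.patch_proj = nn.Linear(dim, dim)                   # embeds a pool patch into z_p

    def forward(self, z_s: torch.Tensor, z_p: torch.Tensor) -> torch.Tensor:
        # z_s: (B, N_Ps, d) encoded visible patches; z_p: (B, d) embedded candidate pool patch
        q = self.query.expand(z_s.size(0), -1, -1)
        z_ct, _ = self.attn_pool(q, z_s, z_s)                    # attention pooling -> (B, 1, d)
        z_ct = z_ct.squeeze(1)
        return torch.sigmoid((z_ct * self.patch_proj(z_p)).sum(-1))   # PRS = H(z_ct, z_p)

def relevance_loss(score: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy form of L_rel in Eqn. (7); y = 1 iff the patch belongs to the image."""
    return F.binary_cross_entropy(score, y.float())
```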
}, { "figure_ref": [], "heading": "Fine-grained Insect Image-Text Alignment", "publication_ref": [ "b13", "b20", "b26", "b43", "b10" ], "table_ref": [], "text": "Each species has an individual definition and description that can be aligned to parts of the insect image. We adopt a text decoder to generate the species descriptions from insect images. Moreover, to capture the general information of species, we utilize contrastive learning between the global features of the insect images and descriptions. As a result, the model can learn specific information from insect images via insect descriptions.\nFormally, an insect description text is tokenized into $T = \{t_i\}_{i=1}^{N_T}$, where $N_T$ is the number of tokens in the description. Each token $t_i \in T$ is embedded into a latent vector $w_i \in \mathbb{R}^d$. The description can be represented as:\n$W = \mathrm{concat}[w_i]_{i=1}^{N_T} \in \mathbb{R}^{N_T \times d}, \quad w_i = \alpha_w + e_w(i)$  (8)\nwhere $\alpha_w$ and $e_w$ are the projection embedding and position embedding.\nSimilar to the image encoder, let the text encoder $E_{text}(W)$ be a stack of $L'_e$ transformer blocks containing multi-head self-attention and a multi-layer perceptron. The output latent vector $W'$ of the description is computed as\n$W' = E_{text}(W), \quad W' \in \mathbb{R}^{N_T \times d}$  (9)\nWe then use the latent vector $Z_s$ of the insect image and $W'$ of the description text for image-text contrastive learning and multi-modal image description decoding.\nImage-text Contrastive Learning. Inspired by prior language model frameworks [14,21,27,44], a contextual token $w_{ct}$ representing the semantic information of the description is added at the beginning of W as in Eqn. (8). Then the two encoders $E_{image}$ and $E_{text}$ can be jointly optimized via contrastive learning as follows:\n$\mathcal{L}_{con} = -\frac{1}{N} \sum_{i=1}^{N} \left[ \log \frac{\exp(z_i^\top w_i)}{\sum_{j=1}^{N} \exp(z_i^\top w_j)} + \log \frac{\exp(w_i^\top z_i)}{\sum_{j=1}^{N} \exp(w_i^\top z_j)} \right]$  (10)\nwhere $z_i$ and $w_i$ are the contextual tokens of the i-th insect image and description, respectively.\nMulti-modal Image Description Decoding. While image-text contrastive learning represents the global semantic information between the image and description, the multi-modal image description decoding targets fine-grained details by predicting the tokenized text T in an autoregressive manner, as shown in Eqn. (11):\n$\mathcal{L}_{desc} = -\sum_{t=1}^{N_T} \log D_{multi}(w_t \mid W_{0:t-1}, Z_s)$  (11)\nwhere $D_{multi}$ is an autoregressive multi-modal text decoder."
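The two objectives in Eqns. (10)-(11) can be sketched in PyTorch as follows; this is a simplified illustration (no temperature or feature normalization is shown, and the decoder logits are assumed to come from the multi-modal text decoder), not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Symmetric image-text contrastive loss of Eqn. (10).
    z: (N, d) image contextual tokens; w: (N, d) text contextual tokens."""
    logits = z @ w.t()                                   # (N, N) pairwise similarities z_i^T w_j
    targets = torch.arange(z.size(0), device=z.device)   # matching pairs lie on the diagonal
    return F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)

def description_loss(decoder_logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Autoregressive captioning loss of Eqn. (11).
    decoder_logits: (N, N_T, vocab) predictions conditioned on previous tokens and Z_s;
    tokens: (N, N_T) ground-truth token ids of the insect description."""
    return F.cross_entropy(decoder_logits.reshape(-1, decoder_logits.size(-1)),
                           tokens.reshape(-1))
```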
}, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Foundation Model Pre-training", "publication_ref": [ "b14", "b13", "b40", "b28" ], "table_ref": [], "text": "Our experiments use ViT-Base (ViT-B/16) [15] as the backbone. The images are randomly resized and cropped to a resolution of 224 × 224. Then, each image is divided into patches of 16 × 16 pixels, creating $N_P = 196$ patches. The patch sampling ratio is set to 50%, and the remaining patches are put into the pool of image patches. Each patch is projected to a latent space of d = 768 dimensions. The text encoder and multi-modal text decoder are adopted from the pre-trained BERT model [14]. The model is implemented in PyTorch [41] and trained on 16 A100 GPUs. The learning rate is initially set to $1.5 \times 10^{-4}$ with a cosine learning rate scheduler [29]. The model is optimized by AdamW [30] for 200 epochs with a batch size of 64 per GPU." }, { "figure_ref": [], "heading": "Datasets and Benchmarks", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [ "b17", "b23", "b14", "b23", "b16", "b23", "b6", "b23", "b17", "b23", "b42", "b69", "b23", "b11", "b11" ], "table_ref": [ "tab_1", "tab_5", "tab_6" ], "text": "Our ablation experiments study the effectiveness of our proposed model and hyper-parameters on the IP102 Classification Benchmark, as shown in Table 2. Visualization Results. Fig. 8 visualizes the attention maps of our model compared to MAE [18] pre-trained on the proposed dataset. Since the textures are similar to the background, it is hard for MAE to focus on the small details of the insect. In contrast, our model can detect the key features, i.e., the textures and the limbs, of the insects. Zero-shot Insect Classification. We evaluate the performance of our model on the IP102 dataset [66] in a zero-shot manner. In detail, each species is paired with a description so that the text encoder can extract richer semantic information about it. Then, for each insect image, we use the image encoder to extract global features and compare them to each description feature to predict the insect species. Table 4 reports the results of zero-shot classification on the IP102 Classification benchmark. Our model outperforms prior image-text pre-training methods [43,70,72] with an accuracy of 49.9%. This shows that our model achieves good alignment between the insect image and its description. Insect Detection Tasks. As shown in Table 5, we train a Faster R-CNN model [47] on the IP102 Detection dataset with the ViT backbone adapted for FPN [24]. Compared to models pre-trained on ImageNet [12], our model achieves SOTA results with an average precision (AP) of 36.6% and AP.50 of 59.1%, higher than the same backbone pre-trained on ImageNet [12], which reaches an AP of 32.8% and AP.50 of 54.7%." }, { "figure_ref": [], "heading": "Effectiveness of Network Backbones", "publication_ref": [ "b6", "b16", "b17" ], "table_ref": [], "text": "Compared to other self-supervised methods [7,17,18], our model achieves higher precision. Thus, our model focuses on the features of insects better than prior methods." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "This paper has introduced a new large-scale Insect-1M dataset that supports the development of the Insect Foundation Model in precision agriculture. Our proposed dataset includes a large diversity of insect species and multi-level taxonomy labels. In addition, Insect-1M contains detailed descriptions of insects that support vision-language insect model training. Then, to improve the micro-feature modeling of our insect foundation model, we introduce a new Patch-wise Relevant Attention mechanism and a Description Consistency loss to learn the details of insects. Our experimental results have illustrated the effectiveness and significance of our Insect-1M and Insect Foundation Model.\nLimitations. This study used a specific network design and learning hyper-parameters to support our hypothesis. However, our approach has potential limitations related to the design of our Patch-wise Relevant Attention mechanism, where background and foreground patches are treated equally. This could make it difficult to learn the distinguishing features of insects. This limitation will further motivate future research to improve the Insect Foundation Model and micro-feature modeling." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgment. This work is partly supported by NSF DART, NSF SBIR Phase 2, and JBHunt Company. We also acknowledge the Arkansas High-Performance Computing Center for GPU servers and Jesse Ford for dataset tasks." } ]
In precision agriculture, the detection and recognition of insects play an essential role in the ability of crops to grow healthy and produce a high-quality yield. The current machine vision model requires a large volume of data to achieve high performance. However, there are approximately 5.5 million different insect species in the world. None of the existing insect datasets can cover even a fraction of them due to varying geographic locations and acquisition costs. In this paper, we introduce a novel "Insect-1M" dataset, a game-changing resource poised to revolutionize insect-related foundation model training. Covering a vast spectrum of insect species, our dataset, including 1 million images with dense identification labels of taxonomy hierarchy and insect descriptions, offers a panoramic view of entomology, enabling foundation models to comprehend visual and semantic information about insects like never before. Then, to efficiently establish an Insect Foundation Model, we develop a micro-feature self-supervised learning method with a Patch-wise Relevant Attention mechanism capable of discerning the subtle differences among insect images. In addition, we introduce Description Consistency loss to improve micro-feature modeling via insect descriptions. Through our experiments, we illustrate the effectiveness of our proposed approach in insect modeling and achieve State-of-the-Art performance on standard benchmarks of insect-related tasks. Our Insect Foundation Model and Dataset promise to empower the next generation of insect-related vision models, bringing them closer to the ultimate goal of precision agriculture.
Insect-Foundation: A Foundation Model and Large-scale 1M Dataset for Visual Insect Understanding
[ { "figure_caption": "Figure 2 .2Figure 2. Examples of Our Insect-1M Dataset. The left figure illustrates the samples of the four Subphylums, including Chelicerata, Crustacea, Hexapoda, and Myriapoda. The right figure shows an example of hierarchical descriptions of the Aurantia Species.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. The Distribution of Subphylum and Its Classes (Left) and The Distribution of Class and Its Orders (Right). Best viewed in color.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. The Overview Framework of Our Proposed Approach to Insect Foundation Model.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "QctFigure 6. Pool of Image Patches. A subset of patches of an image is sampled for image encoding while the remaining patches are placed into a pool of patches for the self-supervised pre-text task.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Attention Pooling Module. The contextual token zct represents the global information of the image I.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Effectiveness of our method on the IP102 Classification. We evaluate approach with three different vision transformer backbones, i.e., ViT-small/16, ViT-base/16, and ViT-large/16, without or with Attention Pooling (Attn Pool), and three different losses, i.e. Patch-wise Relevant Loss (Lrel), Image-Text Contrastive Loss (Lcon), and Description Loss (Ldesc).", "figure_data": "BackboneL relAttn PoolL con L descAcc@1 (%)Acc@5 (%)✓68.988.8ViT-small/16✓ ✓✓ ✓✓69.5 70.789.7 89.9✓✓✓✓71.587.7✓72.491.0ViT-base/16✓ ✓✓ ✓✓73.3 74.291.6 91.9✓✓✓✓75.892.1✓73.890.9ViT-large/16✓ ✓✓ ✓✓74.6 75.991.6 91.4✓✓✓✓76.992.7", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Classification results on IP102 Classification benchmark. Both proposed models pre-trained with and without the insect descriptions outperform prior methods by a large margin.", "figure_data": "MethodDescriptionPre-train DataAcc@1 (%)Acc@5 (%)ResNet [66]✗ImageNet1K49.4-EfficientNet [6]✗ImageNet1K60.7-DenseNet [32]✗ImageNet1K61.9-GAEnsemble [3]✗ImageNet1K67.1-ViT [15]✗ImageNet1K71.687.7MoCo [17]✗1M-Insect70.688.4DINO [7]✗1M-Insect71.591.4MAE [18]✗1M-Insect72.091.5CoCa [70]✓1M-Insect72.891.1Insect-Foundation✗1M-Insect73.391.6Insect-Foundation✓1M-Insect75.892.1diseased crop caused by the species. The insects are in dif-ferent forms for each class, e.g., egg, larva, pupa, and adult.The performance of insect classification is evaluated by theaccuracy of Top 1 (Acc@1) and Top 5 (Acc@5).IP102 Detection [66] includes 15,178 training images and3,798 testing images of 102 different species. Following theCOCO benchmark [23], the insect detection performance ismeasured by the Average Precision (AP) and Average Pre-cision at IoU thresholds of 0.5 (AP .50 ) and 0.75 (AP .75 ).", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Effectiveness of Attention PoolingWe evaluate the impact of the attention pooling in the visual representation of the insect images. 
As shown in Table2, the Attention Pooling has better representation than the standard classification token computed through transformer layers. In particular, the top-1 accuracies for the three backbones, i.e., small, base, and large, have been increased from 68.9% to 69.5%, from 72.4% to 73.3%, and from 73.8% to 74.6%. Effectiveness of Image-Text Contrastive Loss As reported in Table2, the model can understand the insect images better when the model learns to match the images and their descriptions. In detail, the accuracy scores have been increased by 0.8%, 0.9%, and 1.3% for the three backbones when applying the Image-Text Contrastive Loss.Effectiveness of Description LossThe full configuration in Table2shows the experimental results of our model using the Description Loss. As shown in Table2, the Description Loss helps the model to well-align the information between images and the details of descriptions. Hence, the model Figure8. Attention Visualization. Compared to MAE[18], our model is robust to small details of insect images. The model can focus on the small textures of the insect, even if the texture is the same as the background (bottom images). Best viewed in color.", "figure_data": "studies the", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Zero", "figure_data": "-shot classification results on IP102 Classifica-tion benchmark. The proposed model outperforms prior vision-language pretraining methods.MethodPretrain Data Accuracy (%)CLIP [43]1M-Insect41.1LiT [72]1M-Insect43.6CoCa [70]1M-Insect45.3Insect-Foundation1M-Insect49.9", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Detection results on IP102 Detection benchmark. The proposed model outperforms prior pre-training methods.", "figure_data": "MethodBackbonePre-train DataAP (%)AP .50 (%)AP .75 (%)FRCNN [47] VGG-16 [50]ImageNet1K 21.1 47.9 15.2FPN [24]ResNet-50 [16]ImageNet1K 28.1 54.9 23.3SSD300 [26] VGG-16 [50]ImageNet1K 21.5 47.2 16.6RefineDet [74] VGG-16 [50]ImageNet1K 22.8 49.0 16.8YOLOv3 [46] DarkNet-53 [46] ImageNet1K 25.7 50.6 21.8FPN", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" } ]
Hoang-Quan Nguyen; Thanh-Dat Truong; Xuan Bac Nguyen; Ashley Dowling; Xin Li; Khoa Luu
[ { "authors": "Ahmad Arib; Alfarisy ; Quan Chen; Minyi Guo", "journal": "", "ref_id": "b0", "title": "Deep learning based classification for paddy pests & diseases recognition", "year": "2018" }, { "authors": "Adão Nunes Alves; Witenberg Sr Souza; Díbio Leandro Borges", "journal": "Computers and Electronics in Agriculture", "ref_id": "b1", "title": "Cotton pests classification in field-based images using deep residual networks", "year": "2020" }, { "authors": "Hasan Enes Ayan; Fatih Erbay; Varc ¸ın", "journal": "Computers and Electronics in Agriculture", "ref_id": "b2", "title": "Crop pest classification with a genetic algorithm-based weighted ensemble of deep convolutional neural networks", "year": "2020" }, { "authors": "Sarkhan Badirli; Zeynep Akata; George Mohler; Christine Picard; Mehmet M Dundar", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b3", "title": "Fine-grained zero-shot learning with dna as side information", "year": "2021" }, { "authors": "Hangbo Bao; Li Dong; Songhao Piao; Furu Wei", "journal": "", "ref_id": "b4", "title": "Beit: Bert pre-training of image transformers", "year": "2021" }, { "authors": "Edson Bollis; Helio Pedrini; Sandra Avila", "journal": "", "ref_id": "b5", "title": "Weakly supervised learning guided by activation mapping applied to a novel citrus pest benchmark", "year": "2020" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b6", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "PMLR", "ref_id": "b7", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Xinlei Chen; Haoqi Fan; Ross Girshick; Kaiming He", "journal": "", "ref_id": "b8", "title": "Improved baselines with momentum contrastive learning", "year": "2020" }, { "authors": "S Chen; Xie; He", "journal": "", "ref_id": "b9", "title": "An empirical study of training self-supervised vision transformers", "year": "2021" }, { "authors": "Yingyi Chen; Xi Shen; Yahui Liu; Qinghua Tao; Johan Ak Suykens", "journal": "Pattern Recognition Letters", "ref_id": "b10", "title": "Jigsaw-vit: Learning jigsaw puzzles in vision transformer", "year": "2023" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "Ieee", "ref_id": "b11", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Limiao Deng; Yanjiang Wang; Zhongzhi Han; Renshi Yu", "journal": "Biosystems Engineering", "ref_id": "b12", "title": "Research on insect pest image detection and recognition based on bio-inspired methods", "year": "2018" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b13", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b14", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b15", "title": "Deep residual learning for image recognition", "year": "2016" }, { 
"authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b16", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b17", "title": "Masked autoencoders are scalable vision learners", "year": "2008" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig", "journal": "PMLR", "ref_id": "b18", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "Alex Krizhevsky; Ilya Sutskever; Geoffrey E Hinton", "journal": "Advances in neural information processing systems", "ref_id": "b19", "title": "Imagenet classification with deep convolutional neural networks", "year": "2012" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Ves Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b20", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2019" }, { "authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "PMLR", "ref_id": "b21", "title": "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation", "year": "2022" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b22", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Tsung-Yi Lin; Piotr Dollár; Ross Girshick; Kaiming He; Bharath Hariharan; Serge Belongie", "journal": "", "ref_id": "b23", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "Liu Liu; Rujing Wang; Chengjun Xie; Po Yang; Fangyuan Wang; Sud Sudirman; Wancai Liu", "journal": "Ieee Access", "ref_id": "b24", "title": "Pestnet: An end-toend deep learning approach for large-scale multi-class pest detection and classification", "year": "2019" }, { "authors": "Wei Liu; Dragomir Anguelov; Dumitru Erhan; Christian Szegedy; Scott Reed; Cheng-Yang Fu; Alexander C Berg", "journal": "Springer", "ref_id": "b25", "title": "Ssd: Single shot multibox detector", "year": "2016" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b26", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Ziyi Liu; Junfeng Gao; Guoguo Yang; Huan Zhang; Yong He", "journal": "Scientific reports", "ref_id": "b27", "title": "Localization and classification of paddy field pests using a saliency map and deep convolutional neural network", "year": "2016" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b28", "title": "Sgdr: Stochastic gradient descent with warm restarts", "year": "2016" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b29", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Ziyang Luo; Pu Zhao; Can Xu; Xiubo Geng; Tao Shen; Chongyang Tao; Jing Ma; Qingwei Lin; Daxin Jiang", "journal": "", "ref_id": "b30", "title": "Lexlip: Lexicon-bottlenecked language-image pre-training for large-scale image-text sparse retrieval", 
"year": "2023" }, { "authors": "Loris Nanni; Gianluca Maguolo; Fabio Pancino", "journal": "Ecological Informatics", "ref_id": "b31", "title": "Insect pest image detection and recognition based on bio-inspired methods", "year": "2020" }, { "authors": "Xuan-Bac Nguyen; Guee ; Sang Lee; Soo Hyung Kim; Hyung Jeong; Yang ", "journal": "IEEE Access", "ref_id": "b32", "title": "Self-supervised learning based on spatial awareness for medical image analysis", "year": "2020" }, { "authors": "Xuan-Bac Nguyen; Duc Toan Bui; Chi Nhan Duong; Tien D Bui; Khoa Luu", "journal": "", "ref_id": "b33", "title": "Clusformer: A transformer based clustering approach to unsupervised large-scale face and visual landmark recognition", "year": "2021" }, { "authors": "Xuan Bac Nguyen; Apoorva Bisht; Hugh Churchill; Khoa Luu", "journal": "", "ref_id": "b34", "title": "Two-dimensional quantum material identification via self-attention and soft-labeling in deep learning", "year": "2022" }, { "authors": "Xuan-Bac Nguyen; Chi Nhan Duong; Xin Li; Susan Gauch; Han-Seok Seo; Khoa Luu", "journal": "", "ref_id": "b35", "title": "Micron-bert: Bert-based facial micro-expression recognition", "year": "2023" }, { "authors": "Xuan-Bac Nguyen; Chi Nhan Duong; Marios Savvides; Kaushik Roy; Khoa Luu", "journal": "", "ref_id": "b36", "title": "Fairness in visual clustering: A novel transformer clustering approach", "year": "2023" }, { "authors": "Xuan-Bac Nguyen; Xin Li; Khoa Samee U Khan; Luu", "journal": "", "ref_id": "b37", "title": "Brainformer: Modeling mri brain functions to machine vision", "year": "2023" }, { "authors": "Mehdi Noroozi; Paolo Favaro", "journal": "Springer", "ref_id": "b38", "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "year": "2016" }, { "authors": "Maxime Oquab; Timothée Darcet; Théo Moutakanni; Huy Vo; Marc Szafraniec; Vasil Khalidov; Pierre Fernandez; Daniel Haziza; Francisco Massa; Alaaeldin El-Nouby", "journal": "", "ref_id": "b39", "title": "Dinov2: Learning robust visual features without supervision", "year": "2023" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga", "journal": "Advances in neural information processing systems", "ref_id": "b40", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "Hieu Pham; Zihang Dai; Golnaz Ghiasi; Kenji Kawaguchi; Hanxiao Liu; Adams Wei Yu; Jiahui Yu; Yi-Ting Chen; Minh-Thang Luong; Yonghui Wu", "journal": "Neurocomputing", "ref_id": "b41", "title": "Combined scaling for zero-shot transfer learning", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b42", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b43", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Sujeevan Ratnasingham; Paul Dn Hebert", "journal": "Molecular ecology notes", "ref_id": "b44", "title": "Bold: The barcode of life data system (http://www. barcodinglife. 
org)", "year": "2007" }, { "authors": "Joseph Redmon; Ali Farhadi", "journal": "", "ref_id": "b45", "title": "Yolov3: An incremental improvement", "year": "2018" }, { "authors": "Kaiming Shaoqing Ren; Ross He; Jian Girshick; Sun", "journal": "Advances in neural information processing systems", "ref_id": "b46", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "Indrajit Rk Samanta; Ghosh", "journal": "International Journal of Computer Engineering Science (IJCES)", "ref_id": "b47", "title": "Tea insect pests classification based on artificial neural networks", "year": "2012" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; Mehdi Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b48", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "", "ref_id": "b49", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2014" }, { "authors": "Nigel E Stork", "journal": "Annual review of entomology", "ref_id": "b50", "title": "How many species of insects and other terrestrial arthropods are there on earth?", "year": "2018" }, { "authors": "Christian Szegedy; Wei Liu; Yangqing Jia; Pierre Sermanet; Scott Reed; Dragomir Anguelov; Dumitru Erhan; Vincent Vanhoucke; Andrew Rabinovich", "journal": "", "ref_id": "b51", "title": "Going deeper with convolutions", "year": "2015" }, { "authors": "Thanh-Dat Truong; Chi Nhan Duong; Ngan Le; Son Lam Phung; Chase Rainwater; Khoa Luu", "journal": "", "ref_id": "b52", "title": "Bimal: Bijective maximum likelihood approach to domain adaptation in semantic scene segmentation", "year": "2021" }, { "authors": "Thanh-Dat Truong; Quoc-Huy Bui; Chi Nhan Duong; Han-Seok Seo; Son Lam Phung; Xin Li; Khoa Luu", "journal": "", "ref_id": "b53", "title": "Direcformer: A directed attention in transformer approach to robust action recognition", "year": "2022" }, { "authors": "Thanh-Dat Truong; Ravi Teja; Nvs Chappa; Xuan-Bac Nguyen; Ngan Le; Ashley Pg Dowling; Khoa Luu", "journal": "IEEE", "ref_id": "b54", "title": "Otadapt: Optimal transport-based approach for unsupervised domain adaptation", "year": "2022" }, { "authors": "Thanh-Dat Truong; Chi Nhan Duong; Kha ; Gia Quach; Ngan Le; Tien D Bui; Khoa Luu", "journal": "Neurocomputing", "ref_id": "b55", "title": "Liaad: Lightweight attentive angular distillation for large-scale age-invariant face recognition", "year": "2023" }, { "authors": "Thanh-Dat Truong; Ngan Le; Bhiksha Raj; Jackson Cothren; Khoa Luu", "journal": "", "ref_id": "b56", "title": "Fredom: Fairness domain adaptation approach to semantic scene understanding", "year": "2023" }, { "authors": "Thanh-Dat Truong; Hoang-Quan Nguyen; Bhiksha Raj; Khoa Luu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b57", "title": "Fairness continual learning approach to semantic scene understanding in open-world environments", "year": "2024" }, { "authors": "Grant Van Horn; Elijah Cole; Sara Beery; Kimberly Wilber; Serge Belongie; Oisin Mac; Aodha ", "journal": "", "ref_id": "b58", "title": "Benchmarking representation learning for natural world image collections", "year": "2021" }, { "authors": "Kanesh Venugoban; Amirthalingam Ramanan", "journal": "International Journal of Machine Learning and 
Computing", "ref_id": "b59", "title": "Image classification of paddy field insect pests using gradient-based features", "year": "2014" }, { "authors": "Jiangning Wang; Congtian Lin; Liqiang Ji; Aiping Liang", "journal": "Knowledge-Based Systems", "ref_id": "b60", "title": "A new automatic identification system of insect images at the order level", "year": "2012" }, { "authors": "Peng Wang; An Yang; Rui Men; Junyang Lin; Shuai Bai; Zhikang Li; Jianxin Ma; Chang Zhou; Jingren Zhou; Hongxia Yang", "journal": "PMLR", "ref_id": "b61", "title": "Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework", "year": "2022" }, { "authors": "Rujing Wang; Liu Liu; Chengjun Xie; Po Yang; Rui Li; Man Zhou", "journal": "Sensors", "ref_id": "b62", "title": "Agripest: A large-scale domain-specific benchmark dataset for practical agricultural pest detection in the wild", "year": "2021" }, { "authors": "Tan Wang; Kevin Lin; Linjie Li; Chung-Ching Lin; Zhengyuan Yang; Hanwang Zhang; Zicheng Liu; Lijuan Wang", "journal": "", "ref_id": "b63", "title": "Equivariant similarity for vision-language foundation models", "year": "2023" }, { "authors": "Zirui Wang; Jiahui Yu; Adams Wei Yu; Zihang Dai; Yulia Tsvetkov; Yuan Cao", "journal": "", "ref_id": "b64", "title": "Simvlm: Simple visual language model pretraining with weak supervision", "year": "2021" }, { "authors": "Xiaoping Wu; Chi Zhan; Yu-Kun Lai; Ming-Ming Cheng; Jufeng Yang", "journal": "", "ref_id": "b65", "title": "Ip102: A large-scale benchmark dataset for insect pest recognition", "year": "2019" }, { "authors": "Chengjun Xie; Jie Zhang; Rui Li; Jinyan Li; Peilin Hong; Junfeng Xia; Peng Chen", "journal": "Computers and Electronics in Agriculture", "ref_id": "b66", "title": "Automatic classification for field crop insects via multiple-task sparse representation and multiple-kernel learning", "year": "2015" }, { "authors": "Chengjun Xie; Rujing Wang; Jie Zhang; Peng Chen; Wei Dong; Rui Li; Tianjiao Chen; Hongbo Chen", "journal": "Computers and Electronics in Agriculture", "ref_id": "b67", "title": "Multilevel learning features for automatic classification of field crop pests", "year": "2018" }, { "authors": "Zhenda Xie; Zheng Zhang; Yue Cao; Yutong Lin; Jianmin Bao; Zhuliang Yao; Qi Dai; Han Hu", "journal": "", "ref_id": "b68", "title": "Simmim: A simple framework for masked image modeling", "year": "2022" }, { "authors": "Jiahui Yu; Zirui Wang; Vijay Vasudevan; Legg Yeung; Mojtaba Seyedhosseini; Yonghui Wu", "journal": "", "ref_id": "b69", "title": "Coca: Contrastive captioners are image-text foundation models", "year": "2008" }, { "authors": "Xiaohua Zhai; Alexander Kolesnikov; Neil Houlsby; Lucas Beyer", "journal": "", "ref_id": "b70", "title": "Scaling vision transformers", "year": "2022" }, { "authors": "Xiaohua Zhai; Xiao Wang; Basil Mustafa; Andreas Steiner; Daniel Keysers; Alexander Kolesnikov; Lucas Beyer", "journal": "", "ref_id": "b71", "title": "Lit: Zero-shot transfer with locked-image text tuning", "year": "2022" }, { "authors": "Xiaohua Zhai; Basil Mustafa; Alexander Kolesnikov; Lucas Beyer", "journal": "", "ref_id": "b72", "title": "Sigmoid loss for language image pre-training", "year": "2023" }, { "authors": "Shifeng Zhang; Longyin Wen; Xiao Bian; Zhen Lei; Stan Z Li", "journal": "", "ref_id": "b73", "title": "Single-shot refinement neural network for object detection", "year": "2018" } ]
[ { "formula_coordinates": [ 5, 316.04, 460.81, 229.07, 13.06 ], "formula_id": "formula_0", "formula_text": "Xs = concat[x i s ] N Ps i=1 ∈ R N Ps ×d , x i s = αp(p i s ) + ep(i)(1)" }, { "formula_coordinates": [ 5, 365.96, 537.66, 179.15, 38.02 ], "formula_id": "formula_1", "formula_text": "X ′ l = X l-1 + MSA(LN(X l-1 )) X l = X ′ l + MLP(LN(X ′ l )) X0 = Xs, 1 ≤ l ≤ Le (2)" }, { "formula_coordinates": [ 5, 360.94, 604.71, 131.6, 11.56 ], "formula_id": "formula_2", "formula_text": "Zs = E image (Xs), Zs ∈ R N Ps ×d" }, { "formula_coordinates": [ 6, 123.36, 257.86, 163, 8.06 ], "formula_id": "formula_3", "formula_text": "PRS = f (Zs, p) ∈ [0, 1](4)" }, { "formula_coordinates": [ 6, 314.32, 104.77, 230.79, 9.33 ], "formula_id": "formula_4", "formula_text": "L rel = -y log(H(zct, zp)) -(1 -y) log(1 -H(zct, zp)) (7)" }, { "formula_coordinates": [ 6, 308.86, 265.86, 53.8, 12.04 ], "formula_id": "formula_5", "formula_text": "T = {t i } N T" }, { "formula_coordinates": [ 6, 324.36, 307.45, 220.75, 12.06 ], "formula_id": "formula_6", "formula_text": "W = concat[wi] N T i=1 ∈ R N T ×d , wi = αw + ew(i)(8)" }, { "formula_coordinates": [ 6, 365.32, 402.49, 179.79, 10.63 ], "formula_id": "formula_7", "formula_text": "W ′ = Etext(W), Z ′ ∈ R N T ×d (9)" }, { "formula_coordinates": [ 6, 310.66, 532.22, 234.46, 34.06 ], "formula_id": "formula_8", "formula_text": "Lcon = -1 N N i=1 log exp(z T i w i ) N j=1 exp(z T i w j ) + log exp(w T i z i ) N j=1 exp(w T i z j )(10)" }, { "formula_coordinates": [ 7, 90.18, 388.66, 196.18, 27.35 ], "formula_id": "formula_9", "formula_text": "L desc = - N T t=1 log D multi (wt|W0:t-1, Zs)(11)" } ]
2023-12-03
[ { "figure_ref": [ "fig_6" ], "heading": "Introduction", "publication_ref": [ "b4", "b44", "b5", "b63", "b67", "b15" ], "table_ref": [], "text": "Designing agents that demonstrate intelligent behavior and adaptability in open-world settings has been a longstanding and significant challenge in the field of artificial intelligence [25,45]. However, recent progress in the development of large language models (LLMs) [6,53] has exhibited their potential as versatile, general-purpose assistants. Recent innovations in agent design [54,56,64,68] have effectively harnessed these advanced LLMs, tapping into their extensive world knowledge and reasoning abilities. This development has paved the way for agents, that are autonomously driven and equipped, to formulate and implement strategies and actions across a diverse array of skills and tasks in open-world environments.\nIn many open-world settings, like Minecraft, contemporary agents predominantly use LLMs for their textual inter-actions. However, this reliance on text for communication poses considerable limitations in their interactions within these worlds. Minecraft, with its expansive and interactive sandbox environment [16,19], demands a variety of skills from agents, ranging from crafting basic items to executing complex tasks. Yet, agents driven by LLMs often generate unpredictable outputs. The effectiveness of their interactions is largely contingent on meticulously crafted prompts [23], designed to align the LLM's understanding with the environmental context and the intended objectives. This process of prompt engineering is not only laborious but also fails to meet the goal of fostering autonomous, selfdirected agents. Furthermore, textual communication has its limitations in naturally and intuitively conveying certain concepts of the world, like crafting recipes, which are often more effectively communicated through visual means.\nPlayers have the distinct capability to assimilate and convey information using both visual and textual channels, significantly enhancing our interactions with the world around us. Yet, the integration of LLM-based agents with multimodal inputs in open-ended environments remains an under-explored area. STEVE, named after the protagonist of the game Minecraft, is our proposed framework aims to build an embodied agent based on the vision model and LLMs within an open world, as illustrated in Figure 1. STEVE harnesses a vision model to visually perceive its surroundings, coupled with an LLM to strategize and plan actions. This model represents a leap forward in agent design, combining these two input modes, vision and text, to offer a more nuanced and comprehensive understanding of the environment, along with practical and executable skills.\nOur key contributions are outlined as follows: • We propose STEVE, an embodied agent in virtual environment, consists of vision perception, language instruction, and code action, achieving 1.5× faster unlocking of key tech trees and is 2.3× quicker in block search tasks compared to previous state-of-the-art methods. • We present STEVE-7B/13B, a series of large language model obtained by fine-tuning with Minecraft knowledge question-answering pairs from Llama-2-7B/13B. • We collect STEVE-21K dataset, including 600+ visionenvironment pairs, 20K knowledge question-answering pairs, and 200+ skill-code pairs, for justifying the effective performance of STEVE." 
}, { "figure_ref": [], "heading": "Related Works", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Intelligent Agent in Minecraft", "publication_ref": [ "b20", "b23", "b30", "b31", "b48", "b3", "b3", "b15", "b42", "b63", "b67" ], "table_ref": [], "text": "As an open-ended sandbox game, Minecraft has always been an ideal setting for testing the performance of intelligent agents [21,24]. The agents are required to autonomously perform various tasks in Minecraft, such as chopping trees, crafting tools, and mining diamonds. At the beginning, much of the works focus on exploring reinforcement learning [31,32,40,49] or imitation learning [2,4], without satisfactory performance. VPT [4] and MineDojo [16] collect internet-scale datasets for their model pre-training. More specifically, VPT offers the exciting possibility of directly learning to act during video pre-training and using these learned behavioral priors as extremely effective exploration priors for reinforcement learning. Yet, recent works found that the pre-trained LLMs could serve as a strong \"mind\" that provides planning ability to the agents. Voyager [54] leverages GPT-4 [43] as both a high-level planner and a low-level action code generator. Plan4MC [64] proposes a skill graph pre-generated by the LLMs. DEPS [56], an interactive planning method based on LLMs, addresses multi-step reasoning issue in open-world planning. GITM [68] develops a set of structured actions and leverages LLMs to generate action plans for the agents to execute, achieving impressive results in various tasks." }, { "figure_ref": [], "heading": "Embodied Multimodal Model", "publication_ref": [ "b6", "b14", "b40", "b57" ], "table_ref": [], "text": "Embodied agent operates within its environment by synthesizing sensory perceptions and physical actions supported by computational intelligence. This synthesis enables the agent to undertake a variety of tasks, achieving specific objectives. Its key areas of application are diverse, including Navigation [7,15,26,41,58,63 " }, { "figure_ref": [], "heading": "Large Language Model with Equipped Tools", "publication_ref": [ "b34", "b45", "b58", "b47", "b50", "b59" ], "table_ref": [], "text": "While Large Language Models (LLMs) demonstrate impressive skill in tackling novel tasks via prompt-based instructions, they often face challenges in areas where simpler models or tools excel, like mathematical calculations or identifying palindromes. However, LLMs' potential is significantly expanded when integrated with other modalityspecific models, such as those for vision or audio, enabling multi-modal capabilities [5,35]. Innovations like Toolformer [46] demonstrate LLMs' self-learning to utilize tools through finetuning with extensive API call samples.\nVisual ChatGPT [59] from Hugging Face for task resolution. AutoGPT [48] is an open-source application that broadens GPT-4's capabilities with internet access, memory management, and plugins. The recent introduction of MovieChat [51] brings a memory mechanism to MLLM, enhancing its performance in video understanding tasks. Furthermore, LLMs can be adeptly used for goal planning, analogous to language translation [60]. This evolving landscape suggests that toolequipped LLMs could forge a new paradigm in AI solution design." 
}, { "figure_ref": [], "heading": "Method: STEVE", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Overview", "publication_ref": [ "b16" ], "table_ref": [], "text": "We propose STEVE, an autonomous system for embodied agents in Minecraft. It integrates visual perception into the large language model (LLM) instruction and combines them with a skill database to execute actions. It consists of three components, including:\n• Vision Perception (Section 3.2) We create a vision encoder to interpret visual information of the environment, such as blocks and entities. • Language Instruction (Section 3.3) We develop STEVE-13B, a powerful language model fine-tuned specifically for Minecraft content using LLaMA2-13B [17]. This model enables adaptive interaction for iterative reasoning and step-by-step decomposition. • Code Action (Section 3.4) We generate the Skill-Code database. It enables executive skill actions based on guidelines from the Language Instruction." }, { "figure_ref": [], "heading": "Vision Perception", "publication_ref": [ "b29" ], "table_ref": [], "text": "The vision perception part aims to convert the vision information from the agent to the tokenizer representation. We utilize the visual branch of EfficientFormer [30] as the vision encoder to extract features from images or video frames. These features are then transformed into tokens that encapsulate critical visual information. The encoder is designed to be robust against variability in environmental conditions, ensuring consistent performance regardless of lighting or weather changes in the virtual world.\nThe visual tokens are amalgamated with textual tokens representing the agent's current state (e.g., health, inventory, etc.) and the task description. This is accomplished using a tokenizer that maps state variables and task parameters to a tokenized form. The resultant unified token set serves as a comprehensive representation of the current situational context, ready for processing by the language model." }, { "figure_ref": [], "heading": "Language Instruction", "publication_ref": [ "b16" ], "table_ref": [], "text": "The Language Instruction, which aims to provide adaptive interaction for iterative reasoning and step-by-step decomposition, is applied to the skill retrieval of the Section 3.4. It provides reasoning and planning, it also decomposes from high-level guidelines to low-level steps. We propose STEVE-13B, a powerful language model derived from LLaMA-2-13B [17], fine-tuned specifically on Minecraftrelated content from the STEVE-20K. This model's expertise covers a broad spectrum of game-specific knowledge areas on worlds, entities, player mechanics, survival, and even game practical experience etc.\nBy integrating environmental information with this extensive knowledge base, STEVE-13B iteratively reasons about tasks and devises a structured plan. It decomposes complex goals into manageable instructions step-by-step, determining the sequence of actions needed to navigate the world and manipulate objects effectively.\nIterative reasoning. The STEVE-13B receives a stream of tokens that encode not only the current visual scene but also the agent's state and the task's textual description. STEVE-13B interprets this rich context to undertake complex reasoning. The model initiates the reasoning process by constructing a series of high-level strategies that outline the pathway to task completion. 
}, { "figure_ref": [], "heading": "Language Instruction", "publication_ref": [ "b16" ], "table_ref": [], "text": "The Language Instruction part, which aims to provide adaptive interaction for iterative reasoning and step-by-step decomposition, is applied to the skill retrieval of Section 3.4. It provides reasoning and planning, and it decomposes high-level guidelines into low-level steps. We propose STEVE-13B, a powerful language model derived from LLaMA-2-13B [17], fine-tuned specifically on Minecraft-related content from STEVE-21K. This model's expertise covers a broad spectrum of game-specific knowledge, including worlds, entities, player mechanics, survival, and even practical gameplay experience.\nBy integrating environmental information with this extensive knowledge base, STEVE-13B iteratively reasons about tasks and devises a structured plan. It decomposes complex goals into manageable instructions step by step, determining the sequence of actions needed to navigate the world and manipulate objects effectively.\nIterative reasoning. STEVE-13B receives a stream of tokens that encodes not only the current visual scene but also the agent's state and the task's textual description. STEVE-13B interprets this rich context to undertake complex reasoning. The model initiates the reasoning process by constructing a series of high-level strategies that outline the pathway to task completion. This involves: • Goal understanding: identifying the goal and understanding the requirements for success. • Comprehensive consideration: considering the agent's current resources, abilities, and constraints. • Environment assessment: assessing potential hazards, opportunities, and strategic advantages. • Strategy formulation: formulating logical and effective sequences of actions.\nThe reasoning mechanism is akin to an experienced player who can visualize the end game and chart a course to victory, considering all gameplay elements. This approach ensures that the plans are not just reactive but also proactive, allowing the agent to anticipate and mitigate future challenges. However, most strategies are high-level and abstract; therefore, they often require step-by-step decomposition to derive executable guidelines.\nDecomposition. The decomposition process breaks complex strategies down into a series of simple, low-level guidelines that can be directly mapped to actions in the Minecraft world. It is similar to how a high-level command like \"build a shelter\" is divided into actionable instructions like \"collect wood,\" \"craft planks,\" and \"place blocks.\" The granular steps are structured to provide explicit instructions that the game engine can readily interpret. This requires: • High-low conversion: converting high-level strategies into a sequence of low-level, actionable guidelines. • Relevance guarantee: ensuring each step is contextually relevant to the preceding actions and the current state of the game. • Environmental adaptability: maintaining adaptability, allowing the agent to respond to dynamic changes in the environment.\nThis systematic breakdown from high-level reasoning to low-level actionable steps is the hallmark of the STEVE system, enabling the embodied agent to interact with the Minecraft environment in a meaningful and goal-oriented manner. Through this intricate process of reasoning and decomposition, STEVE-13B embodies the cognitive capabilities required for sophisticated task management in virtual environments."
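As an illustration of how this reasoning-and-decomposition loop could be driven in practice, here is a hedged sketch of a prompt-construction helper; the prompt wording and the query_llm callable are hypothetical placeholders, not the exact prompts or interface used by STEVE-13B.

```python
from typing import Callable, List

def decompose_task(task: str, agent_state: str, scene_summary: str,
                   query_llm: Callable[[str], str]) -> List[str]:
    """Ask the planning LLM to break a high-level task into ordered, low-level steps."""
    prompt = (
        "You are a Minecraft planning assistant.\n"
        f"Task: {task}\n"
        f"Agent state: {agent_state}\n"
        f"Visible environment: {scene_summary}\n"
        "Decompose the task into an ordered list of short, executable steps, "
        "one step per line."
    )
    response = query_llm(prompt)   # e.g., a call into a fine-tuned STEVE-13B endpoint
    return [line.strip("-• ").strip() for line in response.splitlines() if line.strip()]
```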
}, { "figure_ref": [], "heading": "Code Action", "publication_ref": [], "table_ref": [], "text": "The code action part is the execution phase, where the STEVE system converts planned, decomposed guidelines into concrete actions within the Minecraft environment. This process leverages a specialized skill database that pairs code snippets with their descriptions and relevant metadata, encoded as vectors for efficient retrieval. The transition from high-level language instruction to executable code is achieved through query encoding and similarity matching.\nQuery encoding. Each low-level step derived from the Language Instruction phase is encoded into a query. This encoding captures the essence of the action to be performed in a format that the skill database can understand. The database operates like a vast repository of code snippets, each tagged with a description and a vector that abstracts its functionality and context of use. The encoding process involves the following:\n• Transformation of instructions: The low-level steps are transformed into structured queries, which reflect the nature of the action and the context in which it should be executed. • Contextual relevance: The query must account for the current state of the Minecraft agent, ensuring that the code snippet retrieved is not just theoretically appropriate but also practically applicable given the situation. • Semantic encoding: The instruction is semantically encoded into a vector that matches the encoding scheme of the skill database, allowing for a meaningful comparison." }, { "figure_ref": [], "heading": "Retrieval.", "publication_ref": [ "b56" ], "table_ref": [], "text": "Once the queries are encoded, the system computes similarities between the query vectors and the vectors of the code snippets stored in the database. This step is essential for determining which skill best matches the required action.\n• Vector similarity: computing the cosine similarity between the query vector and the database vectors, which reflects the closeness of semantic meaning. • Skill retrieval: retrieving one or multiple code snippets that best align with the query, prioritizing those with the highest similarity measures. • Code selection: selecting the most appropriate code snippet (combining snippets, or choosing one based on additional context or priorities, in cases where multiple snippets are viable).\nExecution. After selecting the appropriate code or skills, the STEVE system executes the action within the game. This involves:\n• Action instantiation: The selected code snippet is instantiated as a game action, which interacts with the Mineflayer API. • Monitoring and feedback: As actions are executed, the system monitors their outcomes for feedback, which may inform subsequent actions or lead to real-time adjustments in strategy. • Sequential execution: Actions are executed in sequence, adhering to the planned steps while remaining flexible enough to adapt to any unexpected changes or results in the game environment.\nDuring the Code Action phase, the STEVE system effectively translates its strategic planning into reality, transforming the theoretical notion of task completion into a sequence of targeted and context-sensitive actions within the virtual landscape of Minecraft. This crucial phase involves converting abstract, language-based directives into practical, executable code. This transformation exemplifies the zenith of STEVE's embodied cognitive processing, endowing the system with the capability to operate autonomously and interact dynamically with its environment."
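The query-encoding and similarity-matching procedure above can be sketched as follows; the embedding function is an assumed external sentence-embedding model (the dataset section later mentions a LangChain vector database, whereas this standalone sketch simply ranks skills by cosine similarity).

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

class SkillDatabase:
    """Sketch of the skill database: each entry pairs a code snippet with its
    description and an embedding vector of that description."""
    def __init__(self, embed_fn):
        self.embed_fn = embed_fn          # e.g., a sentence-embedding model (assumption)
        self.entries = []                 # list of (description, code, vector) tuples

    def add_skill(self, description: str, code: str):
        self.entries.append((description, code, self.embed_fn(description)))

    def retrieve(self, step: str, top_k: int = 1):
        """Encode a low-level step as a query and return the best-matching skills."""
        query_vec = self.embed_fn(step)
        ranked = sorted(self.entries,
                        key=lambda entry: cosine_similarity(query_vec, entry[2]),
                        reverse=True)
        return [(desc, code) for desc, code, _ in ranked[:top_k]]
```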
Vision-Environment pairs contain 600 pairs of first-person perspective videos from Minecraft gameplay across six different terrains (including forest, desert, coastal, etc.), along with corresponding environmental block entities in the field of vision and context information for each timestamp. Additionally, all pairs are oriented around executing the skillrelated task supported by Skill-Code part mentioned in Section 4. We employ the STEVE-13B model to enable robots to autonomously plan and execute actions based on tasks defined by human supervisors. We record the video of the robot operation, the environment information and all the corresponding chatflow. Note that we use rayTracing [57] to ensure that the environmental information obtained is the blocks and entities seen in the field of vision. Question-Answering pairs contain 20K questionanswering pairs from the Minecraft-Wiki and Reddit corpus across six data types partly sourced from [16]." }, { "figure_ref": [], "heading": "Method Wooden Tool", "publication_ref": [ "b47", "b5" ], "table_ref": [], "text": "Stone Tool Iron Tool Diamond Tool\nAutoGPT [48] 92 The pairs are organized into instruction, input, and output triplets and used to train STEVE-13B. The GPT-3.5 [6] is employed to derive meaningful single-round questionanswer pairs, and LoRA [22] is incorporated during the fine-tuning process for efficient resource allocation. Skill-Code pairs contain 210 skill execution scripts with descriptions, covering 8 skill types including collecting, crafting, exploration etc. The code part is collected by manual coding. We use GPT-3.5 to describe all codes and utilize langchain vectordb to give all pairs a database vector.\n± 72 ( 3 /3) 94 ± 72 ( 3 /3) 135 ± 103 ( 3 /3) N/A ( 0 /3) Voyager [54] 6 ± 2 ( 3 /3) 11 ± 2 ( 3 /3) 21 ± 7 ( 3 /3) 102 ( 1 /3) STEVE 4 ± 1 ( 3 /3) 8 ± 2 ( 3 /3) 16 ± 4 ( 3 /3) 131 ± 27 ( 3 /3)" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b16", "b29", "b15", "b43" ], "table_ref": [], "text": "We select LLaMA-2-13B [17] as the base model and then conduct finetuning with the STEVE-21K dataset (the Question-Answering part for warm-up, the chatflow content of Vision-Environment part for practical experience), as shown in Section 4. In the text part, we set all temperatures to 0 except for the task proposal, which uses 0.9 to encourage task diversity. The vision unit is based on EfficientFormerV2-S0 [30], which is trained on the Vision-Environment part of our STEVE-21K dataset. Our simulation environment is built on top of MineDojo [16] and leverages Mineflayer [44] APIs for motor controls." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b47", "b42", "b42" ], "table_ref": [], "text": "Currently, no vision-based LLM-driven agents work out of the box for Minecraft, so we carefully selected several representative algorithms as baselines for our experiment. They rely on extracting information from a system's backend, presenting a significant divergence from real-world scenarios.\nAutoGPT [48] is a widely used software tool for automating NLP tasks. It breaks down a high-level goal into smaller subgoals. We reinterpret them to be executable in Mine-Dojo and compatible with our experimental setting. In our experiments, AutoGPT works on GPT-4 [43] for task decomposition. 
We provide it with agent states, environment Voyager is known for its ability to explore areas and master the tech tree. However, its main focus is to prompt GPT-4 [43] on background text messages in embodied agents rather than vision perception." }, { "figure_ref": [], "heading": "Evaluation Results", "publication_ref": [ "b5", "b8" ], "table_ref": [ "tab_4" ], "text": "Continuous block search. As shown in Table 2, we experiment on block-searching tasks to assess the agent's exploratory capabilities and proficiency in locating specified blocks. Diamond blocks are placed at every 16-block interval across the mainland map. The agent's objective is to identify as many blocks as possible within the fewest iterations, which indicates the method's efficiency. As shown in Figure 5, enriching information through visual perception significantly enhances the efficiency of search and exploration tasks, leading to more effective world exploration.\nKnowledge question and answering. We have created a question-answering database to evaluate our model's ability to generate accurate responses related to Minecraft. This is done by using a validation dataset. When each model provides a response, both the generated response and the correct answer are fed into GPT-4, Claude-2, and human participants for blind rating. Initially, We check the accuracy of the response to determine its correctness. To ensure consistency in the evaluation process, all evaluation methods conduct a thorough assessment that considers the accuracy, relevance, and level of detail of the response. The outcome of this comprehensive evaluation is an overall score on a scale of 0 to 10, where a higher score indicates better overall performance.\nAs shown in Table 3, we experiment on the instructional capabilities of different LLM models. In the STEVE- Table 3. Quantitive comparison on knowledge question and answering task. Questions, model-generated responses, and ground truth inputs are evaluated in GPT-4 [6], Claude-2 [9] and human blind rating rated on a scale of 0 to 10; The scores above are the average of them. Higher scores indicate greater alignment of the generated answers with the ground truth." }, { "figure_ref": [], "heading": "STEVE Voyager AutoGPT", "publication_ref": [], "table_ref": [], "text": "Figure 5. Schematic of the Block Search Task. We capture an asynchronous segment with each method 30 iterations from the experiments for illustration. The reason we choose diamond blocks is that they are not naturally occurring in the given context, making them easily distinguishable from other blocks.\n21k knowledge instruction pairs, we split the dataset into 18, 622 samples for training and 1, 000 for testing. STEVE-7B and STEVE-13B models outshine the LLaMA2 models in all evaluated categories. STEVE-13B has achieved the highest total score of 8.54, manifesting its superior ability to comprehend and address a broad spectrum of Minecraftrelated inquiries. This superior performance of the STEVE models, particularly in the 7B and 13B versions, is likely attributable to their specific optimization for content related to Minecraft. The observed gradual enhancement from STEVE-7B to STEVE-13B indicates that an increase in model size, coupled with further fine-tuning, benefits performance in knowledge-intensive tasks. Although GPT-4 demonstrates robust capabilities across various categories, it is slightly outperformed by the more specialized STEVE-13B in overall scoring. 
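As a concrete reading of this scoring protocol, the snippet below shows one way the 0 to 10 ratings from GPT-4, Claude-2, and the human raters could be averaged into a single overall score per model; the record layout and the example values are assumptions, not the actual evaluation scripts.

# Toy aggregation of blind ratings; field names and values are illustrative.
from statistics import mean

ratings = [
    {"model": "STEVE-13B", "gpt4": 9, "claude2": 8, "human": 8},
    {"model": "STEVE-13B", "gpt4": 8, "claude2": 9, "human": 7},
    {"model": "GPT-4", "gpt4": 8, "claude2": 8, "human": 7},
]

def overall_score(records, model_name):
    """Average the three raters per question, then average over questions."""
    per_question = [mean((r["gpt4"], r["claude2"], r["human"]))
                    for r in records if r["model"] == model_name]
    return mean(per_question)

print(round(overall_score(ratings, "STEVE-13B"), 2))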
In brief, custom model fine-tuned on specific knowledge enhances the precision and relevance of 6 ± 2\n( 3 /3) 10 ± 1 ( 3 /3) 14 ± 3 ( 3 /3) 89 ± 9 ( 3 /3) STEVE (Ours) 4 ± 1 ( 3 /3) 8 ± 2 ( 3 /3) 16 ± 4 ( 3 /3) 131 ± 27 ( 3 /3)\nTable 4.\nAblation studies for the tech tree mastery. STEVE (Ours) is the STEVE-13B version. The 0/3 score means the method can't progress beyond 160 iterations in the tech tree." }, { "figure_ref": [], "heading": "knowledge-based question-answering responses.", "publication_ref": [ "b47", "b42", "b16" ], "table_ref": [ "tab_3", "tab_3" ], "text": "Tech tree mastery. As shown in Table 1, we experiment on the Minecraft tech tree mastery to test the agent's ability to craft and use a hierarchy of tools. Progressing through this tree (wooden tool → stone tool → iron tool → diamond tool) requires the agent to master systematic and compositional skills. As to the wooden, stone, and iron levels of the tech tree, STEVE achieves remarkable efficiency: 23×, 11.7×, and 8.4× faster than AutoGPT [48], and 1.5×, 1.4×, and 1.3× faster than Voyager [54]. Additionally, STEVE successfully accesses the diamond level, as documented in Table 1. While its performance slightly trails Voyager [54], which also relies on GPT4 [43] for critical inference and possesses a skill library. However, STEVE operates solely on the fine-tuned LLaMA2-13B-chat-hf [17]. This model is more cost-effective and starts with a lower initial performance. Furthermore, STEVE incorporates a vision unit, prioritizing visual data over background information. While this integration can introduce certain inaccuracies, it offers distinct advantages. We also compare our method using the basic skill database and observe a substantial decrease in performance as the capacity of the skill database diminished." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "To understand the impact of different components on the performance of our system, we conducted ablation studies focusing on the tech tree mastery task in Minecraft. The results, as shown in Table 4, provide insights into the effectiveness of the vision unit and compare our STEVE model with STEVE GPT-4 version (with the same vision unit as ours). Note that the w/o vision unit setup is that the environmental perception encompasses data on blocks within an Vision unit is critical. The omission of the vision unit markedly affects the system's performance, especially in more advanced tasks. While it successfully crafts Wooden, Stone and Iron Tools, it is challenged with Diamond Tools. This outcome underscores the vital importance of visual information in accomplishing complex tasks.\nComparison with GPT-4. STEVE GPT-4 version exhibits consistent success across all categories, securing a flawless success rate. Interestingly, the STEVE-13B version excels in simpler tasks like crafting Wooden and Stone Tools. Additionally, it requires fewer iterations than methods without the vision part, underscoring its superior efficiency.\nThe ablation study illustrates the importance of vision perception and STEVE-13B. Visual perception acquiring more distant information aids in exploring natural environments and aligns better with knowledge primarily obtained by human players through visual perception to gameplay. Although it's impossible to perceive unseen block information, this kind of support for exploration is still benefi-cial for development-oriented tasks like tech tree mastery. 
Meanwhile, the much smaller STEVE-13B, fine-tuned on Minecraft knowledge, can provide more efficient guidance with shorter and more precise instructions. This is demonstrated in simpler tasks like Wooden Tools and Stone Tools." }, { "figure_ref": [ "fig_5" ], "heading": "Case Study", "publication_ref": [], "table_ref": [], "text": "As shown in Figure 6, we perform an extensive case study comparison of GPT-4, LLaMA2-13B, and our method STEVE-13B. Each model maintains the same information and question inputs to compare feedback under different environmental information. Our STEVE overall achieves the best results, surpassing GPT-4 and showing significant improvement compared to the original LLaMA. Especially in parts involving numerical calculations, such as the leftmost image, STEVE accurately tracks food values to restore hunger levels." }, { "figure_ref": [], "heading": "Limitation and Future Work", "publication_ref": [], "table_ref": [], "text": "Lifelong Learning Tasks. Lifelong learning scenarios and tasks that exceed the database's scope still plague us. Despite an extensive and detailed skill database, STEVE struggles with high-dimensional tasks like \"survival\", revealing a gap between its theoretical knowledge and practi-cal adaptability. We leave it as our future works.\nKnowledge Base Gaps. The model's training data, while extensive, may not cover all the intricacies and constraints of the game environment. This can lead to scenarios where the Instructor, guided by incomplete or inaccurate knowledge, proposes tasks that are logically conceivable but practically inapplicable." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In conclusion, STEVE makes a significant advancement in the realm of multimodal learning frameworks and their application in open-ended environments. By integrating visual-text interfaces with LLM-based agents, STEVE paves the way for more nuanced and intelligent interactions within complex digital worlds like Minecraft. The threefold functions of vision perception, language instruction and code action endow STEVE with the ability to understand, predict and act within its environment." }, { "figure_ref": [], "heading": "See and Think: Embodied Agent in Virtual Environment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "The supplementary material is structured as follows:\n• We begin with the \"STEVE Pseudo Code\" section, detailing the STEVE method, encompassing input requirements, variables, functions, and the procedure of the STEVE Method algorithm in Section A • Next, the \"Component Comparison\" section offers a comparison of the STEVE system with existing works, focusing on various system characteristics such as availability of demos, nature of rewards, and types of observations, among others, in Section B • The \"Implementation Detail\" section outlines the steps involved in implementing STEVE, including the warm-up with the Question-Answering pair, collection of Vision-Environment pair, and training and fine-tuning processes in Section C. • In the \"STEVE-7B/13B\" section, we introduce the features and capabilities of the STEVE-7B/13B model, along with the Pearson correlation coefficient analysis for various score methods and showcases the efficiency of language models in the Minecraft knowledge QA domain, as shown in Section D. 
• The \"Prompt\" section provides insights into the LLM Decomposer, the process for generating Question-Answering pairs, and the evaluation methodology in Section E. • The \"Demo Image and Video\" section presents practical demonstrations of STEVE in various supported by figures and video content, and examples of skills and code in Section F. • Lastly, the \"Skill-Code Example\" section contains examples from the STEVE-21K skill-code part, illustrating specific skill codes used in the STEVE system, as shown in Section G" }, { "figure_ref": [], "heading": "A. STEVE Pseudo Code", "publication_ref": [], "table_ref": [], "text": "STEVE takes an image, the agent's current state, and a task as inputs, and uses a series of functions like a vision encoder, text tokenizer, and the core STEVE-7B/13B model to process these inputs. The model generates a plan consisting of steps, each of which is decomposed into an executable format, encoded into a query, and then used to retrieve an action code from a skill database. This process ultimately produces a series of action codes that guide the agent in performing the specified task." }, { "figure_ref": [], "heading": "B. Component Comparison", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "As shown in Table B1, we compare STEVE with sev- end for 21: end procedure eral existing works in the field. We consider the following characteristics of each system: availability of demos, nature of rewards, types of observations, action form, use of iterative planning, existence and capacity of a skill database, and whether they are gradient-free. STEVE" }, { "figure_ref": [], "heading": "C. Implementation Detail", "publication_ref": [ "b56" ], "table_ref": [], "text": "• Warm-up with the Question-Answering pair. Firstly, Our STEVE-7B/13B is finetuned on Llama2-7B/13B with the Question-Answering pair of STEVE-21K. • Collect the Vision-Environment pair. We employ the STEVE-7B/13B model to enable robots to execute tasks supervised by humans for data gathering. • Train and Finetune with the Vision-Environment Pair.\nWe train the vision encoder with vision and environmental information. We finetune the STEVE-7B/13B with the Chatflow (practical question-answering pairs).\nWarm-up with extensive knowledge. Our STEVE-7B/13B model has been fine-tuned with the Llama2-7B/13B dataset, using the question-answering pairs from STEVE-21K. The warm-up step is a one-time process that allows the model to absorb extensive knowledge. It is a crucial step in the initial simulation, which enables the collection of the vision-environment pair.\nCollect the Vision-Environment pair. We employ the STEVE-7B/13B model to enable robots to execute tasks supervised by humans for data gathering. We collect the dataset in six different terrains such as forest, desert, coastal, etc. We use the STEVE-7B/13B model to enable robots to autonomously plan and execute actions based on tasks defined by human supervisors. We record the video of the robot operation, environment information, and all the corresponding chat flow. Note that we use ray tracing [57] to ensure the environmental information obtained is the blocks and entities seen in the field of vision." }, { "figure_ref": [], "heading": "D. STEVE-7B/13B", "publication_ref": [ "b16" ], "table_ref": [], "text": "We propose STEVE-7B/13B, a powerful language model series derived from LLaMA2 [17], fine-tuned specifically on Minecraft-related content from the Minecraft-wiki and Reddit corpus. 
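The warm-up fine-tuning just described (LLaMA-2 adapted on the Question-Answering pairs, with LoRA used for efficient resource allocation) can be sketched with the Hugging Face peft library as below; the rank, target modules, and model path are illustrative assumptions rather than the settings actually used for STEVE-7B/13B.

# Sketch of LoRA fine-tuning a LLaMA-2 chat model on instruction/input/output
# triplets; all hyperparameters here are assumptions, not the STEVE settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-13b-chat-hf"  # requires access to the gated weights
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # adapt only the attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the low-rank adapters are trained

def format_example(example):
    """Render one instruction/input/output triplet as a training string."""
    return (f"Instruction: {example['instruction']}\n"
            f"Input: {example['input']}\nOutput: {example['output']}")

Training would then proceed with a standard causal language-modeling objective over the formatted pairs.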
This model's expertise covers a broad spectrum of game-specific knowledge areas including:\n• World Understanding: Geography, biomes, and entity interactions. " }, { "figure_ref": [], "heading": "D.1. Detailed evaluation", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "Pearson correlation coefficient on score methods. The Pearson correlation coefficient is represented by the formula:\nr xy = n i=1 (x i -x)(y i -ȳ) n i=1 (x i -x) 2 n i=1 (y i -ȳ) 2\nwhere: • r xy is the Pearson correlation coefficient between two variables x and y. • x i and y i are the individual sample points for variables x and y, respectively. • x and ȳ are the means (averages) of the x and y samples, respectively. • n is the number of sample points.\nThis formula essentially measures the degree of linear relationship between two variables. It compares the product of their deviations from their respective means. The numerator captures the covariance between the two variables, Human Blind Rating and the denominator normalizes this value, ensuring that the coefficient remains between -1 and +1. The Pearson correlation coefficient is a measure of how much two variables change together compared to how much they vary individually.\nAs shown in Table D5, the Pearson correlation coefficients are calculated between these methods: GPT-4 vs. Claude-2 (0.976), GPT-4 vs. Human Blind Rating (0.965), and Claude-2 vs. Human Blind Rating (0.998). It suggests a high level of agreement among these evaluation approaches. The alignment of scores from different evaluation methods reinforces the reliability of our assessment. Crucially, our STEVE models outperform other language models, including the benchmark GPT-4, in knowledge question and answering tasks. This superior performance of STEVE, particularly the 13B version, is evident across a broad spectrum of categories, suggesting that our model not only has a deeper understanding of Minecraft-related content but also exhibits a more accurate and consistent ability to generate relevant responses." }, { "figure_ref": [], "heading": "D.2. Language model efficiency", "publication_ref": [], "table_ref": [ "tab_12" ], "text": "We primarily test the reasoning efficiency of large language models on Minecraft knowledge QA and showcased case studies.\nWe conduct this experiment in the Question-Answering part of our STEVE-21K dataset. As shown in Table D6, our model STEVE-7B/13B achieves leading performance in terms of time cost. This is due to its smaller number of parameters compared to GPT-4, and its shorter yet more accurate responses compared to the Llama series." }, { "figure_ref": [], "heading": "E. Prompt E.1. LLM Decomposer", "publication_ref": [], "table_ref": [], "text": "We use STEVE-7B/13B for goal decomposition. The format of the prompt is presented thus: directions under the \"SYSTEM\" role and questions under the \"USER\" role. The {target amount}, {target name} and {extensive knowledge} are designed to be filled with specific content, which will then be inputted into the language model after substitution." }, { "figure_ref": [], "heading": "E.2. Generate Question-Answering pairs", "publication_ref": [], "table_ref": [], "text": "We employ the following prompt during the collection of Question-Answer pairs. In this context, the placeholder {text} is intended to be filled with cleaned Minecraft-Wiki text." }, { "figure_ref": [], "heading": "E.3. 
Evaluation", "publication_ref": [], "table_ref": [], "text": "We utilize GPT-4 and Claude-2 for evaluating various models, employing the following prompt structure. Within this structure, the placeholders {question}, {ground truth}, and {answer} are designated to be filled with questions, standard answers, and model-generated answers, respectively." }, { "figure_ref": [ "fig_0" ], "heading": "F. Demo Image and Video", "publication_ref": [], "table_ref": [], "text": "As shown in Figure G2, we experiment with our STEVE on various practical tasks, covering collecting, combating, mining etc." }, { "figure_ref": [], "heading": "G. Examples", "publication_ref": [], "table_ref": [], "text": "The section contains examples from the STEVE-21K skill-code part, illustrating specific skill codes used in the STEVE system. // Smelt iron ores into iron ingots await smeltItem(bot, \"iron_ore\", \"coal\", 1); bot.chat(\"Smelted iron ores into iron ingots.\"); } // Place the crafting table near the bot const craftingTablePosition = bot.entity.position.offset(1, 0, 0); await placeItem(bot, \"crafting_table\", craftingTablePosition); // Craft a shield using the crafting table await craftItem(bot, \"shield\", 1); bot.chat(\"Crafted a shield.\"); }" }, { "figure_ref": [], "heading": "Discrete", "publication_ref": [], "table_ref": [], "text": "Code Code\nIterative Planning" }, { "figure_ref": [], "heading": "SYSTEM:", "publication_ref": [], "table_ref": [], "text": "You are designated as an assistant for the game Minecraft. Your task involves formulating goals related to obtaining specific objects within the game. I'll provide you with a specific target and relevant extensive information about it. Please detail the process of acquiring this target as a goal in a standardized format.\nGoal Structure: { \"object\": \"[Name of Target Object]\", \"amount\": [Target Amount], \"material\": { [Material Name]: [Material Quantity], ... }, \"tool\": \"[Tool Required]\", \"info\": \"[Concise Information Related to the Goal]\" } -Target: Enter the name of the target object you wish to obtain or craft., -Amount: Specify the amount of the target object needed., -Material: List the materials required to achieve this goal. Format each entry as \"material name\": quantity. If no material is required, set this field to None, -Tool: Indicate the most basic tool necessary for this goal. If multiple tools can be used, choose the simplest one. If no tool is required, set this to None, -Info: Provide essential information related to the goal. Summarize the knowledge in up to three sentences, focusing on key details about obtaining or crafting the target object.\nRequirements: 1. Goals must be constructed based on the provided knowledge, rather than relying solely on pre-existing knowledge. 2. The \"info\" section should be concise and informative, limited to a maximum of three sentences. It is essential to extract and summarize the key information from the provided knowledge, rather than replicating the entire text.\nExample Goal 1: { \"target\": \"bed\", \"amount\": 1, \"material\": {\"wool\": 3, \"planks\": 3}, \"tool\": \"crafting table\", \"info\": \"A bed is crafted using 3 wool and 3 wooden planks on a crafting table. Beds allow players to skip the night and reset their spawn point.\" } Example Goal 2: { \"target\": \"paper\", \"amount\": 3, \"material\": {\"sugar cane\": 3}, \"tool\": \"None\", \"info\": \"Paper is crafted from 3 sugar cane, arranged in a row. 
It's used for creating maps and books.\" }" }, { "figure_ref": [], "heading": "USER:", "publication_ref": [], "table_ref": [], "text": "Target info: {target amount} {target name} Knowledge info: {extensive knowledge}" }, { "figure_ref": [], "heading": "SYSTEM:", "publication_ref": [], "table_ref": [], "text": "You are a helpful assistant engaging in a conversation with an in-game Minecraft agent. This agent seeks to enhance both its survival techniques and interaction tactics within the Minecraft universe.\nYour task is to extract crucial questions and their corresponding answers from the provided Minecraft information. These may pertain to survival challenges, crafting recipes, interactions with in-game entities, and other relevant aspects." }, { "figure_ref": [], "heading": "##INSTRUCTIONS:", "publication_ref": [], "table_ref": [], "text": "-Craft questions to prompt the in-game Minecraft agent to contemplate various aspects of gameplay, including survival challenges, crafting recipes, managing resources, navigating terrain, interactions with in-game entities, and more.\n-For each question, extract a suitable answer. Each answer should elucidate the agent's perspective, emphasizing effective strategies, item applications, or interaction approaches." }, { "figure_ref": [], "heading": "USER:", "publication_ref": [], "table_ref": [], "text": "The information about Minecraft is: {text}.\nPresent your output in the mold of a Python dictionary string, utilizing 'Q' for questions and 'A' for the agent's anticipated responses. For multiple questions derived, maintain the 'Q' and 'A' structure.\nAn example could be: 'Q': 'How to craft a wooden pickaxe in Minecraft?', 'A': 'To craft a wooden pickaxe, the approach involves...', 'Q': '...', 'A':'...', ..." }, { "figure_ref": [], "heading": "SYSTEM:", "publication_ref": [], "table_ref": [], "text": "You are an intelligent chatbot designed for evaluating the correctness of generative outputs for question-answer pairs. Your task is to compare the predicted answer with the correct answer and determine if they match meaningfully.\nHere's how you can accomplish the task:" }, { "figure_ref": [], "heading": "##INSTRUCTIONS:", "publication_ref": [], "table_ref": [], "text": "-Focus on the meaningful match between the predicted answer and the correct answer.\n-Consider synonyms or paraphrases as valid matches.\n-Evaluate the correctness of the prediction compared to the answer." }, { "figure_ref": [], "heading": "USER:", "publication_ref": [], "table_ref": [], "text": "Please evaluate the following question-answer pair: Question: {question} Correct Answer: {ground truth} Predicted Answer: {answer} Provide your evaluation only in the form of a score, where the score is an integer value between 0 and 10, with 10 indicating the highest meaningful match. Do not provide any other output text or explanations. Only provide output score. For example, your response should look like this: '9'. " } ]
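The inter-rater analysis of Section D.1 reduces to the standard Pearson correlation between the three scoring methods. A minimal re-computation over the per-model overall scores from Tables D2-D4 might look like the following; note that the coefficients reported in Table D5 are computed from per-model means over the score dimensions, so small deviations are possible.

# Pearson correlation between scoring methods, using the overall scores
# reported in Tables D2-D4. Entries correspond to the rated models:
# GPT-4, Llama2-7B, Llama2-13B, STEVE-7B, STEVE-13B.
from scipy.stats import pearsonr

gpt4_scores = [8.59, 7.23, 7.36, 8.52, 8.70]    # GPT-4 as rater (Table D2)
claude_scores = [8.21, 6.83, 7.16, 8.13, 8.24]  # Claude-2 as rater (Table D3)
human_scores = [7.32, 5.62, 6.14, 7.16, 7.41]   # human blind rating (Table D4)

pairs = {
    "GPT-4 vs. Claude-2": (gpt4_scores, claude_scores),
    "GPT-4 vs. Human Blind Rating": (gpt4_scores, human_scores),
    "Claude-2 vs. Human Blind Rating": (claude_scores, human_scores),
}
for name, (x, y) in pairs.items():
    r, _ = pearsonr(x, y)
    print(f"{name}: r = {r:.3f}")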
Figure 1. STEVE integrates the three parts: Vision Perception, Language Instruction, and Code Action, supported closely by our proposed STEVE-21K. It demonstrates commendable performance on Continuous Block Search, Knowledge QA, and Tech Tree Mastery.
See and Think: Embodied Agent in Virtual Environment
[ { "figure_caption": "Figure 2 .2Figure 2. STEVE framework. The Vision Perception part takes images or videos, encodes them into tokens, and combines them with the tokens of Agent State and Task as input. The STEVE-13B in the Language Instruction part is used for automatic reasoning and task decomposition, and it calls the Skill Database in the form of the Query to output code as action.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure3. STEVE-21K collection pipeline. In the Vision-Environment section, STEVE-13B plays the game according to specified tasks defined by the human player, collecting visual information through prismarine-viewer and capturing environmental information from the screen using Ray Tracing[57]. Note that during the collection phase, the language instruction task is also performed. We simultaneously record and save the chat flow from the reasoning and decomposition stages. In the Question-Answering section, we obtain information from the Minecraft-Wiki and Reddit forums, and use GPT-3.5 to clean the data into Single-round QA pairs. In the Skill-Code section, we use GPT-3.5 combined with the human player's code to synthesize code snippets, and then check and revise them in the game environment.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Example of Vision-Environment pairs. It represents the data format of the Vision-Environment pairs in our STEVE-21K dataset: including visual signals, environmental information, LLM's Chat Log, Context QA pairs, and planning in actual tasks.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "vision unit 11 ± 5 ( 3 /3) 27 ± 5 ( 3 /3) 46 ± 11 ( 3 /3) 158 ( 1 /3) STEVE (GPT-4) [54]", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "How to restore full hunger efficiently? You should eat 1 bread and 1 apple. The bread will restore 5 hunger points, and the apple will restore 4 hunger points adding up to the 6 hunger points you need to have a full hunger bar... Simply eat one bread from your inventory. It will increase your hunger level by 5 points, which is precisely what you need to reach 20... You should eat an Apple or 2 Breads. Both Apples and Breads restore 5 hunger pointswhich will bring your hunger level back to full. Should I combat zombie and how to survive? You should combat the zombie using your iron sword. Zombies are relatively weak and should not pose a significant threat to you... You should combat the zombie; remember to strike and step back to avoid getting hit. You seem to be in an abandoned mineshaft. Once the area is safe, continue to explore... you should combat the zombie and collect resources to craft better equipment. The current environment is likely a village or a farm... What do you find and how to get more resources? You should open the nearby Chest, or explore the surrounding area to find better resources... Begin by inspecting the nearby Chest, it could contain valuable resources. Use your Iron Pickaxe to mine any precious ores you encounter in the mineshaft... You should find more Cobblestones to craft better armor and weapons. The nearby blocks and entities suggest that you are in a Minecraft world, and the resources you need are likely to be found in the nearby area... Should I combat skeleton and how to survive? 
Given your current situation, it is not advisable to engage in combat with skeletons or any other hostile mobs... You should avoid engaging the skeleton at the moment since your health is a bit low and you have no armor to protect yourself. You should instead focus on your immediate safety... You should combat the skeleton. To survive, it is recommended to prioritize crafting better equipment, such as armor and weapons, to increase your health and combat effectiveness...", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Qualitative comparison on knowledge question and answering tasks. Green ( Red ) highlights the correct or good (wrong or bad) answer. Blue indicates the suboptimal answer. Grey indicates the meaningless answer.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Algorithm 11STEVE Method F Input: Image I, Agent State S, Task T 1: Variables: Retrieval module on Skill Database 11: procedure F (I, S, T ) 12:O V ← V (I) 13:O S ← T (S)14:O T ← T (T )15: P H ← S(O V , O S , O T ) 16:for each step in P H do 17:step e ← Decompose(step)", "figure_data": "", "figure_id": "fig_6", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Comparison on tech tree mastery task. The values presented are in fractions, representing successful trials out of three attempts. A score of 0/3 signifies the method's inability to progress within the tech tree after a maximum of 160 prompting iterations. The reported numbers denote the average iterations across three trials. Lower iteration values indicate higher efficiency of the respective method.", "figure_data": "", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison on continues block search task. # Iters represents the average number of iterations required to locate the first ten diamond blocks with a maximum of 100 prompting iterations; the lower this number, the higher the task completion efficiency. # Blocks denotes the average number of diamond blocks found over 100 iterations, with higher values indicating better performance.feedback, and execution errors as observations for subgoal execution.Voyager [54] relies only on textual grounding for perception. It has a long-term procedural memory that stores a hierarchical library of code-based grounding procedures. Complex skills can use simpler skills as sub-procedures.", "figure_data": "Method# Iters (↓) # Blocks (↑)AutoGPT [48]N/A7Voyager [54]3526STEVE1468", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison between STEVE and existing works. It is a system-level comparison consisting of LLM-based and RL-based methods.", "figure_data": "VPT [3]DreamerV3 [20] DECKARD [42] DEPS [56]Plan4MC [64] Voyager [54]STEVE (ours)DemosVideosNoneVideosNoneNoneNoneVideosRewardsSparseDenseSparseNoneDenseNoneNoneObservationsPixels OnlyPixels &Pixels &Feedback &Pixels &Feedback &MetaInventoryInventoryMetaMeta &Inventory✓✓✓Skill9172210DatabaseGradient-Free✓✓", "figure_id": "tab_6", "figure_label": "B1", "figure_type": "table" }, { "figure_caption": "Quantitive comparison on knowledge question and answering task by GPT-4[43]. It is rated on a scale of 0 to 10; Higher scores indicate greater alignment of the generated answers with the ground truth.", "figure_data": "8.508.258.007.25 7.50 7.75 Claude-27.006.756.50(c) Claude-2 and Human Blind Rating: 0.998Figure D1. 
Pearson Correlation Analysis, including scatters and trend. Note that all scatters come from Table D2 D3 D4MethodWorld & Entities Player Mechanics & Survival Knowledge & Discovery Resources & Crafting Tools & Utilities Miscellaneous OverallGPT-48.538.568.858.498.639.008.59Llama2-7B7.227.437.127.017.288.457.23Llama2-13B7.347.327.427.377.446.757.36STEVE-7B8.678.418.428.538.358.508.52STEVE-13B8.748.678.728.688.747.958.70MethodWorld & Entities Player Mechanics & Survival Knowledge & Discovery Resources & Crafting Tools & Utilities Miscellaneous OverallGPT-48.178.318.278.178.248.108.21Llama2-7B6.766.946.906.607.106.856.83Llama2-13B7.217.197.017.007.327.157.16STEVE-7B8.148.128.098.118.188.008.13STEVE-13B8.248.348.218.248.208.158.24", "figure_id": "tab_8", "figure_label": "D2", "figure_type": "table" }, { "figure_caption": "Quantitive comparison on knowledge question and answering task by Claude-2[9]. It is rated on a scale of 0 to 10; Higher scores indicate greater alignment of the generated answers with the ground truth.", "figure_data": "MethodWorld & Entities Player Mechanics & Survival Knowledge & Discovery Resources & Crafting Tools & Utilities Miscellaneous OverallGPT-47.487.337.087.117.397.527.32Llama2-7B5.345.675.725.656.015.575.62Llama2-13B6.236.355.895.946.196.026.14STEVE-7B7.157.117.017.227.276.977.16STEVE-13B7.437.397.177.527.437.067.41", "figure_id": "tab_9", "figure_label": "D3", "figure_type": "table" }, { "figure_caption": "Quantitive comparison on knowledge question and answering task by human blind rating. It is rated on a scale of 0 to 10; Higher scores indicate greater alignment of the generated answers with the ground truth.", "figure_data": "", "figure_id": "tab_10", "figure_label": "D4", "figure_type": "table" }, { "figure_caption": "Comparison on Pearson correlation coefficients between score methods. It calculates the mean score across each score dimensions for each language methods and then computes the Pearson correlation between these means for each pair of score methods (GPT-4, Claude-2 and Human blind rating). The Pearson correlation coefficient ranges from -1 to +1. Higher Pearson correlation coefficient (closer to +1) means that there is a stronger positive linear relationship between the two sets of data.", "figure_data": "MethodPearson Correlation CoefficientsGPT-4 VS. Claude-20.976GPT-4 VS. Human Blind Rating0.965Claude-2 VS. Human Blind Rating0.998Methodtime(s) (↓)Llama2-7B [17]18.34Llama2-13B [17]21.17GPT-4 [43]22.86STEVE-7B7.19", "figure_id": "tab_11", "figure_label": "D5", "figure_type": "table" }, { "figure_caption": "Comparison on language model efficiency. 
It shows the average time for each question-answering.", "figure_data": "", "figure_id": "tab_12", "figure_label": "D6", "figure_type": "table" }, { "figure_caption": "// If not, explore to find and mine oak logs if (oakLogsCount < planksToCraft) { await exploreUntil(bot, new Vec3(1, 0, 1), 60, () => { const oak_log = bot.findBlock({ matching: mcData.blocksByName[\"oak_log\"].id, Craft oak planks from oak logs await craftItem(bot, \"oak_planks\", planksToCraft); bot.chat(\"Crafted oak planks.\"); oakPlanksCount = bot.inventory.count(mcData.itemsByName.oak_planks.id); } // Check if there are enough iron ingots in the inventory let ironIngotsCount = bot.inventory.count(mcData.itemsByName.iron_ingot.id); // If not, explore to find and mine iron ores if (ironIngotsCount < 1) { await exploreUntil(bot, new Vec3(0, -1, 0), 60, () => { const iron_ore = bot.findBlock({ matching: mcData.blocksByName[\"iron_ore\"].id,", "figure_data": "maxDistance: 32});return oak_log;});await mineBlock(bot, \"oak_log\", planksToCraft -oakLogsCount);bot.chat(\"Collected oak logs.\");}// maxDistance: 32});return iron_ore;});await mineBlock(bot, \"iron_ore\", 1);bot.chat(\"Collected iron ores.\");", "figure_id": "tab_13", "figure_label": "", "figure_type": "table" } ]
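Taken together, Figure 2 and the Algorithm 1 listing describe one forward pass of the agent: perceive, plan, decompose, retrieve, and execute. A compact Python-style rendering of that loop is sketched below; every callable is a placeholder for a STEVE component rather than a real API.

# Python-style rendering of Algorithm 1 (the STEVE method). All callables are
# placeholders for STEVE components (vision encoder, tokenizer, STEVE-13B
# planner, decomposer, query encoder, skill retrieval, executor).
def steve_step(image, agent_state, task,
               vision_encode, tokenize, plan, decompose,
               encode_query, retrieve_skill, execute):
    o_v = vision_encode(image)             # O_V <- V(I): visual tokens
    o_s = tokenize(agent_state)            # O_S <- T(S): agent-state tokens
    o_t = tokenize(task)                   # O_T <- T(T): task tokens
    high_level_plan = plan(o_v, o_s, o_t)  # P_H <- S(O_V, O_S, O_T)
    action_codes = []
    for step in high_level_plan:
        executable = decompose(step)       # step_e <- Decompose(step)
        query = encode_query(executable)   # vector query for the skill database
        code = retrieve_skill(query)       # most similar code snippet
        action_codes.append(code)
        execute(code)                      # run through the game engine API
    return action_codes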
Zhonghan Zhao; Wenhao Chai; Xuan Wang; Li Boyi; Shengyu Hao; Shidong Cao; Tian Ye; Jenq-Neng Hwang; Gaoang Wang
[ { "authors": "Jean-Baptiste Alayrac; Jeff Donahue; Pauline Luc; Antoine Miech; Iain Barr; Yana Hasson; Karel Lenc; Arthur Mensch; Katherine Millican; Malcolm Reynolds", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b0", "title": "Flamingo: a visual language model for few-shot learning", "year": "2022" }, { "authors": "Artemij Amiranashvili; Nicolai Dorka; Wolfram Burgard; Vladlen Koltun; Thomas Brox", "journal": "", "ref_id": "b1", "title": "Scaling imitation learning in minecraft", "year": "2020" }, { "authors": "Bowen Baker; Ilge Akkaya; Peter Zhokhov; Joost Huizinga; Jie Tang; Adrien Ecoffet; Brandon Houghton; Raul Sampedro; Jeff Clune", "journal": "", "ref_id": "b2", "title": "Video pretraining (vpt): Learning to act by watching unlabeled online videos", "year": "2022" }, { "authors": "Bowen Baker; Ilge Akkaya; Peter Zhokov; Joost Huizinga; Jie Tang; Adrien Ecoffet; Brandon Houghton; Raul Sampedro; Jeff Clune", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b3", "title": "Video pretraining (vpt): Learning to act by watching unlabeled online videos", "year": "2022" }, { "authors": "Wenhao Chai; Gaoang Wang", "journal": "Applied Sciences", "ref_id": "b4", "title": "Deep vision multimodal learning: Methodology, benchmark, and trend", "year": "2022" }, { "authors": "", "journal": "Introducing chatgpt", "ref_id": "b5", "title": "", "year": "2022" }, { "authors": "Shizhe Chen; Pierre-Louis Guhur; Cordelia Schmid; Ivan Laptev", "journal": "Advances in neural information processing systems", "ref_id": "b6", "title": "History aware multimodal transformer for vision-and-language navigation", "year": "2021" }, { "authors": "Tao Chen; Saurabh Gupta; Abhinav Gupta", "journal": "", "ref_id": "b7", "title": "Learning exploration policies for navigation", "year": "2018" }, { "authors": "", "journal": "Talk to claude", "ref_id": "b8", "title": "", "year": "2023" }, { "authors": "Wenliang Dai; Junnan Li; Dongxu Li; Anthony Meng; Huat Tiong; Junqi Zhao; Weisheng Wang; Boyang Li; Pascale Fung; Steven Hoi", "journal": "", "ref_id": "b9", "title": "Instructblip: Towards generalpurpose vision-language models with instruction tuning", "year": "2023" }, { "authors": "Abhishek Das; Samyak Datta; Georgia Gkioxari; Stefan Lee; Devi Parikh; Dhruv Batra", "journal": "", "ref_id": "b10", "title": "Embodied question answering", "year": "2018" }, { "authors": "Samyak Datta; Sameer Dharur; Vincent Cartillier; Ruta Desai; Mukul Khanna; Dhruv Batra; Devi Parikh", "journal": "", "ref_id": "b11", "title": "Episodic memory question answering", "year": "2022" }, { "authors": "Victoria Dean; Shubham Tulsiani; Abhinav Gupta", "journal": "Advances in neural information processing systems", "ref_id": "b12", "title": "See, hear, explore: Curiosity via audio-visual association", "year": "2020" }, { "authors": "Danny Driess; Fei Xia; S M Mehdi; Corey Sajjadi; Aakanksha Lynch; Brian Chowdhery; Ayzaan Ichter; Jonathan Wahid; Quan Tompson; Tianhe Vuong; Yu", "journal": "", "ref_id": "b13", "title": "Palme: An embodied multimodal language model", "year": "2023" }, { "authors": "Heming Du; Xin Yu; Liang Zheng", "journal": "", "ref_id": "b14", "title": "Vtnet: Visual transformer network for object goal navigation", "year": "2020" }, { "authors": "Linxi Fan; Guanzhi Wang; Yunfan Jiang; Ajay Mandlekar; Yuncong Yang; Haoyi Zhu; Andrew Tang; De-An; Yuke Huang; Anima Zhu; Anandkumar", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b15", "title": "Minedojo: 
Building open-ended embodied agents with internet-scale knowledge", "year": "2022" }, { "authors": "Peng Gao; Jiaming Han; Renrui Zhang; Ziyi Lin; Shijie Geng; Aojun Zhou; Wei Zhang; Pan Lu; Conghui He; Xiangyu Yue", "journal": "", "ref_id": "b16", "title": "Llama-adapter v2: Parameter-efficient visual instruction model", "year": "2023" }, { "authors": "Tao Gong; Chengqi Lyu; Shilong Zhang; Yudong Wang; Miao Zheng; Qian Zhao; Kuikun Liu; Wenwei Zhang; Ping Luo; Kai Chen", "journal": "", "ref_id": "b17", "title": "Multimodal-gpt: A vision and language model for dialogue with humans", "year": "2023" }, { "authors": "Brandon William H Guss; Nicholay Houghton; Phillip Topin; Cayden Wang; Manuela Codel; Ruslan Veloso; Salakhutdinov", "journal": "", "ref_id": "b18", "title": "Minerl: A large-scale dataset of minecraft demonstrations", "year": "2019" }, { "authors": "Danijar Hafner; Jurgis Pasukonis; Jimmy Ba; Timothy Lillicrap", "journal": "", "ref_id": "b19", "title": "Mastering diverse domains through world models", "year": "2023" }, { "authors": "Katja Hofmann", "journal": "", "ref_id": "b20", "title": "Minecraft as ai playground and laboratory", "year": "2019" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b21", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Wenlong Huang; Fei Xia; Ted Xiao; Harris Chan; Jacky Liang; Pete Florence; Andy Zeng; Jonathan Tompson; Igor Mordatch; Yevgen Chebotar", "journal": "", "ref_id": "b22", "title": "Inner monologue: Embodied reasoning through planning with language models", "year": "2022" }, { "authors": "Matthew Johnson; Katja Hofmann; Tim Hutton; David Bignell", "journal": "", "ref_id": "b23", "title": "The malmo platform for artificial intelligence experimentation", "year": "2016" }, { "authors": "Eric Kolve; Roozbeh Mottaghi; Winson Han; Eli Vanderbilt; Luca Weihs; Alvaro Herrasti; Matt Deitke; Kiana Ehsani; Daniel Gordon; Yuke Zhu", "journal": "", "ref_id": "b24", "title": "Ai2-thor: An interactive 3d environment for visual ai", "year": "2017" }, { "authors": "Obin Kwon; Jeongho Park; Songhwai Oh", "journal": "", "ref_id": "b25", "title": "Renderable neural radiance map for visual navigation", "year": "2023" }, { "authors": "Bo Li; Yuanhan Zhang; Liangyu Chen; Jinghao Wang; Jingkang Yang; Ziwei Liu", "journal": "", "ref_id": "b26", "title": "Otter: A multi-modal model with in-context instruction tuning", "year": "2023" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b27", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "PMLR", "ref_id": "b28", "title": "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation", "year": "2022" }, { "authors": "Yanyu Li; Geng Yuan; Yang Wen; Ju Hu; Georgios Evangelidis; Sergey Tulyakov; Yanzhi Wang; Jian Ren", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b29", "title": "Efficientformer: Vision transformers at mobilenet speed", "year": "2022" }, { "authors": "Shalev Lifshitz; Keiran Paster; Harris Chan; Jimmy Ba; Sheila Mcilraith", "journal": "", "ref_id": "b30", "title": "Steve-1: A generative model for text-tobehavior in minecraft", "year": "" }, { "authors": "Zichuan Lin; Junyou Li; Jianing 
Shi; Deheng Ye; Qiang Fu; Wei Yang", "journal": "", "ref_id": "b31", "title": "Juewu-mc: Playing minecraft with sampleefficient hierarchical reinforcement learning", "year": "2021" }, { "authors": "Haotian Liu; Chunyuan Li; Qingyang Wu; Yong Jae Lee", "journal": "", "ref_id": "b32", "title": "Visual instruction tuning", "year": "2023" }, { "authors": "Shuang Liu; Takayuki Okatani", "journal": "IEEE", "ref_id": "b33", "title": "Symmetry-aware neural architecture for embodied visual exploration", "year": "2022" }, { "authors": "Pan Lu; Baolin Peng; Hao Cheng; Michel Galley; Kai-Wei Chang; Ying Nian Wu; Song-Chun Zhu; Jianfeng Gao", "journal": "", "ref_id": "b34", "title": "Chameleon: Plug-and-play compositional reasoning with large language models", "year": "2023" }, { "authors": "Wenhan Luo; Peng Sun; Fangwei Zhong; Wei Liu; Tong Zhang; Yizhou Wang", "journal": "PMLR", "ref_id": "b35", "title": "End-to-end active object tracking via reinforcement learning", "year": "2018" }, { "authors": "Wenhan Luo; Peng Sun; Fangwei Zhong; Wei Liu; Tong Zhang; Yizhou Wang", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b36", "title": "End-to-end active object tracking and its real-world deployment via reinforcement learning", "year": "2019" }, { "authors": "Chenyang Lyu; Minghao Wu; Longyue Wang; Xinting Huang; Bingshuai Liu; Zefeng Du; Shuming Shi; Zhaopeng Tu", "journal": "", "ref_id": "b37", "title": "Macaw-llm: Multi-modal language modeling with image, audio, video, and text integration", "year": "" }, { "authors": "Muhammad Maaz; Hanoona Rasheed; Salman Khan; Fahad Shahbaz Khan", "journal": "", "ref_id": "b38", "title": "Video-chatgpt: Towards detailed video understanding via large vision and language models", "year": "2023" }, { "authors": "Hangyu Mao; Chao Wang; Xiaotian Hao; Yihuan Mao; Yiming Lu; Chengjie Wu; Jianye Hao; Dong Li; Pingzhong Tang", "journal": "Springer", "ref_id": "b39", "title": "Seihai: A sample-efficient hierarchical ai for the minerl competition", "year": "2021" }, { "authors": "Abhinav Moudgil; Arjun Majumdar; Harsh Agrawal; Stefan Lee; Dhruv Batra", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b40", "title": "Soat: A scene-and object-aware transformer for vision-and-language navigation", "year": "2021" }, { "authors": "Kolby Nottingham; Prithviraj Ammanabrolu; Alane Suhr; Yejin Choi; Hanna Hajishirzi; Sameer Singh; Roy Fox", "journal": "ARXIV.ORG", "ref_id": "b41", "title": "Do embodied agents dream of pixelated sheep?: Embodied decision making using language guided world modelling", "year": "2023" }, { "authors": " Openai", "journal": "", "ref_id": "b42", "title": "", "year": "2023" }, { "authors": " Prismarinejs", "journal": "", "ref_id": "b43", "title": "Prismarinejs/mineflayer: Create minecraft bots with a powerful, stable, and high level javascript api", "year": "2013" }, { "authors": "Manolis Savva; Abhishek Kadian; Oleksandr Maksymets; Yili Zhao; Erik Wijmans; Bhavana Jain; Julian Straub; Jia Liu; Vladlen Koltun; Jitendra Malik", "journal": "", "ref_id": "b44", "title": "Habitat: A platform for embodied ai research", "year": "2019" }, { "authors": "Timo Schick; Jane Dwivedi-Yu; Roberto Dessì; Roberta Raileanu; Maria Lomeli; Luke Zettlemoyer; Nicola Cancedda; Thomas Scialom", "journal": "", "ref_id": "b45", "title": "Toolformer: Language models can teach themselves to use tools", "year": "2023" }, { "authors": "Yongliang Shen; Kaitao Song; Xu Tan; Dongsheng Li; Weiming Lu; Yueting Zhuang", 
"journal": "", "ref_id": "b46", "title": "Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface", "year": "2023" }, { "authors": "", "journal": "Significant-Gravitas", "ref_id": "b47", "title": "Auto-gpt", "year": "2023" }, { "authors": "Alexey Skrynnik; Aleksey Staroverov; Ermek Aitygulov; Kirill Aksenov; Vasilii Davydov; Aleksandr I Panov", "journal": "Cognitive Systems Research", "ref_id": "b48", "title": "Hierarchical deep q-network from imperfect demonstrations in minecraft", "year": "2021" }, { "authors": "Hee Chan; Jiaman Song; Clayton Wu; Brian M Washington; Wei-Lun Sadler; Yu Chao; Su", "journal": "", "ref_id": "b49", "title": "Llm-planner: Few-shot grounded planning for embodied agents with large language models", "year": "2022" }, { "authors": "Enxin Song; Wenhao Chai; Guanhong Wang; Yucheng Zhang; Haoyang Zhou; Feiyang Wu; Xun Guo; Tian Ye; Yan Lu; Jenq-Neng Hwang", "journal": "", "ref_id": "b50", "title": "Moviechat: From dense token to sparse memory for long video understanding", "year": "2023" }, { "authors": "Yixuan Su; Tian Lan; Huayang Li; Jialu Xu; Yan Wang; Deng Cai", "journal": "", "ref_id": "b51", "title": "Pandagpt: One model to instruction-follow them all", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b52", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Guanzhi Wang; Yuqi Xie; Yunfan Jiang; Ajay Mandlekar; Chaowei Xiao; Yuke Zhu; Linxi Fan; Anima Anandkumar", "journal": "", "ref_id": "b53", "title": "Voyager: An open-ended embodied agent with large language models", "year": "2023" }, { "authors": "Wenhai Wang; Zhe Chen; Xiaokang Chen; Jiannan Wu; Xizhou Zhu; Gang Zeng; Ping Luo; Tong Lu; Jie Zhou; Yu Qiao", "journal": "", "ref_id": "b54", "title": "Visionllm: Large language model is also an open-ended decoder for vision-centric tasks", "year": "2023" }, { "authors": "Zihao Wang; Shaofei Cai; Anji Liu; Xiaojian Ma; Yitao Liang", "journal": "", "ref_id": "b55", "title": "Describe, explain, plan and select: Interactive planning with large language models enables open-world multitask agents", "year": "2023" }, { "authors": "Turner Whitted", "journal": "", "ref_id": "b56", "title": "An improved illumination model for shaded display", "year": "2005" }, { "authors": "Erik Wijmans; Abhishek Kadian; Ari Morcos; Stefan Lee; Irfan Essa; Devi Parikh; Manolis Savva; Dhruv Batra", "journal": "", "ref_id": "b57", "title": "Ddppo: Learning near-perfect pointgoal navigators from 2.5 billion frames", "year": "2019" }, { "authors": "Chenfei Wu; Shengming Yin; Weizhen Qi; Xiaodong Wang; Zecheng Tang; Nan Duan", "journal": "", "ref_id": "b58", "title": "Visual chatgpt: Talking, drawing and editing with visual foundation models", "year": "2023" }, { "authors": "Yaqi Xie; Chen Yu; Tongyao Zhu; Jinbin Bai; Ze Gong; Harold Soh", "journal": "", "ref_id": "b59", "title": "Translating natural language to planning goals with large-language models", "year": "2023" }, { "authors": "Qinghao Ye; Haiyang Xu; Guohai Xu; Jiabo Ye; Ming Yan; Yiyang Zhou; Junyang Wang; Anwen Hu; Pengcheng Shi; Yaya Shi", "journal": "", "ref_id": "b60", "title": "mplug-owl: Modularization empowers large language models with multimodality", "year": "2023" }, { "authors": "Licheng Yu; Xinlei Chen; Georgia Gkioxari; Mohit Bansal; Tamara L Berg; Dhruv Batra", "journal": "", "ref_id": 
"b61", "title": "Multi-target embodied question answering", "year": "2019" }, { "authors": "Yinfeng Yu; Wenbing Huang; Fuchun Sun; Changan Chen; Yikai Wang; Xiaohong Liu", "journal": "", "ref_id": "b62", "title": "Sound adversarial audiovisual navigation", "year": "2021" }, { "authors": "Haoqi Yuan; Chi Zhang; Hongcheng Wang; Feiyang Xie; Penglin Cai; Hao Dong; Zongqing Lu", "journal": "", "ref_id": "b63", "title": "Plan4mc: Skill reinforcement learning and planning for open-world minecraft tasks", "year": "2023" }, { "authors": "Fangwei Zhong; Peng Sun; Wenhan Luo; Tingyun Yan; Yizhou Wang", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b64", "title": "Ad-vat+: An asymmetric dueling mechanism for learning and understanding visual active tracking", "year": "2019" }, { "authors": "Fangwei Zhong; Peng Sun; Wenhan Luo; Tingyun Yan; Yizhou Wang", "journal": "PMLR", "ref_id": "b65", "title": "Towards distraction-robust active visual tracking", "year": "2021" }, { "authors": "Deyao Zhu; Jun Chen; Xiaoqian Shen; Xiang Li; Mohamed Elhoseiny", "journal": "", "ref_id": "b66", "title": "Minigpt-4: Enhancing vision-language understanding with advanced large language models", "year": "2023" }, { "authors": "Xizhou Zhu; Yuntao Chen; Chenxin Hao Tian; Weijie Tao; Chenyu Su; Gao Yang; Bin Huang; Lewei Li; Xiaogang Lu; Wang", "journal": "", "ref_id": "b67", "title": "Ghost in the minecraft: Generally capable agents for open-world enviroments via large language models with text-based knowledge and memory", "year": "2023" } ]
[ { "formula_coordinates": [ 6, 55.69, 86.4, 225.1, 21.73 ], "formula_id": "formula_0", "formula_text": "± 72 ( 3 /3) 94 ± 72 ( 3 /3) 135 ± 103 ( 3 /3) N/A ( 0 /3) Voyager [54] 6 ± 2 ( 3 /3) 11 ± 2 ( 3 /3) 21 ± 7 ( 3 /3) 102 ( 1 /3) STEVE 4 ± 1 ( 3 /3) 8 ± 2 ( 3 /3) 16 ± 4 ( 3 /3) 131 ± 27 ( 3 /3)" }, { "formula_coordinates": [ 7, 314.23, 223.99, 225.52, 13.36 ], "formula_id": "formula_1", "formula_text": "( 3 /3) 10 ± 1 ( 3 /3) 14 ± 3 ( 3 /3) 89 ± 9 ( 3 /3) STEVE (Ours) 4 ± 1 ( 3 /3) 8 ± 2 ( 3 /3) 16 ± 4 ( 3 /3) 131 ± 27 ( 3 /3)" }, { "formula_coordinates": [ 13, 339.9, 535.68, 172.48, 29.23 ], "formula_id": "formula_2", "formula_text": "r xy = n i=1 (x i -x)(y i -ȳ) n i=1 (x i -x) 2 n i=1 (y i -ȳ) 2" } ]
10.18653/v1/N19-4010
2023-11-26
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b38", "b7", "b23", "b15", "b36", "b3", "b16", "b27" ], "table_ref": [], "text": "Once upon a time, syntactic structures were deemed essential in natural language processing (NLP). Modeling and inference about syntactic structures was an indispensable component in many NLP systems. That has all changed since the deep learning revolution started a decade ago. Modern NLP predominantly employs various neural models, most of which do not consider syntactic structures in their design.\nOne type of neural models that are particularly successful is transformers (Vaswani et al., 2017). Given an input text, a transformer produces a vector representation for each word that captures the meaning as well as other properties of the word in its context. Such contextual word representations can then be served into downstream neural networks for solving various NLP tasks. The power of transformers in producing high-quality contextual word representations is further unleashed with large-scale pretraining (Devlin et al., 2019;Liu et al., 2020). Nowadays, a vast majority of NLP models and systems are built on top of contextual word representations produced by some variants of pretrained transformers.\nLike most other neural models, transformers were developed based on human insight and trial and error, without explicit design for incorporating syntactic structures. Nevertheless, there is evidence that contextual word representations produced by pretrained transformers encode certain syntactic structures (Hewitt and Manning, 2019;Tenney et al., 2019) and attention heads in pretrained transformers may reflect syntactic dependencies (Clark et al., 2019;Htut et al., 2019;Ravishankar et al., 2021). Because of the heuristic nature of the transformer model design, exactly how transformers acquire such syntactic capability remains unclear.\nIn this paper, we propose probabilistic transformers, a very different approach to deriving contextual word representations that is based on classic nonneural probabilistic modeling with innate syntactic components. Specifically, we design a conditional random field that models discrete latent representations of all words as well as a syntactic dependency structure of the input sentence, and we define a potential function which evaluates the compatibility of the latent representations of any pair of words connected by a dependency arc. We use mean field variational inference for approximate inference, producing a marginal distribution for each latent word representation, the probability vector of which can then be used as a contextual vector representation of the word.\nWhile we propose our model from a purely syntactic and probabilistic perspective that is unrelated to transformers, we show that there is a striking resemblance between the computation graph of the inference procedure of our model and that of a transformer, with our intermediate distributions over dependency heads corresponding to self-attention scores and our intermediate distributions over latent word representations corresponding to intermediate word embeddings in a transformer. In short, we start with a probabilistic syntactic model but reach the transformer! We empirically compare our model with transformers when trained with either masked language modeling or downstream tasks. 
Our experimental results show that our model performs competitively to transformers on small to medium sized datasets.\nWe hope that probabilistic transformers, instead of being a replacement of transformers, could benefit the analysis of the syntactic capability of transformers and at the same time inspire novel extensions of transformers. Furthermore, we hope our work would promote future research of neural models that are linguistically more principled, theoretically more well-founded, and empirically no less powerful than existing models." }, { "figure_ref": [ "fig_0" ], "heading": "Probabilistic Transformers", "publication_ref": [], "table_ref": [], "text": "We will first introduce the basic model, a conditional random field (CRF) as illustrated in Figure 1, then show the inference procedure, and finally introduce some variants to the basic model." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "The CRF Model", "publication_ref": [ "b20" ], "table_ref": [], "text": "Given a sentence (a sequence of words), denote n as the sequence length. For the i-th word, we define Z i as a discrete latent label that represents the syntactic (and possibly semantic) property of the word in the sentence (i.e., it is a contextual representation) with a label set of size d. Such a discrete representation deviates from the common practice of representing a word with a continuous vector, but it is sufficient at least for syntactic processing (Kitaev et al., 2022) and it greatly simplifies our probabilistic model. For the i-th word, we also define H i ∈ {1, 2, • • • , n} representing the syntactic dependency head of the word. So the set of variables {H i } n i=1 specifies a dependency structure. We may also allow H i to point to a dummy root node, which will be discussed in Section 2.3.5. We follow the head-selection paradigm of dependency parsing and do not enforce the tree constraint, which again simplifies our model design.\nNext, we define two types of potential functions. For the i-th word w i , we define a unary potential function (corresponding to the unary factors in Figure 1) evaluating the compatibility of the word and its label Z i :\nϕ u (Z i ) = exp (S w i ,Z i ) (1)\nwhere S ∈ R |V|×d is a score matrix, |V| is the size of the vocabulary. For simplicity, we do not exploit any morphological or contextual features for computing the scores. For every pair of words w i and w j (i ̸ = j), we define a ternary potential function (corresponding to the ternary factors in Figure 1) over Z i , Z j and H i , which evaluates the compatibility between the labels of the two words if w j is the dependency head of w i :\nϕ t (H i , Z i , Z j ) = exp T Z i ,Z j H i = j 1 otherwise (2)\nwhere T ∈ R d×d is a score matrix.\nInspired by the multi-head structure in transformers, we allow multiple dependency structures for the same sentence, which may represent different flavors of dependencies. Each dependency structure resides in a different channel with its own dependency head variables and ternary potential functions. For the c-th channel, we denote the set of dependency head variables by {H (c) i } n i=1 and the score matrix of the ternary potential function by T (c) . Let h denote the total number of channels. We may stack all the score matrices T (c) for c = 1, • • • , h to form a score tensor T ∈ R d×d×h . Note that all the channels share the same set of latent label variables {Z i } n i=1 ." 
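As a concrete reading of Equations 1 and 2, the model is fully specified by the word-label score matrix S and the per-channel ternary score tensor T. A minimal NumPy sketch of the two potentials in log space, with hypothetical sizes for |V|, d and h and not the implementation used in the experiments, is:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, h = 10000, 64, 4           # hypothetical vocabulary size, label-set size, number of channels

S = rng.normal(size=(V, d))      # unary scores S in R^{|V| x d} (Equation 1)
T = rng.normal(size=(h, d, d))   # ternary scores, one d x d matrix per channel (Equation 2)

def log_phi_u(word_id, z):
    """Log unary potential log phi_u(Z_i = z) for word w_i = word_id."""
    return S[word_id, z]

def log_phi_t(c, head, z_i, z_j, j):
    """Log ternary potential in channel c: T^(c)[Z_i, Z_j] if H_i^(c) = j, else 0 (potential 1)."""
    return T[c, z_i, z_j] if head == j else 0.0
```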
}, { "figure_ref": [], "heading": "Inference", "publication_ref": [ "b39" ], "table_ref": [], "text": "Following Wang and Tu (2020) \nF (t) ic (j) = a b Q (t) i (a)Q (t) j (b)T (c) a,b(3)\nG (t) i (a) = c j̸ =i b Q (t) ic (j)Q (t) j (b)T (c) a,b +Q (t) jc (i)Q (t) j (b)T (c) b,a(4)\nwhere\nQ (t) i (a) ∝ exp S w i ,a + G (t-1) i (a)(5)\nQ (t) ic (j) ∝ exp F (t-1) ic (j)(6)\nare the approximate marginal distributions at time step t, with\nQ (t) i (•) over Z i and Q (t) ic (•) over H (c)\ni . We initialize these distributions by\nQ (0) i (a) ∝ exp (S w i ,a )(7)\nQ (0) ic (j) ∝ 1 (8)\nAfter a fixed number of T > 0 iterations, we obtain the final posterior marginal distribution\nQ (T ) i (Z i ) for i = 1, • • • , n.\nResulted from interactions with all the words of the sentence, the distribution Q (T ) i (Z i ) incorporates information of not only the i-th word, but also its context. Therefore, we can treat the probability vector of this distribution as a contextual vector representation for the i-th word. In practice, we find that using unnormalized scores in log space as contextual word representations produces better results, i.e., we skip exponentiation and normalization when computing Q (T ) i (Z i ) using Equation 5during the final iteration.\nSince all the computation during MFVI is fully differentiable, we can regard the corresponding computation graph as a recurrent or graph neural network parameterized with score matrix S and tensor T. We can use the contextual word representations for downstream tasks by connecting the network to any downstream task-specific network, and we can update the model parameters using any task-specific learning objective through gradient descent. This is exactly the same as how transformers are used." }, { "figure_ref": [], "heading": "Extensions and Variants", "publication_ref": [], "table_ref": [], "text": "We introduce a few extensions and variants to the basic model that are empirically beneficial. Additional variants are discussed in Appendix B." }, { "figure_ref": [], "heading": "Distance", "publication_ref": [], "table_ref": [], "text": "Similar to the case of transformers, our probabilistic model is insensitive to the word order of the input sentence. In order to capture the order information, we apply relative positional encoding to our model by using distance-sensitive ternary potential functions. Specifically, we use different ternary scores for different distances between words denoted by the two Z variables of the potential function. The ternary potential function in Equation 2 becomes:\nϕ t (H (c) i , Z i , Z j ) = exp T[f (i -j)] (c) Z i ,Z j H (c) i = j 1 otherwise (9)\nwhere f is a clip function with threshold γ:\nf (x) =        0 x < -γ x + γ + 1 -γ ≤ x < 0 x + γ 0 < x ≤ γ 2γ + 1 x > γ(10)\nNotice that x cannot be zero since the head of a word cannot be itself. We set γ = 3 by default." }, { "figure_ref": [], "heading": "Asynchronous Update", "publication_ref": [], "table_ref": [], "text": "During inference of the basic model, we iteratively update all variables in a synchronous manner. This can be problematic. Consider the first iteration.\nThe messages passed to Z variables from H variables do not contain meaningful information because the initial distributions over H are uniform.\nConsequently, after one iteration, distributions over all Z variables become almost identical.\nTo fix this problem, we use the asynchronous update strategy by default in this work. 
For each iteration, we first update distributions over H variables, and then update distributions over Z variables based on the updated distributions over H variables. Formally, we rewrite Formula 6 as\nQ (t) ic (j) ∝ exp F (t) ic (j)\nand eliminate Formula 8 because distributions over H variables no longer need initialization." }, { "figure_ref": [], "heading": "Message Weight", "publication_ref": [], "table_ref": [], "text": "During inference, H variables have much fewer message sources than Z variables. This often pushes H variables towards being uniformly distributed. To balance the magnitude of the messages, we follow the Entropic Frank-Wolfe algorithm (Lê-Huu and Alahari, 2021), a generalization of MFVI, and introduce weight λ Z > 0 and λ H > 0 to Equation 5 and 6:\nQ (t) i (a) ∝ exp 1 λ Z S w i ,a + G (t-1) i (a)(11)\nQ (t) ic (j) ∝ exp 1 λ H F (t-1) ic (j)(12)\nWe set λ Z = 1 and λ H = 1 d by default2 ." }, { "figure_ref": [], "heading": "Tensor Decomposition", "publication_ref": [], "table_ref": [], "text": "Ternary score T is a tensor of shape d × d × h. Since d is usually set to several hundred, such a tensor leads to a huge number of parameters. To reduce the number of parameters, we apply the Kruskal form (which is closely related to tensor rank decomposition) to build the ternary score from smaller tensors.\nT (c) a,b = r l=1 U a,l • V b,l • W c,l(13)\nwhere U, V ∈ R d×r and W ∈ R h×r . Since the number of channels h is relatively small, we may also choose only to decompose the first two dimensions.\nT (c) a,b = r l=1 U a,c,l • V b,c,l(14)\nwhere U, V ∈ R d×h×r ." }, { "figure_ref": [], "heading": "Root Node", "publication_ref": [], "table_ref": [], "text": "Dependency parsing assumes a dummy root node, which we may add to the CRF model. The root node is not associated with any word and instead can be seen as representing the entire sentence. Therefore, we assume that it has a different (and possibly larger) label set from words and hence requires a different ternary potential function. Specifically, we define Z ROOT as a discrete latent label of the root node with a label set of size\nd root . For i ∈ {1, 2, • • • , n}, c ∈ {1, 2, • • • , h}, we add a ternary potential function over Z i , H(c)\ni and Z ROOT :\nϕ t (H (c) i , Z i , Z ROOT ) = exp T ′ (c) Z i ,Z ROOT H (c) i = ROOT 1 otherwise\nwhere T ′ ∈ R d×droot×h is the root score tensor.\nDuring inference, we initialize Q (0) (Z ROOT ) with a uniform distribution. After inference, we can regard the posterior marginal distribution of Z ROOT as a sentence representation." }, { "figure_ref": [], "heading": "Comparison with Transformers", "publication_ref": [], "table_ref": [], "text": "Although our probabilistic transformers are derived as a probabilistic model of dependency structures over latent word labels, we find that its computational process has lots of similarities to that of transformers. Below, we first re-formulate a probabilistic transformer in a tensor form to facilitate its comparison with a transformer, and then discuss the similarities between the two models at three levels." }, { "figure_ref": [], "heading": "Probabilistic Transformers in Tensor Form", "publication_ref": [ "b32" ], "table_ref": [], "text": "Consider a probabilistic transformer using a distance-insensitive ternary potential function without a dummy root node. We tensorize the update formulas in the inference process of probabilistic transformers. 
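The Kruskal-form parameterization of Section 2.3.4 is itself a one-line einsum. The snippet below is a minimal illustration of Equations 13 and 14 with hypothetical sizes, included only to show how the factorizations reduce the parameter count:

```python
import numpy as np

d, h, r = 256, 4, 32   # hypothetical sizes: labels, channels, decomposition rank
rng = np.random.default_rng(0)

# Equation 13: full Kruskal form with U, V in R^{d x r} and W in R^{h x r}
U, V, W = rng.normal(size=(d, r)), rng.normal(size=(d, r)), rng.normal(size=(h, r))
T_kruskal = np.einsum('al,bl,cl->cab', U, V, W)          # shape (h, d, d)

# Equation 14: decompose only the two label dimensions, U, V in R^{d x h x r}
U2, V2 = rng.normal(size=(d, h, r)), rng.normal(size=(d, h, r))
T_per_channel = np.einsum('acl,bcl->cab', U2, V2)        # shape (h, d, d)

# parameter counts: full tensor vs. the two factorizations
print(d * d * h, (2 * d + h) * r, 2 * d * h * r)
```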
Suppose\nQ (t)\nz ∈ R n×d is a tensor that represents the posterior distributions of all the Z variables, and\nQ (t)\nh,c ∈ R n×n is a tensor that represents the posterior distributions of all the H variables in channel c (with a zero diagonal to rule out self-heading). We can rewrite Equation 3 and 4 as\nF (t) c = Q (t) z T (c) Q (t)T z (15) G (t) = c Q (t) h,c Q (t) z T (c)T + Q (t)T h,c Q (t) z T (c)(16)\nwhere\nQ (t) z = σ(S + G (t-1) ) (17) Q (t) h,c = σ(F (t-1) c ) (18\n)\nand σ is the softmax function. We still set λ Z to its default value 1 but regard λ H as a hyperparameter.\nWith asynchronous update, Equation 18 becomes:\nQ (t) h,c = σ F (t) c λ H(19)\nWe assume that T (c) is symmetric for c = 1, • • • , h. This is the only assumption that we make in this section beyond the original definition from the previous section. Symmetric score matrices indicate that the ternary factors are insensitive to the head-child order, which is related to undirected dependency parsing (Sleator and Temperley, 1993)\n. If T (c) is symmetric, then Q (t)\nh,c is also symmetric based on Formula 15 and 19. Thus, we can simplify Equation 16to\nG (t) = 2 c Q (t) h,c Q (t) z T (c)T(20)\nSuppose we decompose the ternary score tensor into two tensors U, V ∈ R d×h×r according to Equation 14, which can be rewritten as:\nT (c) = U (c) V (c)T (21)\nwhere U (c) , V (c) ∈ R d×r are the c-th channel of tensor U and V respectively. Substitute 21 into 15 and 20, we have\nF (t) c = Q (t) z U (c) V (c)T Q (t)T z (22) G (t) = 2 c Q (t) h,c Q (t) z V (c) U (c)T (23)\nWe define\nQ c = Q (t-1) z U (c)(24)\nK c = V c = Q (t-1) z V (c)(25)\nFor time step t -1, we could rewrite Formula 22 and 23 as\nF (t-1) c = Q c K T c (26) G (t-1) = 2 c Q (t-1) h,c V c U (c)T (27)\nApply Equation 27, 19, 26 to 17, we have\nQ (t) z = σ(S + 2 c channel c U (c)T )(28)\nwhere\nchannel c = σ Q c K T c λ H V c (29)\nWe call the computation of channel c a singlechannel update for channel c. Now we have a tensorized formulation of the computation in probabilistic transformers and we are ready for its comparison with transformers at three different levels." }, { "figure_ref": [], "heading": "Single-Channel Update vs. Scaled Dot-Product Attention", "publication_ref": [], "table_ref": [], "text": "Scaled dot-product attention in transformers is formulated as:\nAttention(Q, K, V ) = σ QK T √ d k V\nAs we can see, our single-channel update in Equation 29 is almost identical to scaled dot-product attention in transformers. The only difference is that the diagonal of the tensor Q c K T c is zero in our model because the head of a word cannot be itself." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Multi-Channel Update vs. Multi-Head Attention", "publication_ref": [ "b11" ], "table_ref": [], "text": "Multi-head attention in transformers is formulated as:\nMultiHead(Q, K, V ) = Concat (head 1 , . . . , head h ) W O\nwhere update formula (the second term within the softmax function in Equation 28) is similar to the multi-head attention in transformers, as shown in Figure 2. The main difference is that probabilistic transformers use the same parameters for W K and W V (both are V, shown in green color in Figure 2b) and for W Q and W O (both are U, shown in orange color in Figure 2b).\nhead i = Attention QW Q i , KW K i , V W V i It is equivalent to MultiHead(Q, K, V ) = i head i (W O i ) T where W O ≡ Concat(W O 1 , . . . , W O h ) and W Q i , W K i , W V i , W O i ∈ R d×r . 
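To make the correspondence concrete, the following is a small NumPy sketch of the single-channel update in Equation 29 and the full iteration of Equation 28; it assumes the symmetric, decomposed ternary scores of Equation 21, and masking the diagonal is the only departure from standard scaled dot-product attention. It is an illustration, not the code used in the experiments:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def single_channel_update(Qz, U_c, V_c, lam_H):
    """Equation 29; identical to scaled dot-product attention except for the masked diagonal."""
    Qc, Kc = Qz @ U_c, Qz @ V_c
    Vc = Kc                                   # key and value share the projection V^(c)
    scores = Qc @ Kc.T / lam_H
    np.fill_diagonal(scores, -np.inf)         # a word may not head itself
    return softmax(scores) @ Vc

def tensorised_iteration(S_sent, U, V, lam_H, Qz):
    """One iteration of Equation 28: Qz <- softmax(S + 2 * sum_c channel_c U^(c)^T)."""
    update = sum(2.0 * single_channel_update(Qz, U[c], V[c], lam_H) @ U[c].T
                 for c in range(U.shape[0]))
    return softmax(S_sent + update)

# toy shapes: n words, d labels, h channels, rank r
n, d, h, r = 5, 16, 4, 8
rng = np.random.default_rng(0)
S_sent, U, V = rng.normal(size=(n, d)), rng.normal(size=(h, d, r)), rng.normal(size=(h, d, r))
print(tensorised_iteration(S_sent, U, V, lam_H=1.0 / d, Qz=softmax(S_sent)).shape)   # (5, 16)
```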
Our multi-channel\nRecall that U and V are obtained from matrix decomposition (Equation 14). Therefore, the correspondence between U, V and W Q , W K , W O , W V in transformers suggests that the latter can also be seen as derived from tensor decomposition. Previous work on transformers has the same findings (Elhage et al., 2021)." }, { "figure_ref": [ "fig_2", "fig_2", "fig_2" ], "heading": "Full Model Comparison", "publication_ref": [ "b41", "b6", "b21" ], "table_ref": [], "text": "Figure 3 compares the full computation graphs of the two models, which have a similar overall structure that repeats a module recurrently until outputting contextual word representations. Within the module, we have also established the correspondence between multi-channel update and multihead attention. On the other hand, there are a few interesting differences.\nFirst, our model does not have a feed-forward structure as in a transformer. However, we do propose a variant of our model that contains global variables representing topics (Appendix B.3), which may have similar functionality to the feed-forward structure.\nSecond, our model does not have residual connections or layer norms. Instead, it adds the initial distributions (unary scores) to the updated message at each iteration. This may replace the functionality of residual connections and may even make more sense when the downstream task strongly depends on the original word information.\nThird, we have an additional softmax in each iteration. Note that we do softmax before the first iteration (Equation 7) and also at the end of each iteration (Equation 28), but bypass it in the last iteration when producing the output word representations, so our model could be equivalently formulated as doing softmax before each iteration, which we show in Figure 3c. Doing softmax in this way is similar to the layer norm in pre-LN transformers (Xiong et al., 2020) (Figure 3b).\nFinally, our model shares parameters in all iterations. This is similar to some variants of transformers that share parameters between layers, such as Universal Transformer (Dehghani et al., 2019) and ALBERT (Lan et al., 2019).\nOne consequence of these differences is that probabilistic transformers have much fewer parameters than transformers with the same number of layers, heads and embedding dimensions, because of shared parameters between iterations, absence of a feed-forward structure, and tied parameter matrices in multi-channel updates." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We empirically compare probabilistic transformers with transformers on three tasks: masked language modeling, sequence labeling, and text classification. For each task, we use two different datasets. We also perform a syntactic test to evaluate the compositional generalization ability of our model. " }, { "figure_ref": [], "heading": "Tasks and Datasets", "publication_ref": [ "b24", "b2", "b29", "b24", "b5", "b37", "b33", "b18", "b26" ], "table_ref": [], "text": "Here we briefly introduce our tasks and datasets. A detailed description is presented in Appendix D.\nMasked Language Modeling (MLM). We perform MLM tasks on two corpora: the Penn Tree-Bank (PTB) (Marcus et al., 1993) and Brown Laboratory for Linguistic Information Processing (BLLIP) (Charniak et al., 2000). Following Shen et al. (2022), we randomly replace words with a mask token <mask> at a rate of 30%. 
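The masking scheme itself is a few lines of code. The helper below is a hypothetical sketch of the 30% replacement described above; it ignores special tokens, and the loss is computed only at masked positions:

```python
import numpy as np

def mask_tokens(tokens, rate=0.3, mask_token='<mask>', seed=0):
    """Replace each token with <mask> at the given rate; targets record the originals to predict."""
    rng = np.random.default_rng(seed)
    masked, targets = [], []
    for tok in tokens:
        if rng.random() < rate:
            masked.append(mask_token)
            targets.append(tok)      # perplexity is measured on these positions only
        else:
            masked.append(tok)
            targets.append(None)
    return masked, targets

print(mask_tokens("the cat sat on the mat".split()))
```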
The performance of MLM is evaluated by measuring perplexity (lower is better) on masked words.\nWe project the final word representation of each mask token to the vocabulary. For transformers, we tie the projection parameters to the initial word embeddings. We find that this trick improves the performance of transformers.\nSequence Labeling. For sequence labeling tasks, we perform part-of-speech (POS) tagging on two datasets: the Penn TreeBank (PTB) (Marcus et al., 1993) and the Universal Dependencies (UD) (De Marneffe et al., 2021). We also perform named entity recognition (NER) on CoNLL-2003 (Tjong Kim Sang andDe Meulder, 2003).\nWe directly project the final word representation of each word to the target tag set. For POS tagging, we evaluate the results by the accuracy of wordlevel predictions. For NER, we evaluate the results by measuring the F1 score of named entities.\nText Classification. We use the Stanford Senti-ment Treebank (SST) (Socher et al., 2013) as the dataset. It has two variants: binary classification (SST-2) and fine-grained classification (SST-5).\nFor transformers, we add a <CLS> token at the front of the sentence and then project its representation to the tag set. For our model, we use the variant with a root node introduced in Section 2.3.5 and project the representation of the root node to the tag set.\nSyntactic Test. To evaluate the compositional generalization abilities of our model, we perform a syntactic test on the COGS dataset (Kim and Linzen, 2020). We follow the settings in Ontanón et al. (2021), who cast the task as a sequence labeling task.\nAs in sequence labeling, we project word representations to tag sets. If all words in a sentence are correctly predicted, the sentence prediction will be counted as correct. We evaluate the results by the sentence-level accuracy of the predictions." }, { "figure_ref": [], "heading": "Settings", "publication_ref": [ "b26", "b26" ], "table_ref": [], "text": "We tune transformers and our model separately for each task except the syntactic test. For the syntactic test, we find that both transformers and our model easily reach 100% accuracy on the validation set. This observation is consistent with Ontanón et al. (2021). Therefore, instead of tuning, we use the best-performed setting of transformers in Ontanón et al. (2021) for our experiments. The hyperparameters of our model are determined by their counter- parts of transformers based on the correspondence discussed in Section 3.\nFor our model, we integrate all the variants mentioned in Section 2.3 except the root node variant, which we only use for text classification tasks. We tune the tensor decomposition strategy on different tasks. For MLM tasks, we add a small L2 regularization term to the ternary scores in our model, which we experimentally find beneficial. We optimize both models using the Adam optimizer (Kingma and Ba, 2015) with β 1 = 0.9, β 2 = 0.999." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "We report the average and standard deviation results of 5 random runs in Table 1. It shows that our model has a competitive performance compared with transformers. In most tasks, probabilistic transformers perform competitively to transformers. It is worth noting that in these experiments, probabilistic transformers have much fewer parameters than transformers. 
For most tasks, the number of parameters of our best model is about one-fifth to one-half of that of the best transformer.\nWe also conduct case studies of the dependency structures inferred by our model after training on downstream tasks. Similar to the case of selfattentions in transformers, the inferred dependency structures are only partially consistent with human intuition. See Appendix F for details." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b34", "b40", "b0", "b30" ], "table_ref": [], "text": "There have been several studies trying to incorporate syntactic structures to transformers. Strubell et al. (2018) force one attention head to attend to predicted syntactic governors of input tokens. Wang et al. (2019); Ahmad et al. (2021) try to integrate constituency or dependency structures into transformers. Shen et al. (2021) propose a dependency-constrained self-attention mechanism to induce dependency and constituency structures. Our work deviates from all these previous studies in that we start from scratch with probabilistic modeling of word representations and dependencies, but obtain a model that is strikingly similar to transformers." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b9" ], "table_ref": [], "text": "It is worth noting that in this work, our primary goal is not to propose and promote a new model to compete with transformers. Instead, it is our hope that our work could benefit the analysis and extension of transformers, as well as inspire future research of transformer-style models that are linguistically more principled, theoretically more well-founded, and empirically no less powerful than existing models. In the long run, we aim to bridge the gap between traditional statistical NLP and modern neural NLP, so that valuable ideas, techniques and insights developed over the past three decades in statistical NLP could find their place in modern NLP research and engineering.\nThe datasets used in our experiments have small to medium sizes (around 10k to 60k training sentences). Our preliminary experiments with MLM on larger data show that our models significantly underperform transformers, which suggests that our model may not be as scalable as transformers. One possible cause is the absence of a feed-forward structure in our model. Recent researches show that the feed-forward layers might serve as an important part of transformers (Dong et al., 2021). Further research is needed to analyze this problem.\nOur model can be extended in a few directions. Instead of discrete labels, we may assume Z variables representing discrete vectors or even continuous vectors, which may lead to more complicated inference. We may model dependency labels by pairing every H variable with a dependency label variable. While we focus on contextual word representation (i.e., encoding) in this paper, we may extend our probabilistic model to include a decoder. Considering the similarity between our model and transformers, we speculate that some of these extensions may be used to inspire extensions of transformers as well." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We present probabilistic transformers, a type of syntactic-aware probabilistic models for contextual word representation. A probabilistic transformer acquires discrete latent representations of all words in the input sentence by modeling a syntactic dependency structure of the input sentence. 
We use MFVI for approximate inference and find a striking resemblance between the computation graph of the inference procedure of our model and that of a transformer. Our experimental results demonstrate that our model performs competitively to transformers on small to medium sized datasets." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Though we have found a tight connection between probabilistic transformers and transformers in Section 3, this does not mean that our model can be directly used to interpret or modify transformers. For instance, in Section 3.3, we find that W K and W V in transformers both correspond to U in probabilistic transformers. However, if we tie W K and W V in transformers, then we may observe a performance drop on some downstream tasks.\nThe performance of probabilistic transformers lags behind transformers on large datasets (>100k), which suggests that our model may not be as scalable as transformers. We have discussed this in Section 6.\nThe way of positional encoding for probabilistic transformers leads to slower training and inference speed. On masked language modeling tasks, our model is about 3 times slower than transformers with either absolute or relative positional encoding, though it has much fewer parameters than transformers." }, { "figure_ref": [], "heading": "A Extended Entropic Frank-Wolfe", "publication_ref": [], "table_ref": [], "text": "In Section 2.3.3, we add message weights to the update function of the posterior marginal distributions. It follows an extension of the Entropic Frank-Wolfe algorithm (Lê-Huu and Alahari, 2021), which is a generalization of MFVI. Below we briefly introduce the algorithm and our extension following most of the notations in their paper." }, { "figure_ref": [], "heading": "A.1 Entropic Frank-Wolfe", "publication_ref": [], "table_ref": [], "text": "Suppose we want to minimize a continuous differentiable energy function E(•). Vanilla Frank-Wolfe solves the problem min x∈X E(x) by starting from a feasible x (0) ∈ X at time step 0, and iterating the following steps:\np (t) ∈ argmin p∈X ∇E x (t) , p x (t+1) = x (t) + α t p (t) -x (t)\nwhere α t ∈ [0, 1] follows some stepsize scheme, X is the value range of x, and here we let x ∈ R n×d be the concatenation of the distributions over the label set of all variables in CRF.\nRegularized Frank-Wolfe (Lê-Huu and Alahari, 2021) adds a regularization term r(•) to the objective. It solves the new objective E(x) + r(x) by iterating\np (t) ∈ argmin p∈X ∇E x (t) , p + r(p) x (t+1) = x (t) + α t p (t) -x (t)\nIt has been proved that regularized Frank-Wolfe achieves a sublinear rate of convergence O(1/ √ t) for suitable stepsize schemes.\nEntropic Frank-Wolfe is a special case of regularized Frank-Wolfe, which sets the regularization term as an entropy function r(x) = -λH(x), where H(x) = -i∈V s∈S x is log x is , S is the label set of the variables, V is the set of indices of the variables. Entropy Frank-Wolfe has a closedform solution for the update process\np (t) = argmin p∈X ∇E x (t) , p -λH(p) = softmax - 1 λ ∇E x (t)\n∀t ≥ 0 (30) When λ = 1 and α t = 1, ∀t ≥ 0, it is the same as the mean field algorithm." 
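As a toy illustration of the closed-form step in Equation 30, with a simple linear energy standing in for the CRF energy (the function and variable names are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def entropic_frank_wolfe_step(x, grad_E, lam=1.0, alpha=1.0):
    """One entropic Frank-Wolfe iteration over per-variable simplexes.

    With lam = 1 and alpha = 1 this is exactly a mean-field update (Equation 30).
    """
    p = softmax(-grad_E(x) / lam)        # closed-form solution of the inner problem
    return x + alpha * (p - x)           # Frank-Wolfe step towards p

# toy linear energy E(x) = <c, x>, so the gradient is the constant c
c = np.array([[1.0, 0.0, -1.0]])
x = np.full((1, 3), 1.0 / 3.0)
for _ in range(5):
    x = entropic_frank_wolfe_step(x, lambda y: c, lam=0.5, alpha=0.5)
print(x)   # approaches softmax(-c / 0.5)
```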
}, { "figure_ref": [], "heading": "A.2 Extended Entropic Frank-Wolfe", "publication_ref": [], "table_ref": [], "text": "We extend the Entropic Frank-Wolfe algorithm by using a more general regularization term\nr(x) = - i∈V λ i H(x i )\n, where λ i > 0 is the regularization weight of the i-th variable and H(x i ) = -s∈S x is log x is is the entropy of x i over the probability simplex ∆ = x ∈ R d : x ≥ 0, 1 ⊤ x = 1 . It allows us to assign different regularization weights for different variables. We claim that the update function could be written as\np (t) = argmin p∈X ∇E x (t) , p -λ i H(p i ) = softmax (R) ∀t ≥ 0 (31) , where R ∈ R nd and R i = - 1 λ i ∇E x (t) i ∀i ∈ V\nThis extension is still a special case of the regularized Frank-Wolfe algorithm. As a result, it inherits all the convergence properties from the regularized Frank-Wolfe mentioned in the previous section. On the other hand, it is also an extension of MFVI, which allows adding a message weight to each variable during inference." }, { "figure_ref": [], "heading": "A.3 A Proof for Extended Entropic", "publication_ref": [], "table_ref": [], "text": "Frank-Wolfe\nWe give a simple proof to the close-form solution of extended Entropic Frank-Wolfe in Equation 31.\nSince the optimization could reduce to n independent subproblems over each i ∈ V, We only need to give the closed-form solution to each subproblem:\nLemma 1. For a given vector c ∈ R d , λ > 0, the optimal solution z * to\nmin z∈∆ ⟨c, z⟩ + λ d s=1 z s log z s is z * = softmax(-1 λ c), where ∆ is the probability simplex x ∈ R d : x ≥ 0, 1 ⊤ x = 1 .\nProof. We can rewrite the problem as\nmin z ⟨c, z⟩ + λ d s=1 z s log z s s.t. 1 ⊤ z = 1, -z ≤ 0,\nThe Lagrangian of the above problem is given by\nL(z, µ, ν) = ⟨c, z⟩ + λ d s=1 z s log z s + µ ⊤ (-z) + ν 1 ⊤ z -1 = -ν + d s=1 (c s z s + λz s log z s -µ s z s + νz s )\nwhere µ = (µ 1 , µ 2 , . . . , µ d ) ≥ 0 and ν ∈ R are the Lagrange multipliers.\nSince the given problem is convex and there exists z ∈ R d such that 1 ⊤ z = 1 and z > 0, the Slater's constraint qualification holds. Thus, it suffices to solve the following Karush-Kuhn-Tucker (KKT) system to obtain the optimal solution:\nc s + λ log z s + 1 -µ s + ν = 0 ∀1 ≤ s ≤ d, 1 ⊤ z = 1, z ≥ 0, µ ≥ 0, µ s z s = 0 ∀1 ≤ s ≤ d.\nThe first equation implies ∀1 ≤ s ≤ d, z s > 0, and thus in combination with the last, we obtain ∀1 ≤ s ≤ d, µ s = 0. Therefore, the first equation becomes\nc s + λ log z s + 1 + ν = 0 ∀1 ≤ s ≤ d. Rewrite the equation as z s = exp -1 -ν λ exp - 1 λ c s ∀1 ≤ s ≤ d.\nSumming up this result for all s, and taking into account the second equation, we have\nd s=1 exp -1 -ν λ exp - 1 λ c s = 1 That is, exp -1 -ν λ = 1 d s=1 exp -1 λ c\ns Combine these two formulas, we have\nz s = exp -1 λ c s d t=1 exp -1 λ c t ∀1 ≤ s ≤ d.\nIn other words, z = softmax(-1 λ c)." }, { "figure_ref": [], "heading": "A.4 Inference in CRF", "publication_ref": [ "b31" ], "table_ref": [], "text": "In this work, we apply the extended Entropic Frank-Wolfe to do inference in the CRF. Let s\n= (Z 1 , • • • , Z n , H (1) 1 , • • • , H(1)\nn , H\n(2)\n1 , • • • , H(h)\nn ) denote an assignment to all the random variables. Our CRF encodes the joint distribution\np(s) = 1 Z i ϕ u (Z i ) c i j̸ =i ϕ t (H (c) i , Z i , Z j )\nwhere Z is a normalization factor. The objective is to find an assignment s that maximizes the joint distribution p(s). 
To express in the form of an energy function, let p(s) = 1 Z exp(-e(s)), we have\ne(s) = - i S w i ,Z i - c i j̸ =i 1 H i =j T (c) Z i ,Z j\nwhere 1 H i =j is an indicator function, which is equal to 1 if H i = j and is equal to 0 otherwise. The objective could now be expressed as minimizing the energy function e(s).\nIn general, the problem of CRF inference is NP-Hard (Shimony, 1994). In MFVI, we solve the continuous relaxation of the CRF problem instead. Let X be the simplex. That is, we allow a marginal distribution for each random variable. As in Section 2.2, let Q i (•) be the approximate marginal distribution over Z i and Q ic (•) be the approximate marginal distribution over H (c) i . The energy function is then\nE(Q * ) = - i a Q i (a)S w i ,a - c i j̸ =i a b Q i (a)Q j (b)Q ic (j)T (c) a,b\nThen we have\n∂E ∂Q i (a) = -S w i ,a - c j̸ =i b Q j (b)Q ic (j)T (c) a,b + Q j (b)Q jc (i)T (c) b,a ∂E ∂Q ic (j) = - a b Q i (a)Q j (b)T (c) a,b\nIn MFVI, the update for each distribution is the softmax of the derivative (let λ = 1 and α t = 1, ∀t ≥ 0 in Equation 30). That is,\nQ (t) i (a) ∝ exp - ∂E (t-1) ∂Q (t-1) i (a) Q (t) ic (j) ∝ exp - ∂E (t-1) ∂Q (t-1) ic (j)\nTogether with Equation 3 and 4, we have\n∂E (t-1) ∂Q (t-1) i (a) = -S w i ,a -G (t-1) i (a) ∂E (t-1) ∂Q (t-1) ic (j) = -F (t-1) ic (j)\n, which directly leads us to Formula 5 and 6.\nIn the extended Entropic Frank-Wolfe, the update for each distribution is the regularized softmax of the derivative (Equation 31). That is,\nQ (t) i (a) ∝ exp - 1 λ i ∂E (t-1) ∂Q (t-1) i (a) Q (t) ic (j) ∝ exp - 1 λ ic ∂E (t-1) ∂Q (t-1) ic (j) Let λ i = λ Z > 0, λ ic = λ H > 0, ∀i, c.\nThen it is equivalent to Formula 11 and 12 with regularization weight λ Z > 0 for Z variables and λ H > 0 for H variables." }, { "figure_ref": [], "heading": "A.5 The Choice of Message Weights", "publication_ref": [ "b38" ], "table_ref": [], "text": "In Section 2.3.3, we set λ Z = 1 and λ H = 1 d by default. This choice comes from a theoretical analysis similar to Vaswani et al. (2017), and we empirically find it helpful to improve the performance.\nAssume that the ternary scores in T are independent random variables with mean 0 and variance σ 2 . Then from Equation 3, we know that F (t) ic (j) is a weighted sum of these random variables. Suppose the weights are uniformly distributed, then\nF (t) ic (j)\nhas mean 0 and variance d 2 (d 2 ) 2 σ 2 = 1 d 2 σ 2 . Since d is usually set to several hundred, this might result in a small variance in the message received by H variables and thus lead to uniformly distributed H variables. To balance this effect, we set λ H = 1 d such that the variance of 1 λ H F (t) ic (j) is still σ 2 . From Equation 4we know that the variance of\nG (t) i (a) is 2(n-1)\nhd σ 2 . Here, since n varies in sentences, it is impossible to set a fixed λ Z that always recovers the original variance σ 2 . Compared to\nF (t) ic (j), the variance of G (t)\ni (a) does not change significantly. For simplicity, we set λ Z = 1." }, { "figure_ref": [], "heading": "B More Extensions and Variants", "publication_ref": [], "table_ref": [], "text": "We have introduced several extensions and variants that are beneficial to the model performance in Section 2.3. There are some other variants that we find do not bring significant improvement empirically, but might also be meaningful and have interesting correspondences to transformers." 
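The scaling argument of Appendix A.5 can be checked numerically. The following toy simulation, with a smaller d than the several hundred used in practice, compares the variance of F under uniform label marginals before and after multiplying by 1/λ_H = d:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma, trials = 64, 1.0, 2000      # d is smaller here than in the actual models

# under uniform marginals Q_i(a) = Q_j(b) = 1/d, F_ic(j) is the mean of the d*d ternary scores
T = rng.normal(0.0, sigma, size=(trials, d, d))
F = T.mean(axis=(1, 2))

print(F.var())            # ~ sigma^2 / d^2: messages to H variables are nearly flat
print((d * F).var())      # rescaling by 1 / lambda_H = d restores a variance of ~ sigma^2
```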
}, { "figure_ref": [], "heading": "B.1 Step Size", "publication_ref": [], "table_ref": [], "text": "In our model, we can retain information between iterations and do partially update with a proper step size. Let\nQ ⋆(t) i (a) ∝ exp S w i ,a + G (t-1) i (a) Q ⋆(t) ic (j) ∝ exp F (t-1) ic (j)\nbe the original posterior marginal distributions of the variables at time step t, which is the same as Formula 5 and 6. We have the posterior distributions with step size\nQ (t) i (Z i ) = α Z Q ⋆(t) i (Z i ) + (1 -α Z )Q (t-1) i (Z i ) Q (t) ic (H (c) i ) = α H Q ⋆(t) ic H (c) i + (1 -α H )Q (t-1) ic H (c) i\nwhere α Z , α H ∈ (0, 1] are the step sizes of each update. When α Z = α H = 1, it is equivalent to the original model. We initialize these distribution by Formula 7 and 8." }, { "figure_ref": [], "heading": "B.2 Damping", "publication_ref": [ "b14" ], "table_ref": [], "text": "Similar to step size in Appendix B.1, the damping approach also aims at retaining information between iterations. Instead of partially updating the posterior distribution, the damping approach partially updates the messages.\nWe define messages in time step t as\nM (t) i (a) = S w i ,a + G (t-1) i (a) (32) M (t) ic (j) = F (t-1) ic (j) (33)\nwhere\nM (t) i (Z i ) is the message passed to Z i and M (t) ic (H (c) i ) is the message passed to H (c)\ni . Thus, Formula 5 and 6 can be written as\nQ (t) i (a) ∝ exp M (t) i (a) Q (t) ic (j) ∝ exp M (t) ic (j)\nNow, we add damping factors β Z and β H , which restrict the message update between iterations. We change Equation 32and 33 to\nM (t) i (a) =(1 -β Z ) S w i ,a + G (t-1) i (a) + β Z M (t-1) i (a) M (t) ic (j) = (1 -β H ) F (t-1) ic (j) + β H M (t-1) ic (j)\nWe initialize the message by\nM (0) i (a) = S w i ,a M (0) ic (j) = 0\nWhen β Z = β H = 0, there is no damping in the update process and it is equivalent to the original model. When β Z = 0.5 and β H = 0, it is similar to the residual connection in transformers. When β Z = β H = 0.5, it is similar to the residual attention mechanism proposed in RealFormer (He et al., 2021)." }, { "figure_ref": [], "heading": "B.3 Global Variables", "publication_ref": [ "b9", "b13", "b12", "b35" ], "table_ref": [], "text": "As we mentioned in Section 3.4, probabilistic transformers do not have a feed-forward structure as in transformers. Feed-forward layers, however, constitute two-thirds of a transformer model's parameters. Recent researches show that the feedforward layers might serve as an important part of transformers (Dong et al., 2021;Geva et al., 2021Geva et al., , 2022)).\nInspired by Sukhbaatar et al. (2019), who combines the feed-forward layer and the self-attention layer into a unified all-attention layer, we design a similar structure based on dependency relations. Intuitively, we could add some global variables that are similar to the latent word representations (Z variables) but these representations are global features that do not change with input sentences. We will introduce 3 different model designs below." }, { "figure_ref": [ "fig_3" ], "heading": "B.3.1 All-dep", "publication_ref": [ "b35", "b35" ], "table_ref": [], "text": "Based on the intuition above, we add some global variables to the CRF model. Define F i as the i-th discrete global feature variable with the same label set as Z variables, representing the global features of the corpus. The total number of global feature variables is m. These variables are observed and the distributions on the label set will not change during inference. 
The head of each word could either be another word or a global feature variable. That is,\nH (c) i ∈ {1, 2, • • • , n, n + 1, • • • , n + m}.\nThen, for each word w i and global feature F j in channel c, we define a ternary potential function over Z i , H (c) i and F j , which evaluates the compatibility between the labels of the word and the global feature of the entire corpus.\nϕ t (H (c) i , Z i , F j ) = exp(T ′′ (c) Z i ,F j ), H (c) i = n + j 1, otherwise where T ′′ (c) ∈ R d×d is a score matrix for channel c.\nAn illustration of the CRF model is shown in Figure 4. We call this setting all-dep since the head of each word could either be another word or a dummy global feature variable. It follows the all-attn setting in Sukhbaatar et al. (2019).\nNotice that F j is a variable that does not participate in inference. It could be seen as part of the model. Thus, we could design an equivalent model that does not contain global feature variables but have a binary factor between Z i and H\n(c) i : ϕ b (H (c) i , Z i ) =      g exp(P (F H (c) i -n = g)T ′′ (c) Z i ,g ), H (c) i > n 1, otherwise\nwhere P (F i = g) is the probability that the i-th global variable has label g. It can be proved that the MFVI inference process for the model with global feature variables and the model with binary factors is the same. Move the product inside the exponential term, we have\nϕ b (H (c) i , Z i ) =      exp( g P (F H (c) i -n = g)T ′′ (c) Z i ,g ), H (c) i > n 1, otherwise\nThe term inside the exponential is a weighted sum of ternary scores. We may re-formulate this potential function with a simplified term:\nϕ b (H (c) i , Z i ) =    exp(B(c)\nH (c) i -n,Z i ), H (c) i > n 1, otherwise\nwhere B (c) ∈ R m,d is a score matrix for channel c. The weighted sum of ternary scores could be regarded as a neural parameterization of the binary scores B (c) . An illustration of the simplified CRF model is shown in Figure 5. Given the model above, we can now derive the following iterative update equations of posterior distribution:\nF (t) ic (j) =        a b Q (t) i (a)Q (t) j (b)T (c) a,b , j ≤ n a Q(t) i (a)B (c) j,a , j > n (34)\nG (t) i (a) = c j̸ =i,j≤n b Q (t) ic (j)Q (t) j (b)T (c) a,b +Q (t) jc (i)Q (t) j (b)T (c) b,a + c j>n Q (t) ic (j)B (c) j,a(35)\nwhere\nQ (t) i (a) ∝ exp S w i ,a + G (t-1) i (a)(36)\nQ (t) ic (j) ∝ exp F (t-1) ic (j)(37)\nThe initialization of the posterior marginal distributions\nQ (t) i (•) and Q (t)\nic (•) is the same as Formula 7 and 8. Notice that F (t) ic ∈ R n+m looks like a concatenation of a context vector and a persistent vector in all-attention networks (Sukhbaatar et al., 2019)." }, { "figure_ref": [ "fig_4" ], "heading": "B.3.2 Dep-split", "publication_ref": [ "b35", "b35" ], "table_ref": [], "text": "Following the attn-split setting in Sukhbaatar et al. (2019), we also design a dep-split version of our model. In each channel, we split the head of each word into two heads: one for the head word in the sentence and one for the global feature. We call the heads for global features 'global heads'. Denote G\ni ∈ {1, •, m} as the global head variable for i-th word in channel c. H is still the variable representing the syntactic dependency head of the i-th word in the c-th channel. 
Similar to the approaches in the all-dep setting, we define a simplified binary potential function for Z i and G\n(c) i ∈ {1, •, n} 𝐻 1 (1) 𝐻 1 (ℎ) 𝐻 2 (1) 𝐻 2 (ℎ)\n(c) i ϕ b (G (c) i = k, Z i = a) = exp B (c) k,a(38)\nFigure 6 illustrates the CRF model of the dep-split setting.\nWe could derive the following iterative update equations of posterior distribution:\nF (t) ic (j) = a b Q (t) i (a)Q (t) j (b)T (c) a,b(39)\nH (t) i,k,c = a Q (t) i (a)B (c) k,a(40)\nG (t) i (a) = c j̸ =i b Q (t) ic (j)Q (t) j (b)T (c) a,b + c j̸ =i b Q (t) jc (i)Q (t) j (b)T (c) b,a + c k Q ′ (t) ic (k)B (c) k,a(41)\nwhere\nQ (t) i (a) ∝ exp S w i ,a + G (t-1) i (a)(42)\nQ (t) ic (j) ∝ exp F (t-1) ic (j) (43) Q ′ (t) ic (k) ∝ exp H (t-1) i,k,c(44)\nare the approximate marginal distributions at time step t, with Q\n′ (t) ic (•) over G (c)\ni . We initialize these distributions by Formula 7, 8 and\nQ ′ (0) ic (k) ∝ 1 (45) B.3.3 Single-split\nFollowing the single-split setting in Sukhbaatar et al. (2019), we design a CRF model that is similar to the dep-split model but only allows one global head for each word. We also call this setting singlesplit. Denote G i as the global head variable for i-th word with a label set of size m. We define a binary potential for Z i and\nG i ϕ b (G i = k, Z i = a) = exp (B k,a )(46)\nwhere B ∈ R m×d is a score matrix. Figure 7 illustrates the CRF model of the single-split setting. We could derive the following iterative update equations of posterior distribution:\nF (t) ic (j) = a b Q (t) i (a)Q (t) j (b)T (c) a,b(47)\nH (t) i,k = a Q (t) i (a)B k,a(48)\nG (t) i (a) = c j̸ =i b Q (t) ic (j)Q (t) j (b)T (c) a,b + c j̸ =i b Q (t) jc (i)Q (t) j (b)T (c) b,a + k Q ′ (t) i (k)B k,a(49)\nwhere\nQ (t) i (a) ∝ exp S w i ,a + G (t-1) i (a)(50)\nQ (t) ic (j) ∝ exp F (t-1) ic (j) (51) Q ′ (t) i (k) ∝ exp H (t-1) i,k(52)\nare the approximate marginal distributions at time step t, with Q\n′ (t) i (•) over G i .\nWe initialize these distributions by Formula 7, 8 and\nQ ′ (0) i (k) ∝ 1 (53)\nsingle-split might be the setting that has the most similar computation process to that of transformers. If we consider the tensorized form of single-split, then for the posterior distributions of all the G variables Q (t) g ∈ R n×m , we have\nF (t) c = Q (t) z T (c) Q (t)T z (54\n)\nH (t) = Q (t) z B T(55)\nG (t) = c Q (t) h,c Q (t) z T (c)T + c Q (t)T h,c Q (t) z T (c) + Q (t) g B(56)\nwhere\nQ (t) z = σ S + G (t-1) (57) Q (t) h,c = σ F (t-1) c (58) Q (t) g = σ H (t-1)(59)\nWith the similar trick in Section 3, we have where\nQ (t) z =σ(S + 2 c channel c U (c)T + GFU(Q (t-1) z ))(60)\nchannel c = σ Q c K T c λ H V c (61) GFU(x) = σ xB T B (62\n)\nwhere we can regard GFU as an operator that updates the latent word representations from global features. An illustration of the computation process is shown in Figure 8. From Figure 9, we can see that the feed-forward structure in transformers is very similar to the global feature update process in probabilistic transformers with global variables." }, { "figure_ref": [], "heading": "C Distance and Relative Positional Encoding (RPE)", "publication_ref": [ "b28" ], "table_ref": [], "text": "In Section 3.2, we find that the single-channel update (Equation 29) in probabilistic transformers is almost identical to scaled dot-product attention in transformers. This observation is based on the hypothesis that probabilistic transformers and transformers are sharing the same positional encoding method. 
But this is not the case.\nIn section 2.3.1, we mention that to capture the word order information, we use a clip function to select the ternary potential function based on the distance of two words (Equation 9). This is similar to the relative positional encoding (RPE) in transformers. Shaw et al. (2018) proposes a method to add an additional component to key and value, based on the clipped distance. Specifically, the scaled dot-product attention with RPE could be rewritten as\ne ij = x i W Q x j W K + a K ij T √ d k z i = n j=1 α ij x j W V + a V ij\nwhere x i is the input representation of the i-th word, z i is the output representation,\nα ij = exp e ij\nk exp e ik . The additional component is a learnable parameter that based on the clipped distance\na K ij = w K clip(j-i,k) a V ij = w V clip(j-i,k) clip(x, k) = max(-k, min(k, x))\nFor probabilistic transformers, we directly add the distance information to the ternary potential function. Combining Equation 9 and 29, we could rewrite the single-channel update as\ne ij = x i U ij (x j V ij ) T λ H z i = n j=1 α ij (x j V ij )\nwhere\nα ij = exp e ij\nk exp e ik . The weights are based on the clip function f in Equation 10\nU ij = U[f (i -j)] V ij = V[f (i -j)]\nNotice that this way of positional encoding is quite parameter inefficient. It also makes our training process much slower than that of transformers." }, { "figure_ref": [], "heading": "D Details for Tasks and Datasets", "publication_ref": [], "table_ref": [], "text": "In this section, we will introduce our tasks and datasets in detail. A brief introduction is shown in Section 4.1." }, { "figure_ref": [], "heading": "D.1 Masked Language Modeling", "publication_ref": [ "b29", "b24", "b25", "b8", "b30", "b25", "b2", "b29", "b17" ], "table_ref": [], "text": "Masked Language Modeling (MLM) tasks generally evaluate the expressiveness of contextural word representations. We perform MLM tasks on two corpora: the Penn TreeBank (PTB) and Brown Laboratory for Linguistic Information Processing (BLLIP). We randomly replace words with a mask token <mask> at a rate of 30% and the model is required to predict the original word. Following Shen et al. (2022), we never mask <unk> tokens. The performance of MLM is evaluated by measuring perplexity (lower is better) on masked words.\nPTB. The Penn Treebank (Marcus et al., 1993), in particular the sections of the corpus corresponding to the articles of Wall Street Journal (WSJ), is a standard dataset for language modeling (Mikolov et al., 2012) and sequence labeling (Dinarelli and Grobol, 2019). Following the setting in Shen et al. (2021), we use the preprocessing method proposed in Mikolov et al. (2012). It removes all punctuation and replaces low-frequency words with <unk>. The processed dataset has a vocabulary size of 10000, including <unk> and <mask>.\nBLLIP. The Brown Laboratory for Linguistic Information Processing dataset (Charniak et al., 2000) is a large corpus similar to the PTB dataset in style. The entire dataset contains 24 million sentences from Wall Street Journal. In our experiments, we only use a small subset of this corpus. Following the same setting as Shen et al. (2022), we use the BLLIP-XS split proposed in Hu et al. (2020) with around 40k sentences and 1M tokens as the train set. The validation set consists of the first section each year and the test set consists of the second section each year. We remove all punctuation, replace numbers with a single character N and use lower-case letters. 
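The preprocessing just described amounts to a few string operations; a rough sketch, not the exact script used to build the splits:

```python
import re

def preprocess(line):
    """Lower-case, drop punctuation and map number tokens to the single character N."""
    out = []
    for tok in line.lower().split():
        if re.fullmatch(r"[\d.,%]+", tok):        # a number token becomes N
            out.append("N")
            continue
        tok = re.sub(r"[^\w]", "", tok)           # strip punctuation from ordinary tokens
        if tok:
            out.append(tok)
    return " ".join(out)

print(preprocess("The index rose 3.5%, to 2,643.65 points."))   # the index rose N to N points
```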
The vocabulary contains words that appear more than 27 times in the entire BLLIP dataset, with size 30231 including <unk> and <mask>." }, { "figure_ref": [], "heading": "D.2 Sequence Labeling", "publication_ref": [ "b5", "b37" ], "table_ref": [], "text": "Sequence labeling tasks require models to predict the tag for each word in the sequence. For sequence labeling tasks, we perform part-of-speech (POS) tagging on two datasets: the Penn TreeBank (PTB) and the Universal Dependencies (UD). We also perform named entity recognition (NER) on CoNLL-2003.\nPTB. As introduced in Appendix D.1, we also use the PTB dataset for POS tagging but with a different setting. We use the most commons split of this corpus for POS tagging, where sections from 0 to 18 are used as the train set, sections from 19 to 21 are used as the validation set, and sections from 22 to 24 are used as the test set. All words in the train set compose the vocabulary.\nUD. UD is a project that develops crosslinguistically consistent treebank annotation for many languages (De Marneffe et al., 2021). We test our model on the language-specific part-of-speech (XPOS) tags of the English EWT dataset with the standard splits. All words in the train set compose the vocabulary.\nCoNLL-2003. It is a named entity recognition dataset which is released as part of CoNLL-2003 shared task (Tjong Kim Sang and De Meulder, 2003). We test our model on the English dataset. All words in the train set compose the vocabulary. We only project the final word representation of each word to the tag set with the BIOES scheme without using a CRF decoder." }, { "figure_ref": [], "heading": "D.3 Text Classification", "publication_ref": [ "b33", "b4" ], "table_ref": [], "text": "Text Classification tasks need to classify sentences into different classes. We use the Stanford Sentiment Treebank (SST) (Socher et al., 2013) as the dataset. It has two variants: binary classification (SST-2) and fine-grained classification (SST-5). The dataset comes from SentEval (Conneau and Kiela, 2018).\nSST-2. SST-2 classifies each movie review into positive or negative classes. It contains 67k sentences in the train set.\nSST-5. SST-5 classifies sentences into 5 classes: negative, somewhat negative, neutral, somewhat positive and positive. It contains 8.5k sentences in the train set.\nIn text classification, all words in the train set compose the vocabulary." }, { "figure_ref": [], "heading": "D.4 Syntactic Test", "publication_ref": [ "b18", "b26", "b26", "b26" ], "table_ref": [], "text": "To evaluate the compositional generalization abilities of our model, we perform a syntactic test on the COGS (Kim and Linzen, 2020) dataset. COGS is a semantic parsing dataset that measures the compositional generalization abilities of models. We follow the settings in Ontanón et al. (2021), which turns the task from seq2seq into a sequence tagging task. The model needs to predict 5 tags for each input word: a parent word, the role of the relation between the word and its parent (if applicable), the category, the noun determiner (for nouns) and the verb name (for verbs). With these tags, one can reconstruct the original output deterministically.\nFor role, category, noun determiner and verb name, we directly project word representations to each tag set. 
For the parent tag, (Ontanón et al., 2021) propose 3 types of prediction heads:\n• Absolute uses a direct projection to predict the absolute index of the parent in the input sequence (-1 for no parent).\n• Relative uses a direct projection to predict the relative offset of the parent token with respect to the current token, or self for no parent.\n• Attention uses the attention weights from a new attention layer with a single head to predict the parent.\nWe empirically find that relative performs the best in most settings for both transformers and probabilistic transformers. This is not consistent with the observations in Ontanón et al. (2021) who finds that attention outperforms other settings. We still apply the relative setting in our experiments." }, { "figure_ref": [], "heading": "E Hyperparameters and Implementation", "publication_ref": [ "b1" ], "table_ref": [ "tab_7" ], "text": "We report our hyperparameters in Table 2 for probabilistic transformers and Table 3 for transformers. We tune the models for each task except the syntactic test through random search. We run experiments on one NVIDIA GeForce RTX 2080 Ti and all the experiments could finish in one day. Our implementation is based on the flair framework (Akbik et al., 2019)." }, { "figure_ref": [ "fig_5" ], "heading": "F Case Studies of Learned Dependency Structures", "publication_ref": [], "table_ref": [], "text": "A probabilistic transformer infers marginal distributions over both Z and H variables, the latter of which can be used to extract a dependency structure. Since our model is trained on downstream tasks such as MLM without access to gold parse trees, it can be seen as performing unsupervised dependency parsing. We visualize the dependency structures learned by a probabilistic transformer by looking at the most probable head of each word in the sentence. Figure 10 illustrates the dependency structures extracted from a probabilistic transformer trained on the PTB dataset under the MLM task. The sentence comes from the test set of the PTB dataset.\nWe show the head of each word in all the channels. The numbers on the dependency arcs represent probabilities estimated by the model. The model does not contain a root node, so there is at least one circle in the dependency graph.\nFrom the figure, we can see that our model is very confident in its choices of dependency arcs, with all the probabilities close to 1, which indicates strong compatibilities between the latent representations of connected word pairs. The predicted structure somewhat makes sense. For example, it puts 'she said' together. But generally, most of the dependency arcs are not consistent with humandesigned dependency relations. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by the National Natural Science Foundation of China (61976139)." } ]
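Given the head marginals produced by inference, the structures inspected in Appendix F can be read off with one argmax per word. A minimal sketch with toy numbers (not taken from the trained model):

```python
import numpy as np

def extract_heads(Qh_c, words):
    """Most probable head of each word in one channel, as inspected in Appendix F.

    Qh_c: (n, n) head marginals Q_ic(j) for a single channel (zero diagonal).
    Returns (dependent, head, probability) triples like those on the arcs of Figure 10.
    """
    heads = Qh_c.argmax(axis=-1)
    return [(words[i], words[int(j)], float(Qh_c[i, j])) for i, j in enumerate(heads)]

# toy example with made-up marginals
words = ["she", "said", "it"]
Qh_c = np.array([[0.0, 0.9, 0.1],
                 [0.7, 0.0, 0.3],
                 [0.1, 0.9, 0.0]])
print(extract_heads(Qh_c, words))
```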
Syntactic structures used to play a vital role in natural language processing (NLP), but since the deep learning revolution, NLP has been gradually dominated by neural models that do not consider syntactic structures in their design. One vastly successful class of neural models is transformers. When used as an encoder, a transformer produces contextual representations of words in the input sentence. In this work, we propose a new model of contextual word representation, not from a neural perspective, but from a purely syntactic and probabilistic perspective. Specifically, we design a conditional random field that models discrete latent representations of all words in a sentence as well as dependency arcs between them; and we use mean field variational inference for approximate inference. Strikingly, we find that the computation graph of our model resembles that of transformers, with correspondences between dependencies and self-attention and between distributions over latent representations and contextual embeddings of words. Experiments show that our model performs competitively to transformers on small to medium sized datasets. We hope that our work could help bridge the gap between traditional syntactic and probabilistic approaches and cutting-edge neural approaches to NLP, and inspire more linguistically-principled neural approaches in the future.
Probabilistic Transformer: A Probabilistic Dependency Model for Contextual Word Representation
[ { "figure_caption": "Figure 1 :1Figure 1: The factor graph for our CRF model with n = 3. For clarity, ternary factors that connect to H (c) i", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: Computation graphs for multi-head attention in transformers and multi-channel update in probabilistic transformers. See an explanation of replacing concat+linear with linear+sum in the upper part of multi-head attention in Section 3.3.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Computation graphs for transformers and probabilistic transformers.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: An equivalent factor graph for the all-dep CRF model in Figure 4.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The factor graph for the single-split CRF model where n = 2. For clarity, ternary factors with channel c > 1 are not shown in the figure.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Dependency structures learned by a probabilistic transformer under the MLM task. The numbers on the dependency arcs represent the confidence of the head word.", "figure_data": "", "figure_id": "fig_5", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Main results of probabilistic transformers compared with transformers.", "figure_data": "TaskDatasetMetricTransformerProbabilistic TransformerMLMPTB BLLIPPerplexity58.43 ± 0.58 101.91 ± 1.4062.86 ± 0.40 123.18 ± 1.50POSPTB UDAccuracy96.44 ± 0.04 91.17 ± 0.1196.29 ± 0.03 90.96 ± 0.10NERCoNLL-2003F174.02 ± 1.1175.47 ± 0.35CLSSST-2 SST-5Accuracy82.51 ± 0.26 40.13 ± 1.0982.04 ± 0.88 42.77 ± 1.18Syntactic TestCOGSSentence-level Accuracy82.05 ± 2.1884.60 ± 2.06", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The factor graph for an intuitive design of CRF model with global variables where n = m = 2. For clarity, ternary factors that connect to H", "figure_data": "Unary FactorsTernary Factors (for 𝐹 𝑖 )Ternary FactorsDependency HeadVariablesLabel Variables𝑍 1𝑍 2𝐹 1𝐹 2Global Feature VariablesFigure 4: (c) iwith c > 1 are not shown in the figure.Unary FactorsBinary FactorsTernary FactorsDependency Head Variables𝐻 1 (1)𝐻 1 (ℎ)(1) 𝐻 2(ℎ) 𝐻 2Label Variables𝑍 1𝑍 2", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Figure 6: The factor graph for the dep-split CRF model where n = 2. 
For clarity, binary and ternary factors with channel c > 1 are not shown in the figure.", "figure_data": "Unary FactorsBinary FactorsTernary FactorsDependency Head Variables 𝐻 1 (1)𝐻 1 (ℎ) 𝐺 1 (1) 𝐺 1 (1)𝐺 1 (ℎ)𝐻 2 (1)𝐻 2 (ℎ) 𝐺 2 (1)(ℎ) 𝐺 2Global Head VariablesLabel Variables𝑍 1𝑍 2Unary FactorsBinary FactorsTernary FactorsDependency Head Variables 𝐻 1 (1)𝐻 1 (ℎ) 𝐺 1𝐻 2 (1)(ℎ) 𝐻 2 𝐺 2Global HeadVariablesLabel Variables", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Hyperparameters for transformers in our experiments.", "figure_data": "TransformerMLM PTB BLLIPPTBPOSUDCLS SST-2 SST-5 COGS SYNEmbedding size d model38425651238425612864FFN inner layer size d f f2048204820485125121024256# of heads h814141410144# of layers N5454842Positional Encodingabsabsabsabsabsabsrel-8Head dimension d qkv256128321625625616Dropout0.150.150.1500.0500.1Learning rate0.0001 0.0002 0.0004 0.0004 0.0001 0.0002 0.0005Weight decay1.2e-6 3.5e-6 3.2e-6 1.4e-6 1.9e-6 2.7e-61e-9", "figure_id": "tab_7", "figure_label": "3", "figure_type": "table" } ]
Haoyi Wu; Kewei Tu
[ { "authors": "Ahmad Wasi Uddin; Nanyun Peng; Kai-Wei Chang", "journal": "", "ref_id": "b0", "title": "Gate: graph attention transformer encoder for cross-lingual relation and event extraction", "year": "2021" }, { "authors": "Alan Akbik; Tanja Bergmann; Duncan Blythe; Kashif Rasul; Stefan Schweter; Roland Vollgraf", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "FLAIR: An easy-to-use framework for state-of-theart NLP", "year": "2019" }, { "authors": "Eugene Charniak; Don Blaheta; Niyu Ge; Keith Hall; Mark Johnson", "journal": "", "ref_id": "b2", "title": "Bllip 1987-89 wsj corpus release 1, ldc no", "year": "2000" }, { "authors": "Kevin Clark; Urvashi Khandelwal; Omer Levy; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "What does BERT look at? an analysis of BERT's attention", "year": "2019" }, { "authors": "Alexis Conneau; Douwe Kiela", "journal": "", "ref_id": "b4", "title": "Senteval: An evaluation toolkit for universal sentence representations", "year": "2018" }, { "authors": "Marie-Catherine De Marneffe; Christopher D Manning; Joakim Nivre; Daniel Zeman", "journal": "Computational linguistics", "ref_id": "b5", "title": "Universal dependencies", "year": "2021" }, { "authors": "Mostafa Dehghani; Stephan Gouws; Oriol Vinyals; Jakob Uszkoreit; Lukasz Kaiser", "journal": "", "ref_id": "b6", "title": "Universal transformers", "year": "2019" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b7", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Marco Dinarelli; Loïc Grobol", "journal": "", "ref_id": "b8", "title": "Seq2biseq: Bidirectional output-wise recurrent neural networks for sequence modelling", "year": "2019" }, { "authors": "Yihe Dong; Jean-Baptiste Cordonnier; Andreas Loukas", "journal": "", "ref_id": "b9", "title": "Attention is not all you need: pure attention loses rank doubly exponentially with depth", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b10", "title": "", "year": "" }, { "authors": "N Elhage; C Nanda; T Olsson; N Henighan; B Joseph; Mann; Y Askell; Bai; T Chen; Conerly", "journal": "", "ref_id": "b11", "title": "A mathematical framework for transformer circuits", "year": "2021" }, { "authors": "Mor Geva; Avi Caciularu; Kevin Ro Wang; Yoav Goldberg", "journal": "", "ref_id": "b12", "title": "Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space", "year": "2022" }, { "authors": "Mor Geva; Roei Schuster; Jonathan Berant; Omer Levy", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Transformer feed-forward layers are keyvalue memories", "year": "2021" }, { "authors": "Ruining He; Anirudh Ravula; Bhargav Kanagal; Joshua Ainslie", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "RealFormer: Transformer likes residual attention", "year": "2021" }, { "authors": "John Hewitt; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "A structural probe for finding syntax in word representations", "year": "2019" }, { "authors": "Jason Phu Mon Htut; Shikha Phang; Bordia; Samuel R Bowman", "journal": "", "ref_id": "b16", "title": "Do attention heads in bert track syntactic dependencies?", "year": "2019" }, { "authors": "Jennifer Hu; Jon Gauthier; Peng Qian; Ethan Wilcox; Roger 
Levy", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "A systematic assessment of syntactic generalization in neural language models", "year": "2020" }, { "authors": "Najoung Kim; Tal Linzen", "journal": "", "ref_id": "b18", "title": "COGS: A compositional generalization challenge based on semantic interpretation", "year": "2020" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b19", "title": "Adam: A method for stochastic optimization", "year": "2015-05-07" }, { "authors": "Nikita Kitaev; Thomas Lu; Dan Klein", "journal": "", "ref_id": "b20", "title": "Learned incremental representations for parsing", "year": "2022" }, { "authors": "Zhenzhong Lan; Mingda Chen; Sebastian Goodman; Kevin Gimpel; Piyush Sharma; Radu Soricut", "journal": "", "ref_id": "b21", "title": "Albert: A lite bert for self-supervised learning of language representations", "year": "2019" }, { "authors": "Lê-Huu Khuê; Karteek Alahari", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Regularized frank-wolfe for dense crfs: Generalizing mean field and beyond", "year": "2021" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b23", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2020" }, { "authors": "Mitchell P Marcus; Mary ; Ann Marcinkiewicz; Beatrice Santorini", "journal": "Comput. Linguist", "ref_id": "b24", "title": "Building a large annotated corpus of english: The penn treebank", "year": "1993" }, { "authors": "Tomáš Mikolov", "journal": "Presentation at Google, Mountain View", "ref_id": "b25", "title": "Statistical language models based on neural networks", "year": "2012-04-02" }, { "authors": "Santiago Ontanón; Joshua Ainslie; Vaclav Cvicek; Zachary Fisher", "journal": "", "ref_id": "b26", "title": "Making transformers solve compositional tasks", "year": "2021" }, { "authors": "Artur Vinit Ravishankar; Mostafa Kulmizev; Anders Abdou; Joakim Søgaard; Nivre", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Attention can reflect syntactic structure (if you let it)", "year": "2021" }, { "authors": "Peter Shaw; Jakob Uszkoreit; Ashish Vaswani", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Self-attention with relative position representations", "year": "2018" }, { "authors": "Yikang Shen; Shawn Tan; Alessandro Sordoni; Peng Li; Jie Zhou; Aaron Courville", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Unsupervised dependency graph network", "year": "2022" }, { "authors": "Yikang Shen; Yi Tay; Che Zheng; Dara Bahri; Donald Metzler; Aaron Courville", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "StructFormer: Joint unsupervised induction of dependency and constituency structure from masked language modeling", "year": "2021" }, { "authors": "Solomon Eyal; Shimony ", "journal": "Artificial intelligence", "ref_id": "b31", "title": "Finding maps for belief networks is np-hard", "year": "1994" }, { "authors": "D Daniel; Davy Sleator; Temperley", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Parsing English with a link grammar", "year": "1993" }, { "authors": "Richard Socher; Alex Perelygin; Jean Wu; Jason Chuang; Christopher D Manning; Andrew Ng; Christopher Potts", "journal": "Association for 
Computational Linguistics", "ref_id": "b33", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "year": "2013" }, { "authors": "Emma Strubell; Patrick Verga; Daniel Andor; David Weiss; Andrew Mccallum", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Linguisticallyinformed self-attention for semantic role labeling", "year": "2018" }, { "authors": "Sainbayar Sukhbaatar; Edouard Grave; Guillaume Lample; Herve Jegou; Armand Joulin", "journal": "", "ref_id": "b35", "title": "Augmenting self-attention with persistent memory", "year": "2019" }, { "authors": "Ian Tenney; Patrick Xia; Berlin Chen; Alex Wang; Adam Poliak; Thomas Mccoy; Najoung Kim; Benjamin Van Durme; Sam Bowman; Dipanjan Das; Ellie Pavlick", "journal": "", "ref_id": "b36", "title": "What do you learn from context? probing for sentence structure in contextualized word representations", "year": "2019" }, { "authors": "Erik F Tjong; Kim Sang; Fien De; Meulder ", "journal": "", "ref_id": "b37", "title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", "year": "2003" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b38", "title": "Attention is all you need", "year": "2017" }, { "authors": "Xinyu Wang; Kewei Tu", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Second-order neural dependency parsing with message passing and end-to-end training", "year": "2020" }, { "authors": "Yaushian Wang; Hung-Yi Lee; Yun-Nung Chen", "journal": "", "ref_id": "b40", "title": "Tree transformer: Integrating tree structures into self-attention", "year": "2019" }, { "authors": "Ruibin Xiong; Yunchang Yang; Di He; Kai Zheng; Shuxin Zheng; Chen Xing; Huishuai Zhang; Yanyan Lan; Liwei Wang; Tieyan Liu", "journal": "PMLR", "ref_id": "b41", "title": "On layer normalization in the transformer architecture", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 364.93, 426.92, 160.21, 11.51 ], "formula_id": "formula_0", "formula_text": "ϕ u (Z i ) = exp (S w i ,Z i ) (1)" }, { "formula_coordinates": [ 2, 337.08, 584.8, 188.07, 40.89 ], "formula_id": "formula_1", "formula_text": "ϕ t (H i , Z i , Z j ) = exp T Z i ,Z j H i = j 1 otherwise (2)" }, { "formula_coordinates": [ 3, 78.93, 358.61, 210.94, 26.09 ], "formula_id": "formula_2", "formula_text": "F (t) ic (j) = a b Q (t) i (a)Q (t) j (b)T (c) a,b(3)" }, { "formula_coordinates": [ 3, 78.93, 389.58, 210.94, 48.79 ], "formula_id": "formula_3", "formula_text": "G (t) i (a) = c j̸ =i b Q (t) ic (j)Q (t) j (b)T (c) a,b +Q (t) jc (i)Q (t) j (b)T (c) b,a(4)" }, { "formula_coordinates": [ 3, 100.84, 473.33, 189.03, 16 ], "formula_id": "formula_4", "formula_text": "Q (t) i (a) ∝ exp S w i ,a + G (t-1) i (a)(5)" }, { "formula_coordinates": [ 3, 101.49, 496.95, 188.38, 16 ], "formula_id": "formula_5", "formula_text": "Q (t) ic (j) ∝ exp F (t-1) ic (j)(6)" }, { "formula_coordinates": [ 3, 123.96, 538.61, 163.81, 16 ], "formula_id": "formula_6", "formula_text": "Q (t) i (•) over Z i and Q (t) ic (•) over H (c)" }, { "formula_coordinates": [ 3, 130.37, 578.34, 159.5, 16 ], "formula_id": "formula_7", "formula_text": "Q (0) i (a) ∝ exp (S w i ,a )(7)" }, { "formula_coordinates": [ 3, 131.02, 598.12, 158.85, 16 ], "formula_id": "formula_8", "formula_text": "Q (0) ic (j) ∝ 1 (8)" }, { "formula_coordinates": [ 3, 70.87, 651.44, 123.01, 16 ], "formula_id": "formula_9", "formula_text": "Q (T ) i (Z i ) for i = 1, • • • , n." }, { "formula_coordinates": [ 3, 314.96, 498.05, 210.18, 50.72 ], "formula_id": "formula_10", "formula_text": "ϕ t (H (c) i , Z i , Z j ) = exp T[f (i -j)] (c) Z i ,Z j H (c) i = j 1 otherwise (9)" }, { "formula_coordinates": [ 3, 318.39, 581.61, 206.75, 51.77 ], "formula_id": "formula_11", "formula_text": "f (x) =        0 x < -γ x + γ + 1 -γ ≤ x < 0 x + γ 0 < x ≤ γ 2γ + 1 x > γ(10)" }, { "formula_coordinates": [ 4, 124.45, 187.89, 104.58, 16 ], "formula_id": "formula_12", "formula_text": "Q (t) ic (j) ∝ exp F (t) ic (j)" }, { "formula_coordinates": [ 4, 84.12, 374.25, 205.74, 38.03 ], "formula_id": "formula_13", "formula_text": "Q (t) i (a) ∝ exp 1 λ Z S w i ,a + G (t-1) i (a)(11)" }, { "formula_coordinates": [ 4, 84.77, 418.96, 205.09, 25.55 ], "formula_id": "formula_14", "formula_text": "Q (t) ic (j) ∝ exp 1 λ H F (t-1) ic (j)(12)" }, { "formula_coordinates": [ 4, 116.06, 585.26, 173.81, 33.98 ], "formula_id": "formula_15", "formula_text": "T (c) a,b = r l=1 U a,l • V b,l • W c,l(13)" }, { "formula_coordinates": [ 4, 125.03, 683.11, 164.83, 33.98 ], "formula_id": "formula_16", "formula_text": "T (c) a,b = r l=1 U a,c,l • V b,c,l(14)" }, { "formula_coordinates": [ 4, 306.14, 203.68, 220.18, 39.63 ], "formula_id": "formula_17", "formula_text": "d root . 
For i ∈ {1, 2, • • • , n}, c ∈ {1, 2, • • • , h}, we add a ternary potential function over Z i , H(c)" }, { "formula_coordinates": [ 4, 306.14, 275.27, 218.42, 50.72 ], "formula_id": "formula_18", "formula_text": "ϕ t (H (c) i , Z i , Z ROOT ) = exp T ′ (c) Z i ,Z ROOT H (c) i = ROOT 1 otherwise" }, { "formula_coordinates": [ 4, 413.21, 690.55, 18.27, 13.31 ], "formula_id": "formula_19", "formula_text": "Q (t)" }, { "formula_coordinates": [ 4, 386.45, 719.18, 18.27, 13.31 ], "formula_id": "formula_20", "formula_text": "Q (t)" }, { "formula_coordinates": [ 5, 77.48, 97.93, 212.39, 57.02 ], "formula_id": "formula_21", "formula_text": "F (t) c = Q (t) z T (c) Q (t)T z (15) G (t) = c Q (t) h,c Q (t) z T (c)T + Q (t)T h,c Q (t) z T (c)(16)" }, { "formula_coordinates": [ 5, 130.96, 193.65, 158.9, 34.79 ], "formula_id": "formula_22", "formula_text": "Q (t) z = σ(S + G (t-1) ) (17) Q (t) h,c = σ(F (t-1) c ) (18" }, { "formula_coordinates": [ 5, 285.32, 216.28, 4.54, 9.46 ], "formula_id": "formula_23", "formula_text": ")" }, { "formula_coordinates": [ 5, 139.13, 292.92, 150.74, 29.29 ], "formula_id": "formula_24", "formula_text": "Q (t) h,c = σ F (t) c λ H(19)" }, { "formula_coordinates": [ 5, 70.87, 423.28, 220.18, 24.54 ], "formula_id": "formula_25", "formula_text": ". If T (c) is symmetric, then Q (t)" }, { "formula_coordinates": [ 5, 119.48, 488.96, 170.39, 25.29 ], "formula_id": "formula_26", "formula_text": "G (t) = 2 c Q (t) h,c Q (t) z T (c)T(20)" }, { "formula_coordinates": [ 5, 139.53, 578.4, 150.34, 12.3 ], "formula_id": "formula_27", "formula_text": "T (c) = U (c) V (c)T (21)" }, { "formula_coordinates": [ 5, 107.91, 657.6, 181.95, 43.82 ], "formula_id": "formula_28", "formula_text": "F (t) c = Q (t) z U (c) V (c)T Q (t)T z (22) G (t) = 2 c Q (t) h,c Q (t) z V (c) U (c)T (23)" }, { "formula_coordinates": [ 5, 154.19, 738.48, 135.67, 14.19 ], "formula_id": "formula_29", "formula_text": "Q c = Q (t-1) z U (c)(24)" }, { "formula_coordinates": [ 5, 128.48, 757.01, 161.39, 14.19 ], "formula_id": "formula_30", "formula_text": "K c = V c = Q (t-1) z V (c)(25)" }, { "formula_coordinates": [ 5, 347.32, 109.28, 177.82, 43.82 ], "formula_id": "formula_31", "formula_text": "F (t-1) c = Q c K T c (26) G (t-1) = 2 c Q (t-1) h,c V c U (c)T (27)" }, { "formula_coordinates": [ 5, 325.67, 186.5, 199.47, 24.04 ], "formula_id": "formula_32", "formula_text": "Q (t) z = σ(S + 2 c channel c U (c)T )(28)" }, { "formula_coordinates": [ 5, 353.55, 240.2, 171.59, 27.5 ], "formula_id": "formula_33", "formula_text": "channel c = σ Q c K T c λ H V c (29)" }, { "formula_coordinates": [ 5, 331.22, 432.23, 165.69, 28.19 ], "formula_id": "formula_34", "formula_text": "Attention(Q, K, V ) = σ QK T √ d k V" }, { "formula_coordinates": [ 5, 344.07, 602.26, 143.51, 28.77 ], "formula_id": "formula_35", "formula_text": "MultiHead(Q, K, V ) = Concat (head 1 , . . . , head h ) W O" }, { "formula_coordinates": [ 5, 305.75, 659.42, 218.66, 116.41 ], "formula_id": "formula_36", "formula_text": "head i = Attention QW Q i , KW K i , V W V i It is equivalent to MultiHead(Q, K, V ) = i head i (W O i ) T where W O ≡ Concat(W O 1 , . . . , W O h ) and W Q i , W K i , W V i , W O i ∈ R d×r . 
Our multi-channel" }, { "formula_coordinates": [ 11, 342.87, 465.95, 137.81, 41.69 ], "formula_id": "formula_37", "formula_text": "p (t) ∈ argmin p∈X ∇E x (t) , p x (t+1) = x (t) + α t p (t) -x (t)" }, { "formula_coordinates": [ 11, 320.01, 640.04, 183.27, 41.69 ], "formula_id": "formula_38", "formula_text": "p (t) ∈ argmin p∈X ∇E x (t) , p + r(p) x (t+1) = x (t) + α t p (t) -x (t)" }, { "formula_coordinates": [ 12, 74.31, 142.17, 195.57, 51.38 ], "formula_id": "formula_39", "formula_text": "p (t) = argmin p∈X ∇E x (t) , p -λH(p) = softmax - 1 λ ∇E x (t)" }, { "formula_coordinates": [ 12, 129.53, 309.36, 100.94, 22.26 ], "formula_id": "formula_40", "formula_text": "r(x) = - i∈V λ i H(x i )" }, { "formula_coordinates": [ 12, 70.87, 452.72, 219, 102.84 ], "formula_id": "formula_41", "formula_text": "p (t) = argmin p∈X ∇E x (t) , p -λ i H(p i ) = softmax (R) ∀t ≥ 0 (31) , where R ∈ R nd and R i = - 1 λ i ∇E x (t) i ∀i ∈ V" }, { "formula_coordinates": [ 12, 306.14, 110.98, 218.27, 71.03 ], "formula_id": "formula_42", "formula_text": "min z∈∆ ⟨c, z⟩ + λ d s=1 z s log z s is z * = softmax(-1 λ c), where ∆ is the probability simplex x ∈ R d : x ≥ 0, 1 ⊤ x = 1 ." }, { "formula_coordinates": [ 12, 351.5, 216.2, 128.87, 62.27 ], "formula_id": "formula_43", "formula_text": "min z ⟨c, z⟩ + λ d s=1 z s log z s s.t. 1 ⊤ z = 1, -z ≤ 0," }, { "formula_coordinates": [ 12, 324.05, 315.74, 181.96, 111.69 ], "formula_id": "formula_44", "formula_text": "L(z, µ, ν) = ⟨c, z⟩ + λ d s=1 z s log z s + µ ⊤ (-z) + ν 1 ⊤ z -1 = -ν + d s=1 (c s z s + λz s log z s -µ s z s + νz s )" }, { "formula_coordinates": [ 12, 313.32, 546.65, 203.91, 78.34 ], "formula_id": "formula_45", "formula_text": "c s + λ log z s + 1 -µ s + ν = 0 ∀1 ≤ s ≤ d, 1 ⊤ z = 1, z ≥ 0, µ ≥ 0, µ s z s = 0 ∀1 ≤ s ≤ d." }, { "formula_coordinates": [ 12, 306.14, 703.57, 179.14, 72.17 ], "formula_id": "formula_46", "formula_text": "c s + λ log z s + 1 + ν = 0 ∀1 ≤ s ≤ d. Rewrite the equation as z s = exp -1 -ν λ exp - 1 λ c s ∀1 ≤ s ≤ d." }, { "formula_coordinates": [ 13, 70.53, 108.78, 194.78, 92.3 ], "formula_id": "formula_47", "formula_text": "d s=1 exp -1 -ν λ exp - 1 λ c s = 1 That is, exp -1 -ν λ = 1 d s=1 exp -1 λ c" }, { "formula_coordinates": [ 13, 70.87, 227.97, 155.8, 52.4 ], "formula_id": "formula_48", "formula_text": "z s = exp -1 λ c s d t=1 exp -1 λ c t ∀1 ≤ s ≤ d." 
}, { "formula_coordinates": [ 13, 69.59, 337.42, 219.54, 25.67 ], "formula_id": "formula_49", "formula_text": "= (Z 1 , • • • , Z n , H (1) 1 , • • • , H(1)" }, { "formula_coordinates": [ 13, 212.53, 347.23, 59.69, 15.86 ], "formula_id": "formula_50", "formula_text": "1 , • • • , H(h)" }, { "formula_coordinates": [ 13, 73.22, 397.32, 213.57, 29.73 ], "formula_id": "formula_51", "formula_text": "p(s) = 1 Z i ϕ u (Z i ) c i j̸ =i ϕ t (H (c) i , Z i , Z j )" }, { "formula_coordinates": [ 13, 73.63, 500.26, 211.4, 26.09 ], "formula_id": "formula_52", "formula_text": "e(s) = - i S w i ,Z i - c i j̸ =i 1 H i =j T (c) Z i ,Z j" }, { "formula_coordinates": [ 13, 78.69, 723.17, 202.12, 52.24 ], "formula_id": "formula_53", "formula_text": "E(Q * ) = - i a Q i (a)S w i ,a - c i j̸ =i a b Q i (a)Q j (b)Q ic (j)T (c) a,b" }, { "formula_coordinates": [ 13, 308.38, 94.47, 207.97, 88.29 ], "formula_id": "formula_54", "formula_text": "∂E ∂Q i (a) = -S w i ,a - c j̸ =i b Q j (b)Q ic (j)T (c) a,b + Q j (b)Q jc (i)T (c) b,a ∂E ∂Q ic (j) = - a b Q i (a)Q j (b)T (c) a,b" }, { "formula_coordinates": [ 13, 343.06, 246.29, 134.6, 68.97 ], "formula_id": "formula_55", "formula_text": "Q (t) i (a) ∝ exp - ∂E (t-1) ∂Q (t-1) i (a) Q (t) ic (j) ∝ exp - ∂E (t-1) ∂Q (t-1) ic (j)" }, { "formula_coordinates": [ 13, 339.02, 348.84, 153.7, 67.2 ], "formula_id": "formula_56", "formula_text": "∂E (t-1) ∂Q (t-1) i (a) = -S w i ,a -G (t-1) i (a) ∂E (t-1) ∂Q (t-1) ic (j) = -F (t-1) ic (j)" }, { "formula_coordinates": [ 13, 306.14, 492.18, 179.1, 90.65 ], "formula_id": "formula_57", "formula_text": "Q (t) i (a) ∝ exp - 1 λ i ∂E (t-1) ∂Q (t-1) i (a) Q (t) ic (j) ∝ exp - 1 λ ic ∂E (t-1) ∂Q (t-1) ic (j) Let λ i = λ Z > 0, λ ic = λ H > 0, ∀i, c." }, { "formula_coordinates": [ 13, 493.02, 759.83, 32.67, 16 ], "formula_id": "formula_58", "formula_text": "F (t) ic (j)" }, { "formula_coordinates": [ 14, 72.06, 159.14, 217.07, 22.97 ], "formula_id": "formula_59", "formula_text": "G (t) i (a) is 2(n-1)" }, { "formula_coordinates": [ 14, 70.59, 203.38, 218.54, 29.53 ], "formula_id": "formula_60", "formula_text": "F (t) ic (j), the variance of G (t)" }, { "formula_coordinates": [ 14, 98.72, 450.42, 156.04, 39.62 ], "formula_id": "formula_61", "formula_text": "Q ⋆(t) i (a) ∝ exp S w i ,a + G (t-1) i (a) Q ⋆(t) ic (j) ∝ exp F (t-1) ic (j)" }, { "formula_coordinates": [ 14, 70.87, 572.85, 226.71, 35.77 ], "formula_id": "formula_62", "formula_text": "Q (t) i (Z i ) = α Z Q ⋆(t) i (Z i ) + (1 -α Z )Q (t-1) i (Z i ) Q (t) ic (H (c) i ) = α H Q ⋆(t) ic H (c) i + (1 -α H )Q (t-1) ic H (c) i" }, { "formula_coordinates": [ 14, 350.3, 94.14, 174.84, 35.77 ], "formula_id": "formula_63", "formula_text": "M (t) i (a) = S w i ,a + G (t-1) i (a) (32) M (t) ic (j) = F (t-1) ic (j) (33)" }, { "formula_coordinates": [ 14, 306.14, 139.12, 218.27, 31.98 ], "formula_id": "formula_64", "formula_text": "M (t) i (Z i ) is the message passed to Z i and M (t) ic (H (c) i ) is the message passed to H (c)" }, { "formula_coordinates": [ 14, 357.65, 190.62, 108.73, 39.62 ], "formula_id": "formula_65", "formula_text": "Q (t) i (a) ∝ exp M (t) i (a) Q (t) ic (j) ∝ exp M (t) ic (j)" }, { "formula_coordinates": [ 14, 306.14, 289.75, 223.82, 59.4 ], "formula_id": "formula_66", "formula_text": "M (t) i (a) =(1 -β Z ) S w i ,a + G (t-1) i (a) + β Z M (t-1) i (a) M (t) ic (j) = (1 -β H ) F (t-1) ic (j) + β H M (t-1) ic (j)" }, { "formula_coordinates": [ 14, 377.55, 380.56, 74.95, 35.77 ], "formula_id": "formula_67", "formula_text": "M (0) i (a) = S w i ,a M (0) ic (j) = 0" }, { 
"formula_coordinates": [ 15, 105.98, 210.11, 181.75, 16 ], "formula_id": "formula_68", "formula_text": "H (c) i ∈ {1, 2, • • • , n, n + 1, • • • , n + m}." }, { "formula_coordinates": [ 15, 70.47, 304.51, 218.66, 91.26 ], "formula_id": "formula_69", "formula_text": "ϕ t (H (c) i , Z i , F j ) = exp(T ′′ (c) Z i ,F j ), H (c) i = n + j 1, otherwise where T ′′ (c) ∈ R d×d is a score matrix for channel c." }, { "formula_coordinates": [ 15, 76.32, 519.49, 206.18, 88.76 ], "formula_id": "formula_70", "formula_text": "(c) i : ϕ b (H (c) i , Z i ) =      g exp(P (F H (c) i -n = g)T ′′ (c) Z i ,g ), H (c) i > n 1, otherwise" }, { "formula_coordinates": [ 15, 75.41, 711.44, 207.99, 62.75 ], "formula_id": "formula_71", "formula_text": "ϕ b (H (c) i , Z i ) =      exp( g P (F H (c) i -n = g)T ′′ (c) Z i ,g ), H (c) i > n 1, otherwise" }, { "formula_coordinates": [ 15, 336.43, 121.1, 67.01, 55.24 ], "formula_id": "formula_72", "formula_text": "ϕ b (H (c) i , Z i ) =    exp(B(c)" }, { "formula_coordinates": [ 15, 392.64, 141.2, 100.29, 36.38 ], "formula_id": "formula_73", "formula_text": "H (c) i -n,Z i ), H (c) i > n 1, otherwise" }, { "formula_coordinates": [ 15, 314.36, 303.59, 210.78, 76.86 ], "formula_id": "formula_74", "formula_text": "F (t) ic (j) =        a b Q (t) i (a)Q (t) j (b)T (c) a,b , j ≤ n a Q(t) i (a)B (c) j,a , j > n (34)" }, { "formula_coordinates": [ 15, 312.85, 390.62, 212.29, 93 ], "formula_id": "formula_75", "formula_text": "G (t) i (a) = c j̸ =i,j≤n b Q (t) ic (j)Q (t) j (b)T (c) a,b +Q (t) jc (i)Q (t) j (b)T (c) b,a + c j>n Q (t) ic (j)B (c) j,a(35)" }, { "formula_coordinates": [ 15, 327.57, 505.44, 197.57, 16 ], "formula_id": "formula_76", "formula_text": "Q (t) i (a) ∝ exp S w i ,a + G (t-1) i (a)(36)" }, { "formula_coordinates": [ 15, 328.22, 529.06, 196.92, 16 ], "formula_id": "formula_77", "formula_text": "Q (t) ic (j) ∝ exp F (t-1) ic (j)(37)" }, { "formula_coordinates": [ 15, 352.22, 567.91, 71.89, 16 ], "formula_id": "formula_78", "formula_text": "Q (t) i (•) and Q (t)" }, { "formula_coordinates": [ 15, 461.56, 759.83, 62.85, 16 ], "formula_id": "formula_80", "formula_text": "(c) i ∈ {1, •, n} 𝐻 1 (1) 𝐻 1 (ℎ) 𝐻 2 (1) 𝐻 2 (ℎ)" }, { "formula_coordinates": [ 16, 90.49, 472.64, 199.38, 52.16 ], "formula_id": "formula_81", "formula_text": "(c) i ϕ b (G (c) i = k, Z i = a) = exp B (c) k,a(38)" }, { "formula_coordinates": [ 16, 80.06, 621.49, 209.81, 26.09 ], "formula_id": "formula_82", "formula_text": "F (t) ic (j) = a b Q (t) i (a)Q (t) j (b)T (c) a,b(39)" }, { "formula_coordinates": [ 16, 80.06, 652.47, 209.81, 25.28 ], "formula_id": "formula_83", "formula_text": "H (t) i,k,c = a Q (t) i (a)B (c) k,a(40)" }, { "formula_coordinates": [ 16, 80.06, 681.82, 209.81, 89.66 ], "formula_id": "formula_84", "formula_text": "G (t) i (a) = c j̸ =i b Q (t) ic (j)Q (t) j (b)T (c) a,b + c j̸ =i b Q (t) jc (i)Q (t) j (b)T (c) b,a + c k Q ′ (t) ic (k)B (c) k,a(41)" }, { "formula_coordinates": [ 16, 329.04, 442.18, 196.1, 16 ], "formula_id": "formula_85", "formula_text": "Q (t) i (a) ∝ exp S w i ,a + G (t-1) i (a)(42)" }, { "formula_coordinates": [ 16, 326.1, 465.8, 199.04, 39.88 ], "formula_id": "formula_86", "formula_text": "Q (t) ic (j) ∝ exp F (t-1) ic (j) (43) Q ′ (t) ic (k) ∝ exp H (t-1) i,k,c(44)" }, { "formula_coordinates": [ 16, 367.52, 530.42, 68.16, 17.45 ], "formula_id": "formula_87", "formula_text": "′ (t) ic (•) over G (c)" }, { "formula_coordinates": [ 16, 306.14, 570.94, 219, 40.46 ], "formula_id": "formula_88", "formula_text": "Q ′ (0) ic (k) ∝ 1 (45) 
B.3.3 Single-split" }, { "formula_coordinates": [ 16, 331.72, 699.47, 193.42, 36.05 ], "formula_id": "formula_89", "formula_text": "G i ϕ b (G i = k, Z i = a) = exp (B k,a )(46)" }, { "formula_coordinates": [ 17, 80.06, 445.82, 209.81, 26.09 ], "formula_id": "formula_90", "formula_text": "F (t) ic (j) = a b Q (t) i (a)Q (t) j (b)T (c) a,b(47)" }, { "formula_coordinates": [ 17, 80.06, 476.8, 209.81, 25.28 ], "formula_id": "formula_91", "formula_text": "H (t) i,k = a Q (t) i (a)B k,a(48)" }, { "formula_coordinates": [ 17, 80.06, 506.15, 209.81, 89.66 ], "formula_id": "formula_92", "formula_text": "G (t) i (a) = c j̸ =i b Q (t) ic (j)Q (t) j (b)T (c) a,b + c j̸ =i b Q (t) jc (i)Q (t) j (b)T (c) b,a + k Q ′ (t) i (k)B k,a(49)" }, { "formula_coordinates": [ 17, 93.77, 629.59, 196.1, 16 ], "formula_id": "formula_93", "formula_text": "Q (t) i (a) ∝ exp S w i ,a + G (t-1) i (a)(50)" }, { "formula_coordinates": [ 17, 90.83, 653.21, 199.04, 39.88 ], "formula_id": "formula_94", "formula_text": "Q (t) ic (j) ∝ exp F (t-1) ic (j) (51) Q ′ (t) i (k) ∝ exp H (t-1) i,k(52)" }, { "formula_coordinates": [ 17, 134.59, 717.84, 65.51, 17.45 ], "formula_id": "formula_95", "formula_text": "′ (t) i (•) over G i ." }, { "formula_coordinates": [ 17, 151.43, 758.38, 138.43, 17.45 ], "formula_id": "formula_96", "formula_text": "Q ′ (0) i (k) ∝ 1 (53)" }, { "formula_coordinates": [ 17, 353.86, 487.26, 166.73, 14.19 ], "formula_id": "formula_97", "formula_text": "F (t) c = Q (t) z T (c) Q (t)T z (54" }, { "formula_coordinates": [ 17, 520.6, 490.11, 4.54, 9.46 ], "formula_id": "formula_98", "formula_text": ")" }, { "formula_coordinates": [ 17, 353.86, 505.79, 171.28, 14.19 ], "formula_id": "formula_99", "formula_text": "H (t) = Q (t) z B T(55)" }, { "formula_coordinates": [ 17, 353.86, 524.32, 171.28, 72.89 ], "formula_id": "formula_100", "formula_text": "G (t) = c Q (t) h,c Q (t) z T (c)T + c Q (t)T h,c Q (t) z T (c) + Q (t) g B(56)" }, { "formula_coordinates": [ 17, 363.05, 633.42, 162.09, 61.43 ], "formula_id": "formula_101", "formula_text": "Q (t) z = σ S + G (t-1) (57) Q (t) h,c = σ F (t-1) c (58) Q (t) g = σ H (t-1)(59)" }, { "formula_coordinates": [ 17, 329.31, 734.39, 195.83, 42.29 ], "formula_id": "formula_102", "formula_text": "Q (t) z =σ(S + 2 c channel c U (c)T + GFU(Q (t-1) z ))(60)" }, { "formula_coordinates": [ 18, 118.27, 485.26, 171.59, 43.44 ], "formula_id": "formula_103", "formula_text": "channel c = σ Q c K T c λ H V c (61) GFU(x) = σ xB T B (62" }, { "formula_coordinates": [ 18, 285.32, 519.24, 4.54, 9.46 ], "formula_id": "formula_104", "formula_text": ")" }, { "formula_coordinates": [ 18, 346.64, 216.2, 134.4, 81.8 ], "formula_id": "formula_105", "formula_text": "e ij = x i W Q x j W K + a K ij T √ d k z i = n j=1 α ij x j W V + a V ij" }, { "formula_coordinates": [ 18, 448.44, 316.74, 65.26, 14.39 ], "formula_id": "formula_106", "formula_text": "α ij = exp e ij" }, { "formula_coordinates": [ 18, 340.76, 371.2, 149.02, 49.89 ], "formula_id": "formula_107", "formula_text": "a K ij = w K clip(j-i,k) a V ij = w V clip(j-i,k) clip(x, k) = max(-k, min(k, x))" }, { "formula_coordinates": [ 18, 365.23, 499.31, 97.22, 72.56 ], "formula_id": "formula_108", "formula_text": "e ij = x i U ij (x j V ij ) T λ H z i = n j=1 α ij (x j V ij )" }, { "formula_coordinates": [ 18, 335.26, 578.85, 61.79, 14.38 ], "formula_id": "formula_109", "formula_text": "α ij = exp e ij" }, { "formula_coordinates": [ 18, 373.08, 622.41, 84.39, 27.2 ], "formula_id": "formula_110", "formula_text": "U ij = U[f (i -j)] V ij = V[f (i 
-j)]" } ]
10.1145/3633779
2023-11-26
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b33", "b53", "b74", "b8", "b53", "b59", "b46", "b23" ], "table_ref": [], "text": "With the development of deep learning [34] in recent years, training deep neural networks has become the main methodology in the computer vision field. The need for large-scale labeled datasets hinders their application in the real world, since collecting annotations for training data is very expensive. In particular, it is difficult even for humans to memorize all categories [54,75] when the dataset contains a large number of object categories (e.g., ImageNet [9]). In these scenarios, if the worker is asked whether an image belongs to a specified class, instead of finding the accurate class label from a large number of candidates, the annotation job becomes much easier. This paper investigates this setting, which we call one-bit supervision, because one bit of information is provided by the labeler in answering the yes-or-no question. In comparison, log_2 𝐶 bits of information are provided by annotating an accurate label, where 𝐶 is the number of classes, though we point out that the actual cost of accurate annotation is often much higher than log_2 𝐶× that of one-bit annotation. One-bit supervision is a new challenge of learning from incomplete annotation. We expect its learning efficiency, in terms of accuracy, to be superior to that of semi-supervised learning under the same amount of supervision bits. For example, in a dataset with 100 classes, we can use 10K × log_2 100 = 66.4K bits of information by accurately annotating 10K samples, or use 33.2K bits by accurately annotating 5K samples and the remaining 33.2K bits by answering yes-or-no questions. To verify the superiority of one-bit annotation, we asked three labelers to estimate the label correctness of 100 images (50 correctly labeled and 50 wrongly labeled) from ImageNet. The average annotation time is 2.72 seconds per image (with a precision of 92.3%), whereas a full-bit annotation takes around 1 minute according to [54]. This validates our motivation on a many-class dataset.
Since supervision mostly comes from guessing the label for each sample, one-bit supervision has higher uncertainty compared to the conventional setting. If a guess is correct, an accurate label of the sample is obtained; if not, only one class is eliminated from the possibilities. To learn efficiently from this setting, two keys should be ensured: (i) improving the accuracy of each guess so as to obtain more positive labels, and (ii) making full use of the failed guesses so that the negative labels are not wasted. This inspires us to develop a multi-stage training framework. It makes use of off-the-shelf semi-supervised approaches to train a reasonable initial model on a small amount of fully-labeled data. In each of the following stages, we use up part of the supervision quota by querying with the predictions on a subset of unlabeled images selected by the developed sampling strategies. We add correct guesses to a set of fully-labeled samples and wrong guesses to a set of negative labels. To learn from the latter, we force the semi-supervised algorithm to predict a very low probability on the eliminated class. The model is strengthened after each stage and is thus expected to achieve higher guess accuracy in the next stage. 
Hence, the information obtained by one-bit supervision is significantly enriched. Theoretically, mining hard examples for one-bit supervision can be effective when the initial model is strong enough. Inspired by the recent success of self-supervised learning in computer vision, we incorporate unsupervised pre-training into our approach to strengthen the model in each stage. In the training process, we fine-tune the model from the pre-trained weights at every stage. To learn from the negative labels (incorrect guesses), we instead treat it as a one-vs-rest classification problem and optimize a binary cross-entropy loss. This new framework also makes it feasible to combine with active learning to achieve better performance.
Nevertheless, incorporating self-supervised learning still cannot eliminate the need for full-bit labels in the initial stage. In order to design a more elegant framework that conducts pure one-bit annotation on the target dataset, we utilize unsupervised domain adaptation (UDA) to obtain the initial model. By using supervision from the source domain, we can train an initial model for the target domain without any target labels. Then we conduct a similar training process to obtain the final model. We evaluate our setting and approach on three image classification benchmarks, namely, CIFAR100, Mini-ImageNet, and ImageNet. For the basic framework (without unsupervised pre-training), we choose the mean-teacher model [60] as the semi-supervised baseline as well as the method used in each training stage. The results on all three datasets verify that one-bit supervision is superior to semi-supervised learning under the same bits of supervision. Additionally, with diagnostic experiments, we verify that the benefits come from a more efficient way of utilizing the information of weak supervision. For the framework with unsupervised pre-training, we conduct experiments by fine-tuning a model on ImageNet. The clear improvement shows its effectiveness, and it can further benefit from active learning approaches. For the framework without the need for initial full-bit labels, we evaluate it on DomainNet [47], the largest multi-domain dataset. The results reveal that it uses few annotations to achieve performance comparable to fully-supervised training.
A preliminary version of this manuscript appeared as [24]. The major extension of this paper is three-fold. First, we develop a new framework that combines SSL with one-bit supervision, and the experiments on ImageNet verify the boost in efficiency. Also, two strategies for class balancing are proposed, and the benefit is verified both with and without self-supervised pre-training. Second, we utilize UDA to design a framework that trains without initial full-bit labels and conducts only one-bit annotations on the target set. Third, we provide a mathematical foundation for our approach. Inspired by this, a strategy of mining hard examples is proposed to improve the framework beyond self-supervised pre-training.
The remainder of this paper is organized as follows. In Section 2, related work is reviewed. In Section 3, the proposed basic method is introduced in detail. Section 4 explains the mathematical foundations of our approach. Section 5 presents the experiments. Finally, the conclusions are drawn in Section 6." 
}, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Semi-Supervised Learning", "publication_ref": [ "b49", "b68", "b31", "b39", "b59", "b18", "b30", "b72", "b2", "b26", "b34", "b16", "b63", "b51", "b1", "b56" ], "table_ref": [], "text": "Semi-supervised learning can be categorized into two types. The first type [50,69] focuses on consistency (e.g., the predictions on multiple views of the same training sample should be the same) and uses it as an unsupervised loss term to guide model training. For example, the Π-model [32] maintained two corrupted paths for the calculation of the unsupervised loss. Instead of using stochastic noise, the virtual adversarial training (VAT) [40] algorithm utilized the adversarial perturbation that most greatly alters the output distribution to construct the consistency loss. Instead of focusing on perturbations, Mean-Teacher [60] maintained a teacher whose weights are the moving average of the student's weights, providing more stable targets for the student model. Also, some works [19,31,73] attempted to construct consistency losses from other views.
The second type [3,27,35] assigns pseudo labels to unlabeled data, which has the same effect as entropy minimization [17]. For example, Xie et al. [64] proposed to generate pseudo labels using a clean teacher and to train a noised student. Rizve et al. [52] proposed UPS, which leverages the prediction uncertainty to guide pseudo-label selection. There are also works combining the advantages of these two kinds of methods. MixMatch [2] introduced a single loss to seamlessly reduce the entropy while maintaining consistency. FixMatch [57] iteratively used the pseudo labels predicted on weakly-augmented images to guide the learning on strongly-augmented images. One-bit supervision is an extension of semi-supervised learning which allows exchanging the quota between fully-supervised and weakly-supervised samples, and we show that it can be more efficient." }, { "figure_ref": [], "heading": "Active Learning", "publication_ref": [ "b37", "b67", "b35", "b61", "b54", "b55", "b10", "b12", "b47", "b22", "b28" ], "table_ref": [], "text": "Active learning [38,68] can be roughly categorized into three types according to the criterion used to select informative examples. The uncertainty-based approaches utilize a designed measurement to select samples that can decrease the model uncertainty. These measurements include the predicted probability [36] and the entropy of the class posterior [62]. The diversity-based approaches [55] select diversified data points that represent the whole distribution of the unlabeled pool. Shi et al. [56] proposed to identify a small number of data samples that best represent the overall data space by joining a sparse Bayesian model and a maximum margin machine. The expected-model-change approaches [11,13,48] select samples that would cause the greatest change to the current model parameters. BALD [23] chose data points that are expected to maximize the mutual information between predictions and the model posterior. BatchBALD [29] selected informative samples by utilizing a tractable approximation to the mutual information between a batch of samples and the model parameters. One-bit supervision can be viewed as a novel type of active learning that only queries the most informative part at the class level." 
}, { "figure_ref": [], "heading": "Self-Supervised Learning", "publication_ref": [ "b45", "b15", "b38", "b41", "b40", "b32", "b73", "b5", "b19", "b6", "b7", "b42", "b71", "b6" ], "table_ref": [], "text": "Self-supervised learning aims to explore the intrinsic distribution of data samples by constructing a series of pretext tasks. In the early stage, researchers designed handcrafted tasks to extract features, including predicting consistency under different spatial transformations, such as orientation [46], rotation [16,39], counting [42], jigsaw puzzles [41], etc. Other pretext tasks [33,74] restore the original image information for the same purpose. Recently, contrastive learning [6,20] has attracted increasing attention in the unsupervised learning field. The contrastive task requires the deep network to identify features from the same image, based on the premise that different views of an image should have consistent representations. Many works then attempt to improve it by increasing the difficulty of this identification, e.g., strong data augmentation operations [7], a large negative gallery [8], and additional predictors [43]. Recently, there have been works combining self-supervised learning with semi-supervised learning and achieving better performance. Zhai et al. [72] proposed self-supervised semi-supervised learning and used it to derive novel semi-supervised image classification methods. Chen et al. [7] proposed to apply \"unsupervised pre-train followed by supervised fine-tuning\" to semi-supervised learning." }, { "figure_ref": [], "heading": "Unsupervised Domain Adaptation", "publication_ref": [ "b70", "b3", "b60", "b57", "b58", "b36" ], "table_ref": [], "text": "As a part of transfer learning, unsupervised domain adaptation (UDA) aims to utilize the label information from the source domain to improve performance on the unlabeled target domain. It can be roughly classified into three types. The first reduces the discrepancy between the source and target domains; e.g., Zellinger et al. [71] achieved this by defining a new metric, Central Moment Discrepancy (CMD), for matching distributions, and Chen et al. [4] proposed higher-order moment matching for improving unsupervised domain adaptation. The second aligns features from different domains using an adversarial loss, e.g., ADDA [61]. The third uses self-supervision to assist domain adaptation; Sun et al. [58] proposed to perform auxiliary self-supervised tasks on both domains, and TTT [59] utilized self-supervision for test-time training. We design a new training framework that uses an off-the-shelf UDA approach, SCDA [37], to train the initial model." }, { "figure_ref": [], "heading": "ONE-BIT SUPERVISION", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce the proposed one-bit supervision from four aspects. First, we explain the problem setting and verify its effectiveness in Section 3.1. Second, the main framework, including multi-stage training and negative label suppression, is elaborated in Sections 3.2 and 3.3. Third, the two extended training paradigms that incorporate SSL and UDA are introduced in detail in Sections 3.4 and 3.5. Finally, we discuss its relationship with prior works in Section 3.6." 
}, { "figure_ref": [], "heading": "Problem Statement", "publication_ref": [ "b53", "b74", "b53", "b53" ], "table_ref": [], "text": "Conventional semi-supervised learning often starts with a dataset D = {x_𝑛}, 𝑛 = 1, . . . , 𝑁, where x_𝑛 is the 𝑛-th image sample and 𝑁 is the total number of training samples. We use 𝑦^★_𝑛 to denote the ground-truth class label of x_𝑛, which is often unknown to the training algorithm in our setting. Specifically, we have a small set that contains 𝐿 samples with their corresponding 𝑦^★_𝑛 provided, and 𝐿 is often much smaller than 𝑁; e.g., researchers often use 20% of labeled data on CIFAR100 and Mini-ImageNet, and only 10% on ImageNet, as in Section 5.1. That is to say, D is partitioned into two subsets D^S and D^U, where the superscripts respectively represent 'supervised' and 'unsupervised'.
The key insight of our research is that it is very challenging to assign an accurate label to an image when the number of classes is too large. The early user studies on ImageNet [54,75] show that a labeler has difficulty memorizing all categories, which largely increases the burden of data annotation. However, if we ask the testee 'Does the image belong to a specific class?' rather than 'What is the accurate class of the image?', the annotation cost becomes much smaller.
To verify that the new setting indeed improves the efficiency of annotation, we invite three labelers who are moderately familiar with the ImageNet-1K dataset [54] to do a test experiment. We use a trained ResNet-50 model to predict labels for the test set. Then we randomly sample 100 images, 50 correctly labeled and 50 wrongly labeled, and ask the three labelers to judge whether each prediction is correct. This configuration maximally approximates the scenario that a labeler encounters in a real-world annotation process; meanwhile, the half-half data mix avoids biasing the labeler towards either positive or negative samples. An average precision of 92.3% is reported by the three labelers, and the average annotation time for each image is 2.72 seconds. Since an experienced labeler can achieve a top-5 accuracy of ∼95%, this accuracy is acceptable; in contrast, a full annotation takes around 1 minute per image [54], more than 10× our cost (log_2 1000 ≈ 10). Hence, the benefit of one-bit annotation on a large-scale dataset is verified.
The above motivates us to propose a new setting that combines semi-supervised learning and weakly-supervised learning. The dataset is partitioned into three parts, namely, D = D^S ∪ D^O ∪ D^U. Here D^O represents the weakly (one-bit) annotated subset of the dataset. Given an image from D^O and a predicted label, the task of the labeler is to judge whether the image belongs to the specified label. If the guess is correct, we obtain the positive (true) label 𝑦^★_𝑛 of the image; otherwise, we obtain a negative label denoted as 𝑦^−_𝑛, and no further supervision of this image can be obtained.
From the view of information theory, the labeler provides 1 bit of supervision to the system by answering the yes-or-no question. In comparison, obtaining the accurate label requires log_2 𝐶 bits of supervision on average. Therefore, one-bit supervision alleviates the burden of annotating a single image, so one can obtain many more one-bit annotations than full-bit annotations at the same cost. Take the CIFAR100 dataset as an example. 
For a common semi-supervised setting, annotating 10K out of the 50K training images requires 10K × log_2 100 = 66.4K bits of supervision. Alternatively, we can annotate 5K images in full-bit and as many as 33.2K images in one-bit.
Figure 1. The training procedure with one-bit supervision (with or without unsupervised pre-training, using initial full-bit labels or UDA methods). The model in each stage is initialized with the weights of a self-supervised model. At the beginning, only a small set of training samples (blue triangles) are provided with ground-truth labels and the remaining part (black circles) are unlabeled. We fine-tune the initial model on the labeled data. In each of the following stages (two stages are shown but there can be more), we send part of the unlabeled data into the current model to predict and ask the labeler to judge if the prediction is correct. Some of these samples obtain positive labels (red circles) while some obtain negative labels (green circles). This process continues until the quota of supervision is used up as scheduled.
One-bit supervision uses the same amount of supervision but can achieve higher accuracy. In addition, providing the accurate label of an image costs a labeler much more effort (more than log_2 𝐶× that of making a one-bit annotation). Therefore, the 'actual' ratio of supervision cost between full supervision and one-bit supervision is larger than log_2 𝐶 : 1. That is to say, our approach actually receives a smaller amount of information under the same bits of supervision." }, { "figure_ref": [], "heading": "A Multi-Stage Training Paradigm", "publication_ref": [], "table_ref": [], "text": "There are two important factors in one-bit supervision, namely, (i) making high-quality guesses in D^O and (ii) making use of the negative labels (incorrect guesses). The solution for the first factor is elaborated in this subsection, and the second is left to the next subsection. The remaining part of training is a scheduled procedure composed of 𝑇 iterations. We define D^R_0 ≡ D^U_0, where D^R_𝑡−1 denotes the set of samples without accurate labels before the 𝑡-th iteration. We also maintain two sets for the correct and incorrect guesses, denoted as D^O+ and D^O−; both are initialized as ∅. In the 𝑡-th iteration, we first use the previous model M_𝑡−1 to make predictions on the samples in D^R_𝑡−1. Then, we select the subset D^O_𝑡 from D^R_𝑡−1 using the designed sampling strategies. In addition to the random sampling (RS) and hard sampling (HS) strategies, we also propose a new strategy named class balance (CB) for our approach. In particular, RS selects samples from the data pool randomly, HS selects hard samples in terms of the difference of the top-2 prediction scores (the smaller the difference is, the harder the sample is), and CB selects by keeping a balanced number of samples in each class. RS and CB are mainly used for the experiments without unsupervised pre-training. In Section 5, we will discuss the effect of these different sampling strategies.
Then we use the predictions on D^O_𝑡 to check against the ground-truth. We add the correctly predicted samples to D^O+ and the others to D^O−. Therefore, the entire training set is split into three parts, D^S ∪ D^O+ (with positive labels), D^O− (with negative labels), and D^U_𝑡 (with no labels), and finally, D^R_𝑡 = D^O−_𝑡 ∪ D^U_𝑡. We update M_𝑡−1 into M_𝑡 with the currently available supervision to obtain a stronger model (a minimal code sketch of one querying stage is given below). 
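To make the bookkeeping of a querying stage concrete, the following is a minimal Python sketch of the procedure described above. It is an illustration only, not the authors' released code: `select_subset`, `predict_labels`, and `ask_labeler` are hypothetical helpers standing in for the sampling strategy (RS/HS/CB), the model's top-1 prediction, and the human yes-or-no answer, respectively.

```python
# Minimal sketch of one one-bit querying stage (Section 3.2).
# `select_subset`, `predict_labels`, and `ask_labeler` are hypothetical helpers.

def one_bit_stage(model, remaining, quota, select_subset, predict_labels, ask_labeler):
    """Spend `quota` yes-or-no questions on samples in `remaining` (D^R_{t-1}).

    Returns (positives, negatives, still_remaining), where `positives` maps a sample id
    to its confirmed label (added to D^O+) and `negatives` maps a sample id to the
    eliminated class (added to D^O-).
    """
    queried = select_subset(model, remaining, quota)   # D^O_t, chosen by RS / HS / CB
    guesses = predict_labels(model, queried)           # top-1 class per queried sample

    positives, negatives = {}, {}
    for sample_id in queried:
        guess = guesses[sample_id]
        if ask_labeler(sample_id, guess):              # one bit of supervision: yes / no
            positives[sample_id] = guess               # correct guess -> accurate label
        else:
            negatives[sample_id] = guess               # wrong guess -> negative label

    # D^R_t keeps every sample that still lacks an accurate label (D^O-_t ∪ D^U_t).
    still_remaining = [s for s in remaining if s not in positives]
    return positives, negatives, still_remaining
```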
The unlabeled and fully-labeled parts of the data are used as in the semi-supervised algorithm. Now we focus on utilizing the negative labels in D^O−, which is elaborated in the following." }, { "figure_ref": [], "heading": "Negative Label Suppression", "publication_ref": [ "b59", "b21", "b66" ], "table_ref": [], "text": "To make use of negative labels, we recall the Mean-Teacher algorithm [60], which maintains a teacher and a student model. It trains by making these two models produce consistent outputs, which requires no labels. Mathematically, given a training image x ∈ D, if x ∈ D^S ∪ D^O+, we first compute a cross-entropy loss term. Additionally, regardless of whether x has an accurate label, we compute the difference between the predictions of the teacher and student models as an extra loss term. We represent the mathematical function of the student model as f(x; 𝜽), where 𝜽 denotes the network parameters. The teacher model is denoted by f(x; 𝜽′), where 𝜽′ is the moving average of 𝜽. The loss function is written as:
L(𝜽) = E_{x∈D^S∪D^O+} [ℓ(𝑦^★_𝑛, f(x; 𝜽))] + 𝜆 · E_{x∈D} [‖f(x; 𝜽′) − f(x; 𝜽)‖^2],   (1)
where ℓ(·, ·) is the cross-entropy loss and 𝜆 is the balancing coefficient. To supplement our class balance strategy, we use a weighted cross-entropy loss for optimization; the details are introduced in Section 3.4. Here we omit the explicit notation for the individual noise added to the teacher and student models for simplicity. This means that the model's outputs on fully-labeled training samples are constrained by both the cross-entropy loss and the consistency loss (the idea is borrowed from knowledge distillation [22,67]). However, the first term is unavailable for a training sample with a negative label, so we modify f(x; 𝜽′) to inject the negative label into the second term. In particular, we let the score of the negative class be suppressed to zero. Practically, we set the logit (before softmax) of the negative class to a large negative value to guarantee the correctness of normalization (i.e., the scores of all classes sum to 1). We call this method negative label suppression (NLS); a code sketch of Eq. (1) with NLS is given below. We believe that NLS is a simple and effective method that takes advantage of both the teacher model and the newly added negative labels. In Section 3.4 we also develop a new method to utilize the negative labels. The experiments in Section 5.2 will show that NLS brings significant improvement to the one-bit supervision procedure while being naive and easy to implement. From a generalized view, NLS is a practical method to integrate negative (weak) labels into the framework of a semi-supervised algorithm. It is complementary to the approaches that utilize unlabeled data via a consistency loss." }, { "figure_ref": [], "heading": "Beyond Self-Supervised Pre-Training", "publication_ref": [], "table_ref": [], "text": "In order to strengthen the initial model and mine hard examples for one-bit supervision, we propose to combine it with self-supervised learning. In particular, we integrate unsupervised pre-training into the multi-stage training framework and develop a new method to utilize negative labels. Since fine-tuning MT initialized with the pre-trained weights produces undesirable results in experiments, we directly fine-tune a ResNet-50 model. As shown in Figure 1, the training procedure is similar to that in Section 3.2. There are two differences between them: (i) only samples with positive or negative labels are used for fine-tuning, and (ii) a new loss is used to optimize the negative labels. 
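Before turning to that new loss, the following is a minimal PyTorch-style sketch of the Mean-Teacher objective of Eq. (1) combined with negative label suppression from Section 3.3. The tensor conventions (labels of -1 for missing annotations, a constant of -1e4 for suppression, probabilities inside the consistency term, and the default value of the balancing coefficient) are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def mt_loss_with_nls(student_logits, teacher_logits, targets, negative_class, lam=1.0):
    """Sketch of Eq. (1) with negative label suppression (NLS).

    student_logits, teacher_logits: (B, C) logits of the student / teacher models.
    targets: (B,) accurate labels, or -1 where no accurate label is available.
    negative_class: (B,) class eliminated by a wrong guess, or -1 if none.
    lam: balancing coefficient of the consistency term.
    """
    # Cross-entropy term, computed only on D^S ∪ D^O+ (samples with accurate labels).
    labeled = targets >= 0
    ce = F.cross_entropy(student_logits[labeled], targets[labeled]) if labeled.any() else 0.0

    # NLS: set the teacher's logit of the eliminated class to a large negative value,
    # so that its softmax score is (numerically) suppressed to zero.
    teacher_logits = teacher_logits.clone()
    neg = negative_class >= 0
    teacher_logits[neg, negative_class[neg]] = -1e4

    # Consistency term on all samples: squared difference between class probabilities.
    consistency = F.mse_loss(student_logits.softmax(dim=-1),
                             teacher_logits.softmax(dim=-1).detach())
    return ce + lam * consistency
```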
Notably, we keep D^O = D^O_0 ∪ . . . ∪ D^O_𝑡 ∪ . . . ∪ D^O_𝑇 so that each image owns only one negative label. For the sampling strategy, we combine hard selection and class balance selection: according to their importance and predicted labels, we sort the samples in each class and select the same number of samples from each class in order.
Similarly, the loss consists of two parts, respectively for positive and negative labels, which is written as:
L(𝜽) = E_{x∈D^S∪D^O+} [𝑤_𝑐 · ℓ(𝑦^★_𝑛, f(x; 𝜽))] + 𝜇 · E_{x∈D^O−} [−(1 − y^𝐵_𝑛) · log(1 − 𝜎(f(x; 𝜽)))].   (2)
The left part of Eq. (2) is a weighted cross-entropy loss for positive labels. To supplement the strategy of class balance sampling, we use the weighted loss to alleviate the class imbalance issue in experiments both with and without unsupervised pre-training. The corresponding weight for class 𝑐 is defined as 𝑤_𝑐 = 𝑚_𝑐 / max(𝑚_0, . . . , 𝑚_𝐶), where 𝑚_𝑐 denotes the number of samples in the 𝑐-th class. The right part of Eq. (2) is a binary cross-entropy loss that is used to optimize the negative labels; 𝜇 is the weight of the negative loss, y^𝐵_𝑛 is the binary label denoting whether x_𝑛 has an accurate label, and 𝜎(·) represents the sigmoid function. Here we do not use the formulation of negative label suppression from Section 3.3 because the consistency loss is not maintained; we believe that the form of the negative loss should be adapted to the baseline method (a code sketch of this combined loss is given below)." }, { "figure_ref": [], "heading": "Beyond Unsupervised Domain Adaptation", "publication_ref": [ "b36", "b36", "b56" ], "table_ref": [], "text": "We develop a novel framework that trains without initial full-bit labels and conducts pure one-bit annotation on the target dataset via unsupervised domain adaptation. In particular, we first use an off-the-shelf domain adaptation approach, e.g., the recently proposed SCDA [37], to train an initial model for the target dataset D^T. SCDA efficiently aligns the feature distributions by encouraging the model to concentrate on the most principal features. Also, using source data from different domains yields initial models with different accuracies. After obtaining the initial model M_0 without using any full-bit annotation on the target dataset, i.e., the supervised part of the target domain D^T-S is ∅, we use it to conduct one-bit annotation on the whole D^T. Similar to Section 3.2, we obtain the positive and negative labels for the samples in D^T. Then we utilize all available information to update the model. Here we train the next-stage model in two ways, i.e., with or without source data. For training with source data, we add two extra losses for the target dataset to the original loss in SCDA [37], which are respectively a cross-entropy loss for positive labels and a binary cross-entropy loss for negative labels.
For training without source data, two strategies are adopted in different stages: semi-supervised training (e.g., FixMatch [57]) when the correctly guessed labels cover less than 80% of the total images, or supervised fine-tuning when they cover more than 80%. In each stage, we conduct one-bit annotation for the samples without positive labels, i.e., for D^R_𝑡−1 in the 𝑡-th stage, and the training continues until satisfactory accuracy is obtained. 
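Returning to the fine-tuning objective of Section 3.4, below is a minimal PyTorch-style sketch of Eq. (2). It follows one plausible reading of the equation in which the binary cross-entropy is applied to the logit of the eliminated class only; the function signature is an assumption for illustration, and the default μ = 0.1 is the value reported in Section 5.1.

```python
import torch
import torch.nn.functional as F

def positive_negative_loss(logits_pos, targets_pos, class_weights,
                           logits_neg, eliminated_class, mu=0.1):
    """Sketch of Eq. (2): weighted cross-entropy plus a negative-label BCE term.

    logits_pos: (Bp, C) logits of samples with accurate labels (D^S ∪ D^O+).
    targets_pos: (Bp,) their class labels.
    class_weights: (C,) weights w_c used to counter class imbalance.
    logits_neg: (Bn, C) logits of samples carrying only a negative label (D^O-).
    eliminated_class: (Bn,) the class ruled out by the labeler.
    mu: weight of the negative-label term (0.1 in the experiments of Section 5.1).
    """
    # Weighted cross-entropy for positively labeled samples.
    pos_loss = F.cross_entropy(logits_pos, targets_pos, weight=class_weights)

    # One-vs-rest view of the negative label: the sigmoid score of the eliminated
    # class should be pushed towards 0, i.e., a BCE loss with target 0 on that logit.
    neg_logit = logits_neg.gather(1, eliminated_class.unsqueeze(1)).squeeze(1)
    neg_loss = F.binary_cross_entropy_with_logits(neg_logit, torch.zeros_like(neg_logit))

    return pos_loss + mu * neg_loss
```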
In addition, we introduce active learning into this framework to verify that minimal supervision can be used to approach the fully-supervised performance. In particular, we calculate the standard deviation of the predictive probabilities of each sample under multiple different augmentations and select the half of the samples with the largest deviation values to annotate in each stage. We utilize this framework to show the superiority of one-bit supervision in saving annotations when approaching the upper-bound performance (supervised training using all labels)." }, { "figure_ref": [], "heading": "Relationship with Prior Work", "publication_ref": [ "b20", "b25", "b69", "b9", "b17", "b0", "b35", "b13", "b51", "b4", "b27", "b43", "b21", "b24", "b52", "b11", "b66" ], "table_ref": [], "text": "The development of deep learning, in particular training deep neural networks [21,26,70], is built upon large collections of labeled data. To mitigate this problem, researchers proposed semi-supervised learning [10,18] and active learning [1,36] as effective solutions to utilize the large amounts of unlabeled data. From the view of entropy minimization, semi-supervised learning and active learning can be considered as two ways to achieve the same goal, and some works indeed combine these two approaches into one [14,52]. One-bit supervision can also be regarded as one of these approaches. Also, the utilization of negative labels is related to the approaches in [5,28]. However, different from those that randomly select negative labels among all classes, our approach obtains hard negative labels via annotation, which brings more improvement to training.
One-bit supervision shares with [44] a similar idea of using human verification of model predictions. Additionally, the proposed multi-stage training algorithm is related to knowledge distillation (KD) [22,25,53], which iteratively trains the same model while absorbing knowledge from the previous stage. KD was originally used for model compression, but recent years have witnessed its application to optimizing the same network across generations [12,67]. For one-bit supervision, new supervision comes in after each stage, but the efficiency of supervision is guaranteed by the previous model, which is a generalized way of distilling knowledge from the previous model and fixing it with weak supervision." }, { "figure_ref": [], "heading": "MATHEMATICAL FOUNDATIONS", "publication_ref": [], "table_ref": [], "text": "One-bit supervision can be viewed as a novel type of active learning that only queries the most informative part at the class level. Here we briefly introduce the theory of active learning and then derive the theoretical foundations of one-bit supervision from three aspects. First, one-bit annotation is superior to full-bit annotation under the same cost when specific conditions are satisfied. Second, the best solution to one-bit supervision is to query with the class that has the largest predicted probability. Third, class-level queries and sample-level queries can be combined when specific conditions are satisfied.
Given a model M and unlabeled data D^U, active learning aims to utilize an acquisition function 𝛼(x, M) to select samples x ∈ D^U to query:
x^∗ = argmax_{x∈D^U} 𝛼(x, M).   (3)
Here we focus on the ideas that define 𝛼(x, M) based on uncertainty. For the classification task, we want to look for images with high predictive variance to label, which can decrease the model uncertainty. 
Then the acquisition function can be defined to choose samples that maximize the predictive entropy,
$$\mathbb{H}\left[\mathbf{y}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}\right] \triangleq -\sum_{c} p\left(y_{c}=1\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}\right)\log p\left(y_{c}=1\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}\right). \tag{4}$$
Here $\mathbf{y}=[y_{1},\ldots,y_{C}]$ is the one-hot label vector: $y_{c}=1$ indicates that the sample belongs to class $c$, $c\in[1,\ldots,C]$, and $C$ is the total number of classes. $p\left(y_{c}=1\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}\right)$ denotes the probability of $y_{c}=1$ given $\mathbf{x}$ and $\mathcal{D}^{\mathrm{S}}$. For one-bit supervision, we aim to look for the classes with high predictive variance to query. Therefore we can formulate our system as:
$$\mathop{\mathrm{argmax}}_{c\in[1,\ldots,C]}\ \alpha(\mathbf{x},\mathbb{M}). \tag{5}$$
We define the acquisition function as
$$\mathbb{H}\left[y_{c}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}\right] \triangleq -p\left(y_{c}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}\right)\log p\left(y_{c}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}\right) - \left(1-p\left(y_{c}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}\right)\right)\log\left(1-p\left(y_{c}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}\right)\right). \tag{6}$$
For simplicity we abbreviate $p\left(y_{c}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}\right)$ as $p_{c}\in[0,1]$ with $\sum_{c=1}^{C}p_{c}=1$, and denote $\mathbb{H}\left[y_{c}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}\right]$ as $\mathbb{H}(p_{c})$. Then we have
$$\mathbb{H}(p_{c}) = -p_{c}\log p_{c} - (1-p_{c})\log(1-p_{c}). \tag{7}$$
First, we compare the efficiency of one-bit and full-bit annotation. In Section 3.1 we point out that the former provides 1 bit of information while the latter provides $\log_{2}C$ bits. Therefore, the average entropy production brought by each bit can be denoted by $\mathbb{H}\left[y_{c}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}\right]/1$ and $\mathbb{H}[\mathbf{y}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}]/\log_{2}C$, respectively. Next, we give a theorem that defines the relationship between them and then prove it.
Theorem 4.1. Suppose $C\geqslant 2$ and $\varphi(C)$ is a function of $C$. Then $\exists\,\varphi(C)\leqslant\frac{1}{2}$ such that $\forall\mathbf{x}\in\mathcal{D}^{\mathrm{U}}$, if $p_{c}\geqslant\varphi(C)$,
$$\mathbb{H}[y_{c}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}] \geqslant \frac{\mathbb{H}[\mathbf{y}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}]}{\log_{2}C}. \tag{8}$$
Proof. Note that
$$\mathbb{H}[\mathbf{y}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}] = -p_{c}\log p_{c} - \sum_{i\neq c}p_{i}\log p_{i} \leqslant -p_{c}\log p_{c} - (C-1)\,\frac{1-p_{c}}{C-1}\log\frac{1-p_{c}}{C-1} = \mathbb{H}[y_{c}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}] + (1-p_{c})\log(C-1). \tag{9}$$
Eq. (8) holds when $\mathbb{H}(p_{c}) \geqslant \frac{\mathbb{H}(p_{c})+(1-p_{c})\log(C-1)}{\log_{2}C}$, which is equivalent to
$$-\frac{p_{c}}{1-p_{c}}\log p_{c} - \log(1-p_{c}) \geqslant \frac{\log(C-1)}{\log_{2}C-1}. \tag{10}$$
Let $f(p)\triangleq-\frac{p}{1-p}\log p-\log(1-p)$. We have $f'(p)=-\frac{\log p}{(p-1)^{2}}\geqslant 0$, and $f\left(\frac{1}{2}\right)=2\geqslant\frac{\log(C-1)}{\log_{2}C-1}\iff C^{2}-4C+4\geqslant 0$, which always holds. Hence, $\exists\,\varphi(C)\leqslant\frac{1}{2}$ such that $\forall p_{c}>\varphi(C)$,
$$\mathbb{H}(p_{c}) \geqslant \frac{\mathbb{H}(p_{c})+(1-p_{c})\log(C-1)}{\log_{2}C} \geqslant \frac{\mathbb{H}[\mathbf{y}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}]}{\log_{2}C}. \tag{11}$$
□
So far, we have characterized when one-bit annotation brings more average entropy production per bit, i.e., is more efficient, than full-bit annotation. For a dataset with 100 classes, the numerical solution of Eq. (10) is $p_{c}\geqslant 0.28$. Moreover, the restriction on $p_{c}$ is eased when $C$ becomes larger. Hence, the condition on $p_{c}$ in Theorem 4.1 is easy to satisfy in real-world applications, which means one-bit annotation is superior to full-bit annotation in most cases. Next, we verify that solving Eq. (5) is equivalent to solving
$$\mathop{\mathrm{argmax}}_{c}\ p\left(y_{c}\mid\mathbf{x},\mathcal{D}^{\mathrm{S}}\right). \tag{12}$$
From the definition of $\mathbb{H}(p_{c})$ we know that it satisfies $\mathbb{H}\left(\frac{1}{2}+p_{c}\right)=\mathbb{H}\left(\frac{1}{2}-p_{c}\right)$. The derivative of $\mathbb{H}$ with respect to $p_{c}$ is $\frac{\partial\mathbb{H}}{\partial p_{c}}=\log\left(\frac{1}{p_{c}}-1\right)$. Thus $\mathbb{H}(p_{c})$ increases on $p_{c}\in[0,\frac{1}{2}]$ and decreases on $p_{c}\in[\frac{1}{2},1]$, with $p_{c}=\frac{1}{2}$ being the maximum point. Suppose $c'$ is the solution of Eq. (12). We prove that $c'$ also maximizes Eq. (5). If there were a $c^{*}\neq c'$ such that $\mathbb{H}(p_{c^{*}})>\mathbb{H}(p_{c'})$, we would have $p_{c^{*}}\in\left(\min(p_{c'},1-p_{c'}),\max(p_{c'},1-p_{c'})\right)$ due to the monotonicity and symmetry of $\mathbb{H}$. On the other hand, $p_{c^{*}}<p_{c'}$ since $c'$ maximizes $p_{c}$ and $p_{c'}\neq p_{c^{*}}$. So $p_{c^{*}}\in(1-p_{c'},p_{c'})$, which implies $p_{c^{*}}+p_{c'}>1$, contradicting $\sum_{c}p_{c}=1$. Therefore, by solving Eq. (12), the most informative class-level query is chosen for the sample.
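As a quick, purely illustrative sanity check of Theorem 4.1 and the query rule of Eq. (12), the NumPy snippet below computes both sides of Eq. (8) for a softmax output and returns the class to query; the function name and the example distribution are ours, not part of the original method.

```python
import numpy as np

def one_bit_query(probs):
    """probs: softmax output of shape (C,). Returns (class to query, efficiency check)."""
    c = int(np.argmax(probs))              # Eq. (12): query the top-ranked class
    p_c = probs[c]
    eps = 1e-12
    h_full = -np.sum(probs * np.log2(probs + eps))            # H[y | x], worth log2(C) bits
    h_bit = -(p_c * np.log2(p_c + eps)
              + (1 - p_c) * np.log2(1 - p_c + eps))           # H[y_c | x], worth one bit
    C = len(probs)
    efficient = h_bit >= h_full / np.log2(C)                  # condition of Theorem 4.1 / Eq. (8)
    return c, bool(efficient)

# Example: C = 100, top-1 probability 0.3 (above the 0.28 threshold discussed after Theorem 4.1)
probs = np.full(100, 0.7 / 99)
probs[0] = 0.3
print(one_bit_query(probs))   # -> (0, True)
```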
" }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we validate the proposed approach via the elaborate experiments. The used datasets and implementation details are introduced first. Then we compare the main framework of our approach to the full-bit and semi-supervised methods to show its superiority. Next, we analyze the performance of using the different number of training stages and guessing strategies. These two parts belong to the conference version. Followingly, we show the improvement brought by the strategies of class balance and the performance of two extended frameworks which incorporate with SSL and UDA. These are newly proposed in this manuscript." }, { "figure_ref": [], "heading": "Datasets and Implementation Details", "publication_ref": [ "b29", "b50", "b8", "b53", "b46", "b59", "b20", "b14", "b36" ], "table_ref": [ "tab_1", "tab_1" ], "text": "The main experiments are conducted on three popular image classification benchmarks, namely, CIFAR100, Mini-Imagenet, and Imagenet. CIFAR100 [30] contains 60K images in which 50K for training and 10K for testing. All of them are 32 × 32 RGB images and uniformly distributed over 100 classes. Mini-ImageNet contains images from 100 classes with resolution 84 × 84 , and the training/testing split created in [51] is used, which consists of 500 training images, and 100 testing images per class. For ImageNet [9], the competition subset [54] is used which contains 1K classes, 1.3M training images, and 50K testing images. We use all images of high resolution and pre-process them into 224 × 224 as network inputs. The experiments involved with UDA are conducted on DomainNet [47], which contains about 120,906 images of 345 categories in each of the six domains, and we mainly use three of them, namely Clipart, Quickdraw, and Real.\nFor experiments without unsupervised pre-training, we build our baseline by following Mean-Teacher [60], a previous semi-supervised approach. It assumes that there is a labeled small subset, D S ′ . D S ′ is 20% of the training set for CIFAR100 and Mini-ImageNet, and 10% for ImageNet. By allowing part of the annotation to be one-bit, we reschedule the assignment which results in two subsets, D S and D O , satisfying D S′ ≈ D S + D O /log 2 𝐶. Table 1 shows the detailed configuration for the four datasets. For CIFAR100, we use a 26-layer deep residual network [21] with Shake-Shake regularization [15]. For Mini-ImageNet and ImageNet, a 50-layer residual network is used for training.\nFor DomainNet, ResNet-101 is used as the backbone. The experiments are trained 180 epochs for CIFAR100 and For experiments with unsupervised pre-training on ImageNet, we also use a 50-layer residual network to fine-tune.\nThe setting of dataset split is just as the Table 1 shows. The initial learning rate is set to 10 -4 for the backbone and 1 for the fc layer for each training stage. We train the model for 100 epochs for each stage. The weight parameter of the negative loss is set to 0.1. The whole batch size including both the positively and negatively labeled data is set to 1024.\nFor experiments on DomainNet, the setting of domain adaptation just follows SCDA [37], and all the experiments are trained for 100 epochs." 
}, { "figure_ref": [], "heading": "Main Results Compared to Full-bit, Semi-supervised Supervision", "publication_ref": [ "b59", "b59" ], "table_ref": [], "text": "First, we compare our approach to Mean-Teacher [60], a semi-supervised baseline that utilizes full-bit annotations. Results are summarized in Table 2. We can observe that the one-bit supervision baseline (with two stages, without NLS) is inferior to full-bit supervision. Notably, on CIFAR100 it actually obtains more accurate labels (3K + 25.3K, compared to the 10K used in the semi-supervised baseline), but the obtained correct guesses do not contribute much to training. This issue becomes more obvious on Mini-ImageNet and ImageNet since they have weaker initial models and more incorrectly guessed images. We attribute this to the fact that these samples contribute little new knowledge to training, because (i) the model has already learned how to classify them, and (ii) they are relatively easy compared to the incorrectly predicted ones. Therefore, it is crucial for one-bit supervision to make use of negative labels.
We next investigate negative label suppression (NLS), an approach used to extract knowledge from incorrect guesses. As shown in Table 2, significant improvement is brought by simply suppressing the score of the incorrect class for each element in D O-. In particular, this brings 4.37%, 5.86%, and 4.76% accuracy gains on CIFAR100, Mini-ImageNet, and ImageNet respectively, compared to the two-stage baseline. This reveals that, although each negative label only filters out one of 100 or 1,000 classes, negative labels can provide important supervision for semi-supervised learning, and the key contribution is to prevent the teacher and student models from arriving at a wrong consensus.
In summary, our approach achieves favorable performance in one-bit supervision with two-stage training and negative label suppression. In particular, under the same bits of supervision, we achieve 4.00%, 4.48%, and 2.24% accuracy gains over the semi-supervised baseline on CIFAR100, Mini-ImageNet, and ImageNet respectively. Hence, this provides a clean verification of the effectiveness of our learning framework, as well as of the multi-stage training algorithm.
Table 2. Comparison of accuracy (%) to our baseline, Mean-Teacher [60], and some state-of-the-art semi-supervised learning algorithms, which corresponds to the main framework of our approach. On all datasets, we report the top-1 accuracy. In our multi-stage training process, we report the accuracy of the initial stage (using Mean-Teacher for semi-supervised learning) as well as after each one-bit supervision stage. The discussion of using one or two stages is in Section 5.
Though the experiments are only conducted on top of Mean-Teacher, we believe that this pipeline can be generalized to other semi-supervised approaches as well as to other network backbones." }, { "figure_ref": [], "heading": "Number of Stages and Guessing Strategies", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Next, we conduct ablation studies for one-bit supervision on the number of stages used, the size of D S , the sampling strategy on D O , etc. In the following, we investigate (i) the split of the quota between two stages and (ii) using more training stages. For (i), Table 3 shows four options of assigning the 47K quota to two stages. One can observe that the accuracy drops consistently when the first stage uses either too much or too little of the quota.
Intuitively, both cases will push the training paradigm towards one-stage training which is less efficient in one-bit supervision. This shows the importance of making a balanced schedule. For (ii), a three-stage training is performed on CIFAR100. We split the quota uniformly into three stages which follow the conclusions of (i), and each stage has 15K, 17K, and 15K guesses, from the first to the last, respectively. The final test accuracy is 74.72%, comparable to 73.76% of two-stage training. The accuracy gain brought by three-stage training is around 1%, which is considerable but much smaller than 2.63% (two-stage training over one-stage training). Considering the tradeoff between accuracy and computational costs, we use two-stage training with an appropriate quota over two stages." }, { "figure_ref": [], "heading": "5.3.2", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "Analysis on the Size of D S . We study the impact of different sizes of D S , i.e., the number of full-bit labels used in the initial. Experiments are conducted on CIFAR100 and Mini-ImageNet, and the size is adjusted to 1K, 3K (as in main experiments), 5K, and 10K (the baseline setting without one-bit supervision); on ImageNet, the corresponding numbers are 10K, 30K (as in main experiments), 50K, and 128K (the baseline setting), respectively. The results in Table 4 show that it is best to keep a proper amount (e.g., 30%-50%) of initial labels and exchange the remaining quota to one-bit supervision. When the part of full-bit supervision is too small, the initial model may be too weak to guess enough positive labels; when it is too large, the advantage of one-bit supervision becomes small and the algorithm degenerates into a regular semi-supervised learning process. This conclusion reveals that the optimal solution is to make a balanced schedule for using supervision, including assigning the quota between the same or different types of supervision forms. However, when the initial model is not strong enough on the dataset, this strategy can harm the training, e.g., on" }, { "figure_ref": [], "heading": "Analysis on", "publication_ref": [], "table_ref": [], "text": "Mini-ImageNet, the accuracy drops about 3%. Hence, we introduce self-supervised learning to enhance the initial model, to achieve success in the combination of one-bit supervision and hard sample mining.\nIn summary, in the setting of one-bit supervision, developing an efficient sampling strategy is important since maximal information can be extracted from the fixed quota of querying. Some heuristic strategies are presented including using multiple stages, and performing uniform sampling. Though favorite performance is achieved, we believe that more efficient strategies still exist." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Results with Strategies of Class Balance", "publication_ref": [], "table_ref": [], "text": "In the following, we investigate the developed class balance strategies. Since one-bit annotation is based on model predictions, the pseudo labels can bring class-imbalance issues and this may have a negative impact on training. We do some experiments to reveal this issue in training and the results are shown in Figure 2. Here we summarize the number of samples in each predicted class and classify them into three groups. Ideally, every class should have around the average number of samples (corresponding to group 1), and class imbalance happens if it is less or more than that number (corresponding to group 0 and group 2). 
From Figure 2 we can observe that this issue is obvious at stage 0 on all three datasets. This reflects the necessity of taking strategies to relieve the issue of class imbalance, as done in our approach. As a result, the number of samples in group 1 increases stage by stage, i.e., and the class imbalance is eased with the training. We also compare the results of training with/without CB on ImageNet, and the number of samples in group 1 is 33,930 and 24,678 respectively for stage 2, which demonstrate its effectiveness. For the model performance, as shown in the last row of Table 2, the proposed strategies improve the basic framework of one-bit supervision by 0.13%, 1.28%, and 4.70% accuracy gains respectively on CIFAR100, Mini-ImageNet, and ImageNet, when compared to the two-stage framework with NLS.\nIn addition, we notice that the gain on ImageNet is obviously larger than that on CIFAR100 and Mini-ImageNet. This is attributed to that class imbalance has a greater impact on the large-scale datasets, because (i) it has more classes (1,000 vs 100) and (ii) it has more candidate images to be annotated (1,276K vs 66.9K bits). The first means that it is more difficult for the model to predict evenly for all classes. The second means that the large number of samples can aggravate class imbalance. Hence, taking strategies to alleviate this issue on ImageNet brings more improvement. We maintain three groups to category the classes, namely group 0 (the summation of number of samples in which class that less than 𝜇 -10), group 1 (the numbers that between 𝜇 -10 and𝜇 +10) and group 2 (the numbers that more than 𝜇 +10), where 𝜇 = 𝑁 𝑣𝑎𝑙 /𝐶 and 𝑁 𝑣𝑎𝑙 is the number of samples in validation set." }, { "figure_ref": [ "fig_1" ], "heading": "Results with Self-Supervised Pre-Training", "publication_ref": [], "table_ref": [ "tab_7", "tab_1", "tab_7" ], "text": "Here we show the experimental results of the framework incorporating SSL with one-bit supervision, which refers to section 3.4. The results of using unsupervised pre-training on ImageNet are listed in Table 5. Firstly we compare the results of training with and without pre-training. One can observe that, 15.14% accuracy gain, a remarkable improvement is brought by incorporating self-supervised learning with one-bit supervision, which shows the potential of this new training framework. The self-supervised algorithm benefits the initial model largely so that more correct guesses are obtained, and then the model of the next stage can be strengthened further. Compared to the baseline which uses 10% labels to fine-tune, our approach achieves 1.49% accuracy gain under the same bits of supervision (following the setting in Table 1). This further verifies the superiority of one-bit supervision when combined with self-supervised learning.\nSecondly, we investigate the impact of hard sampling (HS). From Thirdly we investigate the strategies of class balance for this new framework. As the discussions in Section 5.4, alleviating the issue of class imbalance is important for datasets with more classes. The subfigure (d) in Figure 2 shows that class imbalance still exists in the model initialized with pre-trained weights. The results in Table 5 show that using CB brings 0.66% accuracy gain. Both the gain achieved and the distribution improvement in subfigure (d) verify that CB is effective in relieving class imbalance issues for this framework. 
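As a rough sketch of the class-balance (CB) selection discussed above (illustrative only; the helper below and its arguments are our own naming, not the released implementation), samples are bucketed by their predicted class, ranked by a per-sample importance score, and the same number is drawn from every class:

```python
from collections import defaultdict

def class_balanced_select(sample_ids, pred_classes, scores, per_class):
    """Pick `per_class` samples from each predicted class, highest score first."""
    buckets = defaultdict(list)
    for sid, cls, score in zip(sample_ids, pred_classes, scores):
        buckets[cls].append((score, sid))
    selected = []
    for cls, items in buckets.items():
        items.sort(reverse=True)                  # most important first
        selected.extend(sid for _, sid in items[:per_class])
    return selected
```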
Finally, we underline that this new framework achieves 76.37% top-1 accuracy, which is superior to many state-of-the-art semi-supervised approaches and comparable to the result of a standard supervised ResNet-50 trained using all labels (76.5%). This shows the success of the developed framework which combines self-supervised learning with one-bit supervision." }, { "figure_ref": [], "heading": "Results with Unsupervised Domain Adaptation", "publication_ref": [], "table_ref": [], "text": "This part reports the performance of combining UDA with one-bit supervision, as described in Section 3.5. We evaluate this new framework, which trains without using initial full-bit labels, on DomainNet, and two adaptation experiments are conducted, namely Clip to Real and Quickdraw to Real. The results are listed in Table 6. We can observe that, for Clip→Real, the two-stage training of one-bit supervision without using source data achieves 83.70% accuracy, using 18.22% of the supervision and reaching 99.48% of the performance of full-supervised training (84.14%). When combined with active learning, the accuracy becomes 81.57%, which means using only 11.86% of the supervision and achieving 96.95% of the performance of full-supervised training. These results reveal the superiority of one-bit supervision in the efficient utilization of supervision information. Also, the framework of pure one-bit supervision via UDA enlarges the advantage of one-bit annotation. In addition, we observe that using source data brings a negative impact in the later training stages, e.g., it achieves 77.05% accuracy when using the same amount of supervision as the variant without source data. We attribute this to the fact that one-bit annotation already brings a certain amount of supervision to the target domain, and continuing to use source data can degrade generalization performance.
Table 6. The results for experiments that involve UDA on DomainNet, which corresponds to the extended framework with UDA. Besides the upper bound of full-supervised training using all labels, we list the results of three methods, which respectively are one-bit supervision with source data, without source data, and with active learning, as well as the bits of supervision they used. These experiments are conducted to verify the superiority of our approach in annotation saving.
For the Quickdraw→Real experiments, the result of incorporating active learning is 82.37%, which is also very close to the full-supervised result, and it uses 17.79% of the supervision. Also, using source data achieves a lower accuracy (65.68%) when using the same bits of supervision. These two experiments on DomainNet verify the advantage of the proposed framework in annotation saving." }, { "figure_ref": [], "heading": "CONCLUSION AND FUTURE WORKS", "publication_ref": [ "b44", "b65" ], "table_ref": [], "text": "In this paper, we propose a new learning methodology named one-bit supervision. Compared to conventional approaches which need to annotate the correct label for each image, our system annotates an image by answering a yes-or-no question about the guess made by a trained model. A multi-stage training framework is designed to acquire more correct guesses. Meanwhile, we propose a method of negative label suppression to utilize incorrect guesses.
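As a rough illustration of the negative label suppression idea (the exact formulation is given in Section 3.3; the snippet below is our own simplified interpretation), one can zero out the class that is known to be wrong in the teacher's prediction before computing the consistency target, so the student is never pulled towards the rejected class:

```python
import torch

def suppress_negative_label(teacher_probs, wrong_class):
    """teacher_probs: (N, C) softmax outputs; wrong_class: (N,) class each sample is known NOT to be."""
    probs = teacher_probs.clone()
    probs[torch.arange(probs.size(0)), wrong_class] = 0.0   # suppress the rejected class
    return probs / probs.sum(dim=1, keepdim=True)           # renormalized consistency target
```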
We provide mathematical foundations for the proposed approach from three aspects, namely one-bit annotation is more efficient than full-bit annotation in most cases, the solution to one-bit supervision is to query by the class with the largest predicted probability, and when the predicted probability is large enough mining hard examples can improve our approach.\nExperiments on three popular benchmarks verify that the basic framework outperforms semi-supervised learning under the same bits of supervision. To extend the basic approach, we design two new frameworks by incorporating it with self-supervised learning and unsupervised domain adaptation. The first benefits from active learning and achieves remarkable performance on ImageNet. Also, two strategies are used to alleviate class imbalance in training. The second conducts pure one-bit annotation on the target dataset and enjoys the superiority in annotations saving, which is evaluated on DomainNet.\nOne-bit supervision is a new learning paradigm that leaves a few open problems. For example, we have investigated one-bit supervision in image classification, and it is interesting to extend this framework to other vision tasks such as object detection and semantic segmentation. This is related to some prior efforts such as [45,66]. For detection, the cost will be much lower if the labeler just needs to annotate whether a detected bounding box is correct (e.g., has an IOU no lower than a given threshold to a ground-truth object); for segmentation, the labeler is given the segmentation map and determines whether each instance or a background region has a satisfying IOU. These problems are also more challenging than image classification but they have higher values in real-world applications." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work was supported by the National Key Research and Development Program of China under grant 2019YFA0706200, 2018AAA0102002, and in part by the National Natural Science Foundation of China under grant 61932009." } ]
This paper presents one-bit supervision, a novel setting of learning with fewer labels, for image classification. Instead of training model using the accurate label of each sample, our setting requires the model to interact with the system by predicting the class label of each sample and learn from the answer whether the guess is correct, which provides one bit (yes or no) of information. An intriguing property of the setting is that the burden of annotation largely alleviates in comparison to offering the accurate label. There are two keys to one-bit supervision, which are (i) improving the guess accuracy and (ii) making good use of the incorrect guesses. To achieve these goals, we propose a multi-stage training paradigm and incorporate negative label suppression into an off-the-shelf semi-supervised learning algorithm. Theoretical analysis shows that one-bit annotation is more efficient than full-bit annotation in most cases and gives the conditions of combining our approach with active learning. Inspired by this, we further integrate the one-bit supervision framework into the self-supervised learning algorithm which yields an even more efficient training schedule. Different from training from scratch, when self-supervised learning is used for initialization, both hard example mining and class balance are verified effective in boosting the learning performance. However, these two frameworks still need full-bit labels in the initial stage. To cast off this burden, we utilize unsupervised domain adaptation to train the initial model and conduct pure one-bit annotations on the target dataset. In multiple benchmarks, the learning efficiency of the proposed approach surpasses that using full-bit, semi-supervised supervision.
One-bit Supervision for Image Classification: Problem, Solution, and Beyond
[ { "figure_caption": "Intuitively, using more (fully or weakly) labeled training samples will improve the accuracy of a model. Considering that each image in D O can only be guessed once, it is straightforward to let the training procedure be partitioned into several stages. Each stage makes a prediction on a part of D O and then uses the results to enhance the model. This makes a generalized training algorithm as Figure1illustrated. We train the initial model using a semi-supervised algorithm, with D S as the labeled training set and D O ∪ D U as the unlabeled reference set. An off-the-shelf semi-supervised algorithm, Mean-Teacher[60], is used to utilize the knowledge in the reference set. This makes us have a reasonable model to make predictions on D O .", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Theorem 4. 2 .2H[𝑦 𝑐 |x,D 𝑆 ] H[y|x,D 𝑆 ] increases to 1 as 𝑝 𝑐 increases to 1. Finally, Theorem (4.2) gives a definition for when it can be effective to query at the class level for the mined hard examples. The proof of this theorem is omitted for its simplicity. Notably, H[𝑦 𝑐 |x,D 𝑆 ] H[y|x,D 𝑆 ] → 1 represents the entropy production brought by one-bit annotation increases to approach which full-bit annotations brings. That being said, annotating the hard examples in a one-bit setting can be effective when 𝑝 𝑐 is large enough. More important, the certainty of the model also decides the holding of Theorem (4.2), because a weak model can make a wrong prediction with high confidence. This inspires us to increase 𝑝 𝑐 by enhancing the initial model without using extra labels.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "5. 3 . 131Analysis on Number of Training Stages. We compare the experiments of one-stage training and two-stage training. Specifically, the former uses up the quota of one-bit annotations at one time, and the latter split the quota into two parts to first train an intermediate model and then iterate to obtain the final model. Table 2 shows the results of three benchmarks. It is clear that the advantage of two-stage training is that more positive labels can be found by training a stronger intermediate model. For one-stage training, the numbers of correct guesses are around 23.2K, 9.8K, and 470K, respectively on CIFAR100, Mini-ImageNet, and ImageNet. While using two-stage training, these numbers become 25.3K, 12.2K, and 475K. As a result, compared with one-stage training, the two-stage training boosts the accuracy of the final model by 2.63%, 7.24%, and 3.46% on three datasets.", "figure_data": "", "figure_id": "fig_2", "figure_label": "31", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig.2. The class distribution in three training stages on CIFAR100, Mini-ImageNet, and ImageNet, including that using unsupervised pre-training on ImageNet. The results are obtained by using the trained model of each stage to predict labels for the validation set. We maintain three groups to category the classes, namely group 0 (the summation of number of samples in which class that less than", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. The comparison between the top-1 prediction scores of the hard examples on ImageNet which are made by models with/without pre-training. 
The used 10,000 samples are selected randomly from the hard samples mined by the model without pre-training.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. The selected hard examples of ImageNet and their predicted labels made by the model training with/without unsupervised pre-training. The first line of the text is the predicted labels of the model without pre-training, and the second line is that of the model with pre-training. For each image, the red text indicates the correctly guessed label.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "The data split for semi-supervised and one-bit-supervised approaches, where we compare the total numbers of supervision bits. The other data splits are investigated in Section 5.3. ImageNet and 60 epochs for ImageNet. As in[60], we compute the consistency loss using the mean square error in each stage. The consistency parameter is 1,000 for CIFAR100 and 100 for Mini-ImageNet and ImageNet. We simply follow the original implementation for other hyper-parameters, except adjusting the batch size to fit our hardware (e.g., eight NVIDIA Tesla-V100 GPUs for ImageNet experiments).", "figure_data": "semi-supervisedone-bit-supervisedDataset𝐶log 2 𝐶|D|D S ′# of bitsD SD O# of bitsCIFAR1001006.643950K10K66.4K3K47K66.9KMini-ImageNet1006.643950K10K66.4K3K47K66.9KImageNet1,0009.96581281K128K1276K30K977K1276KDomainNet3458.4305120,906120,9061019.3K0120,906-Mini-", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "3. NLS indicates negative label suppression (see Section 3.3).CB represents the strategies of class balance.", "figure_data": "MethodCIFAR100Mini-ImageNetImageNetΠ-Model [32]56.57 (ConvNet-13)--DCT [49]61.23 (ConvNet-13)-53.50 (ResNet-18)LPDSSL [27]64.08 (ConvNet-13)42.65 (ResNet-18)-Mean Teacher [60]69.76 (ResNet-26)41.06 (ResNet-50)58.16 (ResNet-50)Ours (1-stage base)51.47 → 66.2622.36 → 35.8847.83 → 54.46+NLS51.47 → 71.1322.36 → 38.3047.83 → 58.52Ours (2-stage base)51.47 → 64.83 → 69.3922.36 → 33.97 → 39.6847.83 → 54.04 → 55.64+NLS51.47 → 67.82 → 73.7622.36 → 37.92 → 45.5447.83 → 57.44 → 60.40+CB51.47 → 68.14 → 73.8922.36 → 38.17 → 46.8247.83 → 59.41 → 65.10", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Accuracy (%) of using different partitions of quota in the two-stage training process.", "figure_data": "quota split (stage1/stage2)CIFAR100Mini-ImageNet10K/37K73.3645.3020K/27K74.1045.4527K/20K73.7645.5437K/10K73.3344.15", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Accuracy (%) of using different numbers of labeled samples for the three datasets.", "figure_data": "# labels per class in D S103050CIFAR10065.0673.7673.90Mini-ImageNet34.8545.5445.64ImageNet55.4260.4061.03", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Strategies of Sampling. We investigate the strategies of selecting samples for querying by doing experiments based on a two-stage training framework. By connecting to active learning, and measuring the difficulty of each sample using the top-ranked score after softmax, we investigate two sampling strategies, namely easy sampling (with the highest scores) and hard sampling (with the lowest scores). 
Notably, the guessing accuracy can be impacted heavily by both strategies, e.g., on CIFAR100, comparing to 25.3K correct guesses obtained by random sampling, easy selection, and hard selection lead to different numbers of 30.9K and 18.0K. The final accuracy is slightly changed from 73.76% to 74.23% and 74.96% respectively. However, on Mini-ImageNet, the same operation causes the accuracy to drop from 45.54% to 44.22% and 42.57% respectively. That being said, the amounts of positive labels produced by easy selection mostly are easy samples, and they can't deliver much knowledge to the model; meanwhile, the hard selection strategy mines informative labels, but produces fewer positive labels. Hence, hard selection can benefit training when the dataset is relatively easy or the initial model is strong enough, e.g., the accuracy is boosted by over 1% on CIFAR100.", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Table 5 we can observe that, using HS brings 4.43% accuracy gain for our approach without pre-training. Though HS brings gains for the experiment without pre-training, we argue that it can not be always effective, as the discussions in Section 5.3. Hence, if the initial model is too weak to acquire enough positive labels, HS can fail. However, introducing unsupervised pre-training can address this problem by assisting in building a strong initial model. Just as the results show, using HS achieves 75.71% accuracy, bringing 0.17% gains for the experiments with pre-training. Considering it is based on such a high baseline, this is a significant improvement. Figure3shows the top-1 prediction scores (𝑝 𝑐 ) on the selected hard samples made by the model with and without pre-training. It reveals that the model with pre-training obtains higher scores than that without pre-training in", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The results for experiments using unsupervised pre-training on ImageNet, which corresponds to the extended framework with SSL. The upper lists two SOTA semi-supervised methods UDA and FixMatch (using 10% labels). and the unsupervised pre-trained model HSA (fine-tune on 10% labels). The middle shows the results of our approach without unsupervised pre-training. The below shows the results using unsupervised pre-training. Here, PT, HS, and CB indicate pre-training, hard sampling, and class balancing, respectively.", "figure_data": "MethodPTHSCBImageNetUDA [63]---68.80FixMatch [57]---71.50HSA [65]---74.05Ours47.83 → 57.44 → 60.40Ours✓47.83 → 59.33 → 64.83Ours✓47.83 → 59.41 → 65.10Ours✓70.51 → 73.95 → 75.54Ours✓✓70.51 → 74.33 → 75.71Ours✓✓✓70.51 → 75.31 → 76.37", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "→ 65.59 → 80.02 → 82.37 For the experiments Quickdraw→Real, a much harder transfer task that provides a weaker initial model, we conduct a three-stage training framework. Basically, similar observations can be obtained from these results. Firstly, training without source data achieves 83.48% accuracy, 99.22% of performance of the full-supervised training, by using 23.34% of supervision. 
The result of incorporating active learning is 82.37%, which is also very close to the full-supervised results", "figure_data": "Dataset# of SupervisionAccuracy (%)Full-supervisedReal100%84.14Ours (w/source)18.22%54.53 → 70.69 → 77.05Ours (wo/source)Clip → Real18.22%54.53 → 77.13 → 83.70Ours + AL11.86%54.53 → 65.26 → 81.57Ours (w/source)23.34%19.06 → 52.62 → 61.99 → 65.68Ours (wo/source)Quickdraw → Real23.34%19.06 → 70.39 → 83.18 → 83.48Ours + AL17.79%19.06", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" } ]
Hengtong Hu
[ { "authors": "Les E Atlas; David A Cohn; Richard E Ladner", "journal": "Advances in neural information processing systems", "ref_id": "b0", "title": "Training connectionist networks with queries and selective sampling", "year": "1990" }, { "authors": "David Berthelot; Nicholas Carlini; Ian Goodfellow; Nicolas Papernot; Avital Oliver; Colin A Raffel", "journal": "", "ref_id": "b1", "title": "Mixmatch: A holistic approach to semi-supervised learning", "year": "2019" }, { "authors": "Paola Cascante-Bonilla; Fuwen Tan; Yanjun Qi; Vicente Ordonez", "journal": "", "ref_id": "b2", "title": "Curriculum Labeling: Revisiting Pseudo-Labeling for Semi-Supervised Learning", "year": "2020" }, { "authors": "Chao Chen; Zhihang Fu; Zhihong Chen; Sheng Jin; Zhaowei Cheng; Xinyu Jin; Xian-Sheng Hua", "journal": "", "ref_id": "b3", "title": "Homm: Higher-order moment matching for unsupervised domain adaptation", "year": "2020" }, { "authors": "John Chen; Vatsal Shah; Anastasios Kyrillidis", "journal": "PMLR", "ref_id": "b4", "title": "Negative sampling in semi-supervised learning", "year": "2020" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "PMLR", "ref_id": "b5", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Ting Chen; Simon Kornblith; Kevin Swersky; Mohammad Norouzi; Geoffrey Hinton", "journal": "", "ref_id": "b6", "title": "Big self-supervised models are strong semi-supervised learners", "year": "2020" }, { "authors": "Xinlei Chen; Haoqi Fan; Ross Girshick; Kaiming He", "journal": "", "ref_id": "b7", "title": "Improved baselines with momentum contrastive learning", "year": "2020" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "Computer Vision and Pattern Recognition", "ref_id": "b8", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Rob Fergus; Yair Weiss; Antonio Torralba", "journal": "Advances in neural information processing systems", "ref_id": "b9", "title": "Semi-supervised learning in gigantic image collections", "year": "2009" }, { "authors": "Alexander Freytag; Erik Rodner; Joachim Denzler", "journal": "Springer", "ref_id": "b10", "title": "Selecting influential examples: Active learning with expected model output changes", "year": "2014" }, { "authors": "Tommaso Furlanello; Zachary C Lipton; Michael Tschannen; Laurent Itti; Anima Anandkumar", "journal": "", "ref_id": "b11", "title": "Born again neural networks", "year": "2018" }, { "authors": "Yarin Gal; Riashat Islam; Zoubin Ghahramani", "journal": "PMLR", "ref_id": "b12", "title": "Deep bayesian active learning with image data", "year": "2017" }, { "authors": "Mingfei Gao; Zizhao Zhang; Guo Yu; Sercan Ö Arık; Larry S Davis; Tomas Pfister", "journal": "Springer", "ref_id": "b13", "title": "Consistency-based semi-supervised active learning: Towards minimizing labeling cost", "year": "2020" }, { "authors": "Xavier Gastaldi", "journal": "", "ref_id": "b14", "title": "Shake-shake regularization", "year": "2017" }, { "authors": "Spyros Gidaris; Praveer Singh; Nikos Komodakis", "journal": "", "ref_id": "b15", "title": "Unsupervised representation learning by predicting image rotations", "year": "2018" }, { "authors": "Yves Grandvalet; Yoshua Bengio", "journal": "Advances in neural information processing systems", "ref_id": "b16", "title": "Semi-supervised learning by entropy minimization", "year": "2005" }, { "authors": "Matthieu Guillaumin; 
Jakob Verbeek; Cordelia Schmid", "journal": "IEEE", "ref_id": "b17", "title": "Multimodal semi-supervised learning for image classification", "year": "2010" }, { "authors": "Tao Han; Wei-Wei Tu; Yu-Feng Li", "journal": "", "ref_id": "b18", "title": "Explanation Consistency Training: Facilitating Consistency-Based Semi-Supervised Learning with Interpretability", "year": "2021" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b19", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b20", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b21", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Neil Houlsby; Ferenc Huszár; Zoubin Ghahramani; Máté Lengyel", "journal": "", "ref_id": "b22", "title": "Bayesian active learning for classification and preference learning", "year": "2011" }, { "authors": "Hengtong Hu; Lingxi Xie; Zewei Du; Richang Hong; Qi Tian", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b23", "title": "One-bit Supervision for Image Classification", "year": "2020" }, { "authors": "Hengtong Hu; Lingxi Xie; Richang Hong; Qi Tian", "journal": "", "ref_id": "b24", "title": "Creating Something from Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing", "year": "2020" }, { "authors": "Gao Huang; Zhuang Liu; Laurens Van Der Maaten; Kilian Q Weinberger", "journal": "", "ref_id": "b25", "title": "Densely connected convolutional networks", "year": "2017" }, { "authors": "Ahmet Iscen; Giorgos Tolias; Yannis Avrithis; Ondrej Chum", "journal": "", "ref_id": "b26", "title": "Label propagation for deep semi-supervised learning", "year": "2019" }, { "authors": "Youngdong Kim; Junho Yim; Juseung Yun; Junmo Kim", "journal": "", "ref_id": "b27", "title": "Nlnl: Negative learning for noisy labels", "year": "2019" }, { "authors": "Andreas Kirsch; Joost Van Amersfoort; Yarin Gal", "journal": "", "ref_id": "b28", "title": "Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning", "year": "2019" }, { "authors": "Alex Krizhevsky; Geoffrey Hinton", "journal": "", "ref_id": "b29", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Chia-Wen Kuo; Chih-Yao Ma; Jia-Bin Huang; Zsolt Kira", "journal": "Springer", "ref_id": "b30", "title": "Featmatch: Feature-based augmentation for semi-supervised learning", "year": "2020" }, { "authors": "Samuli Laine; Timo Aila", "journal": "", "ref_id": "b31", "title": "Temporal ensembling for semi-supervised learning", "year": "2016" }, { "authors": "Gustav Larsson; Maire Michael; Gregory Shakhnarovich", "journal": "", "ref_id": "b32", "title": "Colorization as a proxy task for visual understanding", "year": "2017" }, { "authors": "Yann Lecun; Yoshua Bengio; Geoffrey Hinton", "journal": "Nature", "ref_id": "b33", "title": "Deep learning", "year": "2015" }, { "authors": "Dong-Hyun Lee", "journal": "ICML", "ref_id": "b34", "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "year": "2013" }, { "authors": "D David; William A Lewis; Gale", "journal": "Springer", "ref_id": "b35", "title": "A sequential algorithm for training text classifiers", "year": "1994" }, { "authors": 
"Shuang Li; Mixue Xie; Fangrui Lv; Chi Harold Liu; Jian Liang; Chen Qin; Wei Li", "journal": "", "ref_id": "b36", "title": "Semantic concentration for domain adaptation", "year": "2021" }, { "authors": "Wenjie Luo; Alex Schwing; Raquel Urtasun", "journal": "", "ref_id": "b37", "title": "Latent structured active learning", "year": "2013" }, { "authors": "Tomasz Malisiewicz; Alyosha Efros", "journal": "", "ref_id": "b38", "title": "Beyond categories: The visual memex model for reasoning about object relationships", "year": "2009" }, { "authors": "Takeru Miyato; Shin-Ichi Maeda; Masanori Koyama; Shin Ishii", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b39", "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "year": "2018" }, { "authors": "Mehdi Noroozi; Paolo Favaro", "journal": "Springer", "ref_id": "b40", "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "year": "2016" }, { "authors": "Mehdi Noroozi; Hamed Pirsiavash; Paolo Favaro", "journal": "", "ref_id": "b41", "title": "Representation learning by learning to count", "year": "2017" }, { "authors": "Mehdi Noroozi; Ananth Vinjimoor; Paolo Favaro; Hamed Pirsiavash", "journal": "", "ref_id": "b42", "title": "Boosting self-supervised learning via knowledge transfer", "year": "2018" }, { "authors": " Dim P Papadopoulos; Frank Jasper Rr Uijlings; Vittorio Keller; Ferrari", "journal": "", "ref_id": "b43", "title": "We don't need no bounding-boxes: Training object class detectors using only human verification", "year": "2016" }, { "authors": " Dim P Papadopoulos; Frank Jasper Rr Uijlings; Vittorio Keller; Ferrari", "journal": "", "ref_id": "b44", "title": "Training object class detectors with click supervision", "year": "2017" }, { "authors": "Deepak Pathak; Ross Girshick; Piotr Dollár; Trevor Darrell; Bharath Hariharan", "journal": "", "ref_id": "b45", "title": "Learning features by watching objects move", "year": "2017" }, { "authors": "Xingchao Peng; Qinxun Bai; Xide Xia; Zijun Huang; Kate Saenko; Bo Wang", "journal": "", "ref_id": "b46", "title": "Moment matching for multi-source domain adaptation", "year": "2019" }, { "authors": "Robert Pinsler; Jonathan Gordon; Eric Nalisnick; José Miguel Hernández-Lobato", "journal": "", "ref_id": "b47", "title": "Bayesian batch active learning as sparse subset approximation", "year": "2019" }, { "authors": "Siyuan Qiao; Wei Shen; Zhishuai Zhang; Bo Wang; Alan Yuille", "journal": "", "ref_id": "b48", "title": "Deep co-training for semi-supervised image recognition", "year": "2018" }, { "authors": "Antti Rasmus; Mathias Berglund; Mikko Honkala; Harri Valpola; Tapani Raiko", "journal": "", "ref_id": "b49", "title": "Semi-supervised learning with ladder networks", "year": "2015" }, { "authors": "Sachin Ravi; Hugo Larochelle", "journal": "", "ref_id": "b50", "title": "Optimization as a model for few-shot learning", "year": "2016" }, { "authors": "Mamshad Nayeem Rizve; Kevin Duarte; Yogesh S Rawat; Mubarak Shah", "journal": "", "ref_id": "b51", "title": "In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning", "year": "2021" }, { "authors": "Adriana Romero; Nicolas Ballas; Samira Ebrahimi Kahou; Antoine Chassang; Carlo Gatta; Yoshua Bengio", "journal": "", "ref_id": "b52", "title": "Fitnets: Hints for thin deep nets", "year": "2014" }, { "authors": "Olga Russakovsky; Jia Deng; Hao Su; Jonathan Krause; Sanjeev 
Satheesh; Sean Ma; Zhiheng Huang; Andrej Karpathy; Aditya Khosla; Michael Bernstein", "journal": "International Journal of Computer Vision", "ref_id": "b53", "title": "Imagenet large scale visual recognition challenge", "year": "2015" }, { "authors": "Ozan Sener; Silvio Savarese", "journal": "", "ref_id": "b54", "title": "Active learning for convolutional neural networks: A core-set approach", "year": "2017" }, { "authors": "Weishi Shi; Qi Yu", "journal": "", "ref_id": "b55", "title": "Integrating Bayesian and Discriminative Sparse Kernel Machines for Multi-class Active Learning", "year": "2019" }, { "authors": "Kihyuk Sohn; David Berthelot; Chun-Liang Li; Zizhao Zhang; Nicholas Carlini; D Ekin; Alex Cubuk; Han Kurakin; Colin Zhang; Raffel", "journal": "", "ref_id": "b56", "title": "Fixmatch: Simplifying semi-supervised learning with consistency and confidence", "year": "2020" }, { "authors": "Yu Sun; Eric Tzeng; Trevor Darrell; Alexei A Efros", "journal": "", "ref_id": "b57", "title": "Unsupervised domain adaptation through self-supervision", "year": "2019" }, { "authors": "Yu Sun; Xiaolong Wang; Zhuang Liu; John Miller; Alexei Efros; Moritz Hardt", "journal": "PMLR", "ref_id": "b58", "title": "Test-time training with self-supervision for generalization under distribution shifts", "year": "2020" }, { "authors": "Antti Tarvainen; Harri Valpola", "journal": "", "ref_id": "b59", "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "year": "2017" }, { "authors": "Eric Tzeng; Judy Hoffman; Kate Saenko; Trevor Darrell", "journal": "", "ref_id": "b60", "title": "Adversarial discriminative domain adaptation", "year": "2017" }, { "authors": "Keze Wang; Dongyu Zhang; Ya Li; Ruimao Zhang; Liang Lin", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b61", "title": "Cost-effective active learning for deep image classification", "year": "2016" }, { "authors": "Qizhe Xie; Zihang Dai; Eduard Hovy; Minh-Thang Luong; Quoc V Le", "journal": "", "ref_id": "b62", "title": "Unsupervised data augmentation for consistency training", "year": "2019" }, { "authors": "Qizhe Xie; Minh-Thang Luong; Eduard Hovy; Quoc V Le", "journal": "", "ref_id": "b63", "title": "Self-training with noisy student improves imagenet classification", "year": "2020" }, { "authors": "Haohang Xu; Xiaopeng Zhang; Hao Li; Lingxi Xie; Hongkai Xiong; Qi Tian", "journal": "", "ref_id": "b64", "title": "Hierarchical Semantic Aggregation for Contrastive Representation Learning", "year": "2020" }, { "authors": "Ning Xu; Brian Price; Scott Cohen; Jimei Yang; Thomas S Huang", "journal": "", "ref_id": "b65", "title": "Deep interactive object selection", "year": "2016" }, { "authors": "Chenglin Yang; Lingxi Xie; Siyuan Qiao; Alan Yuille", "journal": "", "ref_id": "b66", "title": "Knowledge distillation in generations: More tolerant teachers educate better students", "year": "2018" }, { "authors": "Donggeun Yoo; In So Kweon", "journal": "", "ref_id": "b67", "title": "Learning loss for active learning", "year": "2019" }, { "authors": "Bing Yu; Jingfeng Wu; Jinwen Ma; Zhanxing Zhu", "journal": "", "ref_id": "b68", "title": "Tangent-normal adversarial regularization for semi-supervised learning", "year": "2019" }, { "authors": "Sergey Zagoruyko; Nikos Komodakis", "journal": "", "ref_id": "b69", "title": "Wide residual networks", "year": "2016" }, { "authors": "Werner Zellinger; Thomas Grubinger; Edwin Lughofer; Thomas Natschläger; Susanne 
Saminger-Platz", "journal": "", "ref_id": "b70", "title": "Central moment discrepancy (cmd) for domain-invariant representation learning", "year": "2017" }, { "authors": "Xiaohua Zhai; Avital Oliver; Alexander Kolesnikov; Lucas Beyer", "journal": "", "ref_id": "b71", "title": "S4l: Self-supervised semi-supervised learning", "year": "2019" }, { "authors": "Liheng Zhang; Guo-Jun Qi", "journal": "", "ref_id": "b72", "title": "Wcp: Worst-case perturbations for semi-supervised deep learning", "year": "2020" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros", "journal": "Springer", "ref_id": "b73", "title": "Colorful image colorization", "year": "2016" }, { "authors": "Zhuotun Zhu; Lingxi Xie; Alan L Yuille", "journal": "", "ref_id": "b74", "title": "Object Recognition with and without Objects", "year": "2017" } ]
[ { "formula_coordinates": [ 7, 73.62, 98.45, 48.72, 9.51 ], "formula_id": "formula_0", "formula_text": "D O 𝑡 from D R" }, { "formula_coordinates": [ 7, 267.63, 208.04, 68.41, 9.51 ], "formula_id": "formula_1", "formula_text": "D R 𝑡 = D O- 𝑡 ∪ D U 𝑡 ." }, { "formula_coordinates": [ 7, 217.58, 416.85, 284.8, 28.69 ], "formula_id": "formula_2", "formula_text": "L (𝜽 ) = E x∈ D S ∪D O+ ℓ y ★ 𝑛 , f (x; 𝜽 ) + 𝜆 • E x∈ D f x; 𝜽 ′ -f (x; 𝜽 ) 2 ,(1)" }, { "formula_coordinates": [ 8, 110.34, 197.76, 108.23, 9.88 ], "formula_id": "formula_3", "formula_text": "D O = D O 0 ∪. . .∪D O 𝑡 ∪. . .∪D O" }, { "formula_coordinates": [ 8, 230.19, 255.73, 188.36, 30.52 ], "formula_id": "formula_4", "formula_text": "L (𝜽 ) = E x∈ D S ∪D O+ 𝑤 𝑐 • ℓ y ★ 𝑛 , f (x; 𝜽 ) + 𝜇 • E x∈ D O-1 -y 𝐵 𝑛 • log(1 -𝜎 (f (x; 𝜽 )))." }, { "formula_coordinates": [ 8, 532.76, 269.42, 6.34, 4.09 ], "formula_id": "formula_5", "formula_text": ")2" }, { "formula_coordinates": [ 9, 238.43, 508.02, 263.95, 13.04 ], "formula_id": "formula_6", "formula_text": "x * = argmax x∈ D O 𝛼 (x, M)(3)" }, { "formula_coordinates": [ 9, 204.82, 573.47, 297.56, 37.66 ], "formula_id": "formula_7", "formula_text": "H y | x, D S - ∑︁ 𝑐 𝑝 𝑦 𝑐 = 1 | x, D S log 𝑝 𝑦 𝑐 = 1 | x, D S .(4)" }, { "formula_coordinates": [ 10, 276.66, 112.35, 262.43, 9.96 ], "formula_id": "formula_8", "formula_text": "argmax 𝑐 ∈ [1,...,𝐶 ] 𝛼 (x, M) .(5)" }, { "formula_coordinates": [ 10, 237.66, 147.91, 301.44, 50.33 ], "formula_id": "formula_9", "formula_text": "H 𝑦 𝑐 | x, D S -𝑝 𝑦 𝑐 | x, D S log 𝑝 𝑦 𝑐 | x, D S -1 -𝑝 𝑦 𝑐 | x, D S log 1 -𝑝 𝑦 𝑐 | x, D S .(6)" }, { "formula_coordinates": [ 10, 246.23, 232.72, 292.86, 8.59 ], "formula_id": "formula_10", "formula_text": "H (𝑝 𝑐 ) = -𝑝 𝑐 log 𝑝 𝑐 -(1 -𝑝 𝑐 ) log (1 -𝑝 𝑐 ) .(7)" }, { "formula_coordinates": [ 10, 174.13, 314.29, 364.97, 39.76 ], "formula_id": "formula_11", "formula_text": "Suppose 𝐶 ⩾ 2, 𝜑 (𝐶) is a function of 𝐶, ∃ 𝜑 (𝐶) ⩽ 1 2 such that ∀𝑥 ∈ D 𝑈 , if 𝑝 𝑐 ⩾ 𝜑 (𝐶), H[𝑦 𝑐 |x, D 𝑆 ] ⩾ H[y|x, D 𝑆 ] log 2 𝐶(8)" }, { "formula_coordinates": [ 10, 229.19, 370.47, 309.91, 58.91 ], "formula_id": "formula_12", "formula_text": "H[y|x, D 𝑆 ] = -𝑝 𝑐 log 𝑝 𝑐 - ∑︁ 𝑖≠𝑐 𝑝 𝑖 log 𝑝 𝑖 ⩽ -𝑝 𝑐 log 𝑝 𝑐 -(𝐶 -1) 1 -𝑝 𝑐 𝐶 -1 log 1 -𝑝 𝑐 𝐶 -1 = H[𝑦 𝑐 |x, D 𝑆 ] + (1 -𝑝 𝑐 ) log(𝐶 -1).(9)" }, { "formula_coordinates": [ 10, 211.83, 435.74, 71.53, 15.49 ], "formula_id": "formula_13", "formula_text": "H(𝑝 𝑐 )+(1-𝑝 𝑐 ) log(𝐶-1) log 2 𝐶" }, { "formula_coordinates": [ 10, 110.16, 458.24, 428.94, 85.29 ], "formula_id": "formula_14", "formula_text": "-𝑝 𝑐 1 -𝑝 𝑐 log 𝑝 𝑐 -log(1 -𝑝 𝑐 ) ⩾ log(𝐶 -1) log 𝐶 -1 (10) Let 𝑓 (𝑝) ≜ -𝑝 1-𝑝 log 𝑝 -log(1-𝑝), we have 𝑓 ′ (𝑝) = -log 𝑝 (𝑝 -1) 2 ⩾ 0 and 𝑓 ( 1 2 ) = 2 ⩾ log(𝐶 -1) log 𝐶 -1 ⇐⇒ 𝐶 2 -4𝐶 + 4 ⩾ 0 always holds. Hence, ∃ 𝜑 (𝐶) ⩽ 1 2 , ∀𝑝 𝑐 > 𝜑 (𝐶), H(𝑝 𝑐 ) ⩾ H(𝑝 𝑐 ) + (1 -𝑝 𝑐 ) log(𝐶 -1) log 2 𝐶 ⩾ H[y|x, D 𝑆 ] log 2 𝐶(11)" }, { "formula_coordinates": [ 10, 282.3, 643.87, 256.79, 9.58 ], "formula_id": "formula_15", "formula_text": "argmax 𝑐 𝑝 𝑦 𝑐 | x, D S .(12)" }, { "formula_coordinates": [ 11, 73.26, 115.82, 86.92, 11.08 ], "formula_id": "formula_16", "formula_text": "𝑝 𝑐 is 𝜕H 𝜕𝑝 𝑐 = log 1 𝑝 𝑐 -1 ." } ]
2023-12-05
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b24", "b2", "b10", "b17", "b8", "b31", "b33", "b43", "b45", "b46", "b8", "b24", "b17", "b45", "b2", "b43", "b46", "b4", "b43", "b46", "b20", "b3", "b1", "b2", "b0", "b4", "b9", "b44" ], "table_ref": [], "text": "In Neural Radiance Fields (NeRFs) a model is trained to learn a 3D representation from which sensor realistic data can be rendered from new viewpoints [25]. Such techniques have been shown to be useful for a multitude of applications, such as view synthesis [3], generative modeling [11], or pose and shape estimation [42].\nAutonomous Driving (AD) is a field where NeRFs may become very useful. By creating editable digital clones of traffic scenes, safety-critical scenarios can be explored in a scalable manner and without risking physical damage. For example, practitioners can investigate the behavior of the system for harsh braking on a highway or aggressive merging in city traffic. Furthermore, a NeRF-powered closedloop simulator can be used for the targeted generation of corner-case training data.\nMultiple works have applied NeRFs to automotive data [9, 18,29,32,34,44,46,47]. Neural Scene Graphs (NSG) [29] extend the original NeRF model [25] to dynamic automotive sequences by dividing the scene into a static background and a set of rigid dynamic actors with known location and extent, learning separate NeRFs for each. This enables editing the trajectories of both the egovehicle and all actors in the scene. The approach can be further improved by including semantic segmentation [18] or by using anti-aliased positional embeddings [46]. The latter enables NeRFs to reason about scale [3] which is essential for large-scale scenes. However, common for all these approaches is that they require many hours of training, limiting their applicability for scalable closed-loop simulation or data augmentation.\nMore recent works [44,47] rely on Instant NGP's (iNGP) [27] learnable hash grids for embedding positional information, drastically reducing training and inference time. These methods are fast to train and generate realistic renderings in their respective settings, namely front-facing camera with 360 • lidar. However, their performance in 360 • multicamera settings, which is common in many AD datsets [5,43], is either unexplored [44] or is reported by the authors to be suboptimal [47]. Furthermore, both methods deploy simple lidar models and cannot model ray drop, a phenomenon important for closing the real-to-sim gap [21]. Lastly, using the iNGP positional embedding without antialiasing techniques limits performance, especially for larger scenes [4].\nIn this paper, we present NeuRAD, an editable novel view synthesis (NVS) method. NeuRAD is designed to handle large-scale automotive scenes and to work well with multiple datasets out of the box. We find that modeling important sensor characteristics, such as rolling shutter, lidar ray dropping, and beam divergence, is essential for sensor-realistic renderings and learning accurate geometries. Nonetheless, our model features a simple network architecture, where static and dynamic elements are discerned only by their positional embeddings, making it a natural extension of recent methods to AD data. 
We verify NeuRAD's generalizability and achieve state-of-the-art (SoTA) performance across five automotive datasets without any datasetspecific tuning.\nThe contributions are summarized as follows.\n(1) Our method is the first to combine lidar sensor modeling with the ability to handle 360 • camera rigs in a unified way, extending the applicability of NeRF-based methods for dynamic AD data. (2) We propose using a single network to model dynamic scenes, where dynamics and statics are separated only by their positional embeddings. (3) We propose simple, yet effective methods for modeling multiple key sensor characteristics such as rolling shutter, beam divergence, and ray dropping, and highlight their effect on performance. (4) Extensive evaluation using five popular AD datasets [1,5,10,43,45] shows that our method matches or outperforms state-of-the-art approaches across the board." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b24", "b5", "b1", "b2", "b3", "b12", "b3", "b2", "b8", "b17", "b45", "b31", "b33", "b43", "b46", "b43", "b32", "b46", "b44", "b8", "b4", "b0", "b3" ], "table_ref": [], "text": "NeRFs: Neural radiance fields [25] is a novel view synthesis method in which a neural network learns an implicit 3D representation from which new images can be rendered. Multiple works [6,8,16,27] address the long training time of the original formulation. Notably, Instant-NGP (iNGP) [27] uses a multiresolution, learnable hash grid to encode positional information rather than NeRFs frequency-based encoding scheme. A different line of work [2][3][4]13] focuses on reducing aliasing effects by embedding pixel frustums instead of extent-free points, where Zip-NeRF [4] combines the anti-aliasing properties of mip-NeRF 360 [3] with the fast hash grid embedding of iNGP [27] by using multisampling and downweighting. Although these works were designed for static scenes and cannot be applied to dynamic sequences, we draw inspiration from Zip-NeRF's anti-aliasing techniques to better model large scenes. NeRFs for automotive data: Accurately simulating data for AD systems is a promising avenue for efficient testing and verification of self-driving vehicles. While gameengine-based methods [7,31] have made a lot of progress, they struggle with scalable asset creation, real-to-sim gap, and diversity. NeRFs' sensor-realistic renderings offer an attractive alternative, and consequently, multiple works have studied how to apply neural rendering techniques to automotive data. Neural Scene Graphs [29], Panoptic Neural Fields (PNF) [18] and Panoptic NeRF [9] all model the background and every actor as multi-layer perceptrons (MLPs), but struggle with large-scale scenes due to the MLPs limited expressiveness. S-NeRF [46] extends mip-NeRF 360 to automotive data similar to NSG by modeling each actor with a separate MLP, but requires day-long training, making it impractical for downstream applications. Block-NeRF [32] and SUDS [34] both focus on city-scale reconstruction. While handling impressive scale, Block-NeRF filters out dynamic objects and only models static backgrounds, and SUDS uses a single network for dynamic actors, removing the possibility of altering actor behavior. NeRFs for closed-loop simulation: Among existing work, two methods [44,47] are the most similar to ours. MARS [44] proposes a modular design where practitioners can mix and match existing NeRF-based methods for rendering dynamic actors and the static background. 
Similar to our work, the implementation is based on Nerfstudio [33] to promote open-source collaboration. Unlike our work, MARS does not natively support lidar point clouds but relies on dense depth maps from either depth completion or mono-depth networks, limiting the ease of application to any dataset. Further, while MARS' semantic segmentation supervision is optional, performance deteriorates when this supervision is not available, especially on real-world data.\nUniSim [47] is a neural sensor simulator, showcasing realistic renderings for PandaSet's [45] front camera and 360 • lidar. The method applies separate hash grid features [27] for modeling the sky, the static background, and each dynamic actor, and uses NSG-style [29] transformations for handling dynamics. For efficiency, the static background is only sampled near lidar points. Further, UniSim renders features from the neural field, rather than RGB, and uses a convolutional neural network (CNN) for upsampling the features and producing the final image. This allows them to reduce the number of sampled rays per image significantly. While efficient, multiple approximations lead to poor performance outside their evaluation protocol. In addition, the lidar occupancy has a limited vertical field of view and fails to capture tall, nearby structures which often becomes evident when using cameras with alternative mounting positions or wider lenses, e.g., nuScenes [5], Argoverse2 [43] or Zenseact Open Dataset (ZOD) [1]. In contrast, our method unifies static and sky modeling and relies on proposal sampling [4] for modeling occupancy anywhere. Further, UniSim's upsampling CNN introduces severe aliasing and model inconsistencies, as camera rays must describe entire RGB patches whereas lidar rays are thin laser beams. In this work, we introduce a novel anti-aliasing strategy that improves performance, with minimal impact on runtime." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Our goal is to learn a representation from which we can generate realistic sensor data where we can change either the pose of the ego vehicle platform, the actors, or both. We assume access to data collected by a moving platform, consisting of posed camera images and lidar point clouds, as well as estimates of the size and pose of any moving actors. To be practically useful, our method needs to perform well in terms of reconstruction error on any major automotive dataset, while keeping training and inference times to a minimum. To this end, we propose NeuRAD, an editable, open source, and performant neural rendering approach; see Fig. 2 for an overview.\nIn the following, we first describe the underlying scene representation and sensor modeling. Next, we cover the internals of our neural field and the decomposition of sequences into static background and dynamic actors. We then present the unique challenges and opportunities of applying neural rendering to AD data and how we address them. Last, we discuss learning strategies." }, { "figure_ref": [], "heading": "Scene representation and sensor modeling", "publication_ref": [ "b3", "b46", "b24", "b46", "b24", "b34", "b38", "b46", "b0", "b29", "b46", "b20", "b13", "b13" ], "table_ref": [], "text": "Neural scene rendering: Building on the recent advancements in novel view synthesis [4,47], we model the world with a neural feature field (NFF), a generalization of NeRFs [25] and similar methods [23]. 
Given a position x and a view direction d, an NFF outputs an implicit geometry s and a feature vector f [47]. The NFF, akin to a NeRF, is utilized for volumetric rendering. However, it accumulates implicit geometry and features rather than density and color [25].\nTo extract features for a ray r(τ) = o + τd, originating from the sensor center o and extending in direction d, we sample N_r points along the ray in 3D space. The feature descriptors of these samples are aggregated using traditional alpha compositing:\nf(r) = \sum_{i=1}^{N_r} w_i f_i, \qquad w_i = \alpha_i \prod_{j=1}^{i-1} (1 - \alpha_j). \qquad (1)\nHere, α_i represents the opacity at the point x_i = o + τ_i d, and w_i the opacity times the accumulated transmittance along the ray up to x_i. Inspired by its success in recovering high-quality geometry [20, 28], we represent the implicit geometry using a signed distance function (SDF) and approximate the opacity as\nα_i = \frac{1}{1 + e^{\beta s_i}}, \qquad (2)\nwhere s_i is the SDF value at x_i and β is a learnable parameter. While more accurate SDF formulations [35,39] can provide better performance, they require gradient calculations for each 3D point, negatively impacting the runtime. Camera modeling: To render an image, we volume render a set of camera rays, generating a feature map F ∈ R^{H_f×W_f×N_f}. As in [47], we then rely on a CNN to render the final image I ∈ R^{H_I×W_I×3}. In practice, the feature map has a lower resolution H_f × W_f than the image H_I × W_I, and we use the CNN for upsampling. This allows us to drastically reduce the number of queried rays.\nLidar modeling: Lidar sensors allow self-driving vehicles to measure the depth and the reflectivity (intensity) of a discrete set of points. They do so by emitting laser beam pulses and measuring the time of flight to determine distance, and the returning power to determine reflectivity. To capture these properties, we model the transmitted pulses from a posed lidar sensor as a set of rays and use volume rendering similar to (1). For a lidar point, we shoot a ray r(τ) = o + τd, where o is the origin of the lidar and d is the normalized direction of the beam. We then find the expected depth D_l of a ray as E[D_l(r)] = \sum_{i=1}^{N_r} w_i τ_i. For predicting intensity, we volume render the ray feature following (1) and pass the feature through a small MLP.\nFigure 2. Overview of our approach. We learn a joint neural feature field for the statics and dynamics of an automotive scene, where the two are discerned only by our actor-aware hash encoding. Points that fall inside actor bounding boxes are transformed to actor-local coordinates and, together with actor index, used to query the 4D hash grid. We decode the volume rendered ray-level features to RGB values using an upsampling CNN, and to ray drop probability and intensity using MLPs.\nIn contrast to previous works incorporating lidar measurements [30,47], we also include rays for laser beams which did not return any points. This phenomenon, known as ray dropping, occurs if the return power has too low amplitude, and is important to model for reducing the sim-to-real gap [21]. Typically, such rays travel far without hitting a surface, or hit surfaces from which the beam bounces off into empty space, e.g., mirrors, glass, or wet road surfaces. Modeling these effects is important for sensor-realistic simulations but, as noted in [14], they are hard to capture with purely physics-based models because they depend on (often undisclosed) details of the low-level sensor detection logic. Therefore, we opt to learn ray dropping from data. 
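For concreteness, the feature compositing in (1)-(2) and the expected lidar depth can be sketched in a few lines of PyTorch-style code. This is a simplified illustration with assumed tensor shapes and names (`sdf`, `feats`, `tau`, `beta`), not an excerpt from the released implementation.

```python
import torch

def render_ray_features(sdf, feats, tau, beta):
    """Composite per-sample features and depth along rays, cf. Eqs. (1)-(2).

    sdf:   (R, N) signed distances s_i at the samples
    feats: (R, N, F) feature vectors f_i at the samples
    tau:   (R, N) sample distances tau_i along each ray
    beta:  scalar (learnable) sharpness parameter
    """
    # Eq. (2): SDF -> opacity, alpha_i = 1 / (1 + exp(beta * s_i)) = sigmoid(-beta * s_i).
    alpha = torch.sigmoid(-beta * sdf)

    # Accumulated transmittance T_i = prod_{j<i} (1 - alpha_j), via an exclusive cumprod.
    ones = torch.ones_like(alpha[..., :1])
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha + 1e-10], dim=-1), dim=-1)[..., :-1]
    weights = alpha * trans                              # w_i in Eq. (1)

    feat = (weights[..., None] * feats).sum(dim=-2)      # f(r), Eq. (1)
    depth = (weights * tau).sum(dim=-1)                  # E[D_l(r)] for lidar rays
    return feat, depth, weights
```

The rendered per-ray feature is what the small intensity MLP above and the ray drop head described next operate on, while the rendered depth is only used for lidar rays.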
Similar to the intensity, we use the rendered ray feature from (1) and pass it through a small MLP to predict the ray drop probability p d (r). Note that unlike [14], we do not model second returns from lidar beams, as this information is not present in the five datasets considered here." }, { "figure_ref": [], "heading": "Extending Neural Feature Fields", "publication_ref": [ "b3", "b17", "b8", "b45", "b46", "b2", "b46" ], "table_ref": [], "text": "In this section, we delve into the specifics of our volumetric scene representation. We begin by extending the Neural Feature Field (NFF) definition to be a learned function (s, f ) = NFF(x, t, d), where x ∈ R 3 are the spatial coordinates, t ∈ R represents time, and d ∈ R 3 indicates the view direction. Importantly, this definition introduces time as an input, which is essential for modeling the dynamic aspects of the scene. Architecture: Our NFF architecture adheres to wellestablished best practices in the NeRF literature [4,27]. Given a position x and time t we query our actor-aware hash encoding. This encoding then feeds into a small Multilayer Perceptron (MLP), which computes the signed distance s and an intermediate feature g ∈ R Ng . The view direction d is encoded using spherical harmonics [27], allowing the model to capture reflections and other viewdependent effects. Finally, the direction encoding and g are jointly processed through a second MLP, augmented with a skip connection from g, producing the feature f . Scene composition: Similar to previous works [18,29,46,47], we decompose the world into two parts, the static background and a set of rigid dynamic actors, each defined by a 3D bounding box and a set of SO(3) poses. This serves a dual purpose: it simplifies the learning process, and it allows a degree of editability, where actors can be moved after training to generate novel scenarios. Unlike previous methods which utilize separate NFFs for different scene elements, we employ a single, unified NFF, where all networks are shared, and the differentiation between static and dynamic components is transparently handled by our actoraware hash encoding. The encoding strategy is straightforward: depending on whether a given sample (x, t) lies inside an actor bounding box, we encode it using one of two functions. Unbounded static scene: We represent the static scene with a multiresolution hash grid [27], as this has been proven to be a highly expressive and efficient representation. However, to map our unbounded scenes onto a grid, we employ the contraction approach proposed in MipNerf-360 [3]. This allows us to accurately represent both nearby road elements and far-away clouds, with a single hash grid. In contrast, prior automotive approaches utilize a dedicated NFF to capture the sky and other far-away regions [47]. Rigid dynamic actors: When a sample (x, t) falls within the bounding box of an actor, its spatial coordinates x and view directions d are transformed to the actor's coordinate frame at the given time t. This allows us to ignore the time aspect after that, and sample features from a time-independent multiresolution hash grid, just like for the static scene. Naively, we would need to separately sample multiple different hash grids, one for each actor. However, we instead utilize a single 4D hash grid, where the fourth dimension corresponds to the actor index. This novel approach allows us to sample all actor features in parallel, achieving significant speedups while matching the performance of using separate hash grids." 
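The actor-aware hash encoding can be summarized with the following sketch. It assumes helper objects for box tests and world-to-actor transforms (`actors.box_index`, `actors.world_to_actor`) and encoder modules standing in for the tiny-cuda-nn hash grids; it is meant to convey the routing logic, not to reproduce the exact implementation.

```python
import torch

def contract(x, eps=1e-6):
    """Scene contraction mapping unbounded coordinates into a bounded ball (mip-NeRF 360 style)."""
    norm = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.where(norm <= 1.0, x, (2.0 - 1.0 / norm) * x / norm)

def actor_aware_encode(x, t, actors, static_grid, actor_grid_4d):
    """Route samples to the static or the dynamic (actor) hash grid.

    x: (N, 3) world-space sample positions, t: (N,) sample times.
    static_grid and actor_grid_4d are assumed to produce features of the same width.
    """
    idx = actors.box_index(x, t)          # (N,) index of enclosing actor box, or -1
    in_actor = idx >= 0

    feats = torch.empty(x.shape[0], static_grid.out_dim, device=x.device)

    # Static world: contract unbounded coordinates before the hash-grid lookup.
    feats[~in_actor] = static_grid(contract(x[~in_actor]))

    # Dynamic actors: transform samples to actor-local frames, then perform one
    # parallel lookup in the shared 4D grid with the actor index as fourth coordinate.
    x_loc = actors.world_to_actor(x[in_actor], t[in_actor], idx[in_actor])
    feats[in_actor] = actor_grid_4d(x_loc, idx[in_actor])
    return feats
```

Keeping all actors in one 4D grid means the dynamic lookup is a single batched query, which is where the speedup over per-actor grids comes from.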
}, { "figure_ref": [], "heading": "Automotive data modeling", "publication_ref": [ "b1", "b1", "b2", "b12", "b3", "b46", "b3", "b24", "b3", "b21" ], "table_ref": [], "text": "Multiscale scenes: One of the biggest challenges in applying neural rendering to automotive data is handling the multiple levels of detail present in this data. As vehicles cover large distances, many surfaces are visible both from afar and close up. Applying iNGP's [27] or NeRF's position embedding naively in these multiscale settings results in aliasing artifacts, as the embeddings lack a sense of the scale at which a certain point is observed [2]. To address this, many approaches model rays as conical frustums, whose extent is determined longitudinally by the size of the bin and radially by the pixel area in conjunction with the distance to the sensor [2,3,13]. Zip-NeRF [4], which is currently the only anti-aliasing approach for iNGP's hash grids, combines two techniques for modeling frustums: multisampling and downweighting. In multisampling, the positional embeddings of multiple locations in the frustum are averaged, capturing both longitudinal and radial extent. For downweighting, each sample is modeled as an isotropic Gaussian, and grid features are weighted proportionally to the fraction between their cell size and the Gaussian variance, effectively suppressing finer resolutions. While the combined techniques significantly increase performance, the multisampling also drastically increases run-time.\nHere, we aim to incorporate scale information with minimal run-time impact. Inspired by Zip-NeRF, we propose an intuitive downweighting scheme where we downweight hash grid features based on their size relative to the frustum. Rather than using Gaussians, we model each ray r(τ) = o + τd as a pyramid with cross-sectional area A(τ) = ṙ_h ṙ_v τ^2, where ṙ_h and ṙ_v are the horizontal and vertical beam divergences, based on the image patch size or the divergence of the lidar beam. Then, for a frustum defined by the interval [τ_i, τ_{i+1}), where A_i and A_{i+1} are the cross-sectional areas at the end-points τ_i and τ_{i+1}, we calculate its volume as\nV_i = \frac{\tau_{i+1} - \tau_i}{3} \left( A_i + \sqrt{A_i A_{i+1}} + A_{i+1} \right), \qquad (3)\nand retrieve its positional embedding e_i at the 3D point x_i = o + \frac{\tau_i + \tau_{i+1}}{2} d. Finally, for a hash grid at level l with resolution n_l, we weight the position embedding e_{i,l} with ω_{i,l} = \min\left(1, \frac{1}{n_l V_i^{1/3}}\right), i.e., the fraction between the cell size and the frustum size.\nEfficient Sampling: Another difficulty with rendering large-scale scenes is the need for an efficient sampling strategy. In a single image, we might want to render detailed text on a nearby traffic sign while also capturing parallax effects between skyscrapers several kilometers away. Uniformly sampling the ray to achieve both of these goals would require thousands of samples per ray, which is computationally infeasible. Previous works have relied heavily on lidar data for pruning samples [47], and as a result struggle to render outside the lidar's field-of-view.\nInstead, we draw samples along rays according to a power function [4], such that the space between samples increases with the distance from the ray origin. Even so, we find it impossible to fulfill all relevant conditions without prohibitively increasing the number of samples. Therefore, we also employ two rounds of proposal sampling [25], where a lightweight version of our NFF is queried to generate a weight distribution along the ray. Then, a new set of samples is drawn according to these weights. 
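This weight-based resampling is essentially inverse-CDF sampling over the proposal bins. The sketch below is a simplified version (illustrative names, uniform rather than stratified random positions), not the exact sampler used in the released code.

```python
import torch

def resample_from_weights(tau_bins, weights, n_samples):
    """Draw refined sample distances from per-bin proposal weights along each ray.

    tau_bins: (R, N+1) bin edges from the previous sampling round
    weights:  (R, N) rendering weights predicted by the lightweight proposal field
    n_samples: number of refined samples to draw per ray
    """
    # Normalize the weights into a piecewise-constant PDF and build its CDF.
    pdf = weights + 1e-5
    pdf = pdf / pdf.sum(dim=-1, keepdim=True)
    cdf = torch.cat([torch.zeros_like(pdf[..., :1]), torch.cumsum(pdf, dim=-1)], dim=-1)  # (R, N+1)

    # Invert the CDF at random positions u in [0, 1).
    u = torch.rand(weights.shape[0], n_samples, device=weights.device)
    idx = torch.searchsorted(cdf, u, right=True).clamp(1, pdf.shape[-1])  # bin index per new sample

    # Linearly interpolate within the selected bin to obtain the new distances tau.
    cdf_lo, cdf_hi = torch.gather(cdf, -1, idx - 1), torch.gather(cdf, -1, idx)
    tau_lo, tau_hi = torch.gather(tau_bins, -1, idx - 1), torch.gather(tau_bins, -1, idx)
    frac = (u - cdf_lo) / (cdf_hi - cdf_lo + 1e-8)
    return tau_lo + frac * (tau_hi - tau_lo)  # (R, n_samples)
```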
After two rounds of this procedure, we are left with a refined set of samples that focus on the relevant locations along the ray and that we can use to query our full-size NFF. To supervise the proposal networks, we adopt an anti-aliased online distillation method [4] and further use the lidar for supervision, see L_d and L_w introduced in Sec. 3.4. Modeling rolling shutter: In standard NeRF-based formulations, each image is assumed to be captured from a single origin o. However, many camera sensors have rolling shutters, i.e., pixel rows are captured sequentially. Thus, the camera sensor can move between the capture of the first row and that of the last row, breaking the single-origin assumption. Although not an issue for synthetic data [24] or data captured with slow-moving handheld cameras, the rolling shutter becomes evident in captures from fast-moving vehicles, especially for side cameras. The same effect is also present in lidars, where each scan is typically collected over 0.1 s, which corresponds to several meters when traveling at highway speeds. Even for ego-motion compensated point clouds, these differences can lead to detrimental line-of-sight errors, where 3D points translate to rays that cut through other geometries. To mitigate these effects, we model the rolling shutter by assigning individual times to each ray and adjusting their origins according to the estimated motion. As the rolling shutter affects all dynamic elements of the scene, we linearly interpolate actor poses to each individual ray time. See Appendix E for details. Differing camera settings: Another problem when modeling autonomous driving sequences is that images come from different cameras with potentially different capture parameters, such as exposure. Here we draw inspiration from research on \"NeRFs in the wild\" [22], where an appearance embedding is learned for each image and passed to the second MLP together with g. However, as we know which image comes from which sensor, we instead learn a single embedding per sensor, minimizing the potential for overfitting and allowing us to use these sensor embeddings when generating novel views. As we render features rather than color, we apply these embeddings after the volume rendering, significantly reducing computational overhead.\nNoisy actor poses: Our model relies on estimates of poses for dynamic actors, either in the form of annotations or as tracking output. To account for imperfections, we include the actor poses as learnable parameters in the model and optimize them jointly. The poses are parameterized as a translation t ∈ R^3 and a rotation, for which we use a 6D representation [50]." }, { "figure_ref": [], "heading": "Losses", "publication_ref": [ "b37" ], "table_ref": [], "text": "We optimize all model components jointly and use both camera and lidar observations as supervision, L = L_{image} + L_{lidar}. In the following, we discuss the different optimization objectives in more detail.\nImage losses: The image loss is computed patch-wise, summed over N_p patches, and consists of a reconstruction term and a perceptual term:\nL_{image} = \frac{1}{N_p} \sum_{i=1}^{N_p} \left( \lambda_{rgb} L_i^{rgb} + \lambda_{vgg} L_i^{vgg} \right). \qquad (4)\nThe reconstruction loss, L_i^{rgb}, is the squared error between predicted and true pixel values. The perceptual loss, L_i^{vgg}, is the same as proposed in pix2pixHD [38]. The two loss terms are weighted using the hyperparameters λ_{rgb} and λ_{vgg}. 
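A sketch of the patch-wise objective in (4) is given below. The `vgg_features` extractor is a placeholder for a pix2pixHD-style feature network, and the default weights simply mirror the values listed for the UniSim reimplementation in Appendix C.3; they are not tuned NeuRAD settings.

```python
import torch
import torch.nn.functional as F

def image_loss(pred_patches, gt_patches, vgg_features, lambda_rgb=1.0, lambda_vgg=0.05):
    """Patch-wise image loss of Eq. (4). pred/gt: (P, 3, H, W) tensors in [0, 1]."""
    # Reconstruction term: squared error between predicted and true pixel values.
    l_rgb = F.mse_loss(pred_patches, gt_patches)

    # Perceptual term: distance between multi-layer VGG feature maps (pix2pixHD-style).
    # vgg_features is assumed to return a list of feature tensors per input batch.
    l_vgg = sum(
        F.l1_loss(fp, fg.detach())
        for fp, fg in zip(vgg_features(pred_patches), vgg_features(gt_patches))
    )
    return lambda_rgb * l_rgb + lambda_vgg * l_vgg
```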
Lidar losses: We incorporate the strong geometric prior given by the lidar by adding a depth loss for lidar rays and employing weight decay to penalize density in empty space. Further, to be able to simulate a more realistic lidar, we also include objectives for the predicted intensity and the predicted ray drop probability:\nL_{lidar} = \frac{1}{N} \sum_{i=1}^{N} \left( \lambda_d L_i^d + \lambda_{int} L_i^{int} + \lambda_{p_d} L_i^{p_d} + \lambda_w L_i^w \right), \qquad (5)\nwhere λ_d, λ_{int}, λ_{p_d}, and λ_w are hyperparameters. The depth loss L_i^d and the intensity loss L_i^{int} are the squared error between the prediction and the observation. For dropped rays, we penalize depth estimates only below the specified sensor range, and do not supervise intensity. For the ray drop probability loss, L_i^{p_d}, we use a binary cross-entropy loss. The weight decay is applied to all samples outside a distance ϵ of the lidar observation:\nL_i^w = \sum_{\tau_{i,j} > \epsilon} \| w_{ij} \|^2, \qquad (6)\nwhere τ_{i,j} is the distance from sample x_{ij} to the lidar observation for ray i. For dropped rays, weight decay is applied up until the specified sensor range. Lastly, we omit the commonly used eikonal loss, as it provided minimal benefits at a high computational cost." }, { "figure_ref": [], "heading": "Implementation details", "publication_ref": [ "b32" ], "table_ref": [], "text": "NeuRAD is implemented in the collaborative, open-source project Nerfstudio [33]. We hope that our developed supporting structures, such as data loaders and native lidar support, will encourage further research into this area. We train our method for 20,000 iterations using the Adam [17] optimizer. Using a single Nvidia A100, training takes about 1 hour. See Appendix A for further details." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b4", "b44", "b9", "b0" ], "table_ref": [], "text": "To verify the robustness of our model, we evaluate its performance on several popular AD datasets: nuScenes [5], PandaSet [45], Argoverse 2 [43], KITTI [10], and ZOD [1].\nTo demonstrate this robustness, we use the same model and hyperparameters on all datasets. We investigate novel view synthesis performance both for hold-out validation images and for sensor poses without any ground truth. Furthermore, we ablate important model components." }, { "figure_ref": [], "heading": "Datasets and baselines", "publication_ref": [ "b46", "b46", "b46", "b45", "b2", "b9", "b43" ], "table_ref": [], "text": "Below, we introduce the datasets used for evaluation. The selected datasets cover various sensors, and the included sequences contain different seasons, lighting conditions, and driving conditions. Existing works typically use one or two datasets for evaluation and build models around assumptions about available supervision, limiting their applicability to new settings. Therefore, for each dataset, we compare our model to SoTA methods that have previously adopted said dataset, and follow their respective evaluation protocols. Similar to our method, UniSim [47] imposes few supervision assumptions, and we therefore reimplement the method and use it as a baseline for datasets where no prior work exists. See Appendix C for reimplementation details and Appendix B for further evaluation details.\nPandaSet: We compare our method to UniSim [47] and an iNGP version with lidar depth supervision provided by UniSim. We use every other frame for training and the remaining ones for testing, and evaluate on the same 10 scenes as UniSim. 
We study two settings: one with lidar and one with front-facing camera (Panda FC) for direct comparison with the results reported in [47], and one with lidar and all six cameras capturing the full 360 • field-of-view around the vehicle (Panda 360). We also evaluate UniSim on the full 360 • setting using our reimplementation. nuScenes: We compare our method to S-NeRF [46] and Mip-NeRF 360 [3]. We follow S-NeRF's protocol, i.e., select 40 consecutive samples halfway into the sequences and use every fourth for evaluation while every other among the remaining ones is used for training. We test on the same four sequences as S-NeRF, using the same sensor setup. KITTI: For KITTI [10], we compare our method to MARS [44]. We use MARS 50% evaluation protocol, i.e., evaluating on every second image from the right camera and using the left and right camera and lidar from remaining " }, { "figure_ref": [], "heading": "Novel scenario generation", "publication_ref": [ "b11" ], "table_ref": [], "text": "In order for our method to be useful in practice, it must not only perform well when interpolating between views, but also when exploring new views, as examplified in Fig. 1.\nTo that end, we investigate NeuRAD's capability to generate images that are significantly different from those encountered during training. We adapt UniSim's protocol on PandaSet, where the authors propose translating the ego vehicle sensors laterally two or three meters to simulate a lane shift, and extend the protocol to include one meter vertical shift, simulating other mounting positions. We further investigate \"actor shift\", and rotate (±0.5 radians) or translate (±2 meters laterally) dynamic actors in the scene to simulate different actor behaviors. As no ground truth images exist, we report FID [12], with \"no shift\" for reference. The results in Tab. 3 show that NeuRAD is able to generalize to new viewpoints and learns meaningful actor representations. We also include results where we optimize the camera poses following [41], as this further increases sharpness." }, { "figure_ref": [ "fig_0" ], "heading": "Ablations", "publication_ref": [], "table_ref": [], "text": "We validate the effectiveness of some key components in Tab. 4. To avoid biases toward any specific dataset, we report averaged metrics from sequences from all five datasets considered in this work. We select 4 diverse sequences from each dataset, see details in Appendix B. Our full model corresponds to the model used in all prior experiments and strikes a good balance between run-time and performance. We see that the CNN decoder (a) significantly increases both quality and speed, by requiring significantly fewer rays and allowing for interaction between rays. Accurate sensor modeling is also very important, as each of our con- tributions in that area provide complementary performance boost: considering rolling shutter (b), modeling each ray as a frustum (c), per-sensor appearance embeddings (d), and considering lidar rays that did not return (e). We also demonstrate that replacing individual actor hash grids with a single 4D hash grid (f) has no detrimental impact on quality, while significantly increasing training speed. Finally, we replace our SDF with a NeRF-like density formulation (g). The performance is overall almost identical and shows that our model can be configured to either of these field representations depending on the need. 
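To make ablation (g) concrete, the sketch below shows how the per-sample opacity can be obtained from either field parameterization while the compositing in (1) stays unchanged. It is schematic, with illustrative names, and not the configuration switch used in the released code.

```python
import torch
import torch.nn.functional as F

def sample_opacity(raw, deltas, beta=None, field_type="sdf"):
    """Per-sample opacity alpha_i for the two field parameterizations in ablation (g).

    raw:    (R, N) network output, interpreted as signed distance or density
    deltas: (R, N) spacing between consecutive samples (used for the density field)
    beta:   learnable sharpness for the SDF formulation, cf. Eq. (2)
    """
    if field_type == "sdf":
        # Eq. (2): alpha_i = sigmoid(-beta * s_i); the surface lies at the zero crossing.
        return torch.sigmoid(-beta * raw)
    # NeRF-like density: alpha_i = 1 - exp(-sigma_i * delta_i), with sigma_i >= 0.
    sigma = F.softplus(raw)
    return 1.0 - torch.exp(-sigma * deltas)
```

Only the mapping from the raw network output to opacity changes between the two variants; the rest of the rendering pipeline is shared.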
If we desire to extract surfaces from our model, we can use an SDF, but if our scenes are dominated by fog, transparent surfaces, or other effects where an SDF breaks down, we can fall back to a density formulation. Interestingly, our ablations only show a modest impact of considering rolling shutter. However, the effect is clearly evident at closer inspection of the qualitative results shown in Fig. 3. Here it is apparent that both the renderings and underlying geometry break down without considering this effect. More results, as well as failure cases, can be found in Appendix F." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we have proposed NeuRAD, a neural simulator tailored specifically for dynamic autonomous driving (AD) data. The model jointly handles lidar and camera data in 360 • and decomposes the world into its static and dynamic elements, allowing the creation of sensor-realistic editable clones of real world driving scenarios. NeuRAD incorporates novel modeling of various sensor phenomena including beam divergence, ray dropping, and rolling shutters, all increasing the quality during novel view synthesis. We demonstrate NeuRAD's efficacy and robustness by obtaining state-of-the-art performance on five publicly AD datasets, using a single set of hyperparameters. Lastly, we publicly release our source-code to foster more research into NeRFs for AD.\nLimitations: NeuRAD assumes actors to be rigid and does not support any deformations. Further, many modeling assumptions are invalid for harsh weather like heavy rain or snow. We hope to address these limitations in future work. " }, { "figure_ref": [], "heading": "NeuRAD: Neural Rendering for Autonomous Driving", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "In the supplementary material, we provide implementation details for our method and baselines, evaluation details, and additional results. In Appendix A, we describe our network architecture more closely and provide hyperparameter values. In Appendix B, we provide details on the experimental setting. Then, in Appendix C, we provide details on our baseline implementation. We closely describe the process of inferring lidar rays that did not return in Appendix D. Next, we cover additional details of our proposed rolling shutter modeling in Appendix E. Last, in Appendix F, we showcase additional results and highlight some limitations of our method." }, { "figure_ref": [], "heading": "A. Implementation details", "publication_ref": [ "b46", "b25", "b3", "b3" ], "table_ref": [], "text": "Here we describe our model and training in more detail. Learning: We train all parts of our model jointly for 20,000 iterations, using the Adam optimizer. In each iteration, we randomly sample 16,384 lidar rays, and 40,960 camera rays, the latter corresponding to 40 (32 × 32) patches. For most parameters, we use a learning rate of 0.01, with a short warmup of 500 steps. For the actor trajectory optimization and the CNN decoder, we adopt a longer warmup of 2500 steps, and a lower learning rate of 0.001. If enabled, camera optimization uses a learning rate of 0.0001, also with a warmup of 2500. We use learning rate schedules that decay the rate by an order of magnitude over the course of the training. Networks: As we primarily compare our method with UniSim [47], we follow their network design to a large degree. 
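For reference, and before turning to the network sizes, the optimization schedule above can be condensed into the following illustrative configuration sketch; the keys are placeholders and do not correspond to the released configuration files.

```python
# Illustrative training configuration mirroring the schedule described above.
TRAIN_CONFIG = {
    "iterations": 20_000,
    "rays_per_batch": {"lidar": 16_384, "camera": 40_960},  # camera rays = 40 patches of 32x32
    "optimizer": "adam",
    "param_groups": {
        "default":     {"lr": 1e-2, "warmup_steps": 500},
        "actor_poses": {"lr": 1e-3, "warmup_steps": 2500},
        "cnn_decoder": {"lr": 1e-3, "warmup_steps": 2500},
        "camera_opt":  {"lr": 1e-4, "warmup_steps": 2500},  # only if camera optimization is enabled
    },
    # Learning rates decay by one order of magnitude over the course of training.
    "lr_decay_factor": 0.1,
}
```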
Our first (geo) MLP has one hidden layer, our second (feature) MLP has two hidden layers, and the lidar decoder also has two hidden layers. For details on the CNN decoder, we refer to Appendix C.2. All networks use a hidden dimension of 32, which is also the dimensionality of the intermediate NFF features. Hashgrids: We use the efficient hashgrid implementation from tiny-cuda-nn [26], with two separate hashgrids for the static scene and the dynamic actors. We use a much larger hash table for the static world, as actors only occupy a small portion of the scene, see Tab. 5. Proposal Sampling: First, we draw uniform samples according to the power function P(0.1x, -1.0) [4], where we have adjusted the parameters to better match our automotive scenes. Next, we perform two rounds of proposal sampling, represented by two separate density fields. Both fields use our actor-aware hash encoding, but with smaller hash tables and a feature dimension of one in the hash tables. Instead of an MLP, we decode density with a single linear layer. The proposal fields are supervised with the anti-aliased online distillation approach proposed for ZipNeRF [4]. Additionally, we supervise lidar rays directly with L d and L w ." }, { "figure_ref": [], "heading": "B. Evaluation details", "publication_ref": [], "table_ref": [], "text": "Here, we describe in detail the evaluation protocol of each SoTA method that we have compared NeuRAD to. Pandaset (UniSim): UniSim uses a simple evaluation protocol, where the entire sequence is used, with every other frame selected for evaluation and the remaining half of the frames for training. The authors report numbers for the front camera and the 360 Here, we use a simple evaluation protocol that is analogous to that used for PandaSet. We select 10 diverse sequences for each dataset, and use each sequence in its entirety, alternating frames for training and evaluation. For Argoverse, we use all surround cameras and both lidars on the following sequences: 05fa5048-f355-3274-b565-c0ddc547b315, 0b86f508-5df9-4a46-bc59-5b9536dbde9f, 185d3943-dd15-397a-8b2e-69cd86628fb7, 25e5c600-36fe-3245-9cc0-40ef91620c22, 27be7d34-ecb4-377b-8477-ccfd7cf4d0bc, 280269f9-6111-311d-b351-ce9f63f88c81, 2f2321d2-7912-3567-a789-25e46a145bda, 3bffdcff-c3a7-38b6-a0f2-64196d130958, 44adf4c4-6064-362f-94d3-323ed42cfda9, 5589de60-1727-3e3f-9423-33437fc5da4b.\nFor ZOD, we use the front-facing camera and all three lidars on the following sequences: 000784, 000005, 000030, 000221, 000231, 000387, 001186, 000657, 000581, 000619, 000546, 000244, 000811. As ZOD does not provide sequence annotations, we use a lidar-based object detector and create tracklets using ImmortalTracker [36]." }, { "figure_ref": [], "heading": "Ablation Dataset:", "publication_ref": [], "table_ref": [], "text": "We perform all ablations on 20 sequences, four from each dataset considered above.\nWe use sequences 001, 011, 063, 106 for PandaSet, 0164, 0209, 0359, 0916 for nuScenes, 0006, 0010, 0000, 0002 for KITTI, 000030, 000221, 000657, 000005 for ZOD, and 280269f9-6111-311d-b351-ce9f63f88c81, 185d3943-dd15-397a-8b2e-69cd86628fb7, 05fa5048-f355-3274-b565-c0ddc547b315, 0b86f508-5df9-4a46-bc59-5b9536dbde9f for Argoverse 2. Here, we no longer adopt the dataset-specific evaluation protocols corresponding to each SoTA method. Instead, we evaluate on the full sequences, on all available sensors, alternating frames for training and evaluation. The exception is nuScenes, where we find the provided poses to be too poor to train on the full sequences. 
If we optimize poses during training, we get qualitatively good results, and strong FID scores, but poor reconstruction scores due to misalignment between the learned poses and the evaluation poses, see Appendix F for a more detailed exposition. Therefore, we re-use S-NeRF's shortened evaluation protocol, where this problem is mostly avoided, and leave the problem of proper evaluation on nuScenes for future work." }, { "figure_ref": [], "heading": "C. UniSim implementation details", "publication_ref": [ "b46", "b32", "b44" ], "table_ref": [], "text": "UniSim [47] is a neural closed-loop sensor simulator. It features realistic renderings and imposes few assumptions about available supervision, i.e., it only requires camera images, lidar point clouds, sensor poses, and 3D bounding boxes with tracklets for dynamic actors. These characteristics make UniSim a suitable baseline, as it is easy to apply to new autonomous driving datasets. However, the code is closed-source and there are no unofficial implementations either. Therefore, we opt to reimplement UniSim, and as our model, we do so in Nerfstudio [33]. As the UniSim main article does not specify many model specifics, we rely on the supplementary material available through IEEE Xplore 1 . Nonetheless, some details remain undisclosed, and we have tuned these hyperparameters to match the reported performance on the 10 selected PandaSet [45] sequences. We describe the design choices and known differences below." }, { "figure_ref": [], "heading": "C.1. Data processing", "publication_ref": [ "b18" ], "table_ref": [], "text": "Occupancy grid dilation: UniSim uses uniform sampling to generate queries for its neural feature field. Inside dynamic actors' bounding boxes, the step size is 5 cm and inside the static field, the step size is 20 cm. To remove samples far from any surfaces and avoid unnecessary processing, UniSim deploys an occupancy grid. The grid, of cell size 0.5 m, is initialized using accumulated lidar point clouds where the points inside the dynamic actors have been removed. A grid cell is marked occupied if at least one lidar point falls inside of it. Further, the occupancy grid is dilated to account for point cloud sparseness. We set the dilation factor to two. We find the performance to be insensitive to the selection of dilation factor, where larger values mainly increase the number of processed samples. Sky sampling: UniSim uses 16 samples for the sky field for each ray. We sample these linearly in disparity (one over distance to the sensor origin) between the end of the static field and 3 km away. Sample merging: Each ray can generate a different number of sample points. To combine the results from the static, dynamic, and sky fields, we sort samples along the ray based on their distance and rely on nerfacc [19] for efficient rendering." }, { "figure_ref": [], "heading": "C.2. Model components CNN:", "publication_ref": [ "b14", "b36", "b47" ], "table_ref": [], "text": "The CNN used for upsampling consists of four residual blocks with 32 channels. Further, a convolutional layer is applied at the beginning of the CNN to convert input features to 32 channels, and a second convolutional layer is applied to predict the RGB values. For both layers, we use kernel size one with no padding. We set the residual blocks to consist of convolution, batch norm, ReLU, convolution, batch norm, and skip connection to the input. The convolutional layers in the residual block use a kernel size of seven, with a padding of three. 
Between the second and third residual blocks, we apply a transposed convolution to upsample the feature map. We set the kernel size and stride to the upsample factor, which in turn is set to three. Although we follow the specifications of UniSim, we find our implementation to have fewer parameters than what they report (0.7M compared to 1.7M). Likely reasons are interpretations of the residual block design (only kernel size and padding are specified), the kernel size for the first and last convolution layers, and the design of the upsampling layer. Nonetheless, we found that increasing the CNN parameter count only increased run-time without performance gains. GAN: UniSim deploys an adversarial training scheme, where a CNN discriminator is trained to distinguish between rendered image patches at observed and unobserved viewpoints, where unobserved viewpoints refer to jittering the camera origins. The neural feature field and upsampling CNN are then trained to improve the photorealism at unobserved viewpoints. UniSim's results show adversarial training to hurt novel view synthesis metrics (PSNR, LPIPS, SSIM), but to boost FID performance for the lane-shift setting.\nUnfortunately, the discriminator design is only briefly described in terms of a parameter count, resulting in a large potential design space. As training is done on patches, we opted for a PatchGAN [15] discriminator design inspired by pix2pixHD [37]. However, we found it difficult to get consistent performance increases and hence removed the adversarial training from our reimplementation. This is likely the reason why our reimplementation performs slightly worse than the original results in terms of FID for lane shift. However, using adversarial training does not seem to be necessary in general for achieving low FID scores. In Tab. 3, we see NeuRAD, which does not use any GAN training, outperforming the original UniSim method, which does rely on adversarial supervision. SDF to occupancy mapping: UniSim approximates the mapping from signed distance s to occupancy α as\nα = \frac{1}{1 + e^{\beta s}}, \qquad (7)\nwhere β is a hyperparameter. As β is unspecified, we follow [48], which uses a similar formulation for neural rendering in an automotive setting. Specifically, we initialize β to 20.0 and make it a learnable parameter to avoid sensitivity to its specific value." }, { "figure_ref": [], "heading": "C.3. Supervision", "publication_ref": [ "b25", "b36" ], "table_ref": [], "text": "Loss hyperparameters: We set λ_{rgb} = 1.0 and λ_{vgg} = 0.05. All other loss weights are given in UniSim's supplementary material and hence are used as is.\nRegularization loss: For lidar rays, UniSim uses two regularizing losses. The first decreases the weights of samples far from any surface, and the second encourages the signed distance function to satisfy the eikonal equation close to any surface:\nL_{reg} = \frac{1}{N} \sum_{i=1}^{N} \left( \sum_{\gamma_{i,j} > \epsilon} \| w_{ij} \|^2 + \sum_{\gamma_{i,j} < \epsilon} \left( \| \nabla s(x_{ij}) \| - 1 \right)^2 \right). \qquad (8)\nHere, i is the ray index, j is the index of a sample x_{ij} along the ray, and γ_{i,j} denotes the distance between the sample and the corresponding lidar observation, i.e., γ_{i,j} = |τ_{ij} - D_i^{gt}|. We set ϵ = 0.1.\nFurthermore, we rely on tiny-cuda-nn [26] for fast implementations of the hash grid and MLPs. However, the framework does not support second-order derivatives for MLPs, and hence cannot be used when backpropagating through the SDF gradient ∇s(x_{ij}). Hence, instead of analytical gradients, we use numerical ones. 
Let\n\begin{bmatrix} k_1 \\ k_2 \\ k_3 \\ k_4 \end{bmatrix} = \begin{bmatrix} 1 & -1 & -1 \\ -1 & -1 & 1 \\ -1 & 1 & -1 \\ 1 & 1 & 1 \end{bmatrix}. \qquad (9)\nTo find ∇s(x_{ij}), we query the neural feature field at four locations x_{ij} + δk_l, l = 1, 2, 3, 4, where δ = 0.01/\sqrt{3}, resulting in four signed distance values s_1, s_2, s_3, s_4. Finally, we calculate\n∇s(x_{ij}) = \frac{1}{4δ} \sum_l s_l k_l. \qquad (10)\nThe use of numerical gradients instead of analytical ones has been shown to be beneficial for learning signed distance functions for neural rendering [20]. Perceptual loss: Just like NeuRAD, UniSim uses a perceptual loss where VGG features of a ground truth image patch are compared to those of a rendered patch. While multiple formulations of such a loss exist, we opted for the one used in pix2pixHD [37] for both methods." }, { "figure_ref": [ "fig_3" ], "heading": "D. Inferring ray drop", "publication_ref": [ "b44", "b44", "b0", "b4", "b44" ], "table_ref": [], "text": "The inclusion of dropped lidar rays during supervision increases the fidelity of sensor renderings in all aspects, as shown in Tab. 4. The process of inferring which lidar beams are missing in a point cloud differs somewhat between datasets, as they contain different types of information. However, in general, the process consists of three steps: removal of ego-motion compensation, diode index assignment, and point infilling. In Fig. 5, we show a lidar scan from PandaSet [45] (sequence 106) at different stages.\nRemoval of ego-motion compensation: To figure out which points are missing in a single sweep, we want to express their locations in terms of azimuth (horizontal angle), elevation (vertical angle), and range at the time the beam was shot from the sensor. However, for all datasets, the provided points have been ego-motion compensated, i.e., their Cartesian coordinates are expressed in a common coordinate frame. Simply converting them to spherical coordinates is therefore not possible until the ego-motion compensation is removed.\nFor each 3D lidar point (x, y, z) captured at time t, we first project the point into world coordinates using its assigned sensor pose. For PandaSet [45], this first step is omitted, as points are provided in world coordinates. We then find the pose of the lidar sensor at time t by linearly interpolating existing sensor poses. For rotation, we use a quaternion representation and spherical linear interpolation (slerp). Given the sensor pose at t, we project the 3D point back into the sensor frame. We note that this process is susceptible to noise, since lidar poses are typically provided at a low frequency (10 Hz-20 Hz). We find the elevation ϕ, azimuth θ, and range r as\nr = \sqrt{x^2 + y^2 + z^2}, \qquad (11)\nϕ = \arcsin(z/r), \qquad (12)\nθ = \arctan(y/x). \qquad (13)\nDiode index assignment: All datasets considered in this work use spinning lidars, where a set of diodes is rotated 360° around the sensor and each diode is mounted at a fixed elevation angle. Typically, all diodes (or channels) transmit the same number of beams each revolution, where the number depends on the sensor's horizontal resolution. To use this information for inferring missing rays, we need to assign each return to its diode index. For most datasets considered here [1,5,43], this information is present in the raw data. However, for the remaining dataset [45], we must predict the diode assignment based on the points' elevation. As there is no ground truth available for this information, we use qualitative inspections to verify the correctness of the procedures outlined below. 
PandaSet uses a spinning lidar with a non-linear elevation distribution for the diodes, i.e., diodes are not spaced equally along the elevation axis. Instead, a few channels, the ones with the largest and smallest elevations, have a longer distance from their closest diode neighbor. Points corresponding to these channels are easily found by using sensor specifications. The remaining diodes use equal spacing, but inaccuracies in the removal of ego-motion compensation result in many wrongful diode assignments if sensor specifications are trusted blindly. Thus, we devise a clustering algorithm for inferring diode indices for points originating from diodes within the equal elevation distribution range.\nThe following is done separately for each lidar scan. First, we define the expected upper and lower bounds for elevation for each diode. These decision boundaries are spaced equally between the lowest and highest observed elevations based on the number of diodes. Then, we use histogram binning to cluster points based on their elevation. We use 2,000 bins, and the resulting bin widths are smaller than the spacing between diodes. Next, we identify consecutive empty bins. For any expected decision boundary that falls into an empty bin, we mark it as a true decision boundary. The same is true if the expected decision boundary is within 0.03 • of an empty bin. Following this, if the number of true decision boundaries is smaller than the number of expected decision boundaries, we insert new boundaries between existing ones. Specifically, for the two boundaries with the largest distance between them, we insert as many boundaries as the vertical resolution dictates, but at least one, and at most as many decision boundaries that are missing. This insertion of boundaries is repeated until the required number of boundaries is reached. Point infilling: After removing ego-motion compensation, transforming the points to spherical coordinates (elevation, azimuth, range), and finding their diode index, we can infer which laser rays did not return any points. Separately, for each diode, we define azimuth bins, spanning 0 • to 360 • with a bin width equal to the horizontal resolution of the lidar. If a returning point falls into a bin, we mark it as returned. For the remaining bins, we calculate their azimuth and elevation by linear interpolation." }, { "figure_ref": [ "fig_0", "fig_4" ], "heading": "E. Modeling rolling shutter", "publication_ref": [], "table_ref": [], "text": "As shown in Fig. 3 and Tab. 4, modeling the rolling shutter improves generated renderings, especially at high velocities. Fig. 6 further shows the effects of rolling shutter on an ego-motion compensated lidar point cloud. To capture these effects, we assign each ray an individual timestamp. For lidar, these timestamps are typically available in the raw data, else we approximate them based on the rays' azimuth and the sensors' RPM. For cameras, individual timestamps are not available in the data. Instead, we manually approximate the shutter time and offset each image row accordingly. Given the individual timestamps, we linearly interpolate sensor poses to these times, effectively shifting the origin of the rays. Moreover, we model that dynamic actors may move during the capture time. Given the timestamps, we linearly interpolate their poses to the said time before transforming ray samples to the actors' coordinate systems." }, { "figure_ref": [ "fig_5", "fig_5", "fig_6", "fig_7" ], "heading": "F. 
Additional results", "publication_ref": [ "b46", "b4" ], "table_ref": [], "text": "In the following, we provide additional results and insights, as well as some failure cases of our method. Proposal sampling: To efficiently allocate samples along each ray, we use two rounds of proposal sampling. For comparison, UniSim [47] instead samples along the rays uniformly and relies on a lidar-based occupancy grid to prune samples far from the detected surfaces. Although the occupancy grid is fast to evaluate, it has two shortcomings. First, the method struggles with surfaces far from any lidar points. In the case of UniSim, the RGB values must instead be captured by the sky field, effectively placing the geometry far away regardless of its true position. The upper row of Fig. 7 shows an example of this, where a utility pole becomes very blurry without proposal sampling. Second, uniform sampling is not well suited for recovering thin structures or fine details of close-up surfaces. Doing so would require drawing samples very densely, which, instead, scales poorly with computational requirements. We examine both failure cases in Fig. 7, with thin power lines in the upper row and closeups of vehicles in the lower row. Sensor embedding: As described in Sec. 3.3, and shown in Tab. 4, the effect of different camera settings for different sensors in the same scene has a significant impact on reconstruction results. effect. Ignoring this effect causes shifts in color and lighting, often at the edge of images where the overlap between sensors is bigger, and is clearly visible in the second column of Fig. 8. Including sensor embeddings allows the model to account for differences in the sensors (e.g., different exposure), resulting in more accurate reconstructions. Camera optimization: Neural rendering is reliant on access to accurate sensor poses. For instance, a small translation or rotation of a camera in world coordinates might translate to a small shift in the image plane as well, but this can drastically change each pixels' value.\nIn this work, we rely on sensor poses provided in the datasets, which typically are the result of IMU and GPS sensor fusion, SLAM, or a combination of both. As a result, sensor poses are often accurate to centimeter precision. While nuScenes [5] follows this example, the dataset does not provide height, roll, or pitch information, as this information has been discarded. We found this to be a limiting factor for the performance of NeuRAD, especially for sequences where the ego vehicle does not traverse a simple, flat surface. To address this, we instead enable optimization of the sensor poses, similar to how we optimize the poses of dynamic actors, see Sec. 3.3.\nApplying sensor pose optimization qualitatively results in sharp renderings and quantitatively yields strong FID scores, see Tab. 3. However, we found novel view synthesis performance -in terms of PSNR, LPIPS and SSIMto drop sharply. We find that the reason is that the sensor pose optimization creates an inconsistency between the world frame of the training data and the validation poses. Due to noisy validation poses, we render the world from a slightly incorrect position, resulting in large errors for the NVS per-pixel metrics. We illustrate this in Fig. 
9, where the image from the training without sensor pose optimization is more blurry, but receives higher PSNR scores than the one with pose optimization.\nWe explored multiple methods for circumventing these issues, including separate training runs for finding accurate training and validation poses, or optimizing only the poses of validation images post-training. However, to avoid giving NeuRAD an unfair advantage over prior work, we simply disabled sensor pose optimization for our method. Nonetheless, we hope to study the issue of NVS evaluation when accurate poses are not available for neither training or validation in future work." }, { "figure_ref": [ "fig_8" ], "heading": "F.1. Limitations", "publication_ref": [], "table_ref": [], "text": "In this work, we have proposed multiple modeling strategies for capturing important phenomena present in automotive data. Nonetheless, NeuRAD builds upon a set of assumptions, which when violated, result in suboptimal performance. Here, we cover some of these failure cases. Deformable dynamic actors: When modeling dynamic actors, we make one very strong assumption -that the dynamics of an actor can be described by a single rigid transform. This is a reasonable approximation for many types of actors, such as cars, trucks, and to a lesser degree even cyclists. However, pedestrians break this assumption entirely, leading to very blurry reconstructions, as can be seen in Fig. 10. Night scenes: Modelling night scenes with NeRF-like methods can be quite tricky for several reasons. First, night images contain a lot more measurement noise, which hinders the optimization as it is not really related to the underlying geometry. Second, long exposure times, coupled with the motion of both the sensor and other actors, lead to blurriness and can even make thin objects appear transparent. Third, strong lights produce blooming and lens-flare, which have to be explained by placing large blobs of density where there should not be any. Finally, dynamic actors, including the ego-vehicle, frequently produce their own illumination, such as from headlights. While static illumina- tion can usually be explained as an effect dependent on the viewing direction, this kind of time-varying illumination is not modelled at all. Time-dependent object appearance: In order to build a fully-useable closed-loop simulation we need to model brake lights, turning indicators, traffic lights, etc. While the problem is similar to that of deformable actors, it differs in some ways. First, we do not require the geometry to vary over time, potentially simplifying the problem. Second, we can probably treat these appearances as a set of The assumption that all actors are rigid is invalid for pedestrians and the like, leading to blurry reconstruction as seen here.\nOriginal Reconstructed" }, { "figure_ref": [], "heading": "Original", "publication_ref": [], "table_ref": [], "text": "Reconstructed Depth discrete states. Third, the current set of perception annotations/detections might not cover all necessary regions where this effect is present. For instance, most datasets do not explicitly annotate traffic lights. Finally, we require full control and editability for this effect, to the degree that we can enable brake lights for a car that never braked. 
For general deformable actors, we might be satisfied with reconstructing the observed deformation, without being able to significantly modify it.\nOriginal Reconstructed Depth" }, { "figure_ref": [], "heading": "Original Original", "publication_ref": [], "table_ref": [], "text": "Original Rendered Rendered Rendered Figure 12\n. NeuRAD assumes all radiance to be static over time, even for dynamic actors. Thus, our method cannot express changes in light conditions, such as brake lights highlighted here. Interestingly, the model compensates by making the brake lights a function of the viewing angle instead, as the two are correlated in this particular scene." }, { "figure_ref": [], "heading": "Acknowledegments", "publication_ref": [], "table_ref": [], "text": "We thank Maryam Fatemi for the valuable feedback. Further, this work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Computational resources were provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) at NSC Berzelius, partially funded by the Swedish Research Council, grant agreement no. 2022-06725." } ]
Figure 1. NeuRAD is a neural rendering method tailored to dynamic automotive scenes. With it, we can alter the pose of the ego vehicle and other road users as well as freely add and/or remove actors. These capabilities make NeuRAD suitable to serve as the foundation in components such as sensor-realistic closed-loop simulators or powerful data augmentation engines.
NeuRAD: Neural Rendering for Autonomous Driving
[ { "figure_caption": "Figure 3 .3Figure 3. Impact of modeling rolling shutter in a high-speed scenario (with inset PSNR). (a) original side-camera image. Omitting the rolling shutter entirely (b) results in extremely blurry renderings and unrealistic geometry, especially for the pole. Modeling the lidar rolling shutter (c) improves the quality, but it is only when both sensors are modeled correctly (d) that we get realistic renderings.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Visualization of ray drop effects for lidar simulation. Highlighted parts show areas where ray dropping effects are important to consider in order to simulate realistic point clouds. CD denotes Chamfer distance normalized by num. GT points.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "(a) Before removing ego-motion compensation. (b) After removing ego-motion compensation. (c) After removing ego-motion compensation and adding missing points.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Lidar scans in spherical coordinates at different stages during inference of missing lidar rays. The color indicates range, where missing points have been set to a large distance for visualization purposes. Note that we do not add missing points for the two bottom rows, as they typically hit the ego vehicle.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Bird's-eye-view of ego-motion compensated point cloud. Cuts in the circular patterns on the ground indicate the distance traveled by the ego-vehicle during one lidar revolution. Further, the cut through the car shows the importance of interpolating actor poses to the time when each lidar ray was shot.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Two failure-cases that demonstrate the importance of proposal sampling over occupancy-based sampling: regions without lidar occupancy that are improperly modeled by sky field (upper), and nearby object that require extremely dense sampling (lower).", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Effect of sensor embedding. The second column shows rendered images from the model trained without sensor embeddings, where a clear degradation is visible due to the shift in appearance (e.g., different exposure) between different sensors. As can be seen in the third column, this effect is remedied by including sensor embeddings.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure9. Effect of camera optimization on nuScenes. Despite clearly sharper image quality, we get drastically reduced PSNR scores when using camera optimization. This is due to the misalignment between the learned poses and the evaluation poses. This can be seen in the far left of the image, where the image with camera optimization displays less of a window.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. 
Failed reconstruction of deformable actors.The assumption that all actors are rigid is invalid for pedestrians and the like, leading to blurry reconstruction as seen here.", "figure_data": "", "figure_id": "fig_8", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. Novel view synthesis at night is challenging. For instance, strong lights can produce flares in the camera lens. These are hard to model with the standard NeRF rendering equations, as it requires the network to place density around the lights. Further, longer exposure times at night lead to dark, thin objects appearing semi-opaque, obscuring the learned scene geometry. Last, moving vehicles, including the ego-vehicle, illuminate the scene, resulting in a change of color over time for certain static parts of the scene. For instance, the road contains artifacts due to illumination from the ego-vehicle headlights.", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Image novel view synthesis performance comparison to state-of-the-art methods across five datasets. *our reimplementation.", "figure_data": "Instant-NGP † [27, 47]24.030.7080.451PandaFCUniSim [47] UniSim*25.63 25.440.745 0.7320.288 0.228NeuRAD (ours)26.700.7780.193Panda360UniSim* NeuRAD (ours)23.50 25.970.692 0.7570.330 0.247nuScenesMip360 † [3, 46] S-NeRF [46] NeuRAD (ours)24.37 26.21 26.990.795 0.831 0.8150.240 0.228 0.225KITTIMOTSUDS † [34, 44] MARS [44] NeuRAD (ours)23.12 24.00 26.010.821 0.801 0.7790.135 0.164 0.090Argo2UniSim* NeuRAD (ours)23.22 § 26.220.661 § 0.7170.412 § 0.315ZODUniSim* NeuRAD (ours)27.97 29.490.777 0.8090.239 0.226", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Lidar novel view synthesis performance comparison to state-of-the-art methods. Depth is median L2 error [m]. Intensity is RMSE. Drop acc. denotes ray drop accuracy. Chamfer denotes chamfer distance, normalized with num. ground truth points [m]. We report the standard NVS metrics PSNR, SSIM [40] and LPIPS [49], for all datasets and baselines in Tab. 1. NeuRAD achieves SoTA performance across all datasets. On PandaSet, we improve upon previous work across all metrics, for both FC and 360. On nuScenes, Neu-RAD matches the performance of S-NeRF while training much faster (1 hour compared to 17 hours). NeuRAD also outperforms previous SoTA on KITTI with a large margin in terms of PSNR and LPIPS. Finally, NeuRAD also achieves strong performance on Argoverse 2 and ZOD.Lidar: We measure the realism of our lidar simulation in terms of L2 median depth error, RMSE intensity error and ray drop accuracy. We complement the depth error with the Chamfer distance as it enables us to evaluate performance on dropped rays as well. We compare only to UniSim, evaluated on PandaSet, as no other baseline simulates point clouds. UniSim has no notion of ray dropping, hence we assume rays to be dropped past the reported lidar range. We see in Tab. 2 that NeuRAD decreases the depth error by an order of magnitude compared to UniSim in the frontcamera setting. 
Our method generalizes well to the 360 •", "figure_data": "PandaFCUniSim UniSim* NeuRAD (ours)0.10 0.07 0.010.065 0.085 0.076-91.0 96.3-11.2 3.5Panda360UniSim* NeuRAD (ours)0.07 0.010.087 0.07691.9 96.410.3 3.1", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "FID scores when shifting pose of ego vehicle or actors. *our reimplementation.setting, where similar results are reported. Furthermore, we show that NeuRAD is capable of simulating realistic point clouds, thanks to its high ray drop accuracy and low Chamfer distance. Fig.4further shows the importance of modeling ray drop effects for lidar simulation. As noted in the figure, lidar beams that hit the road far away tend to bounce off and not return. Similar effects occur for transparent surfaces, such as the car window illustrated in the figure, where the lidar beams shoot right through. Modeling these effects can increase the realism of simulated point clouds.", "figure_data": "Ego shiftActor shiftNo shift Lane 2m Lane 3m Vert. 1m Rot. Trans.PandaFCUniSim UniSim* NeuRAD-41.7 25.074.7 79.6 72.397.5 102.0 93.9-89.3 76.3-65.5 64.3-59.6 49.1Panda360UniSim* NeuRAD NeuRAD w/ opt88.3 45.5 43.0115.5 84.0 81.0128.0 98.8 95.3126.7 91.3 88.895.9 58.8 56.793.0 55.4 53.0", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablations when removing core parts of our model. We report NVS performance for images and lidars, scene generation, and training megapixels per second (MP/s). Results are averaged over 20 sequences, evenly split across all five datasets.", "figure_data": "Full model27.220.2170.7790.03076.91.9a) CNN decoder25.290.3290.7200.107127.90.2b) Rolling shutter26.770.2460.7630.06080.61.9c) Downweighting26.120.2830.7410.146100.62.0d) Appearance emb. 25.500.2700.7440.080102.61.9e) Missing points25.360.3610.6850.050106.31.8f) 4D actor grid27.220.2170.7790.03076.51.5g) SDF26.680.2660.7430.03387.61.9", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "in-the-wild for realistic sensor simulation. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 11661-11668, 2023. 3 [49] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In IEEE Conf. Comput. Vis.", "figure_data": "Pattern Recog., pages 586-595, 2018. 7[50] Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, and HaoLi. On the continuity of rotation representations in neuralnetworks. In IEEE Conf. Comput. Vis. Pattern Recog., pages5745-5753, 2019. 
6", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Hyperparameters for NeuRAD.", "figure_data": "HyperparameterValueNeural feature fieldRGB upsampling factor proposal samples SDF β power function λ power function scale appearance embedding dim hidden dim (all networks) NFF feature dim3 128, 64 20.0 (learnable) -1.0 0.1 16 32 32hashgrid features per level4actor hashgrid levels4Hashgridsactor hashgrid size static hashgrid levels static hashgrid size proposal features per level2 15 8 2 22 1proposal static hashgrid size 2 20proposal actor hashgrid size 2 15λ rgb5.0Loss weightsλ vgg λ d λ w λ p d proposal λ d proposal λ w5e-2 1e-2 1e-2 1e-2 1e-3 1e-3interlevel loss multiplier1e-3Learningratesactor trajectory lr cnn lr camera optimization lr remaining parameters lr1e-3 1e-3 1e-4 1e-2", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "We call this protocol Panda FC, and additionally report Panda 360 results, with all 6 cameras (and the 360 • lidar). For the backward-facing camera, we crop away 250 pixels from the bottom of the image, as this mainly shows the trunk of the ego vehicle.", "figure_data": "nuScenes (S-NeRF): S-Nerf uses four sequences for eval-uation: 0164, 0209, 0359, 0916. The first 20 sam-ples from each sequence are discarded, and the next 40 con-secutive samples are considered for training and evaluation.The remaining samples are also discarded. Out of the se-lected samples, every fourth is used for evaluation and therest are used for training. We train and evaluate on all 6cameras.KITTI (MARS): MARS reports NVS quality on a singlesequence, 0006, on frames 5-260. We adopt their 50%-protocol, where half of the frames are used for training, and25% for evaluation. Following their implementation, weadopt a repeating pattern where two consecutive frames areused for training, one is discarded, and the fourth is used forevaluation.Argoverse 2 & ZOD:", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" } ]
Adam Tonderski; Carl Lindström; Georg Hess; William Ljungbergh; Lennart Svensson; Christoffer Petersson
[ { "authors": "Mina Alibeigi; William Ljungbergh; Adam Tonderski; Georg Hess; Adam Lilja; Carl Lindström; Daria Motorniuk; Junsheng Fu; Jenny Widahl; Christoffer Petersson", "journal": "", "ref_id": "b0", "title": "Zenseact open dataset: A large-scale and diverse multimodal dataset for autonomous driving", "year": "2023" }, { "authors": "Jonathan T Barron; Ben Mildenhall; Matthew Tancik; Peter Hedman; Ricardo Martin-Brualla; Pratul P Srinivasan", "journal": "", "ref_id": "b1", "title": "Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields", "year": "2021" }, { "authors": "Jonathan T Barron; Ben Mildenhall; Dor Verbin; Peter Pratul P Srinivasan; Hedman", "journal": "IEEE Conf. Comput. Vis. Pattern Recog", "ref_id": "b2", "title": "Mip-nerf 360: Unbounded anti-aliased neural radiance fields", "year": "2022" }, { "authors": "Jonathan T Barron; Ben Mildenhall; Dor Verbin; P Pratul; Peter Srinivasan; Hedman", "journal": "", "ref_id": "b3", "title": "Zip-nerf: Anti-aliased gridbased neural radiance fields", "year": "2023" }, { "authors": "Holger Caesar; Varun Bankiti; Alex H Lang; Sourabh Vora; Venice Erin Liong; Qiang Xu; Anush Krishnan; Yu Pan; Giancarlo Baldan; Oscar Beijbom", "journal": "", "ref_id": "b4", "title": "nuscenes: A multimodal dataset for autonomous driving", "year": "2020" }, { "authors": "Anpei Chen; Zexiang Xu; Andreas Geiger; Jingyi Yu; Hao Su", "journal": "Springer", "ref_id": "b5", "title": "Tensorf: Tensorial radiance fields", "year": "2022" }, { "authors": "Alexey Dosovitskiy; German Ros; Felipe Codevilla; Antonio Lopez; Vladlen Koltun", "journal": "PMLR", "ref_id": "b6", "title": "Carla: An open urban driving simulator", "year": "2017" }, { "authors": "Sara Fridovich-Keil; Alex Yu; Matthew Tancik; Qinhong Chen; Benjamin Recht; Angjoo Kanazawa", "journal": "", "ref_id": "b7", "title": "Plenoxels: Radiance fields without neural networks", "year": "2022" }, { "authors": "Xiao Fu; Shangzhan Zhang; Tianrun Chen; Yichong Lu; Lanyun Zhu; Xiaowei Zhou; Andreas Geiger; Yiyi Liao", "journal": "IEEE", "ref_id": "b8", "title": "Panoptic nerf: 3d-to-2d label transfer for panoptic urban scene segmentation", "year": "2022" }, { "authors": "Andreas Geiger; Philip Lenz; Christoph Stiller; Raquel Urtasun", "journal": "The International Journal of Robotics Research", "ref_id": "b9", "title": "Vision meets robotics: The kitti dataset", "year": "2013" }, { "authors": "Ayaan Haque; Matthew Tancik; Alexei Efros; Aleksander Holynski; Angjoo Kanazawa", "journal": "", "ref_id": "b10", "title": "Instruct-nerf2nerf: Editing 3d scenes with instructions", "year": "2023" }, { "authors": "Martin Heusel; Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "Adv. Neural Inform. Process. 
Syst", "ref_id": "b11", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "Wenbo Hu; Yuling Wang; Lin Ma; Bangbang Yang; Lin Gao; Xiao Liu; Yuewen Ma", "journal": "", "ref_id": "b12", "title": "Tri-miprf: Tri-mip representation for efficient anti-aliasing neural radiance fields", "year": "2023" }, { "authors": "Shengyu Huang; Zan Gojcic; Zian Wang; Francis Williams; Yoni Kasten; Sanja Fidler; Konrad Schindler; Or Litany", "journal": "", "ref_id": "b13", "title": "Neural lidar fields for novel view synthesis", "year": "2023" }, { "authors": "Phillip Isola; Jun-Yan Zhu; Tinghui Zhou; Alexei A Efros", "journal": "", "ref_id": "b14", "title": "Image-to-image translation with conditional adversarial networks", "year": "2017" }, { "authors": "Bernhard Kerbl; Georgios Kopanas; Thomas Leimkühler; George Drettakis", "journal": "ACM Trans. Graph", "ref_id": "b15", "title": "3d gaussian splatting for real-time radiance field rendering", "year": "2023" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b16", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Abhijit Kundu; Kyle Genova; Xiaoqi Yin; Alireza Fathi; Caroline Pantofaru; Leonidas J Guibas; Andrea Tagliasacchi; Frank Dellaert; Thomas Funkhouser", "journal": "", "ref_id": "b17", "title": "Panoptic neural fields: A semantic object-aware neural scene representation", "year": "2022" }, { "authors": "Ruilong Li; Hang Gao; Matthew Tancik; Angjoo Kanazawa", "journal": "", "ref_id": "b18", "title": "Nerfacc: Efficient sampling accelerates nerfs", "year": "2023" }, { "authors": "Zhaoshuo Li; Thomas Müller; Alex Evans; Russell H Taylor; Mathias Unberath; Ming-Yu Liu; Chen-Hsuan Lin", "journal": "", "ref_id": "b19", "title": "Neuralangelo: High-fidelity neural surface reconstruction", "year": "2023" }, { "authors": "Sivabalan Manivasagam; Andrei Ioan; Jingkang Bârsan; Ze Wang; Raquel Yang; Urtasun", "journal": "", "ref_id": "b20", "title": "Towards zero domain gap: A comprehensive study of realistic lidar simulation for autonomy testing", "year": "2023" }, { "authors": "Ricardo Martin-Brualla; Noha Radwan; S M Mehdi; Jonathan T Sajjadi; Alexey Barron; Daniel Dosovitskiy; Duckworth", "journal": "", "ref_id": "b21", "title": "Nerf in the wild: Neural radiance fields for unconstrained photo collections", "year": "2021" }, { "authors": "Lars Mescheder; Michael Oechsle; Michael Niemeyer; Sebastian Nowozin; Andreas Geiger", "journal": "", "ref_id": "b22", "title": "Occupancy networks: Learning 3d reconstruction in function space", "year": "2019" }, { "authors": "Ben Mildenhall; P Pratul; Rodrigo Srinivasan; Nima Ortiz-Cayon; Ravi Khademi Kalantari; Ren Ramamoorthi; Abhishek Ng; Kar", "journal": "ACM Trans. Graph", "ref_id": "b23", "title": "Local light field fusion: Practical view synthesis with prescriptive sampling guidelines", "year": "2019" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Springer International Publishing", "ref_id": "b24", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Thomas Müller", "journal": "", "ref_id": "b25", "title": "tiny-cuda-nn", "year": "2021" }, { "authors": "Thomas Müller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "ACM Trans. 
Graph", "ref_id": "b26", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2007" }, { "authors": "Michael Oechsle; Songyou Peng; Andreas Geiger", "journal": "", "ref_id": "b27", "title": "Unisurf: Unifying neural implicit surfaces and radiance fields for multi-view reconstruction", "year": "2021" }, { "authors": "Julian Ost; Fahim Mannan; Nils Thuerey; Julian Knodt; Felix Heide", "journal": "", "ref_id": "b28", "title": "Neural scene graphs for dynamic scenes", "year": "2021" }, { "authors": "Konstantinos Rematas; Andrew Liu; P Pratul; Jonathan T Srinivasan; Andrea Barron; Thomas Tagliasacchi; Vittorio Funkhouser; Ferrari", "journal": "", "ref_id": "b29", "title": "Urban radiance fields", "year": "2022" }, { "authors": "Shital Shah; Debadeepta Dey; Chris Lovett; Ashish Kapoor", "journal": "Springer", "ref_id": "b30", "title": "Airsim: High-fidelity visual and physical simulation for autonomous vehicles", "year": "2018" }, { "authors": "Matthew Tancik; Vincent Casser; Xinchen Yan; Sabeek Pradhan; Ben Mildenhall; P Pratul; Jonathan T Srinivasan; Henrik Barron; Kretzschmar", "journal": "", "ref_id": "b31", "title": "Block-nerf: Scalable large scene neural view synthesis", "year": "2022" }, { "authors": "Matthew Tancik; Ethan Weber; Evonne Ng; Ruilong Li; Brent Yi; Terrance Wang; Alexander Kristoffersen; Jake Austin; Kamyar Salahi; Abhik Ahuja", "journal": "", "ref_id": "b32", "title": "Nerfstudio: A modular framework for neural radiance field development", "year": "2023" }, { "authors": "Haithem Turki; Jason Y Zhang; Francesco Ferroni; Deva Ramanan", "journal": "", "ref_id": "b33", "title": "Suds: Scalable urban dynamic scenes", "year": "2023" }, { "authors": "Peng Wang; Lingjie Liu; Yuan Liu; Christian Theobalt; Taku Komura; Wenping Wang", "journal": "Adv. Neural Inform. Process. Syst", "ref_id": "b34", "title": "Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction", "year": "2021" }, { "authors": "Qitai Wang; Yuntao Chen; Ziqi Pang; Naiyan Wang; Zhaoxiang Zhang", "journal": "", "ref_id": "b35", "title": "Immortal tracker: Tracklet never dies", "year": "2021" }, { "authors": " Ting-Chun; Ming-Yu Wang; Jun-Yan Liu; Andrew Zhu; Jan Tao; Bryan Kautz; Catanzaro", "journal": "", "ref_id": "b36", "title": "High-resolution image synthesis and semantic manipulation with conditional gans", "year": "2018" }, { "authors": " Ting-Chun; Ming-Yu Wang; Jun-Yan Liu; Andrew Zhu; Jan Tao; Bryan Kautz; Catanzaro", "journal": "IEEE Conf. Comput. Vis. Pattern Recog", "ref_id": "b37", "title": "High-resolution image synthesis and semantic manipulation with conditional gans", "year": "2018" }, { "authors": "Yiming Wang; Qin Han; Marc Habermann; Kostas Daniilidis; Christian Theobalt; Lingjie Liu", "journal": "", "ref_id": "b38", "title": "Neus2: Fast learning of neural implicit surfaces for multi-view reconstruction", "year": "2023" }, { "authors": "Zhou Wang; Alan C Bovik; Hamid R Sheikh; Eero P Simoncelli", "journal": "IEEE Trans. 
Image Process", "ref_id": "b39", "title": "Image quality assessment: from error visibility to structural similarity", "year": "2004" }, { "authors": "Zirui Wang; Shangzhe Wu; Weidi Xie; Min Chen; Adrian Victor; Prisacariu", "journal": "", "ref_id": "b40", "title": "Nerf-: Neural radiance fields without known camera parameters", "year": "2021" }, { "authors": "Jonathan Bowen Wen; Valts Tremblay; Stephen Blukis; Thomas Tyree; Alex Müller; Dieter Evans; Jan Fox; Stan Kautz; Birchfield", "journal": "", "ref_id": "b41", "title": "Bundlesdf: Neural 6-dof tracking and 3d reconstruction of unknown objects", "year": "2023" }, { "authors": "Benjamin Wilson; William Qi; Tanmay Agarwal; John Lambert; Jagjeet Singh; Siddhesh Khandelwal; Ratnesh Bowen Pan; Andrew Kumar; Jhony Hartnett; Deva Kaesemodel Pontes; Peter Ramanan; James Carr; Hays", "journal": "", "ref_id": "b42", "title": "Argoverse 2: Next generation datasets for self-driving perception and forecasting", "year": "2021" }, { "authors": "Zirui Wu; Tianyu Liu; Liyi Luo; Zhide Zhong; Jianteng Chen; Hongmin Xiao; Chao Hou; Haozhe Lou; Yuantao Chen; Runyi Yang; Yuxin Huang; Xiaoyu Ye; Zike Yan; Yongliang Shi; Yiyi Liao; Hao Zhao", "journal": "CICAI", "ref_id": "b43", "title": "Mars: An instanceaware, modular and realistic simulator for autonomous driving", "year": "2023" }, { "authors": "Pengchuan Xiao; Zhenlei Shao; Steven Hao; Zishuo Zhang; Xiaolin Chai; Judy Jiao; Zesong Li; Jian Wu; Kai Sun; Kun Jiang; Yunlong Wang; Diange Yang", "journal": "", "ref_id": "b44", "title": "Pandaset: Advanced sensor suite dataset for autonomous driving", "year": "2021" }, { "authors": "Ziyang Xie; Junge Zhang; Wenye Li; Feihu Zhang; Li Zhang", "journal": "", "ref_id": "b45", "title": "S-neRF: Neural radiance fields for street views", "year": "2023" }, { "authors": "Ze Yang; Yun Chen; Jingkang Wang; Sivabalan Manivasagam; Wei-Chiu Ma; Anqi ; Joyce Yang; Raquel Urtasun", "journal": "", "ref_id": "b46", "title": "Unisim: A neural closed-loop sensor simulator", "year": "2007" }, { "authors": "Ze Yang; Sivabalan Manivasagam; Yun Chen; Jingkang Wang; Rui Hu; Raquel Urtasun", "journal": "", "ref_id": "b47", "title": "Reconstructing objects", "year": "" } ]
[ { "formula_coordinates": [ 3, 344.41, 203.6, 200.71, 30.44 ], "formula_id": "formula_0", "formula_text": "f (r) = Nr i=1 w i f i , w i = α i i-1 j=1 (1 -α j ).(1)" }, { "formula_coordinates": [ 3, 485.73, 246.01, 59.38, 9.68 ], "formula_id": "formula_1", "formula_text": "x i = o + τ i d," }, { "formula_coordinates": [ 3, 395.88, 325.28, 149.23, 22.31 ], "formula_id": "formula_2", "formula_text": "α i = 1 1 + e βsi ,(2)" }, { "formula_coordinates": [ 5, 79.22, 621.15, 207.15, 22.31 ], "formula_id": "formula_3", "formula_text": "V i = τ i+1 -τ i 3 A i + A i A i+1 + A i+1 ,(3)" }, { "formula_coordinates": [ 5, 50.11, 661.86, 81.88, 14.47 ], "formula_id": "formula_4", "formula_text": "x i = o + τi+τi+1 2 d." }, { "formula_coordinates": [ 5, 115.25, 693.9, 24.68, 10.27 ], "formula_id": "formula_5", "formula_text": "n l V 1/3 i" }, { "formula_coordinates": [ 6, 92.25, 506.94, 194.11, 31.4 ], "formula_id": "formula_6", "formula_text": "L image = 1 N p Np i=1 λ rgb L rgb i + λ vgg L vgg i .(4)" }, { "formula_coordinates": [ 6, 60.08, 681.03, 226.29, 30.32 ], "formula_id": "formula_7", "formula_text": "L lidar = 1 N N i=1 (λ d L d i + λ int L int i + λ p d L p d i + λ w L w i ),(5)" }, { "formula_coordinates": [ 6, 385.87, 334, 159.25, 21.46 ], "formula_id": "formula_8", "formula_text": "L w i = τi,j >ϵ ∥w ij ∥ 2 ,(6)" }, { "formula_coordinates": [ 14, 399.1, 106.66, 146.02, 22.31 ], "formula_id": "formula_9", "formula_text": "α = 1 1 + e βs ,(7)" }, { "formula_coordinates": [ 14, 318.83, 329.21, 226.29, 73.08 ], "formula_id": "formula_10", "formula_text": "L reg = 1 N N i=1   γi,j >ϵ ||w ij || 2 + γi,j <ϵ (||∇s(x ij )|| -1) 2   .(8)" }, { "formula_coordinates": [ 14, 370.53, 542.66, 174.58, 47.08 ], "formula_id": "formula_11", "formula_text": "    k 1 k 2 k 3 k 4     =     1 -1 -1 -1 -1 1 -1 1 -1 1 1 1     .(9)" }, { "formula_coordinates": [ 14, 395.88, 646.78, 149.23, 26.88 ], "formula_id": "formula_12", "formula_text": "x ij ) = 1 4δ l s l k l .(10)" }, { "formula_coordinates": [ 15, 532.66, 465.33, 12.45, 8.64 ], "formula_id": "formula_13", "formula_text": ")13" } ]
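The formulas field above stores NeuRAD's equations in extraction-flattened form; formula_0 is the standard volume-rendering weight w_i = α_i ∏_{j<i}(1 − α_j) and formula_2 converts a signed distance s_i to opacity via α_i = 1/(1 + e^{β s_i}). The sketch below shows how these two pieces combine to composite per-sample features along a ray. It is a minimal NumPy illustration with invented function names, not code from the NeuRAD implementation, and the default β = 20.0 is simply taken from the hyperparameter table listed earlier.

```python
import numpy as np

def sdf_to_alpha(sdf, beta=20.0):
    """formula_2: alpha = 1 / (1 + exp(beta * s)); negative SDF (inside the surface) gives alpha near 1."""
    return 1.0 / (1.0 + np.exp(beta * sdf))

def render_ray_feature(features, sdf, beta=20.0):
    """formula_0: composite per-sample features with weights w_i = alpha_i * prod_{j<i}(1 - alpha_j)."""
    alpha = sdf_to_alpha(sdf, beta)                                 # (N,)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))   # accumulated transmittance T_i
    weights = alpha * trans                                         # (N,)
    return (weights[:, None] * features).sum(axis=0), weights

# Toy ray: 5 samples with 32-dim features, surface crossed around the third sample.
feats = np.random.rand(5, 32)
sdf = np.array([0.5, 0.2, -0.1, -0.4, -0.9])
rendered_feature, w = render_ray_feature(feats, sdf)
```

Rendering a 32-dimensional feature per ray rather than RGB directly is consistent with the "NFF feature dim 32" entry in the hyperparameter table above; as we read the ablation table, an upsampling CNN decoder then maps such feature maps to pixels.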
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b2" ], "table_ref": [], "text": "Handwritten mathematical expression recognition (HMER) has multiple applications, including assignment grading, digital library services, human-computer interaction, and office automation. Offline HMER is generally considered more difficult than online HMER due to differences in handwriting styles and the complex two-dimensional structure of mathematical formulas (Zhang et al., 2018). Recently, the methods based on encoder-decoder structure have shown obvious success (Cho et al., 2014). However, these methods may not always ensure accurate attention, particularly when the structure of a handwritten formula is complicated or the mathematical notation is unclear (Li et al., 2022). To alleviate this problem, we propose the Intelligent-Detection Network (IDN) that uses improved object detection network YOLOv7 and a structural analysis module based on bidirectional gated recurrent unit (BiGRU) and baseline symbol relationship tree (BSRT) to obtain HMER results. Our extensive experiments demonstrate that IDN outperforms other excellent methods on the HME100K dataset." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "HMER", "publication_ref": [ "b3", "b4", "b5", "b7", "b8" ], "table_ref": [], "text": "HMER methods have been comprised of three key components: symbol segmentation, symbol recognition and structure analysis (Chan & Yeung, 2000). Convolutional neural network (CNN) are capable of learning and automatically extracting features from original image data (Krizhevsky et al., 2017). These low-level feature representations are transformed into high-level feature representations, allowing CNN to effectively complete complex tasks within the field of computer vision. To alleviate the problem of inadequate coverage, Zhang et al. (2017) proposed the WAP encoder-decoder model, which consists of a CNN encoder based on visual geometry group (VGG) architecture and a decoder that uses a recurrent neural network (RNN) with gated recurrent unit (GRU). The WAP model encodes the input image of a mathematical formula to extract high-dimensional features and then decodes the corresponding LaTeX sequence of the formula.\nIn the subsequent research, Zhang et al. (2018) proposed the DenseWAP model to enhance the performance of CNN. The model replaces the VGG in the WAP model with the dense convolutional neural network (DenseNet). Wu et al. (2019) proposed the PAL model and PAL-V2 model which combine deep learning and adversarial learning to overcome the change of writing style. The PAL-V2 model uses a decoder based on CNN to solve the problems of gradient disappearance and gradient explosion in RNN (Wu et al., 2020)." }, { "figure_ref": [], "heading": "Object Detection", "publication_ref": [ "b9", "b11", "b12" ], "table_ref": [], "text": "Object detection algorithms can be broadly classified into two categories: two-stage algorithms and one-stage algorithms. Two-stage algorithms identify regions of interest and then analyze the contents of these regions to obtain detection results, while one-stage algorithms directly perform regression analysis to obtain detection results without explicitly identifying regions of interest. Ren et al. (2015) proposed the Faster R-CNN algorithm, which replaces selective search with a regional suggestion network, thereby significantly improving training efficiency. 
However, while the detection effect of two-stage object detection algorithms is improving, the detection speed remains limited by the backbone network structure.\nRedmond et al. ( 2016) proposed the one-stage object detection algorithm called you only look once (YOLO) for improving detection speed in practical application scenarios. YOLO has a simple training process and fast detection speed compared to two-stage object detection algorithms. However, it has limitations in detecting small objects and can only detect one object at a time.YOLOv3 model introduced the feature pyramid technique to obtain multi-scale features and balance detection speed and accuracy by changing the model's structure (Redmon & Farhadi, 2018). YOLOv4 improved the model's detection capabilities by incorporating the CSPDarknet-53 backbone network and Mish activation function (Bochkovskiy et al., 2020). The recognition ability of the network and the generalization of the network have been improved in YOLOv7 by incorporating the efficient layer aggregation network (ELAN) module and SPPCSPC module (Wang et al., 2022). As a result, YOLOv7 is currently the most advanced algorithm for one-stage object detection." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Intelligent-Detection Network Based on improved YOLOv7", "publication_ref": [], "table_ref": [], "text": "As shown in Figure 1, we present our Intelligent Detection Network (IDN) which is an end-toend trainable architecture consisting of the improved YOLOv7 network, BiGRU and BSRT. To further enhance the feature extraction of numbers and symbols, we incorporate a feature enhancement module (FEM) into the YOLOv7 structure. FEM uses extended convolution to learn feature information at different scales in order to improve the accuracy of multi-scale target detection and recognition (Wang et al., 2022). We use the feature fusion technique of FEM to enhance the accuracy of multi-scale predictions in HMER.\nThe input image is first processed by the backbone which consists of the ELAN and MP-1 modules. The resulting feature is then passed through a combination of modules including SPPCSPC, MP-2, FEM and others in the head stage to transmit and locate the feature information. Finally, the module based on BiGRU and BSRT is used for structural analysis to obtain the recognition result. " }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Structural Analysis Module Based on BiGRU and BSRT", "publication_ref": [], "table_ref": [], "text": "To improve information processing efficiency and reduce model parameters and tensors, GRUs replace the three gated cycle units of long short-term memory (LSTM) with reset gate (rt) and update gate (zt). This makes GRUs a more concise and efficient option compared to LSTM. The BiGRU layer takes the vector matrix from the previous layer as input and uses a bidirectional gated recurrent neural network to extract semantic information and feature structure. Unlike a regular GRU which only considers past information, the BiGRU consists of forward GRUs and backward GRUs that can consider both past and future information.\nFigure 2 illustrates the structure of GRU and BiGRU. We use BiGRU to extract symbol category and position information from the improved YOLOv7 detection results. This extracted information is then fed into the BSRT for final processing. 
Last but not least, we use BSRT algorithm to produce mathematical expression results in structural analysis module. BSRT defines seven spatial relationships, including above, below, right, superscript, subscript, and unrelated. After using the improved YOLOv7 algorithm for object detection and recognition, the center point, length, and width of symbols within formula species are identified. This allows for the determination of seven distinct spatial relations between each symbol in the formula based on the gathered information.\nHowever, obtaining precise formula structures cannot be achieved solely through the position information of each symbol. To address this, we also consider the center offset, aspect ratio, and overlap interval ratio of the symbol. The center offset θ is determined by measuring the angle between the line at the center point of the adjacent symbol and the horizontal line. The aspect ratio α and β of a symbol are calculated by dividing the length of the symbol by the length of the adjacent symbol and the width of the symbol by the width of the adjacent symbol, respectively. The overlap interval ratio λ and μ of a symbol are expressed as the ratio of the projected length of the two symbols in the vertical or horizontal direction to the length or width of the previous symbol. The relationship between symbols is determined using the three functions that have been defined. For example, if the center offset θ between two symbols is between 0.4π and 0.6π and the overlap interval ratio μ is greater than 0.5, the upper and lower relation between the two symbols is considered to be satisfied. Figure 3 gives the resulting BSRT structure following object detection and recognition. In BSRT structure, each node represents a symbol, and the label on each side denotes the relationship between the two symbols." }, { "figure_ref": [], "heading": "Experiments and Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b15" ], "table_ref": [], "text": "We select the offline dataset HME 100K as our experimental dataset (Yuan et al., 2022). The HME100K dataset is a collection of handwritten mathematical expressions from real-life scenarios, comprising 74,502 training images and 24,607 testing images. These images are sourced from thousands of photos of students' paper, and they contain real background and color information. Therefore, we need to binarize them before inputting them into the network to facilitate subsequent detection and recognition." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "We evaluate the performance of HMER using the expression recognition rate (ExpRate) as the evaluation metrics. ExpRate is calculated by determining the ratio of correctly recognized symbols to all symbols. To better assess the effectiveness of the algorithm, we also consider the tolerance levels of ≤1 and ≤2, which indicate that the expression recognition rate can tolerate one or two sign-level errors at most." }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "The proposed IDN is implemented in PyTorch 1.10 and trained on a single Nvidia RTX 3090 with 24GB RAM. Figure 4 gives some results identified by IDN algorithm, including the identified LaTeX sequence results and visualization results. 
To demonstrate the effectiveness of our algorithm, we conducted experiments comparing it to other excellent algorithms in the same environment. As shown in Table 1, our IDN algorithm achieved an increase in ExpRate of 1.89% compared to ABM, 5.97% compared to DWAP, and 10.6% compared to PAL-v2. Furthermore, IDN outperformed the state-of-the-art algorithm ABM in terms of ExpRate, ≤1, and ≤2. To better understand the benefits of our proposed algorithm IDN, we provide some examples of its performance compared to DWAP in the HMER task. Figure 5 demonstrates that IDN is capable of recognizing intricate symbols, such as the \"x\" in the first image, the \"-\" in the second and third images, and the number \"3\" in the fourth image." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper presents the design of an intelligent detection network named IDN that uses object detection and structure analysis to achieve better recognition performance than other existing HMER algorithms. The proposed IDN incorporates a FEM module to adaptively learn feature information from multiple receptive fields, thereby enhancing the detection performance of the original YOLOv7 algorithm. In addition, we developed a structure analysis module, which uses BiGRU and BSRT, to determine the recognition results of the final formula. Our experiments demonstrate that the proposed IDN algorithm effectively recognizes handwritten mathematical expressions and outperforms other existing algorithms." } ]
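The structural-analysis text in the Methods section above gives one explicit BSRT rule: two symbols are in an above/below relation when the center offset θ lies between 0.4π and 0.6π and the horizontal overlap ratio μ exceeds 0.5. A minimal Python sketch of that single rule follows. The (cx, cy, w, h) box convention and the normalisation of the overlap by the previous symbol's width are our reading of the description, and the remaining relations (right, superscript, subscript, unrelated) would need analogous checks on α, β, and λ.

```python
import math

def center_offset(box_a, box_b):
    """Angle between the line joining the two box centers and the horizontal, in [0, pi]."""
    (ax, ay, _, _), (bx, by, _, _) = box_a, box_b
    return math.atan2(abs(by - ay), bx - ax)

def horizontal_overlap_ratio(box_a, box_b):
    """Overlap of the horizontal projections, normalised by the previous symbol's width (mu)."""
    (ax, _, aw, _), (bx, _, bw, _) = box_a, box_b
    overlap = min(ax + aw / 2, bx + bw / 2) - max(ax - aw / 2, bx - bw / 2)
    return max(overlap, 0.0) / aw

def is_above_below(box_a, box_b):
    """Rule quoted in the Methods section: 0.4*pi < theta < 0.6*pi and mu > 0.5."""
    theta = center_offset(box_a, box_b)
    mu = horizontal_overlap_ratio(box_a, box_b)
    return 0.4 * math.pi < theta < 0.6 * math.pi and mu > 0.5

# Toy usage: a fraction bar directly above a digit (boxes are (cx, cy, w, h), y grows downward).
bar, digit = (50, 40, 20, 4), (50, 60, 14, 16)
assert is_above_below(bar, digit)
```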
The use of artificial intelligence technology in education is growing rapidly, and handwritten mathematical expression recognition (HMER) is receiving increasing attention from researchers. However, many existing HMER methods may fail to accurately read formulas with complex structures, as their attention results can be inaccurate due to illegible handwriting or large variations in writing styles. Our proposed Intelligent-Detection Network (IDN) for HMER differs from traditional encoder-decoder methods by utilizing object detection techniques. Specifically, we develop an enhanced YOLOv7 network that can accurately detect both digits and symbols. The detection results are then fed into the bidirectional gated recurrent unit (BiGRU) and the baseline symbol relationship tree (BSRT) to determine the relationships between symbols and numbers. The experiments demonstrate that the proposed method outperforms encoder-decoder networks in recognizing complex handwritten mathematical expressions, owing to its precise detection of symbols and numbers. Our research can make valuable contributions to the field of HMER and be applied in practical scenarios such as assignment grading in schools and information entry from paper documents.
An Intelligent-Detection Network for Handwritten Mathematical Expression Recognition
[ { "figure_caption": "Figure 1. Structure of the Proposed Intelligent-Detection Network (IDN).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2. Structure of the BiGRU and GRU in IDN.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3. A BSRT Generated From an Image.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4. Some Recognition Results of IDN.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5. The Comparison of Recognition Results Between IDN and DWAP.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Experimental Results of IDN and Other Algorithms", "figure_data": "Method | ExpRate | ≤1 | ≤2; WAP | 56.58% | 68.14% | 74.26%; PAL-v2 | 57.22% | 69.21% | 75.37%; DWAP | 61.85% | 70.63% | 77.14%; ABM | 65.93% | 81.16% | 87.86%; IDN (ours) | 67.82% | 82.91% | 88.37%", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
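The table above reports ExpRate together with the ≤1 and ≤2 tolerances defined in the Evaluation Metrics section. A small sketch of how such a tolerant, expression-level metric is commonly computed is given below: a prediction counts as correct when its token-level edit distance to the ground-truth LaTeX sequence is at most k errors. Using Levenshtein distance here is our assumption; the paper only states that one or two symbol-level errors are tolerated.

```python
def edit_distance(a, b):
    """Levenshtein distance between two token sequences (single-row dynamic programming)."""
    dp = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, tb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ta != tb))
    return dp[-1]

def exprate(preds, gts, tolerance=0):
    """Fraction of expressions whose prediction is within `tolerance` symbol-level errors of the ground truth."""
    correct = sum(edit_distance(p, g) <= tolerance for p, g in zip(preds, gts))
    return correct / len(gts)

preds = [["x", "^", "2"], ["\\frac", "1", "2"]]
gts   = [["x", "^", "2"], ["\\frac", "1", "3"]]
print(exprate(preds, gts, 0), exprate(preds, gts, 1))  # 0.5 1.0
```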
Ziqi Ye
[ { "authors": "J Zhang; J Du; L Dai", "journal": "IEEE Transactions on Multimedia", "ref_id": "b0", "title": "Track, attend, and parse (tap): An end-to-end framework for online handwritten mathematical expression recognition", "year": "2018" }, { "authors": "K Cho; B Van Merriënboer; C Gulcehre; D Bahdanau; F Bougares; H Schwenk; Y Bengio", "journal": "", "ref_id": "b1", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "year": "2014" }, { "authors": "B Li; Y Yuan; D Liang; X Liu; Z Ji; J Bai; . . Bai; X ", "journal": "Springer Nature Switzerland", "ref_id": "b2", "title": "When Counting Meets HMER: Counting-Aware Network for Handwritten Mathematical Expression Recognition", "year": "2022-10-23" }, { "authors": "K F Chan; D Y Yeung", "journal": "International Journal on Document Analysis and Recognition", "ref_id": "b3", "title": "Mathematical expression recognition: a survey", "year": "2000" }, { "authors": "A Krizhevsky; I Sutskever; G E Hinton", "journal": "Communications of the ACM", "ref_id": "b4", "title": "Imagenet classification with deep convolutional neural networks", "year": "2017" }, { "authors": "J Zhang; J Du; S Zhang; D Liu; Y Hu; J Hu; . . Dai; L ", "journal": "Pattern Recognition", "ref_id": "b5", "title": "Watch, attend and parse: An end-to-end neural network based approach to handwritten mathematical expression recognition", "year": "2017" }, { "authors": "J Zhang; J Du; L Dai", "journal": "IEEE", "ref_id": "b6", "title": "Multi-scale attention with dense encoder for handwritten mathematical expression recognition", "year": "2018" }, { "authors": "J W Wu; F Yin; Y M Zhang; X Y Zhang; C L Liu", "journal": "Springer International Publishing", "ref_id": "b7", "title": "Image-to-markup generation via paired adversarial learning", "year": "2018" }, { "authors": "J W Wu; F Yin; Y M Zhang; X Y Zhang; C L Liu", "journal": "International Journal of Computer Vision", "ref_id": "b8", "title": "Handwritten mathematical expression recognition via paired adversarial learning", "year": "2020" }, { "authors": "S Ren; K He; R Girshick; J Sun", "journal": "Advances in neural information processing systems", "ref_id": "b9", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "J Redmon; S Divvala; R Girshick; A Farhadi", "journal": "", "ref_id": "b10", "title": "You only look once: Unified, real-time object detection", "year": "2016" }, { "authors": "J Redmon; A Farhadi", "journal": "", "ref_id": "b11", "title": "Yolov3: An incremental improvement", "year": "2018" }, { "authors": "A Bochkovskiy; C Y Wang; H Y M Liao", "journal": "", "ref_id": "b12", "title": "Yolov4: Optimal speed and accuracy of object detection", "year": "2020" }, { "authors": "C Y Wang; A Bochkovskiy; H Y M Liao", "journal": "", "ref_id": "b13", "title": "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors", "year": "2022" }, { "authors": "J Wang; Y Chen; Z Dong; M Gao", "journal": "Neural Computing and Applications", "ref_id": "b14", "title": "Improved YOLOv5 network for real-time multi-scale traffic sign detection", "year": "2022" }, { "authors": "Y Yuan; X Liu; W Dikubab; H Liu; Z Ji; Z Wu; X Bai", "journal": "", "ref_id": "b15", "title": "Syntax-aware network for handwritten mathematical expression recognition", "year": "2022" } ]
[]
2023-11-26
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b7", "b22", "b10", "b5", "b5", "b11", "b28", "b17", "b32", "b33", "b19", "b12", "b20", "b11", "b10", "b5" ], "table_ref": [], "text": "Neural Radiance Fields (NeRFs) have attracted enormous academic interest in 3D scene representation, due to their remarkable ability to produce high-quality synthesized views in diverse 3D environments [17]. Recent works have concentrated on enhancing the performance and practicality of NeRF, thereby broadening its applicability with higher reconstruction quality and faster training speed [8,19,27].\nMoreover, NeRF is also widely used in many downstream applications, including 3D editing and novel view synthesis [14]. With that, the demand for object-specific NeRF datasets has risen. Nevertheless, the inherent limitation of NeRF, which provides only color and density information, presents a challenge for extracting specific objects from multi-view images.\nIn order to extract object-level NeRF from multi-view images, recent works have primarily explored the utilization of 2D visual models like CLIP [23] or DINO [4] along with additional feature images provided by modified NeRFs, such as LERF [11] and Interactive Segment Anything NeRF [6]. However, this approach has limitations, including the absence of required 3D object meshes, excessive additional training costs for the complete scene NeRF, and suboptimal reconstruction quality [6].\nTo address these limitations, our objective is to extract specific object NeRFs from multi-view images representing a 3D scenario. Although there have been continuous advancements in 2D image segmentation, such as the recently proposed Segment Anything Model (SAM) [12], segmenting the required 3D object NeRF from an original scenario encounters numerous challenges. Notably, training a 3D segmentation model similar to SAM for zero-shot segmentation tasks remains a formidable undertaking [5]. While extending segmentation capabilities directly into 3D scenarios presents formidable challenges, combining the 2D segmentation proficiency of SAM with the 3D representation capabilities of NeRF is a feasible and promising approach.\nBased on this idea, in this work, we propose Segment Object NeRF (Obj-NeRF) to extract a certain object and reconstruct its geometry, from a few user prompts. The effectiveness of Obj-NeRF is depicted in Figure 1. Obj-NeRF initially receives prompts indicating a specific object within a single image, selected from a set of multi-view images representing a 3D scenario. Subsequently, Obj-NeRF generates multi-view segmented images through the utilization of the SAM, thereby supervising the construction of the segmented target object NeRF. After acquiring the object NeRF, we further evaluate it on various 3D editing applications. The main contributions of this paper can be summarized as follows:\n• Firstly, we present a comprehensive pipeline for constructing a segmented NeRF targeting a specific object, with the input consisting of initial prompts from a single image. Our pipeline eliminates the need for pre-trained full-scene NeRFs, thereby avoiding unnecessary training expenses and enhancing reconstruction quality. , 28], and expanding their application scenarios [22,29]. There are also many downstream works on NeRFs, such as NeRF editing [18,33,34], 3D mesh extraction [20], and 3D generation tasks [13,14,21]. Segmentation on NeRFs. Significant progress has been made in 2D semantic fields, including DETR [3] and CLIP [10]. 
Recently, models trained on extremely large-scale datasets, such as the Segment Anything Model (SAM) [12] and SEEM [35], have shown strong ability on zero-shot image segmentation. Based on these, many researchers have made some progress to expand on 3D segmentation fields, by training an extra semantic feature on modified NeRF [11] and distilling segmentation backbone with NeRF [6]. However, these works cannot provide a segmented object NeRF, which is essential in many downstream applications like 3D scenario editing. SA3D [5] has proposed a method to construct a segmented object NeRF from multi-view images with SAM. Nonetheless, SA3D requires a pre-trained original full-scene NeRF, which is impractical and brings extra training costs and low reconstruction quality, especially for large-scale scenes." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b11", "b11" ], "table_ref": [], "text": "In this section, some preliminaries for Obj-NeRF will be introduced briefly, including the Neural Radiance Fields (NeRFs) [17] and the Segment Anything Model (SAM) [12].\nNeural Radiance Field. NeRF presents an effective way to synthesize novel views in 3D scenarios. Specifically, NeRF defines an underlying continuous volumetric scene function F Θ : (x, d) =⇒ (c, σ), which outputs the color c ∈ R 3 and the volume density σ ∈ R + with a given spacial location x ∈ R 3 and viewing direction θ ∈ S 2 . In this way, the rendering color C(r) for a specific camera ray r(t) = o + td can be expressed by a volume rendering algorithm as follows:\nC(r) = t f tn T (t)σ(r(t))c(r(t), d) dt,(1)\nwhere t n and t f are the near and far bounds, and the accumulated transmittance T (t) can be calculated as:\nT (t) = exp - t tn σ(r(s)) ds.(2)\nWith these definitions, NeRFs can be optimized using the loss between the ground-truth color C(r) and the calculated color Ĉ(r) for any image I:\nL I = r∈I C(r) -Ĉ(r) 2 . (3\n)\nSegment Anything Model (SAM) SAM, training by numerous 2D images, has been proved to achieve a state-ofart efficiency in zero-shot segmentation tasks [12]. With an image I and some prompts P, including points (positive or negative) and boxes, SAM can provide a mask for the indicated object mask = S(I, P)." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "In this section, we will introduce details of Obj-NeRF. First, the overall pipeline will be demonstrated in Sec. 4.1. Then, a one-shot multi-view segmentation method will be presented in Sec. 4.2. After that, we will introduce a selfprompting method to construct an object NeRF dataset including massive objects using the segmentation method above in Sec. 4.3. In the end, some strategies for novel views synthesizing by NeRFs will be provided in Sec. 4.4." }, { "figure_ref": [ "fig_0" ], "heading": "Overall Pipeline", "publication_ref": [ "b6", "b31" ], "table_ref": [], "text": "We consider a set of multi-view images I = I 1 , ..., I n for one specific scenario with known camera poses. If not, structure-from-motion methods like COLMAP [25] can be utilized to estimate them. The objective is to acquire the 3D representation for any object segmented from this scenario with few prompts. To achieve this, a pre-trained full-scene NeRF for the scenario is not required due to the unnecessary training cost and relatively poor quality. Thus, we propose a method to acquire the segmented NeRF for the object from multi-view images directly. The overall pipeline is shown in Fig. 2. 
To start with, users will first provide a few prompts for one image on the object that is expected to be segmented. Based on this, a COLMAP sparse point cloud [25] can be constructed, which provides the correspondence between 2D images and point clouds used in the next step. Then, the multi-view segmentation procedure with SAM will provide multi-view masks for the object. For large datasets, such as ScanNet [7], ScanNet++ [32], and 3RScan [30], the self-prompting procedure will quickly generate a series of prompts of a kind of objects, which can be used to acquire multi-view segmented for each scene with the method above. In the end, the segmented NeRF of the indicated object will be trained with these multi-view segmented images, which provides novel view synthesizing abilities." }, { "figure_ref": [], "heading": "Multi-view Segmentation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Multi-view Segmentation Algorithm", "publication_ref": [ "b32" ], "table_ref": [], "text": "The first step is quickly obtaining multi-view segmented images from initial prompts on one specific image. It is easy to get the mask for the initial image M 0 with SAM. However, multi-view consistency should be utilized here in order to segment the indicated object on each image. A similar approach is also used by Yin et al. [33] to find a 2D-3D geometry match relationship with prompts spreading, but here we use it for a different task with some effective methods mentioned as follows.\nMore specifically, a sparse point cloud can be easily constructed from input images using COLMAP, a 3D reconstruction toolbox [25]. These sparse point clouds provide the correspondence between feature points on each image and the 3D points in the point cloud. In this way, we can construct a 3D point list D, which contains 3D points belonging to the indicated object. After initializing the list with 3D points that correspond to the feature points on I 0 [M 0 ], the 3D point list and the masks of remnant images can be updated iteratively. Specifically, for a new image I i , the point prompts p i can be selected from the feature points that correspond to 3D points in the list. Then, the mask M i can be obtained using SAM segmentation model S(I i , p i ).\nAfter that, all feature points on I i [M i ] can be added to the list D, which finishes an iterating step. The multi-view segmentation procedure can be summarized in Algorithm 1.\nM i ← S(I i , p i ) 7: X i ← C[I i ] ∪ I i [M i ] 8: D ← D ∪ C(X i\nAfter executing the algorithm above, an interesting byproduct will be obtained from the list D. As the definition of D, it is consisted of those 3D points belonging to the indicated object, which provides its sparse point cloud. Fig. 3 shows the segmented sparse point cloud of these indicated objects. This can be used in the next step and will offer some priors to the novel views synthesizing procedure in Subsection 4.4." }, { "figure_ref": [ "fig_5", "fig_5", "fig_5", "fig_5", "fig_5", "fig_6", "fig_6", "fig_6" ], "heading": "Obstruction Handling", "publication_ref": [ "b32", "b8" ], "table_ref": [], "text": "There is a thorny problem that when the target object is obstructed by other things, it will be easy for the procedure leading to a wrong result. In [33], the method based on projecting 3D points directly as the prompts will severely suffer from it. 
Here, our proposed multi-view segmentation procedure overcomes the wrongly prompting effect, by using the feature point correspondence instead of the projection method. However, these partially obstructed segmented images bring multi-view inconsistency for later NeRF training procedures. As Fig. 4 shows, although the target bowl indicated in Fig. 4 (a) has been correctly segmented in Fig. 4 (e), the inconsistency between Fig. 4 (b) and Fig. 4 (e) will lead to performance degradation during novel views synthesizing.\nIn order to eliminate the inconsistency brought by obstructed images, it is important to identify them first. With the segmented sparse point cloud D, we can first project these 3D points to each image to get the 2D coordinates for these feature points. Then, we can construct the concave hall for these 2D points as the mask of the target object using the alpha-shape method [9]. After that, the estimated mask can be smoothed by a Gaussian filter. Fig. 5 shows the procedures above to identify the obstructed images. The IoU between Fig. 5 (d) and Fig. 5 (f) is 0.096, which means the segmented image should be discarded. It should be noted that we cannot simply calculate the convex hull and regard it as the mask, for there are usually some outlier points which will extremely affect it." }, { "figure_ref": [], "heading": "Multi-object Segmentation", "publication_ref": [], "table_ref": [], "text": "The proposed segmentation algorithm can be extended to k-object segmentation tasks. After giving initial prompts for each target object, we can construct k 3D point lists D 1 , D 2 , ..., D k and update them with masks separately with almost little increase in time consumption. In this way, several target objects can be segmented for only one time. The performances of the multi-object segmentation method will be shown in Subsection 5.2. " }, { "figure_ref": [ "fig_7" ], "heading": "Large Dataset Self-prompting", "publication_ref": [ "b6" ], "table_ref": [], "text": "In Sec. 4.2, we propose a method which receives point prompts as input and outputs multi-view segmented views. Thus, if we can segment every object in a large dataset like ScanNet [7], which contains 1000+ scenes, a large multiview 3D object dataset can be constructed and is useful for many downstream works including generative tasks, like zero123 [14]. However, manually labeling the prompts for each object is tedious and unrealistic, which pushes us to find a feasible way to generate prompts for each scene quickly from a text prompt.\nFirst, a text prompt should be converted to something SAM can utilize and then generate a proper mask. Here, we use an object detector named Grounding DINO model [15], which receives text input and outputs boxes and scores that indicate the position and the probability of the target object. Then, the box with the highest score can be considered as an input to SAM, which provides a proper mask for the target object.\nThe next step is generating point prompts to fit the requirements of the segmentation algorithm proposed in Subsection 4.2. These point prompts should fulfill the conditions below: (1) They stay away from each other and represent all parts of the object; (2) They cannot stay too close to the edge. Thus, we can first calculate the distance to their mask for each point on the mask. Then these points near the edge are selected, for the interior points will interfere with the next step. Finally, point prompts can be generated through the k-means method [26]. Fig. 
6 shows the steps which provide point prompts from the mask.\nIn this way, we create an object NeRF dataset including a large number of objects with just a few textual inputs. Details are discussed in 5.2. " }, { "figure_ref": [], "heading": "Novel View Synthesizing", "publication_ref": [], "table_ref": [], "text": "With the multi-view segmented images from Subsection 4.2, it is practicable to synthesize novel views for the target object after training a NeRF. However, simply constructing NeRF with segmented images only will not lead to a perfect performance. In this subsection, we will introduce some methods which significantly increase the quality of synthesizing." }, { "figure_ref": [ "fig_9", "fig_9", "fig_9" ], "heading": "Sparse and Dense Depth-Supervised NeRF", "publication_ref": [ "b7", "b7", "b6", "b31" ], "table_ref": [], "text": "In order to acquire better performance and faster convergence, Deng et al. [8] have proposed a method that adds depth information to supervise the NeRF training procedure. Specifically, the segmented object sparse point cloud D mentioned in Subsection 4.2 will provide their 3D coordinate information. Thus, for each image, the depth of feature points corresponding to the 3D point cloud can be calculated respectively. In this way, the sparse depth supervised NeRF training can be realized with the loss as follows,\nL NeRF = L rgb + λ d L depth ,(4)\nwhere L depth = ∥d -d∥ 2 indicates the mean square error of depth. It should be noticed that sparse depth supervision makes better performance in extremely few multi-view images like less than 10. For more input images, it will also improve the quality for reconstruction 3D mesh but may not for the reconstructed RGB images [8]. Sparse depth supervision brings limited performance enhancement due to the scarcity of depth information. To achieve higher reconstruction, dense depth information should be included in NeRF training. Many large multiview datasets include depth image for each RGB image, such as ScanNet [7], ScanNet++ [32], and 3RScan [30], which will provide required dense depth information. Fig. 9 shows the novel-view reconstruction performance comparison with and without dense depth supervision. Comparing Fig. 9 (d) and Fig. 9, reconstruction with depth supervision will provide a significantly higher quality 3D mesh." }, { "figure_ref": [], "heading": "Bounding Box and Ray Pruning", "publication_ref": [ "b0" ], "table_ref": [], "text": "After Segmenting the indicated object from each whole image, there are three notable advantages of the reconstruction as follows: (1) Eliminate the extra components thereby reducing the additional NeRF training cost; (2) Reduce the world size leading to augmented ray sampling density and voxel density; (3) Pruning of rays unrelated to the object, significantly conserving CUDA memory and enabling the utilization of higher resolution images. In order to achieve these advantages above, some methods will be introduced during the NeRF training procedure.\nAccording to the segmented sparse point cloud in Subsection 4.2, a bounding box B can be calculated from the known 3D point coordinates, which provides the scale of the world size in NeRF settings. With the much smaller bounding box, the density of ray sampling and the voxel grid increase accordingly (e.g. in the first column of Fig. 3, the overall voxel grid size decreases to 1% of the original one). Additionally, any rays which not intersect with the bounding box, i.e. 
" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b15", "b10", "b6", "b31", "b27", "b7" ], "table_ref": [], "text": "Dataset. In order to verify the generality of our proposed comprehensive pipeline, we evaluate Obj-NeRF on various multi-view datasets, including the forward-facing LLFF dataset [16], the Mip-NeRF 360 dataset [2], the LERF dataset [11], and large indoor datasets such as 3RScan [30], ScanNet [7], and ScanNet++ [32]. Obj-NeRF provides an indicated object NeRF with only a single prompt input for any scenario in these datasets. For large indoor datasets, the self-prompting procedure mentioned in Subsection 4.3 can be used. It eventually yields a large multi-view object dataset including thousands of objects.\nNovel-view Synthesis. In our process of synthesizing novel views, we utilize the framework of DVGO NeRF [27]. It is important to note that our method is not limited to DVGO; other implementations of NeRF such as Instant NGP [19] or Nerfstudio [28] can also be used. Moreover, we have improved the quality of reconstruction by adopting the depth-supervision method from DS-NeRF [8]. To achieve object NeRF applications like object removal, replacement, rotation, and color changing, we use Blender to generate appropriate camera poses." }, { "figure_ref": [ "fig_10", "fig_10", "fig_1" ], "heading": "Results", "publication_ref": [ "b6" ], "table_ref": [], "text": "Multi-view Segmentation Consistency. As shown in Fig. 10, the proposed multi-view segmentation algorithm demonstrates strong robustness across various datasets, including forward-facing, 360° panoramic, and large indoor scenes. Even in the third row of Fig. 10, where images in the ScanNet dataset [7] have relatively low resolution and are sometimes out of focus, our proposed procedure still works. Multi-view Object Dataset. We utilize our proposed self-prompting method on several large indoor datasets in order to construct a multi-view object dataset. As shown in Fig. 11, after indicating a textual input like "chair" or "table", it automatically generates initial prompts for the target object in each scene. After that, the multi-view segmentation and NeRF training procedures follow, constructing an object NeRF for each object.\nNovel-view Synthesis. We construct the object NeRF under the supervision of the multi-view segmented images mentioned above. With the methods introduced in Section 4, our novel view synthesis procedure overcomes the obstruction effect, enables multi-object reconstruction, and utilizes techniques that improve the reconstruction performance. As shown in Fig. 8, we compare our proposed method to SA3D [5], which segments a foreground NeRF from a pre-trained full-scene NeRF and suffers from low resolution and floaters. Our proposed method achieves relatively high reconstruction quality across various scenarios, especially for large indoor datasets like the last row of Fig. 8, where the full-scene NeRF required by SA3D is impractical and of low quality."
}, { "figure_ref": [ "fig_1" ], "heading": "Applications", "publication_ref": [ "b27" ], "table_ref": [], "text": "In order to verify the effectiveness of the object NeRF dataset, we utilize the extracted object NeRF in various applications as shown in Fig. 12, including object removal, replacement, rotation, and color changing.\nAdd-on. We can integrate the segmented object NeRF into any existing NeRF to realize the add-on task. Dur- ing this process, we can also apply the rotation, resize, and other transformations to the object NeRF. Nerfstudio and blender [28] provides a user-friendly way to construct the required camera poses during the editing procedure.\nRemoval. After obtaining the multi-view segmentation for each image, we can add a reverse alpha channel to the original image, representing the background environment without the foreground object. During the NeRF training procedure, the obstructed areas by the foreground object in one view can be inferred by other views. In this way, the object removal NeRF can be realized." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a comprehensive pipeline for constructing segmented object NeRFs, combining the 2D segmentation proficiency of SAM and the 3D reconstruction ability of NeRF. Without dependence on full-scene NeRF, our proposed Obj-NeRF is widely applicable to various scenarios. Compared to existing works, our method outperforms on reconstruction quality and the extensiveness of application environments. Additionally, we provide a feasible way to construct a large object NeRF dataset, which is verified in some applications like NeRF editing tasks. For future works, the constructed object NeRF dataset can be extended to 3D generation tasks." } ]
Object NeRF NeRF editing Figure 1. Proposed Obj-NeRF: indicate prompts on an image, then Obj-NeRF will output the segmented NeRF for the target object. With the segmented object NeRF, some applications including NeRF editing can be realized.
Obj-NeRF: Extract Object NeRFs from Multi-view Images
[ { "figure_caption": "Figure 2 .2Figure2. The overall pipeline for Obj-NeRF. Starting with multi-view RGB images, a COLMAP sparse point cloud can be constructed, which provides multi-view consistency for segmentation. After initializing several prompts for the first image, we can automatically obtain multi-view segmented images quickly, which are used to construct the required segmented NeRF. For large datasets, the indicated objects will be prompted for each scene.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Algorithm 11Multi-view Segmentation Input: A set of images I 0 , I 1 , I 2 , ..., I n ; Initial point prompts p 0 ; SAM model S; COLMAP sparse point cloud C. Output: Multi-view segmented masks M 1 , M 2 , ..., M n . 1: Get the mask for the initial image M 0 = S(I 0 , p 0 ) 2: Find all feature points X = C[I 0 ] ∪ I 0 [M 0 ] 3: Init 3D point list D = C(X).", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "4:for i = 1, 2, ..., n do 5: Find point prompts p i for I i from D and C 6:", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ")", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3. Segmented point cloud of target objects; Images in the first row are original sparse point clouds; Images in the second row are segmented point clouds; Red points on these images indicate the position of cameras.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Showing the obstruction effect; (a) Initial point prompts for the target bowl; (b) Segmented image for the first image; (c) A view where the target bowl is obstructed by the plant; (d) and (f) Directly projecting method leading to a wrongly segmented object; (e) Proposed method to get the correct segmentation.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Procedures to identify obstructed images; (a) Segmented sparse point cloud of the target object; (b) Original RGB image waiting to be masked; (c) Projecting 3D points to 2D image with known camera pose; (d) Generating mask for the original image; (e) Alpha-shape concave hull for the points; (f) Estimated mask after Gaussian filtering.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. The procedure from mask to point prompts; (a) Mask from SAM and the box prompt; (b) Distance heatmap showing the distance to the edge for each point; (c) Extracted points which are near the edge; (d) Point prompts from k-means method.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .Figure 8 .78Figure 7. Comparison of reconstruction performance with different resolution training images; Left: No down-sampling with ray pruningS; Right: Down-sampling.", "figure_data": "", "figure_id": "fig_8", "figure_label": "78", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. Comparison of novel view reconstruction performance with and without dense depth supervision.", "figure_data": "", "figure_id": "fig_9", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. 
Performance of the multi-view segmentation procedure; First row: LLFF dataset; Second row: Mip NeRF 360 dataset; Third row: ScanNet dataset; Last row: LERF dataset.", "figure_data": "", "figure_id": "fig_10", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 .Figure 12 .1112Figure 11. Construction of the multi-view objects dataset; With a textual input like \"chair\" or \"table\", the initial prompts are generated automatically for each scene.", "figure_data": "", "figure_id": "fig_12", "figure_label": "1112", "figure_type": "figure" } ]
Zhiyi Li; Lihe Ding; Tianfan Xue
[ { "authors": "Jonathan T Barron; Ben Mildenhall; Matthew Tancik; Peter Hedman; Ricardo Martin-Brualla; Pratul P Srinivasan", "journal": "", "ref_id": "b0", "title": "Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields", "year": "2021" }, { "authors": "Jonathan T Barron; Ben Mildenhall; Dor Verbin; Peter Pratul P Srinivasan; Hedman", "journal": "", "ref_id": "b1", "title": "Mip-NeRF 360: Unbounded anti-aliased neural radiance fields", "year": "2022" }, { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "Springer", "ref_id": "b2", "title": "End-toend object detection with transformers", "year": "2020" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b3", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Jiazhong Cen; Zanwei Zhou; Jiemin Fang; Wei Shen; Lingxi Xie; Xiaopeng Zhang; Qi Tian", "journal": "", "ref_id": "b4", "title": "Segment anything in 3D with NeRFs", "year": "2023" }, { "authors": "Xiaokang Chen; Jiaxiang Tang; Diwen Wan; Jingbo Wang; Gang Zeng", "journal": "", "ref_id": "b5", "title": "Interactive segment anything NeRF with feature imitation", "year": "2023" }, { "authors": "Angela Dai; X Angel; Manolis Chang; Maciej Savva; Thomas Halber; Matthias Funkhouser; Nießner", "journal": "", "ref_id": "b6", "title": "ScanNet: Richly-annotated 3D reconstructions of indoor scenes", "year": "2017" }, { "authors": "Kangle Deng; Andrew Liu; Jun-Yan Zhu; Deva Ramanan", "journal": "", "ref_id": "b7", "title": "Depth-supervised NeRF: Fewer views and faster training for free", "year": "2022" }, { "authors": "Kaspar Fischer", "journal": "", "ref_id": "b8", "title": "Introduction to alpha shapes", "year": "2000" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig", "journal": "PMLR", "ref_id": "b9", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "Justin Kerr; Chung ; Min Kim; Ken Goldberg; Angjoo Kanazawa; Matthew Tancik", "journal": "", "ref_id": "b10", "title": "LERF: Language embedded radiance fields", "year": "2023" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo", "journal": "", "ref_id": "b11", "title": "Segment anything", "year": "2023" }, { "authors": "Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin", "journal": "", "ref_id": "b12", "title": "Magic3D: High-resolution text-to-3d content creation", "year": "2023" }, { "authors": "Ruoshi Liu; Rundi Wu; Basile Van Hoorick; Pavel Tokmakov; Sergey Zakharov; Carl Vondrick", "journal": "", "ref_id": "b13", "title": "Zero-1-to-3: Zero-shot one image to 3D object", "year": "2023" }, { "authors": "Shilong Liu; Zhaoyang Zeng; Tianhe Ren; Feng Li; Hao Zhang; Jie Yang; Chunyuan Li; Jianwei Yang; Hang Su; Jun Zhu", "journal": "", "ref_id": "b14", "title": "Grounding DINO: Marrying DINO with grounded pre-training for open-set object detection", "year": "" }, { "authors": "Ben Mildenhall; P Pratul; Rodrigo Srinivasan; Nima Ortiz-Cayon; Ravi Khademi Kalantari; Ren Ramamoorthi; Abhishek Ng; Kar", "journal": "ACM Transactions on 
Graphics (TOG)", "ref_id": "b15", "title": "Local light field fusion: Practical view synthesis with prescriptive sampling guidelines", "year": "2019" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Communications of the ACM", "ref_id": "b16", "title": "NeRF: Representing scenes as neural radiance fields for view synthesis", "year": "2021" }, { "authors": "Ashkan Mirzaei; Tristan Aumentado-Armstrong; Konstantinos G Derpanis; Jonathan Kelly; Marcus A Brubaker; Igor Gilitschenski; Alex Levinshtein", "journal": "", "ref_id": "b17", "title": "SPIn-NeRF: Multiview segmentation and perceptual inpainting with neural radiance fields", "year": "2023" }, { "authors": "Thomas Müller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b18", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2022" }, { "authors": "Jacob Munkberg; Jon Hasselgren; Tianchang Shen; Jun Gao; Wenzheng Chen; Alex Evans; Thomas Müller; Sanja Fidler", "journal": "", "ref_id": "b19", "title": "Extracting triangular 3d models, materials, and lighting from images", "year": "2022" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "", "ref_id": "b20", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2022" }, { "authors": "Albert Pumarola; Enric Corona; Gerard Pons-Moll; Francesc Moreno-Noguer", "journal": "", "ref_id": "b21", "title": "D-NeRF: Neural radiance fields for dynamic scenes", "year": "2021" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b22", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Barbara Roessle; Jonathan T Barron; Ben Mildenhall; Matthias Pratul P Srinivasan; Nießner", "journal": "", "ref_id": "b23", "title": "Dense depth priors for neural radiance fields from sparse input views", "year": "2022" }, { "authors": "L Johannes; Jan-Michael Schonberger; Frahm", "journal": "", "ref_id": "b24", "title": "Structurefrom-motion revisited", "year": "2016" }, { "authors": "P Kristina; Miin-Shen Sinaga; Yang", "journal": "IEEE Access", "ref_id": "b25", "title": "Unsupervised kmeans clustering algorithm", "year": "2020" }, { "authors": "Cheng Sun; Min Sun; Hwann-Tzong Chen", "journal": "", "ref_id": "b26", "title": "Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction", "year": "2022" }, { "authors": "Matthew Tancik; Ethan Weber; Evonne Ng; Ruilong Li; Brent Yi; Terrance Wang; Alexander Kristoffersen; Jake Austin; Kamyar Salahi; Abhik Ahuja", "journal": "", "ref_id": "b27", "title": "Nerfstudio: A modular framework for neural radiance field development", "year": "2023" }, { "authors": "Jiaxiang Tang; Xiaokang Chen; Jingbo Wang; Gang Zeng", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b28", "title": "Compressible-composable NeRF via rank-residual decomposition", "year": "2022" }, { "authors": "Johanna Wald; Armen Avetisyan; Nassir Navab; Federico Tombari; Matthias Nießner", "journal": "", "ref_id": "b29", "title": "RIO: 3D object instance relocalization in changing indoor environments", "year": "2019" }, { "authors": "Chen Wang; Xian Wu; Yuan-Chen Guo; Song-Hai Zhang; Yu-Wing Tai; Shi-Min Hu", "journal": "", "ref_id": 
"b30", "title": "NeRF-SR: High quality neural radiance fields using supersampling", "year": "2022" }, { "authors": "Chandan Yeshwanth; Yueh-Cheng Liu; Matthias Nießner; Angela Dai", "journal": "", "ref_id": "b31", "title": "ScanNet++: A high-fidelity dataset of 3D indoor scenes", "year": "2023" }, { "authors": "Youtan Yin; Zhoujie Fu; Fan Yang; Guosheng Lin", "journal": "", "ref_id": "b32", "title": "OR-NeRF: Object removing from 3D scenes guided by multiview segmentation with neural radiance fields", "year": "2023" }, { "authors": "Yu-Jie Yuan; Yang-Tian Sun; Yu-Kun Lai; Yuewen Ma; Rongfei Jia; Lin Gao", "journal": "", "ref_id": "b33", "title": "NeRF-editing: geometry editing of neural radiance fields", "year": "2022" }, { "authors": "Xueyan Zou; Jianwei Yang; Hao Zhang; Feng Li; Linjie Li; Jianfeng Gao; Yong Jae Lee", "journal": "", "ref_id": "b34", "title": "Segment everything everywhere all at once", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 89.8, 338.78, 196.56, 26.29 ], "formula_id": "formula_0", "formula_text": "C(r) = t f tn T (t)σ(r(t))c(r(t), d) dt,(1)" }, { "formula_coordinates": [ 3, 108.34, 401.68, 178.03, 26.29 ], "formula_id": "formula_1", "formula_text": "T (t) = exp - t tn σ(r(s)) ds.(2)" }, { "formula_coordinates": [ 3, 110.61, 476.74, 171.88, 27.02 ], "formula_id": "formula_2", "formula_text": "L I = r∈I C(r) -Ĉ(r) 2 . (3" }, { "formula_coordinates": [ 3, 282.49, 484.01, 3.87, 8.64 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 4, 55.87, 235.42, 105.74, 41.96 ], "formula_id": "formula_4", "formula_text": "M i ← S(I i , p i ) 7: X i ← C[I i ] ∪ I i [M i ] 8: D ← D ∪ C(X i" }, { "formula_coordinates": [ 5, 375.93, 564.71, 169.19, 9.81 ], "formula_id": "formula_5", "formula_text": "L NeRF = L rgb + λ d L depth ,(4)" } ]
10.1145/3641289
2015-07-24
[ { "figure_ref": [ "fig_0", "fig_0", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b24", "b36", "b11", "b14" ], "table_ref": [], "text": "Large language models (LLMs) have unparalleled proficiency in language generation, knowledge application, and intricate reasoning (Zhao et al., 2023). However, these models invariably manifest hallucination (Rawte et al., 2023), as they often generate content that is incongruent with user input, the model's output context, or factual information. Real-world hallucination examples from our UHGEval dataset can be observed in Fig. 1.\nThe fabricated news content depicted in Fig. 1 offers NO utility to journalists; on the contrary, the verification and rectification of such content exacts a toll on the valuable time of journalists. To this concern, it is crucial to first formulate a comprehensive, stringent, and demanding benchmark for the assessment of hallucination in language generation (Zhang et al., 2023;Wang et al., 2023b).\nWhile there have been a bunch of efforts to develop benchmarks for hallucination assessment, they always employ restricted techniques to produce particular kinds of hallucinated utterances. This approach is at odds with real-world scenarios where hallucinations arise in unrestricted, spontaneously generated content. For example, HaluEval specifies the type of hallucination in the prompt when generating hallucinated text: \"You are trying to answer a question but misunderstand the question context and intention\" (Li et al., 2023). Additionally, benchmarks such as HADES annotate hallucinations at a finer granularity by generating token-level hallucinations based on text perturbations (Liu et al., 2022), but the text perturbation method is still constrained. Besides, many benchmarks are centered on the evaluation of hallucinations in English, neglecting the assessment of such phenomena in Chinese. The extensive lexicon of Chinese characters, combined with the complexities introduced by Chinese word segmentation, To address the aforementioned challenges, we introduce a novel benchmark for hallucination assessment, as depicted in Fig. 2. The benchmark dataset is composed of raw Chinese news articles and continuations of those articles freely generated by LLMs but annotated with hallucinations.\nFurthermore, selecting texts from the news domain is intentional, given that news requires utmost precision in conveying factual information and exhibits minimal tolerance for hallucinations, presenting a considerable challenge for the majority of LLMs. Moreover, news data encompasses a wide range of topics, including medicine, technology, finance, sports, etc., incorporating features found in texts from other domains. Lastly, news articles are readily available and frequently employed as training corpora by a large number of LLMs, guaranteeing impartiality in the evaluation of many LLMs (Zhao et al., 2023).\nOur contributions: (1) The development of an unconstrained hallucination evaluation dataset, comprising over 5000 items. Existing methods for constructing datasets often yield biases towards predefined directions, thereby hindering the full simulation of real-world hallucinations. (2) The establishment of a unified and diverse evaluation framework, UHGEval, that encompasses discriminative, selective, and generative evaluations. Current benchmark methods for hallucination evaluation often exhibit a singular approach and lack task speci-ficity. (3) A comprehensive empirical analysis. 
We evaluated eight prominent Chinese LLMs and three classic GPT series models to explore the credibility of various LLMs. The aforementioned dataset, evaluation framework, and empirical results collectively constitute the UHGEval benchmark, which is openly available on GitHub1 ." }, { "figure_ref": [], "heading": "The UHGEval Dataset", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Data Collection and Pre-processing", "publication_ref": [ "b19" ], "table_ref": [ "tab_1" ], "text": "We amassed tens of thousands of historical news articles from leading Chinese news websites, covering the period from January 2015 to January 2017, to serve as the foundation for constructing the dataset. It is worth noting that the decision to eschew more recent news articles (e.g., from 2024) was made to better assess the model's understanding of existing knowledge. Indeed, the knowledge embedded within the training data of existing Chinese LLMs typically encompasses information about significant news between 2015and 2017(Zhao et al., 2023)).\nThe collected news spans various topics, such as sports, education, science, society, finance, and more. This diversity underscores the advantage of choosing news texts for our dataset, as it enables the incorporation of a wide array of text genres. We hypothesize that the occurrence of hallucinations will vary as LLMs generate news across different 1.\nIn the data pre-processing stage, we divide a complete news article into three parts: the beginning text, the following text, and the reference information. The beginning text serves to guide the model in generating the continuation and is typically the opening portion of the news. During evaluation, the LLM is required to generate content following the beginning text. The following text comprises the subsequent sentences in the news article and serves as the ground truth for the continuation task. Finally, all the remaining text, after the beginning text is excluded, serves as a source of reference information. This section provides reference information for labeling and also acts as the reference text for the reference-based evaluation." }, { "figure_ref": [], "heading": "Unconstrained Hallucination Generation", "publication_ref": [ "b11", "b14", "b11", "b16", "b18", "b20", "b6", "b1" ], "table_ref": [], "text": "Unlike directed hallucination generation (Li et al., 2023) or perturbation-based generation (Liu et al., 2022), we have adopted an unconstrained generation methodology for the continuation of natural language content, though it poses difficulties for subsequent annotations. This generation's fashion entails directly inputting the text to be continued into the model without any restrictive prompt instructions, thereby obtaining organic results.\nFurthermore, current benchmarks for evaluating hallucination have predominantly relied on a single LLM to produce a hallucinated dataset. Notable examples include HaluEval (Li et al., 2023) and PHD (Yang et al., 2023b), which exclusively utilize ChatGPT, and FActScore (Min et al., 2023) and FACTOR (Muhlgay et al., 2023), which solely employ InstructGPT (Ouyang et al., 2022). In contrast, our methodology incorporates a suite of five distinct Chinese LLMs to generate hallucinated content. These models include ChatGLM2-6B (Du et al., 2022), Baichuan2-13B (Yang et al., 2023a), Qwen-14B (Bai et al., 2023), InternLM-20B (In-ternLM, 2023), and Xinyu-7B. 
For additional information about the Xinyu series models, please refer to Appendix B.1.\nFor each input news article, we concurrently generate five candidate continuations using five different LLMs without constraint. Overall, our approach engenders a more unconstrained and heterogeneous generation of hallucinations, mitigating the bias that may arise from the use of a single model or constrained prompting." }, { "figure_ref": [], "heading": "Hallucination Ranking", "publication_ref": [], "table_ref": [], "text": "Given the unconstrained nature of our paradigm, the task of discerning whether the generated content is indeed hallucinated presents a significant challenge. Upon generating the continuations, an exclusive dependence on human annotation would incur substantial costs, whereas a purely machine-based approach, such as utilizing GPT4, could potentially yield less accurate results.\nTo navigate these complexities, we have adopted a two-stage annotation. This approach begins with an initial stage of hallucination ranking (Section 2.3), designed to sort the generated content based on the likelihood of hallucination. The ranking is then followed by the second stage of automatic labeling and human rechecking (Section 2.4).\nHallucination ranking is a crucial step in selecting the most appropriate continuation from a set of candidate continuations generated by LLMs. We employ a ranking process detailed in Algorithm 1. This process relies on two critical metrics: fluency, ensuring that the continuation does not become too nonsensical, and likelihood, which stands for the likelihood of hallucination occurrence, ensuring that the continuation includes a detectable level of hallucination. By employing such a ranking, it is guaranteed that, in the worst-case scenario, the final candidate ranks at least third in fluency and third in the likelihood of hallucination occurrence, achieving a balanced level. The two metrics are computed as follows.\nFluency This refers to the coherence and readability of the text. A fluent text should read smoothly, be grammatically correct, and make logical sense in the context of the continuation. To assess fluency, a reward model developed by the Institute for Advanced Algorithms Research (IAAR) is employed, trained to score text quality based on fluency." }, { "figure_ref": [], "heading": "Likelihood of Hallucination Occurrence", "publication_ref": [ "b23", "b12" ], "table_ref": [], "text": "This dimension evaluates the extent to which the continuation may contain hallucinated content. To estimate this likelihood, we evaluate the lexical correlation between the generated continuation and the reference information: the lower the correlation, the more likely hallucinations are to occur. Although metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) already exist, we believe these rule-based methods may not effectively uncover hallucinations. Therefore, we propose the keyword precision (kwPrec) metric.\nThis metric uses an LLM (e.g., GPT3.5-Turbo) to extract keywords from the continuation and determines whether these keywords have a match in the reference information. The ratio of all matches to the total number of keywords is then calculated. Since LLMs usually extract appropriate keywords more effectively, kwPrec focuses more on factual relevance than on expressional relevance. Fig. 3 illustrates the tokens segmented by our method compared to those obtained by BLEU-4 and ROUGE-L.
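A minimal sketch of kwPrec is given below; the keyword-extraction call is abstracted away as a list already produced by an LLM, and plain substring matching stands in for whatever matching rule the full implementation uses. The example values are illustrative.

```python
def kw_prec(continuation_keywords, reference_text):
    """Keyword precision: fraction of keywords extracted from the continuation
    that can be matched in the reference information."""
    if not continuation_keywords:
        return 0.0
    matched = sum(1 for kw in continuation_keywords if kw in reference_text)
    return matched / len(continuation_keywords)

# Hypothetical usage with keywords already extracted by an LLM:
keywords = ["Jiangsu", "green food", "most developed", "province"]
reference = "Jiangsu announced the results of the first consumers' favorite green food selection ..."
score = kw_prec(keywords, reference)
```

Continuations whose kwPrec is low, i.e., weakly grounded in the reference lexically, are treated as more likely to contain hallucinations during ranking.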
Figure 3: Tokenization results for BLEU-4, ROUGE-L, and kwPrec, using newsid=num_000432 as an example. The example sentence means: Jiangsu is one of the most developed provinces in China for green food production." }, { "figure_ref": [ "fig_2" ], "heading": "Automatic Labeling and Human Rechecking", "publication_ref": [], "table_ref": [], "text": "Through hallucination ranking, we can identify continuations that are both articulately expressed and likely to contain hallucinations. To detect continuations with confirmed hallucinations, we propose an annotation scheme that utilizes keywords, which includes automatic labeling and subsequent human verification, as shown in Fig. 4.\nAutomatic labeling We utilize the keywords identified by GPT3.5-Turbo from the candidate continuations, similarly to the process used in the computation of kwPrec previously. These keywords act as the focal points for subsequent verification. Thereafter, we employ GPT4-0613 (OpenAI, 2023) to perform annotation on these keywords. It evaluates the validity of the keywords in the continuations by cross-referencing them with the provided original news and provides explanations for any detected unreasonable keywords.\nHuman rechecking We undertake a manual, one-to-one verification process by analyzing the annotated results and explanations provided by GPT4-0613 against the original news. This step ensures the accuracy of the machine-generated annotations.\nIn the end, instances verified as accurate by annotators comprise the final UHGEval dataset. For details on manual annotation, please refer to Appendix A.1." }, { "figure_ref": [], "heading": "Dataset Statistics", "publication_ref": [], "table_ref": [], "text": "Starting with 17,714 candidate hallucinated continuations, we curated a dataset of 5,141 hallucinated continuations, as detailed in the basic statistics table.\n3 Experiments" }, { "figure_ref": [], "heading": "Models", "publication_ref": [ "b6", "b1", "b17", "b26" ], "table_ref": [], "text": "Given that our dataset is tailored to the Chinese language generation domain, we selected eight widely used Chinese LLMs and three models from OpenAI. These LLMs come from eight base models: Aquila2 Base (BAAI, 2023), Baichuan2 Base (Yang et al., 2023a), GLM Base (Du et al., 2022), GPT Base 2 , InternLM Base (InternLM, 2023), Qwen Base (Bai et al., 2023), BLOOMZ Base (Muennighoff et al., 2023), and LLaMA2 Base (Touvron et al., 2023). Refer to Appendix B.1 for a detailed overview of the LLMs used in the experiments."
}, { "figure_ref": [], "heading": "Evaluation Forms", "publication_ref": [], "table_ref": [], "text": "In this study, we conducted a detailed analysis of evaluation methods across three dimensions: form, metric, and granularity. A more comprehensive report can be found in the Appendix B.2. Here, we introduce the three forms of evaluation. Firstly, there is the discriminative evaluation, which involves having the model determine whether a continuation contains hallucinations. Secondly, similar to discriminative evaluation, selective evaluation allows LLMs to choose the continuation without hallucinations from options with and without such content. Lastly, we have generative evaluation. Specifically, the LLM under evaluation is provided with a beginning text and is then tasked with generating a continuation. Subsequently, various reference-based techniques are employed to assess whether the generated continuation includes hallucinations.\n2 https://openai.com" }, { "figure_ref": [], "heading": "Evaluation Framework", "publication_ref": [], "table_ref": [], "text": "To accommodate different forms of evaluation methods, we have developed a data-secure, easyto-extend, and easy-to-use evaluation framework, as illustrated in Fig. 5. Refer to Appendix B.3 for a more detailed understanding of the various layers of the framework. UHGEval is both intuitive and secure for users, offering efficient usage while concurrently ensuring the integrity of experimental results through robust resistance to exceptions and support for resuming evaluations post unexpected interruptions. For developers and researchers, the modules within the Dependency and Evaluator layers are fully interchangeable, thereby affording considerable flexibility for expansion." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "To establish a robust experimental framework, we have set up some configurations as follows.\nPrompt engineering We apply the technique of \"intent + instruction + 3-shot (explainable) prompting.\" Intent delineates the role, instruction outlines the task, and the prompt incorporates three examples to aid the few-shot learning (Zhao et al., 2023). Furthermore, political content in examples is prohibited to adhere to content policies from model service providers. Explainable prompting entails not merely acquiring results, but also eliciting the model's rationale behind its responses. Refer to Appendix E to view the complete prompt templates.\nExample Balancing To guarantee the reliability of experimental outcomes for all LLMs, we meticulously balance examples in discriminative and also in selective evaluations. Specifically, the LLM under evaluation will encounter an equal number of examples with and without hallucinations.\nHyperparameter settings Managing parameters for heterogeneous LLMs is a multifaceted endeavor, as different LLMs feature unique interface designs, and the same parameters can have varying implications across LLMs. Despite these challenges, we commit to the principle of \"guaranteeing overall output determinism while allowing for slight randomness, and aiming for consistent parameter settings across models.\" Consequently, we set the temperature to 0.1, the top_p to 0.9, the top_k to 5, and the random seed to 22.\nMetrics For discriminative and selective evaluation, accuracy serves as the metric. For generative evaluation, metrics consist of 4-gram BLEU (BLEU-4), the longest common subsequence-based ROUGE (ROUGE-L), kwPrec, and BERTScore." 
}, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [ "tab_7", "tab_8", "tab_8" ], "text": "Results are presented in Table 3 andTable 4. Discriminative evaluation Initially, the GPT series models' performance is notably superior in discriminative evaluation, showcasing their formidable foundational capabilities in knowledge recall, utilization, and judgment. Moreover, a comparison of experimental outcomes at the keyword and sentence levels reveals that accuracy is generally superior at the keyword level. This could stem from the fact that the hallucinated continuations in our dataset exhibit sufficient fluency, aligning with the fluency distribution of LLM outputs. This can potentially confuse the evaluated LLM, complicating the judgment of the continuation's authenticity. Conversely, keywords bypass fluency concerns, rendering keyword-level evaluation more amenable to LLMs. This observation implies that detecting hallucinations could be more dependable at the keyword level compared to the sentence level.\nSelective evaluation Firstly, GPT4-1106 clinches the top spot, reaffirming the formidable foundational capabilities of the GPT series models. Concurrently, Xinyu2-70B attains second place, excelling as a model trained on the Chinese news corpus. This achievement, to a degree, confirms the merit of domain-specific LLMs. Secondly, when comparing the outcomes of the selective evaluation with those of the discriminative evaluation at the sentence level, most LLMs exhibit improved accuracy. We think, furnishing LLMs with more contrasting information alleviates the demand for the model's fact recall, thus diminishing the challenge of selective evaluation. Therefore, we posit that selective evaluation is comparatively simpler for LLMs.\nGenerative evaluation Overall, InternLM-20B, Xinyu2-70B, and Aquila-34B have achieved commendable results, but the performance of Aquila-34B could be attributed to its comparatively shorter average generation length. Additionally, the GPT series exhibits subpar performance, possibly due to the insubstantial amount of Chinese data in its training corpus. After all, the Chinese data incorporated into GPT's training from the Common Crawl corpus comprises less than 5%3 .\nEvaluations by Type We focus on selective evaluation results and perform a comprehensive breakdown analysis of these across the four types, as illustrated in Table 4. Initially, most LLMs demonstrate enhanced accuracy for knowledge-intensive and document-intensive news. This may be because the training datasets for LLMs typically include substantial human knowledge and official documentation of major historical events. Furthermore, the majority of LLMs show reduced accuracy in general and number-intensive news. General news often contains societal minutiae, which are not the focus of LLM training. Regarding numberintensive news, it poses a considerable challenge for LLMs, given that encoding identical numbers with varied historical meanings is complex. However, GPT4-1106 attains especially high scores in the demanding number-intensive news." }, { "figure_ref": [], "heading": "Further Discussion", "publication_ref": [ "b11", "b4" ], "table_ref": [], "text": "Each of the three evaluation forms possesses distinct advantages and drawbacks. Discriminative evaluation is often the method of choice for a range of standard benchmarks (Li et al., 2023;Cheng et al., 2023). This approach is intuitive, and the construction of evaluation prompts is straightforward. 
Selective evaluation resembles discriminative evaluation but is marginally less demanding because it includes a reference option for contrast. In both discriminative and selective evaluations, certain models might be suspected of conjecturing answers from a few shots due to inadequate reasoning skills, which can undermine the reliability of the outcomes. Consequently, the use of explainable prompting becomes essential. Generative evaluation most closely mirrors real-world applications. However, the generated content is unrestricted, which poses challenges for even the most dependable reference-based evaluation techniques. Therefore, employing a combination of metrics simultaneously, including lexical evaluation based on token coverage and semantic evaluation based on textual similarity, is imperative.\nThe foundational capabilities required of LLMs can be arrayed on a spectrum from simple to complex: generative, selective, and discriminative evaluation. Generative evaluation entails the direct invocation of parameters for continuation, bypassing the need for an extensive grasp of instructions. Selective evaluation necessitates a degree of inferen-tial reasoning but offers comparative choices, rendering the level of difficulty moderate. Conversely, discriminative evaluation demands the precise retrieval of facts, thereby increasing the challenge." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "This section outlines benchmark datasets, their characteristics, and evaluation methodologies. These benchmarks are summarized in Table 5. For more related works, you may refer to Appendix C." }, { "figure_ref": [], "heading": "Benchmark dataset construction", "publication_ref": [ "b18", "b11", "b22", "b27", "b15", "b18", "b10", "b4", "b14", "b13" ], "table_ref": [], "text": "Dataset construction usually involves three steps. Firstly, real-world texts for hallucination generation are collected, and most benchmarks directly use existing datasets, such as Wiki (Muhlgay et al., 2023), Alpaca (Li et al., 2023), PubMed (Pal et al., 2023), etc. Secondly, hallucinations are generated usually by LLMs such as GPT3.5-Turbo, and most works use a constrained hallucination generation (CHG) paradigm. STSN (Varshney et al., 2023) and XSum Hallu (Maynez et al., 2020) are the only two benchmarks that use UHG as we do. Thirdly, it is not certain that the content generated by the LLMs actually contains hallucinations, and often requires annotation, which is mostly done by human involvement. There are also works using automatic machine labeling (Muhlgay et al., 2023;Lee et al., 2022;Cheng et al., 2023). These are the basic methods for constructing datasets, but there are also some other paradigms, such as constructing the dataset purely using manual labor, e.g. Chinese-FactEval (Wang et al., 2023a), HADES (Liu et al., 2022), TruthfulQA (Lin et al., 2022), etc." }, { "figure_ref": [], "heading": "Benchmark dataset characteristics", "publication_ref": [ "b4" ], "table_ref": [], "text": "Regarding the granularity of hallucinations labeled in the datasets, most studies assess hallucinations at the sentence and document levels, while a few examine them at the word (or keyword, concept) level. Concerning language, most evaluation datasets are in English. To our knowledge, the only two Chinese benchmarks, ChineseFactEval (Wang et al., 2023a) and HalluQA (Cheng et al., 2023) contain only 125 and 450 questions, respectively. 
Given the notably limited size of these datasets, our work significantly enhances the pool of data available for Chinese hallucination evaluation." }, { "figure_ref": [], "heading": "Evaluation schemes", "publication_ref": [ "b16", "b13", "b15", "b16", "b7", "b34", "b13" ], "table_ref": [], "text": "Currently, building automatic metrics for evaluation is still dominant, and a small proportion of works use human evaluation (Min et al., 2023;Lin et al., 2022;Maynez et al., 2020). In terms of specific evaluation metrics, most works adopt common classification metrics, e.g., F1, accuracy, precision, and recall. some other works construct their calculation methods, e.g., FACTOR (Muhlgay et al., 2023), FActScore (Min et al., 2023), HaLoCheck (Elaraby et al., 2023), etc. However, the above metrics are rule-based and can only evaluate the ability of LLMs to classify hallucinations, but not the ability of LLMs to generate content without hallucinations. Thus, some benchmarks explore even further in generative evaluation. For example, KoLA (Yu et al., 2024) evaluates knowledge creation (KC) using BLEU and ROUGE, and Truth-fulQA (Lin et al., 2022) evaluates hallucinations using a specially trained classifier, GPT-judge." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "LLMs are rapidly evolving, heralding a new era of potential applications within the realm of professional content generation. The progression of LLMs in this domain necessitates the establishment of robust benchmarks to steer their development effectively. In this work, we introduce a novel hallucination benchmark dataset using an unconstrained fashion, encompassing more than 5,000 instances annotated at the keyword level. Additionally, we propose a secure, scalable, and user-friendly evaluation framework to facilitate comprehensive assessments. Through meticulous experimentation on eleven prominent LLMs, our study has unearthed a series of enlightening findings. Looking ahead, our research endeavors will persist in exploring the intricacies of hallucination phenomena within professional content generation, aiming to further understand and enhance LLM capabilities." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Dataset Firstly, although we have utilized hallucination ranking, automatic labeling, human rechecking, and various other techniques mentioned in Appendix A to ensure the quality of data annotation, with over 5,000 data entries, there is still a possibility of labeling errors. We have mobilized the power of the open-source community to collectively improve our dataset. Additionally, the dataset creation process is flexible, allowing for dataset expansion into English and broader domains, such as mathematical reasoning and programming codes.\nFramework Although our framework simplifies the integration of LLMs through APIs or vLLM 4 , users seeking to utilize custom or diverse Hugging-Face models may face initial hurdles. We need to further enhance the usability of our framework." }, { "figure_ref": [], "heading": "Constrained v.s. Unconstrained", "publication_ref": [], "table_ref": [], "text": "We have determined that constrained generation cannot fully reflect real-world applications, but empirical analysis is required to prove this point. 
This may involve constructing a text classifier to determine the type of hallucination, followed by comparing the distribution of hallucinations in our dataset with those in other benchmark datasets to observe any significant deviations. We leave this for future work." }, { "figure_ref": [], "heading": "A The UHGEval Dataset", "publication_ref": [], "table_ref": [], "text": "A.1 Dive into Human Rechecking Process\nLeast Hallucination Principle The keyword-based labeling scheme has inherent limitations. Languages exhibit a dependency structure (de Marneffe and Nivre, 2019). For instance, in the phrase "The rainbow is black," the words "rainbow" and "black" exhibit interdependence. One could contend that "black" is incorrect, while another could maintain that "rainbow" is erroneous, given that "night" is typically described as black.\nTo address the challenges stemming from language dependency structures, we have adopted the Least Hallucination Principle. If a set of words can be selected, and their replacement with contextually appropriate words yields a reasonable sentence, then such a set of words is designated as a hallucinated word group. The words selected for annotation must comprise the minimal number of words in such a group, as illustrated in Equation 1. In the equation, W is the set of keywords in a sentence, w is the hallucinated word group, correct(•) is the correction function that modifies hallucinated words into non-hallucinated words, and hallucinated(•) assesses whether a sentence composed of the given keywords is hallucinated.\n$\min |w| \quad \text{s.t.} \quad w \subset W,\; w' = \mathrm{correct}(w),\; \mathrm{hallucinated}(W - w + w') = \text{false}$ (1)\nBy this principle, within the phrase "Journey to the West is an American novel and one of the Four Great Classics," the word "American" would be marked for annotation, as altering this single keyword to "Chinese" dispels the hallucination throughout the sentence.
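Read operationally, the principle is a smallest-repair search over keyword subsets. The brute-force sketch below illustrates it; the correct and hallucinated callables are assumed to be supplied externally (in practice by annotators or an LLM judge), and the dictionary-based interface is an illustrative choice rather than part of the paper's tooling.

```python
from itertools import combinations

def least_hallucinated_group(keywords, correct, hallucinated):
    """Return the smallest group of keywords whose correction removes the hallucination.

    keywords: the keyword list W of one sentence.
    correct(group) -> dict mapping each keyword in the group to a corrected word.
    hallucinated(keyword_list) -> True if a sentence built from these keywords is hallucinated.
    Both callables are assumptions about the annotation interface."""
    for size in range(1, len(keywords) + 1):
        for group in combinations(keywords, size):
            replacement = correct(list(group))
            repaired = [replacement.get(k, k) for k in keywords]  # swap in corrected words
            if not hallucinated(repaired):
                return list(group)
    return list(keywords)  # fall back to marking everything if no smaller repair works
```

Under this reading, only "American" is returned for the Journey-to-the-West example, since replacing that single keyword already yields a non-hallucinated sentence.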
Engagement of Annotators Additionally, we acknowledge that hallucination annotation may become somewhat tedious. Consequently, annotators are integrated throughout the entire process, participating in discussions instead of solely evaluating the accuracy of machine annotations. This approach also yields benefits for our work. For example, an annotator with a journalism background offered valuable professional insights into pinpointing news-related hallucinations, emphasizing that fact increment is a critical aspect of news writing." }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "A.2 Analysis of Final Dataset", "publication_ref": [], "table_ref": [], "text": "We developed a conversion rate chart, shown in Fig. 6, to depict the transition from candidate hallucinations to the final dataset. The conversion rate can be interpreted as the likelihood of hallucinations occurring across various categories. Our observations indicate a higher likelihood of hallucinations in number-intensive and general news, whereas this likelihood is reduced in knowledge-intensive and document-intensive news. By analyzing the hallucinated word cloud depicted in Fig. 7 for each news category, we can draw the following conclusions: Number-intensive news often includes numeric values that are challenging to remember, like 0.09% and 6:3, which pose difficulties for both LLMs and humans. General news encompasses a diverse vocabulary, featuring terms such as "social media" and "friendship," which are often deemed less critical and thus challenging to incorporate into the training corpora of many LLMs. Knowledge-intensive news frequently features terms such as "according to incomplete statistics" and "key technology," which are prevalent in technical literature. However, LLMs may not always use these terms appropriately. Document-intensive news often contains terms associated with official statements, such as "representation," "president," and "spokesperson." This suggests that LLMs are susceptible to introducing unauthorized alterations to the content of documents.\nFigure 7: Hallucinated word clouds for each news category; the four panels correspond to Document-Intensive, General, Knowledge-Intensive, and Number-Intensive news.\n{ \"id\": \"num_000432\", \"headLine\": \"(Society) Jiangsu's First Selection of the Top 100 Green Foods Most Loved by Consumers\", \"broadcastDate\": \"2015-02-11 19:46:49\", \"type\": \"num\", \"newsBeginning\": \"Xinhua News Agency, Nanjing, February 11 (Reporter Li Xiang) 'Food is the paramount necessity of the people, and safety is the top priority of food.' On February 11, Jiangsu announced the results of the 'First Consumers' Favorite Green Foods' selection, with Lao Shan honey and 100 other foods receiving the title of 'Consumers' Favorite Green Food'.\", \"hallucinatedContinuation\": \"Jiangsu is one of the most developed provinces in the country in terms of green food production.\", \"generatedBy\": \"InternLM_20B_Chat\", \"appearedKeywords\": [\"Jiangsu\", \"national\", \"green food production\"], \"allKeywords\": { \"Jiangsu\": \"reasonable\", \"national\": \"reasonable\", \"green food production\": \"reasonable\", \"developed\": \"unreasonable, there is no factual evidence to prove that Jiangsu is one of the provinces with developed green food production in the country, but what can be confirmed is that Jiangsu has active practices and promotions in green food production\", \"province\": \"reasonable\", \"one of\": \"unreasonable, there is no specific factual evidence to show that Jiangsu is one of the developed provinces in terms of green food production in the country\" }, \"realContinuation\": \"61 award-winning production enterprises jointly signed an integrity pact, jointly building a green food integrity alliance.\", \"newsRemainder\": \"61 award-winning production enterprises jointly signed an integrity pact, jointly building a green food integrity alliance. This is an important measure for Jiangsu to ensure food safety and promote green food production. \\n...\" }\nFigure 8: An example from the UHGEval dataset, shown in English translation; the original item is in Chinese." }, { "figure_ref": [], "heading": "B Experiments B.1 LLMs Employed in This Research", "publication_ref": [ "b6", "b9", "b1", "b0", "b17", "b26" ], "table_ref": [ "tab_10" ], "text": "All LLMs used in this study are detailed in Table 6. GPT represents a series of LLMs developed by OpenAI (OpenAI, 2023). In this study, GPT3.5-Turbo, GPT4-0613, and GPT4-1106 are utilized. GLM constitutes a pre-training framework proposed by Tsinghua University (Du et al., 2022), and the ChatGLM2-6B chat model is employed. InternLM serves as an open-source, lightweight training framework, with its development team releasing a spectrum of models utilizing this framework (InternLM, 2023); the InternLM-20B open-source chat model is utilized in the present work. Baichuan2 comprises a series of expansive, multilingual base language models (Yang et al., 2023a), with both the open-source Baichuan2-7B chat model and the closed-source Baichuan2-53B chat model being employed in this investigation. Qwen encompasses a language model series characterized by distinct models with varying parameter counts (Bai et al., 2023), and the Qwen-14B open-source chat model is utilized in the current study. Aquila2 represents a language model series devised by BAAI, noted for surpassing comparable models in terms of performance (BAAI, 2023), and the Aquila2-34B chat model is employed in this research.\nIn addition, the Xinyu series models are the results of a collaborative research and development effort between the Institute for Advanced Algorithms Research, Shanghai (IAAR, SH), and the State Key Laboratory of Media Convergence Production Technology and Systems of the Xinhua News Agency. Xinyu-7B is an augmented large-scale language model derived from the foundational model BloomZ-7B (Muennighoff et al., 2023) through continued pre-training, news-specific fine-tuning, and alignment optimization. Xinyu2-70B is developed based on the open-source LLaMA2-70B (Touvron et al., 2023) framework, incorporating expansions to the Chinese lexicon, ongoing pre-training, and news-specific fine-tuning, thereby endowing it with a robust foundational capability in the news domain." }, { "figure_ref": [], "heading": "B.2 Evaluation Method", "publication_ref": [ "b2", "b11", "b4", "b30", "b19", "b23", "b12", "b35", "b13", "b8" ], "table_ref": [], "text": "The evaluation of hallucinations can be decomposed into three principal dimensions: form, metric, and granularity. 
Form concerns how the model interacts with the evaluation dataset; metric refers to the precise computational approach utilized for performance assessment; and granularity signifies the depth of detail considered in the evaluation of hallucinations.\nForm This encompasses human evaluation, discriminative evaluation, selective evaluation, and generative evaluation, among others. Human evaluation entails the direct application of human judgment to determine if the model's output contains hallucinations, representing a critical evaluation form (Chang et al., 2024). However, the drawbacks of this approach are evident: evaluating too many data points is tantamount to annotating a new dataset, with the associated time and financial expenditures proving prohibitive.\nDiscriminative evaluation enables LLMs to respond with binary answers of \"yes\" or \"no\" (Li et al., 2023;Cheng et al., 2023). Specifically, this evaluation modality involves presenting the LLM under scrutiny with an initial text followed by a continuation that may or may not include hallucinations. The LLM is tasked with producing a verdict as to the presence of hallucinations. Owing to the efficacy of few-shot prompting, this evaluation paradigm is relatively uncomplicated for LLMs to administer, as it facilitates the elicitation of the requisite responses. However, this method depends solely on the LLM's ability to draw upon the knowledge encoded within its parameters, necessitating the concurrent application of knowledge and reasoning, and thus requiring a robust foundational model capacity.\nSelective evaluation allows LLMs to tackle multiple-choice questions by choosing between option A or B, as exemplified by PandaLM (Wang et al., 2024). Specifically, in selective evaluation, the LLM under evaluation is presented with an initial text followed by two continuations: one that includes hallucinations and another that does not. The LLM's objective is to identify which of the two is hallucinated. This assessment method offers the LLM more contextual information than discriminative evaluation, thereby alleviating the burden of fact-checking and lessening the dependence on retrieving facts from its parameters. Consequently, this reduces the level of difficulty for the LLM.\nHowever, both discriminative and selective evaluations encounter a substantial challenge. They are predicated on the assumption that \"LLMs's capacity to produce reliable text is contingent upon their discernment between hallucinated and nonhallucinated content.\" These methods do not simulate the evaluation of the model's output for hallucinations. Consequently, generative evaluation is crucial as it directly evaluates the presence of hallucinations in the text generated by the LLM under evaluation. However, the challenge arises from the fact that it is not feasible to automatically and accurately ascertain if the newly generated text is hallucinated; if it were, annotated datasets would be redundant. In scenarios of unrestrained text generation, this issue becomes increasingly complex. This complexity stems from the fact that text generated without constraints may introduce a multitude of entities and facts absent in the reference material, complicating the verification of their accuracy. 
Despite these hurdles, generative evaluation continues to be a predominant strategy in Natural Language Generation (NLG) tasks (Novikova et al., 2017).\nMetric Metrics include classification metrics such as accuracy, precision, recall, and others, which are applicable to human evaluation, discriminative evaluation, and selective evaluation. Generative evaluation, on the other hand, encompasses both lexical and semantic metrics. Lexical metrics evaluate the extent of token overlap between the generated text and the reference information, including metrics such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and the newly proposed metric by us, kwPrec. Semantic metrics gauge the similarity in meaning between sentences, with examples including BERTScore (Zhang et al., 2020), GPT-judge (Lin et al., 2022), and GPTScore (Fu et al., 2023), among others.\nGranularity Evaluations can be conducted at both the sentence and keyword levels. Owing to our annotation methodology, our dataset is marked at the keyword level to signify instances of hallucinations. This approach affords a broader spectrum of possibilities for configuring the evaluation task, enabling the evaluated model to address the presence of hallucinations at either the keyword level, the sentence level, or even the document level." }, { "figure_ref": [], "heading": "B.3 UHGEval Framework in Detail", "publication_ref": [], "table_ref": [], "text": "The framework comprises four ascending layers: the dependency layer, the evaluator layer, the core layer, and the interface layer.\nThe dependency layer defines the essential foundational components needed for the evaluation framework, including datasets, LLM hubs, and various metrics. Importantly, each component is designed for extensibility: datasets can be replaced with custom ones, LLMs can be integrated via APIs or platforms like Hugging Face5 , and metrics can be customized to fit specific needs.\nThe evaluator layer, constituting the second layer, centers on an abstract class, Evaluator, and its various implementations. Within this layer, three distinct types are implemented: GenerativeEvaluator, DiscriminativeEvaluator, and SelectiveEvaluator. Users may also engineer custom evaluators, contingent upon adherence to the interface specifications of the abstract class, necessitating merely three function overloads.\nThe core layer, representing the third stratum, comprises two principal modules: experiment.py and analyst.py. The former facilitates experiments involving multiple LLMs, evaluators, and processes, whereas the latter is tasked with the statistical analysis of experimental outcomes.\nThe interface layer, serving as the final layer, orchestrates the user's interaction with UHGEval.\nTo streamline the initiation process, a succinct 20line demonstration is offered, alongside a run.py script for launching experiments through the command line." }, { "figure_ref": [], "heading": "C More Related Works C.1 Large Language Models", "publication_ref": [ "b26", "b6", "b1", "b25" ], "table_ref": [], "text": "Language models are pivotal in computer science, evolving from statistical language models to neural language models, to pre-trained language models (PLMs), and now to the current generation of LLMs. The advent of models such as ChatGPT has seen contemporary LLMs exhibit new capabilities in handling complex tasks. 
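As a concrete illustration of the evaluator layer described in Appendix B.3, the sketch below shows how a custom evaluator could be structured around an abstract base class with three methods to override. The class and method names (build_prompt, judge, aggregate), the model interface, and the record field names are hypothetical placeholders chosen for this sketch; UHGEval's actual abstract class may define different signatures and should be checked against the released code.

```python
# Minimal sketch (not UHGEval's actual API) of the evaluator layer from Appendix B.3:
# an abstract Evaluator with three methods to override, plus one concrete selective
# evaluator. All names below are illustrative placeholders.
from abc import ABC, abstractmethod
from typing import Any


class Evaluator(ABC):
    """Abstract base class; a custom evaluator overrides three methods."""

    def __init__(self, model: Any, dataset: list[dict]):
        self.model = model      # an LLM wrapper assumed to expose .generate(prompt) -> str
        self.dataset = dataset  # list of UHGEval-style records

    @abstractmethod
    def build_prompt(self, record: dict) -> str:
        """Turn one dataset record into the prompt sent to the model."""

    @abstractmethod
    def judge(self, record: dict, response: str) -> bool:
        """Decide whether the model's response is correct for this record."""

    @abstractmethod
    def aggregate(self, verdicts: list[bool]) -> dict:
        """Reduce per-record verdicts into summary metrics."""

    def run(self) -> dict:
        verdicts = []
        for record in self.dataset:
            response = self.model.generate(self.build_prompt(record))
            verdicts.append(self.judge(record, response))
        return self.aggregate(verdicts)


class SelectiveEvaluator(Evaluator):
    """Selective evaluation: the model picks the non-hallucinated continuation (A or B)."""

    def build_prompt(self, record: dict) -> str:
        return (
            f"News beginning: {record['newsBeginning']}\n"
            f"A: {record['hallucinatedContinuation']}\n"
            f"B: {record['realContinuation']}\n"
            "Which continuation is more realistic and accurate?"
        )

    def judge(self, record: dict, response: str) -> bool:
        # In this sketch option B is always the real continuation; a production
        # version would shuffle the options and track the correct label per record.
        return response.strip().upper().startswith("B")

    def aggregate(self, verdicts: list[bool]) -> dict:
        return {"accuracy": sum(verdicts) / max(len(verdicts), 1)}
```

The run method here loosely plays the role of experiment.py in the framework, looping over the dataset and handing per-record verdicts to the aggregation step.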
These models can manage few-shot tasks via in-context learning and tackle mixed tasks by following instructions (Zhao et al., 2023).\nLLMs can be classified according to two dimensions. The first dimension concerns the openness of the model weights. For example, open-source models include Meta's LLaMA (Touvron et al., 2023), Tsinghua University's GLM (Du et al., 2022), and Alibaba's Qwen (Bai et al., 2023), while closedsource models feature OpenAI's GPT (OpenAI, 2023), Baidu's ERNIE Bot (Sun et al., 2021), and Anthropic's Claude 6 , among others. The second dimension differentiates between the use of a PLM or a supervised fine-tuned (SFT) model for specific inferences. A PLM is a language model trained on extensive unlabeled textual data to discern underlying patterns, structures, and semantic knowledge within the corpus. Conversely, an SFT model involves further training a PLM with labeled datasets tailored to a specific task, to improve performance in that area. Many open-source models, including LLaMA, GLM, and Qwen, have made their PLM weights publicly available. For SFT models, users can access the chat variants of open-source models or the API services provided by closed-source models. In our research, we focus primarily on evaluating closed-source GPT series models and open-source Chinese chat models." }, { "figure_ref": [ "fig_1" ], "heading": "C.2 Hallucinations in LLM", "publication_ref": [ "b36", "b24", "b36" ], "table_ref": [], "text": "Despite remarkable advancements in LLMs, they continue to encounter challenges, with hallucination being one of the most notable. Hallucination in language models refers to generating content that strays from factual accuracy, leading to unreliable outputs. Hallucinations occur when the generated content is not aligned with user input, deviates from the model's previous outputs, or is at odds with established real-world knowledge (Zhang et al., 2023).\n6 https://www.anthropic.com/index/ introducing-claude Specific examples include inaccuracies in age, currency, scores, and other numerical values; citing fictional statements; inventing non-existent characters; and muddling timelines by merging events from different periods (Rawte et al., 2023).\nRegarding the causes of hallucinations, several factors can be responsible (Zhang et al., 2023). One contributing factor is the use of inaccurate or incomplete training data. During training, LLMs finetune their parameters with vast quantities of text data. However, this data may be flawed, harboring errors, inaccuracies, or gaps in information. Another factor involves inconsistencies in contextual information. While LLMs typically consider previously generated context when producing content, challenges in managing long-term dependencies or understanding complex contexts can result in inconsistencies. Additionally, hallucinations can arise from lacking or erroneous world knowledge. Although LLMs gain considerable world knowledge via training data, they may be deficient in specific domain knowledge or misinterpret certain facts, leading to hallucinations. Furthermore, model limitations, including generation strategies and alignment methods, can also play a role in hallucinations during content creation. Step 5, concerning the evaluation framework, is detailed in Section 3. (In English: Fig. 
2) " }, { "figure_ref": [], "heading": "D Figures in Chinese", "publication_ref": [], "table_ref": [], "text": "PrecedingSentence:2014年,全国新增并⽹光伏发电容量1060万千 瓦,约占全球新增容量的四分之⼀。其中,全国新增光伏电站855万千 瓦,分布式205万千瓦。 LLM Generation 据统计,2014年中国光伏发电量达到了130亿千瓦时,同⽐增⻓超过 200%。 Label 统计 -合理 2014年 -合理 中国 -合理 光伏发电量 -合理 130亿千瓦时 -不合理,与事实冲突,应为250亿千瓦时 同⽐增⻓ -合理 200% -合理 光伏年发电量约250亿千瓦时,同⽐增⻓超过200%。 梁志鹏当⽇在国家能源局举⾏的光伏产业发展情况通⽓会上介绍,2014 年,全国光伏产业整体呈现稳中向好和有序发展局⾯,全年光伏发电累计 并⽹装机容量2805万千瓦,同⽐增⻓60%,其中,光伏电站2338万千 瓦,分布式光伏467万千瓦。 ……(" }, { "figure_ref": [], "heading": "E Prompt Templates", "publication_ref": [], "table_ref": [], "text": "In these templates, the orange text represents intent and instruction, the green text represents demonstrations, and the black text represents specific questions.\nYou are a news worker for Xinhua News Agency. You need to determine whether a key term in the news continuation is realistic. Please directly state whether it is realistic or not, and provide the reason.\nThe beginning of the news: \"\"\"《European Basketball League's Second Phase Group Stage: Alba Berlin Faces Grim Qualification Prospects》\\n2015-03-05 06:01:47\\n Xinhua, Berlin, March 4 (Reporter Wang Dong) -The ninth round of the 2014-2015 season European Basketball League's second phase group stage kicks off tonight. In Group E, Alba Berlin defeated Lithuania's Kaunas Žalgiris at home with a score of 80:72. After nine rounds, with four wins and five losses, they are firmly in fifth place in the group, yet their qualification prospects still look grim.\"\"\" Continuation: \"\"\"Alba Berlin's head coach, John Cummings, stated that although their current ranking is not ideal, they will still give their all and strive for victory in every match.\"\"\" Realistic or not: The continuation's \"\"\"John Cummings\"\"\" is not realistic because such a person does not exist. Demonstration 2; Demonstration 3\nThe beginning of the news: \"\"\"《{headLine}》\\n{broadcastDate}\\n{newsBeginning}\"\"\" Continuation: \"\"\"{continuation}\"\"\" Realistic or not: The continuation's \"\"\"{keyword}\"\"\" 你是一名新华社新闻工作者。 你需要判断新闻续写中的某个关键词是否符合现实。 请直接输出符合现实或不符合现实, 同时给出原因。 新闻的开头:\"\"\"《 (体育)欧洲篮球联赛第二阶段小组赛阿尔巴出线形势不乐观》\\n2015-03-05 06:01:47\\n 新华 社柏林 3 月 4 日电(记者王东)2014-2015 赛季欧洲篮球联赛第二阶段小组赛第九轮今晚开始。在 E 组的比赛中,柏 林阿尔巴队主场以 80:72 力挫立陶宛考纳斯萨拉基利斯队,9 轮过后 4 胜 5 负,稳居小组第五名,但是出线形势仍 不乐观。\"\"\" 续写:\"\"\"柏林阿尔巴队主教练约翰•卡明斯表示,虽然目前的排名并不理想,但他们仍然会全力以赴,争取每个比赛的 胜利。\"\"\" 是否符合现实:续写中的\"\"\"约翰•卡明斯\"\"\"不符合现实,因为不存在此人。 示例 2;示例 3 新闻的开头:\"\"\"《{headLine}》\\n{broadcastDate}\\n{newsBeginning}\"\"\" 续写:\"\"\"{continuation}\"\"\" 是否符合现实:续写中的\"\"\"{keyword}\"\"\" You are a news worker for Xinhua News Agency. You need to assess whether the news continuation is realistic.\nThe beginning of the news: \"\"\"《European Basketball League's Second Phase Group Stage: Alba Berlin Faces Grim Qualification Prospects》\\n2015-03-05 06:01:47\\n Xinhua, Berlin, March 4 (Reporter Wang Dong) -The ninth round of the 2014-2015 season European Basketball League's second phase group stage kicks off tonight. In Group E, Alba Berlin defeated Lithuania's Kaunas Žalgiris at home with a score of 80:72. After nine rounds, with four wins and five losses, they are firmly in fifth place in the group, yet their qualification prospects still look grim.\"\"\" Continuation: \"\"\"Alba Berlin's head coach, John Cummings, stated that although their current ranking is not ideal, they will still give their all and strive for victory in every match.\"\"\" Judgment: The continuation is not realistic because there is no person named \"John Cummings\". 
Demonstration 2; Demonstration 3\nThe beginning of the news: \"\"\"《{headLine}》\\n{broadcastDate}\\n{newsBeginning}\"\"\" Continuation: \"\"\"{continuation}\"\"\" Judgment: 你是一名新华社新闻工作者。你需要判断新闻续写是否符合现实。 新闻的开头:\"\"\"《 (体育)欧洲篮球联赛第二阶段小组赛阿尔巴出线形势不乐观》\\n2015-03-05 06:01:47\\n 新华 社柏林 3 月 4 日电(记者王东)2014-2015 赛季欧洲篮球联赛第二阶段小组赛第九轮今晚开始。在 E 组的比赛中,柏 林阿尔巴队主场以 80:72 力挫立陶宛考纳斯萨拉基利斯队,9 轮过后 4 胜 5 负,稳居小组第五名,但是出线形势仍 不乐观。\"\"\" 续写:\"\"\"柏林阿尔巴队主教练约翰•卡明斯表示,虽然目前的排名并不理想,但他们仍然会全力以赴,争取每个比赛的 胜利。\"\"\" 判断:续写不符合现实,因为并无\"约翰•卡明斯\"这个人。 示例 2;示例 3 新闻的开头:\"\"\"《{headLine}》\\n{broadcastDate}\\n{newsBeginning}\"\"\" 续写:\"\"\"{continuation}\"\"\" 判断: " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "cultivation of artistic talent nationwide." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Now, I hope you can act as a sequel evaluator, where you need to choose the continuation that is more realistic and accurate.\nThe beginning of the news: \"\"\"《European Basketball League's Second Phase Group Stage: Alba Berlin Faces Grim Qualification Prospects》\\n2015-03-05 06:01:47\\n Xinhua, Berlin, March 4 (Reporter Wang Dong) -The ninth round of the 2014-2015 season European Basketball League's second phase group stage kicks off tonight. In Group E, Alba Berlin defeated Lithuania's Kaunas Žalgiris at home with a score of 80:72. After nine rounds, with four wins and five losses, they are firmly in fifth place in the group, yet their qualification prospects still look grim.\"\"\" A: \"\"\"Alba Berlin's head coach, John Cummings, stated that although their current ranking is not ideal, they will still give their all and strive for victory in every match.\"\"\" B: \"\"\"\\n\nThe European Basketball League games are usually scheduled for Thursdays and Fridays, but the group stage match between Alba and Kaunas was moved forward a day.\"\"\" Which continuation is more realistic and accurate: B is more realistic and accurate Demonstration 2; Demonstration 3\nThe beginning of the news: \"\"\"《{headLine}》\\n{broadcastDate}\\n{newsBeginning}\"\"\" A: \"\"\"{contn1}\"\"\" B: \"\"\"{contn2}\"\"\" Which continuation is more realistic and accurate: 现在,你我希望你作为一个续写评测器,你需要选择出更符合现实,更准确的续写。 新闻的开头:\"\"\"《 (体育)欧洲篮球联赛第二阶段小组赛阿尔巴出线形势不乐观》\\n2015-03-05 06:01:47\\n 新华 社柏林 3 月 4 日电(记者王东)2014-2015 赛季欧洲篮球联赛第二阶段小组赛第九轮今晚开始。在 E 组的比赛中,柏 林阿尔巴队主场以 80:72 力挫立陶宛考纳斯萨拉基利斯队,9 轮过后 4 胜 5 负,稳居小组第五名,但是出线形势仍 不乐观。\"\"\" A:\"\"\"柏林阿尔巴队主教练约翰•卡明斯表示,虽然目前的排名并不理想,但他们仍然会全力以赴,争取每个比赛的胜 利。\"\"\" B:\"\"\"\\n 欧洲篮球联赛一般安排在每周四和周五进行,但是阿尔巴和考纳斯的这场小组赛提前一天进行。\"\"\" 哪个续写更符合现实,更准确:B 更符合现实,更准确 示例 2;示例 3 新闻的开头:\"\"\"《{headLine}》\\n{broadcastDate}\\n{newsBeginning}\"\"\" A:\"\"\"{contn1}\"\"\" B:\"\"\"{contn2}\"\"\" 哪个续写更符合现实,更准确: " } ]
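To show how the keyword-level discriminative template above is instantiated in practice, the sketch below fills its placeholders ({headLine}, {broadcastDate}, {newsBeginning}, {continuation}, {keyword}) from a UHGEval-style record, using the field names of the example record in Appendix A.3. The helper function and the input file name are illustrative only, and the few-shot demonstrations are omitted for brevity.

```python
# Illustrative sketch: filling the keyword-level discriminative prompt template
# (Appendix E) from a UHGEval-style record. Field names follow the example record
# in Appendix A.3; "uhgeval_example.json" is a hypothetical file name.
import json

TEMPLATE = (
    "The beginning of the news: \"\"\"《{headLine}》\\n{broadcastDate}\\n{newsBeginning}\"\"\"\n"
    "Continuation: \"\"\"{continuation}\"\"\"\n"
    "Realistic or not: The continuation's \"\"\"{keyword}\"\"\""
)


def build_keyword_prompts(record: dict) -> list[str]:
    """Return one discriminative prompt per annotated keyword in the record."""
    prompts = []
    # allKeywords maps each keyword to its annotation; iterating yields the keywords.
    for keyword in record["allKeywords"]:
        prompts.append(
            TEMPLATE.format(
                headLine=record["headLine"],
                broadcastDate=record["broadcastDate"],
                newsBeginning=record["newsBeginning"],
                continuation=record["hallucinatedContinuation"],
                keyword=keyword,
            )
        )
    return prompts


if __name__ == "__main__":
    with open("uhgeval_example.json", encoding="utf-8") as f:
        record = json.load(f)
    for prompt in build_keyword_prompts(record):
        print(prompt, end="\n\n")
```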
Large language models (LLMs) produce hallucinated text, compromising their practical utility in professional contexts. To assess the reliability of LLMs, numerous initiatives have developed benchmark evaluations for hallucination phenomena. However, they often employ constrained generation techniques to produce the evaluation dataset due to cost and time limitations. For instance, this may involve employing directed hallucination induction or deliberately modifying authentic text to generate hallucinations. These are not congruent with the unrestricted text generation demanded by real-world applications. Furthermore, a well-established Chinese-language dataset dedicated to the evaluation of hallucinations is presently lacking. Consequently, we have developed an Unconstrained Hallucination Generation Evaluation (UHGEval) benchmark, containing hallucinations generated by LLMs with minimal restrictions. Concurrently, we have established a comprehensive benchmark evaluation framework to aid subsequent researchers in undertaking scalable and reproducible experiments. We have also evaluated prominent Chinese LLMs and the GPT series models to derive insights regarding hallucination.
UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation
[ { "figure_caption": "Figure 1 :1Figure 1: Hallucinations from UHGEval. Using the IDs, you can locate the original news articles. Note: MOTIE denotes Ministry of Trade, Industry, and Energy. (In Chinese: Fig. 10)", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The process of creating UHGEval. Steps 1 to 4 regarding the creation of the benchmark dataset are explained in Section 2; Step 5, concerning the evaluation framework, is detailed in Section 3. (In Chinese: Fig. 11)", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Labeling and rechecking. (In Chinese: Fig. 12)", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Conversion rates from candidates to hallucinations.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Hallucinated keywords in different types of news", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: An example from the UHGEval dataset. (In Chinese)", "figure_data": "", "figure_id": "fig_5", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :Figure 11 :1011Figure 10: Hallucinations from UHGEval. Using the IDs, you can locate the original news articles. (In English: Fig. 1)", "figure_data": "", "figure_id": "fig_6", "figure_label": "1011", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Labeling and rechecking. (In English: Fig. 4)", "figure_data": "", "figure_id": "fig_7", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Prompt template for discriminative evaluation (keyword level)", "figure_data": "", "figure_id": "fig_8", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: Prompt template for discriminative evaluation (sentence level)", "figure_data": "", "figure_id": "fig_9", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Statistics of collected news. DOC, NUM,", "figure_data": "KNO, and GEN denote document-intensive, number-intensive,knowledge-intensive, and general news, respectively.categories. As a result, we have classified thesediverse categories into four main types: document-intensive, number-intensive, knowledge-intensive,and general news, with details provided in Table", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "PrecedingSentence:In 2014, China added a grid-connectedphotovoltaic (PV) power generation capacity of 10.6 million kilowatts,accounting for approximately one-fourth of the global newly addedcapacity. 
Among these, the newly added capacity for utility-scale PVpower stations was 8.55 million kilowatts, while distributed generationcontributed 2.05 million kilowatts.LLM GenerationAccording to statistics, in 2014, China's photovoltaic power generationreached 13 billion kilowatt-hours, showing a year-on-year growth ofover 200%.Labelstatistics -Reasonablein 2014 -ReasonableChina's -Reasonablephotovoltaic power generation -Reasonable13 billion kilowatt-hours -Unreasonable,conflicts with facts, shouldbe 25 billion kilowatt-hoursyear-on-year growth -Reasonable200% -ReasonableThe annual photovoltaic power generation is approximately 25 billionkilowatt-hours, showing a year-on-year growth of over 200%.On the same day, Liang Zhipeng introduced at the briefing on thedevelopment of the photovoltaic industry held by the National EnergyAdministration that in 2014, the overall situation of the nationalphotovoltaic industry showed steady and orderly development, with atotal accumulated grid-connected capacity of 28.05 million kilowatts for...", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "For further analysis of statistics and an example of the dataset, please refer to the Appendix A.2 and the Appendix A.3, respectively.", "figure_data": "DOCKNONUMGEN#news124232024311148avg. #hallu. kw.2.151.992.542.12avg. #kw.8.438.098.078.17#hallu. kw. / #kw. 25.47% 24.61% 31.44% 26.00%avg. len. contn.46.7748.3644.4745.97avg. len. begin.102.15 102.66 103.20 102.86avg. len. refer.634.17 618.90 624.47 632.47", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Dataset basic statistics. # denotes quantity, avg. denotes average, len. denotes length, contn. denotes hallucinated continuations, begin. denotes news beginnings, and refer. denotes reference information.", "figure_data": "", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Discriminative, selective, and generative evaluation results. #kws denotes the number of keywords and #valid denotes the number of valid evaluations. In the same column, optimal values are bolded, and suboptimal values are underlined.", "figure_data": "Discriminative-KeywordDiscriminative-SentenceSelectiveavg. acc.avg. #kws#validavg. acc.#validacc.#validAquila-34B53.62%3.00371949.86%500954.29%4319Baichuan2-13B51.63%3.128447846.88%504750.23%5130Baichuan2-53B52.13%2.98165650.81%147854.67%4443ChatGLM2-6B50.80%3.10428943.87%513043.59%5130GPT3.5-Turbo53.72%3.08418350.02%503949.03%5103GPT4-061370.04%3.07410057.42%502455.20%5047GPT4-110669.48%3.10418957.38%490360.35%4752InternLM-20B50.92%3.10438851.01%513049.43%5130Qwen-14B52.86%3.125447850.58%513054.74%5130Xinyu-7B49.58%3.12445148.66%501450.58%5130Xinyu2-70B52.94%3.12448255.04%512857.93%5129Generativeavg. bleuavg. rougeavg. kwPrecavg. bertavg. 
len.#validAquila-34B11.80%6.04%34.36%67.51%43.765130Baichuan2-13B8.84%6.96%25.51%65.69%46.045113Baichuan2-53B10.06%7.55%26.45%67.65%49.403837ChatGLM2-6B9.17%7.17%24.53%64.89%46.275094GPT3.5-Turbo9.02%6.30%27.74%66.39%39.045084GPT4-061310.74%7.19%28.47%67.36%44.415109GPT4-11068.62%6.86%30.94%67.38%44.835121InternLM-20B14.89%7.96%31.10%67.92%51.555125Qwen-14B12.72%6.54%32.95%66.96%45.855125Xinyu-7B10.30%6.52%28.64%67.32%49.844978Xinyu2-70B13.41%7.05%33.93%68.97%51.105130KNODOCGENNUMAquila-34B59.55% 54.97% 53.74% 53.52%Baichuan2-13B 53.75% 52.10% 48.43% 49.67%Baichuan2-53B 57.70% 57.46% 56.26% 52.58%ChatGLM2-6B40.94% 45.56% 44.23% 42.63%GPT3.5-Turbo55.21% 51.06% 47.63% 47.85%GPT4-061359.87% 55.99% 51.93% 55.73%GPT4-110668.73% 60.19% 54.77% 62.04%InternLM-20B51.88% 50.65% 49.56% 48.43%Qwen-14B62.81% 57.35% 53.15% 53.09%Xinyu-7B48.44% 52.02% 50.87% 50.00%Xinyu2-70B63.13% 61.47% 54.46% 57.07%", "figure_id": "tab_7", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Evaluation by different types. In the same row, optimal values are bolded, and suboptimal values are underlined.", "figure_data": "", "figure_id": "tab_8", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Hallucination evaluation benchmarks sorted by name. In the Generation Method column, CHG refers to constrained hallucination generation, UHG refers to unconstrained hallucination generation, Manual indicates manually constructed, and Direct implies utilizing the base dataset without the need for generation. In the Annotation column, Auto denotes automatic machine annotation. In the Metric column, Acc, Prec, and Reca respectively indicate accuracy, precision, and recall. In the Lang. column, CN and EN respectively stand for Chinese and English.", "figure_data": "BenchmarkGeneration Method: Base DatasetAnnotationMetricGranularityLang.ChineseFactEval (Wang et al., 2023a)ManualManualAccSentenceCNCSK-PN (Chen et al., 2023)Direct: Common KGsNo NeedAccWordENFACTOR (Muhlgay et al., 2023)CHG: Wiki, NewsAutoFACTOR AccSentenceENFActScore (Min et al., 2023)CHG: WikiNo NeedFActScore by HumanShort SentenceENFactualityPrompts (Lee et al., 2022)Direct: WikiAutoNE Error, EntailmentDocument, SentenceENHADES (Liu et al., 2022)CHG: WikiManualAcc, G-Mean, BSS, AUC, etc.WordENHalluQA (Cheng et al., 2023)CHG, Manual: TruthfulQA, WikiManual, AutoNon-hallucination RateSentenceCNHaLoCheck (Elaraby et al., 2023)CHGNo NeedHaLoCheck, selfcheckGPTSentenceENHaluEval (Li et al., 2023)CHG: Alpaca, HotpotQA, etc.Manual, AutoAccDocumentENHILT (Rawte et al., 2023)CHG: NYT, PolitifactManualHVIWordENKoLA-KC (Yu et al., 2024)Direct: Wiki, evolving datasetAutoBLEU, ROUGEDocumentENMed-HALT (Pal et al., 2023)Direct: MedMCQA, PubMed, etc.No NeedAcc, Pointwise ScoreAllENPHD (Yang et al., 2023b)CHG: WikiManualF1, Acc, Prec, RecaDocumentENSelfAware (Yin et al., 2023)CHG: Quora, HowStuffWorksManualF1, AccSentenceENSTSN (Varshney et al., 2023)UHGManualAcc, Prec, RecaSentence, ConceptENTruthfulQA (Lin et al., 2022)ManualManualAcc by Human or GPT-judgeSentenceENUHGEval (Ours)UHG: NewsAuto, ManualAcc, kwPrec, BERTScore, etc.Sentence, KeywordCNXSum Hallu (Maynez et al., 2020)UHG: XSumManualROUGE, BERTScore, Acc, etc.Word, DocumentEN", "figure_id": "tab_9", "figure_label": "5", "figure_type": "table" }, { "figure_caption": ". LLMs sorted by release date. All LLMs are chat models. Asterisk (*) denotes estimated value, NaN denotes no public data available, and 175B denotes 175billion.", "figure_data": "Model#Para. 
Publisher | Date
GPT3.5-Turbo | 175B* | OpenAI | 2023.03*
GPT4-0613 | NaN | OpenAI | 2023.06
ChatGLM2 | 6B | Tsinghua | 2023.06
Xinyu | 7B | IAAR&Xinhua | 2023.06
InternLM | 20B | ShLab | 2023.07
Baichuan2 | 13B | Baichuan Inc. | 2023.09
Baichuan2 | 53B | Baichuan Inc. | 2023.09
Qwen | 14B | Alibaba | 2023.09
Aquila2 | 34B | BAAI | 2023.10
Xinyu2 | 70B | IAAR&Xinhua | 2023.10
GPT4-1106 | NaN | OpenAI | 2023.11", "figure_id": "tab_10", "figure_label": "6", "figure_type": "table" } ]
Xun Liang; Shichao Song; Simin Niu; Zhiyu Li; Feiyu Xiong; Bo Tang; Yezhaohui Wang; Dawei He; Peng Cheng; Zhonghao Wang; Haiying Deng
[ { "authors": " Baai", "journal": "", "ref_id": "b0", "title": "Aquila2", "year": "2023" }, { "authors": "Jinze Bai; Shuai Bai; Yunfei Chu", "journal": "", "ref_id": "b1", "title": "Qwen technical report", "year": "2023" }, { "authors": "Yupeng Chang; Xu Wang; Jindong Wang", "journal": "ACM Trans. Intell. Syst. Technol. Just Accepted", "ref_id": "b2", "title": "A survey on evaluation of large language models", "year": "2024" }, { "authors": "Jiangjie Chen; Wei Shi; Ziquan Fu", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Say what you mean! large language models speak too positively about negative commonsense knowledge", "year": "2023" }, { "authors": "Qinyuan Cheng; Tianxiang Sun; Wenwei Zhang", "journal": "", "ref_id": "b4", "title": "Evaluating hallucinations in chinese large language models", "year": "2023" }, { "authors": "Marie-Catherine De Marneffe; Joakim Nivre", "journal": "Annual Review of Linguistics", "ref_id": "b5", "title": "Dependency grammar", "year": "2019" }, { "authors": "Zhengxiao Du; Yujie Qian; Xiao Liu", "journal": "", "ref_id": "b6", "title": "Glm: General language model pretraining with autoregressive blank infilling", "year": "2022" }, { "authors": "Mohamed Elaraby; Mengyin Lu; Jacob Dunn", "journal": "", "ref_id": "b7", "title": "Halo: Estimation and reduction of hallucinations in open-source weak large language models", "year": "2023" }, { "authors": "Jinlan Fu; See-Kiong Ng; Zhengbao Jiang; Pengfei Liu", "journal": "", "ref_id": "b8", "title": "Gptscore: Evaluate as you desire", "year": "2023" }, { "authors": " Internlm", "journal": "", "ref_id": "b9", "title": "Internlm: A multilingual language model with progressively enhanced capabilities", "year": "2023" }, { "authors": "Nayeon Lee; Wei Ping; Peng Xu", "journal": "", "ref_id": "b10", "title": "Factuality enhanced language models for open-ended text generation", "year": "2022" }, { "authors": "Junyi Li; Xiaoxue Cheng; Xin Zhao", "journal": "", "ref_id": "b11", "title": "Halueval: A large-scale hallucination evaluation benchmark for large language models", "year": "2023" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Stephanie Lin; Jacob Hilton; Owain Evans", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "TruthfulQA: Measuring how models mimic human falsehoods", "year": "2022" }, { "authors": "Tianyu Liu; Yizhe Zhang; Chris Brockett", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "A token-level reference-free hallucination detection benchmark for free-form text generation", "year": "2022" }, { "authors": "Joshua Maynez; Shashi Narayan; Bernd Bohnet", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "On faithfulness and factuality in abstractive summarization", "year": "2020" }, { "authors": "Sewon Min; Kalpesh Krishna; Xinxi Lyu", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "FActScore: Fine-grained atomic evaluation of factual precision in long form text generation", "year": "2023" }, { "authors": "Niklas Muennighoff; Thomas Wang; Lintang Sutawika", "journal": "", "ref_id": "b17", "title": "Crosslingual generalization through multitask finetuning", "year": "2023" }, { "authors": "Dor Muhlgay; Ori Ram; Inbal Magar", "journal": "", "ref_id": "b18", "title": "Generating benchmarks 
for factuality evaluation of language models", "year": "2023" }, { "authors": "Jekaterina Novikova; Ondřej Dušek; Amanda Cercas Curry; Verena Rieser", "journal": "Association for Computational Linguistics. OpenAI", "ref_id": "b19", "title": "Why we need new evaluation metrics for NLG", "year": "2017" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang", "journal": "", "ref_id": "b20", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b21", "title": "", "year": "" }, { "authors": "Ankit Pal; Logesh Kumar Umapathi; Malaikannan Sankarasubbu", "journal": "", "ref_id": "b22", "title": "Med-halt: Medical domain hallucination test for large language models", "year": "2023" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Swagata Vipula Rawte; Agnibh Chakraborty; Pathak", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "The troubling emergence of hallucination in large language models -an extensive definition, quantification, and prescriptive remediations", "year": "2023" }, { "authors": "Yu Sun; Shuohuan Wang; Shikun Feng", "journal": "", "ref_id": "b25", "title": "Ernie 3.0: Large-scale knowledge enhanced pretraining for language understanding and generation", "year": "2021" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone", "journal": "", "ref_id": "b26", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "2023" }, { "authors": "Neeraj Varshney; Wenlin Yao; Hongming Zhang", "journal": "", "ref_id": "b27", "title": "A stitch in time saves nine: Detecting and mitigating hallucinations of llms by validating low-confidence generation", "year": "2023" }, { "authors": "Binjie Wang; Ethan Chern; Pengfei Liu", "journal": "", "ref_id": "b28", "title": "Chinesefacteval: A factuality benchmark for chinese llms", "year": "2023" }, { "authors": "Cunxiang Wang; Xiaoze Liu; Yuanhao Yue", "journal": "", "ref_id": "b29", "title": "Survey on factuality in large language models: Knowledge, retrieval and domain-specificity", "year": "2023" }, { "authors": "Yidong Wang; Zhuohao Yu; Zhengran Zeng", "journal": "", "ref_id": "b30", "title": "PandaLM: An automatic evaluation benchmark for LLM instruction tuning optimization", "year": "2024" }, { "authors": "Aiyuan Yang; Bin Xiao; Bingning Wang", "journal": "", "ref_id": "b31", "title": "a. 
Baichuan 2: Open large-scale language models", "year": "2023" }, { "authors": "Shiping Yang; Renliang Sun; Xiaojun Wan", "journal": "", "ref_id": "b32", "title": "A new benchmark and reverse validation method for passage-level hallucination detection", "year": "2023" }, { "authors": "Zhangyue Yin; Qiushi Sun; Qipeng Guo", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Do large language models know what they don't know?", "year": "2023" }, { "authors": "Jifan Yu; Xiaozhi Wang; Shangqing Tu", "journal": "", "ref_id": "b34", "title": "KoLA: Carefully benchmarking world knowledge of large language models", "year": "2024" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu", "journal": "", "ref_id": "b35", "title": "Bertscore: Evaluating text generation with bert", "year": "2020" }, { "authors": "Yue Zhang; Yafu Li; Leyang Cui", "journal": "", "ref_id": "b36", "title": "Siren's song in the ai ocean: A survey on hallucination in large language models", "year": "2023" }, { "authors": "Kun Wayne Xin Zhao; Junyi Zhou; Li", "journal": "", "ref_id": "b37", "title": "A survey of large language models", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 317.05, 587.36, 207.36, 9.81 ], "formula_id": "formula_0", "formula_text": "f inal ← picked[0] ▷ More Hallucination" }, { "formula_coordinates": [ 4, 71.21, 550.79, 215.96, 40.78 ], "formula_id": "formula_1", "formula_text": "江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 kwPrec 江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 kwPrec 江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 ROUGE-L 江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 ROUGE-L 江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 BLEU-4 江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 BLEU-4 江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 kwPrec 江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 ROUGE-L 江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 BLEU-4" }, { "formula_coordinates": [ 4, 71.21, 550.79, 215.96, 40.78 ], "formula_id": "formula_2", "formula_text": "江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 kwPrec 江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 ROUGE-L 江 苏 是 全 国 绿 色 食 品 生 产 最 发 达 的 省 份 之 一 。 BLEU-4" }, { "formula_coordinates": [ 11, 80.68, 464.07, 209.18, 59.42 ], "formula_id": "formula_3", "formula_text": "min |w| s.t. w ⊂ W w ′ = correct(w) false = hallucinated(W -w + w ′ ) (1)" }, { "formula_coordinates": [ 12, 70.87, 536.82, 452.79, 58.01 ], "formula_id": "formula_4", "formula_text": "\"61家获奖生产企业共同签署诚信公约,共建绿色食品诚信联盟。这是江苏保障食品安全、推动绿色食品生产的重要举措。 \\n 此次评选由江苏省绿色食品协会等部门主办,并得到江苏省农委、省委农工办、省工商局、省地税局、省信用办、省消协等单位大力支持。评 选历时4个多月,经企业报名、组委会初筛、消费者投票等层层选拔,最终出炉的百强食品榜单由消费者亲自票选得出,网络、短信、报纸及现场投 票共310多万份票数,充分说明了评选结果的含金量。\\n 食品安全一直是社会关注的热点。此次评选过程中,组委会工作人员走街头、进超市, 邀请媒体、消费者、专家深入产地开展绿色食品基地行,除了超市选购外,还搭建\"诚信购微信商城\"\"中国移动MO生活绿色有机馆\"等线上销售平台, 开创江苏绿色食品\"评展销\"结合新局面……\" }" }, { "formula_coordinates": [ 16, 209.66, 546.48, 175.3, 192.8 ], "formula_id": "formula_5", "formula_text": "PrecedingSentence:2014年,全国新增并⽹光伏发电容量1060万千 瓦,约占全球新增容量的四分之⼀。其中,全国新增光伏电站855万千 瓦,分布式205万千瓦。 LLM Generation 据统计,2014年中国光伏发电量达到了130亿千瓦时,同⽐增⻓超过 200%。 Label 统计 -合理 2014年 -合理 中国 -合理 光伏发电量 -合理 130亿千瓦时 -不合理,与事实冲突,应为250亿千瓦时 同⽐增⻓ -合理 200% -合理 光伏年发电量约250亿千瓦时,同⽐增⻓超过200%。 梁志鹏当⽇在国家能源局举⾏的光伏产业发展情况通⽓会上介绍,2014 年,全国光伏产业整体呈现稳中向好和有序发展局⾯,全年光伏发电累计 并⽹装机容量2805万千瓦,同⽐增⻓60%,其中,光伏电站2338万千 瓦,分布式光伏467万千瓦。 ……(" } ]
2023-11-26
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b8", "b18", "b46", "b56", "b26", "b27", "b43", "b36", "b64", "b25", "b32", "b34", "b35", "b59", "b28", "b4", "b30", "b52" ], "table_ref": [], "text": "We are witnessing rapid progress in the domain of content generation technology, i.e., models trained on massive amounts of data that can produce highly realistic text [3, 48,49], video [18,46,56] and audio [26,27,43]. Consequently, discriminating between real and fake content is becoming increasingly more challenging even for humans [36,64]. This opens the door for misuse of content generation technology for example to spread misinformation and commit fraud, rendering the development of reliable detection methods vital.\nThe development of such methods is highly dependent on the available deepfake benchmark datasets, which led to the steady increase in the number of publicly available datasets that provide examples of visual-only [25,32,34], audio-only [35,59], and audio-visual [28] content modification strategies (e.g., face-swapping, face-reenactment, etc.). However, the majority of these datasets and methods Table 1. Details for publicly available deepfake datasets in a chronologically ascending order. Cla: Binary classification, SL: Spatial localization, TL: Temporal localization, FS: Face swapping, RE: Face reenactment, TTS: Text-to-speech, VC: Voice conversion. assume that the entirety of the content (i.e., audio, visual, audio-visual) is either real or fake. This leaves the door open for criminals to exploit the embedding of small segments of manipulations in the otherwise real content. As argued in [4], this type of targeted manipulation can lead to drastic changes in the underlying meaning as illustrated in Figure 1. Given that most deepfake benchmark datasets do not include this new type of manipulation strategy, state-ofthe-art detection methods might fail to perform reliably on this new type of deepfake content.\nThis work addresses this gap by releasing a new largescale audio-visual dataset called AV-Deepfake1M specifically designed for the task of temporal deepfake localization. To improve the realism and quality of generated content, the proposed data generation pipeline incorporates the ChatGPT 1 large language model. The pipeline further utilizes the latest open-source state-of-the-art methods for high-quality audio [8,30] and video [52] generation. The scale and novel modification strategies position the proposed dataset as the most comprehensive audio-visual benchmark as illustrated in Figure 1, making it an important asset for building the next generation of deepfake localization methods. The main contributions of this work are,\n• We propose AV-Deepfake1M, a large-scale contentdriven audio-visual dataset for the task of temporal deepfake localization. • We propose an elaborate data generation pipeline employing novel manipulation strategies and incorporating the state-of-the-art in text, video and audio generation. • We perform comprehensive analysis and benchmark of the proposed dataset utilizing state-of-the-art deepfake 1 https://chat.openai.com/ detection and localization methods." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b20", "b36", "b16", "b64", "b28", "b32", "b16", "b25", "b32", "b36", "b35", "b28", "b4", "b5", "b62" ], "table_ref": [], "text": "The performance of any deepfake detection method is highly dependent on the quantitative and qualitative aspects of the datasets used for development. 
Over the past few years, many datasets (e.g., [20,31,36]) have been proposed to support the research on deepfake detection. A comprehensive list of the relevant publicly available datasets is given in Table 1. Most of the available datasets provide examples of face manipulations through either face swapping [16,31,64] or face reenactment [28,32] techniques. In terms of the number of samples, earlier datasets are smaller due to the limited availability of face manipulation techniques. With the rapid advancements in content generation technology, several large-scale datasets such as DFDC [16], DeeperForensics [25], KoDF [32], and DF-Platter [36] have been proposed. However, the task associated with these datasets is mainly restricted to coarse-level deepfake detection. Until this point manipulations are mainly applied only to the visual modality, and later, audio manipulations [35] and audio-visual manipulations [28] have been proposed to increase the complexity of the task.\nIn 2022, LAV-DF [4] was introduced to become the first content-driven deepfake dataset for temporal localization. However, the quality and scale of LAV-DF are limited, and the state-of-the-art methods designed for temporal localization [5,62] are already achieving very strong performance. AV-Deepfake1M addresses these gaps by improving the quality, diversity, and scale of the previous datasets designed for temporal deepfake localization. Given that LAV-DF is the only available public dataset that has been You are a helpful text modifier. Your target is to modify the provided text to invert its meaning to the opposite direction. The operation can be one of \"delete\", \"insert\" and \"replace\". Please generate output for the following input with 3 operations. ... the great songbook ... I'm not going to ... and unique ..." }, { "figure_ref": [], "heading": "Audio Generation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "TTS Options", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Transcript Manipulation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Video Generation", "publication_ref": [], "table_ref": [], "text": "[{\"operation\": \"replace\", \"old_word\": \"great\", \"new_word\": \"terrible\", \"index\": 4}, {\"operation\": \"delete\", \"old_word\": \"not\", \"new_word\": None, \"index\": 17}, {\"operation\": \"insert\", \"old_word\": None, \"new_word\": \"not\", \"index\": 24}] " }, { "figure_ref": [], "heading": "Replace", "publication_ref": [ "b40", "b24" ], "table_ref": [], "text": "Insert Delete Fake Frames Fake Audio Figure 2. Data manipulation and generation pipeline. Overview of the proposed three-stage pipeline. Given a real video, the preprocessing consists of audio extraction via FFmpeg followed by Whisper-based transcript generation. In the first stage, transcript manipulation, the original transcript is modified through word-level insertions, deletions, and replacements. In the second stage, audio generation, based on the relevant transcript manipulation, the audio is generated in both speaker-dependent and independent fashion. In the final stage, video generation, based on the generated audio, the subject-dependant video is generated with smooth transitions in terms of lipsynchronization, pose, and other relevant attributes.\ndesigned for the same task as the dataset proposed in this paper, next we do a direct comparison of the two datasets. 
In addition to the fact that AV-Deepfake1M is significantly larger than LAV-DF, in terms of the number of subjects, and amount of real and fake videos, the following differences further highlight our contributions.\n• LAV-DF uses a rule-based system to find antonyms that maximize the change in sentiment in the transcript manipulation step. We argue that naively choosing the antonyms causes context inconsistencies and low diversity of the fake content. AV-Deepfake1M addresses this issue with the use of a large language model, which results in diverse and context-consistent fake content. • The output quality of the visual generator Wav2Lip [40] and audio generator SV2TTS [24] used for generating LAV-DF is not sufficient for state-of-the-art detection methods. AV-Deepfake1M utilizes the latest open-source state-of-the-art methods for high-quality audio and video generation. • LAV-DF includes only replacement as a manipulation strategy. AV-Deepfake1M includes two additional challenging manipulation strategies, deletion and insertion." }, { "figure_ref": [], "heading": "AV-Deepfake1M Dataset", "publication_ref": [ "b4", "b28" ], "table_ref": [], "text": "AV-Deepfake1M is a large-scale audio-visual deepfake dataset, including 1,886 hours of audio-visual data from 2,068 unique subjects captured in diverse background environments. This positions the proposed dataset as the most comprehensive audio-visual benchmark as illustrated in Figure 1 and Table 1. The generated videos in AV-Deepfake1M preserve the background and identity present in the real videos, while the content is carefully manipulated with content-driven audio-visual data. Following previous deepfake dataset generation research [4,28], the dataset includes three different combinations of modified modalities in the generated fake videos. Please note that here we also introduce the concept of content-driven modifications for unimodal as well as multimodal aspects. We further elaborate on this in the supplementary material.\n• Fake Audio and Fake Visual. Both the real audio and visual frames are manipulated. • Fake Audio and Real Visual. Only the real audio corresponding to replacements and deletions is manipulated.\nTo further increase data quality, the fake audio, and the corresponding length-normalized real visual segments are synchronized. As for the insertions, new visual segments are generated based on the length of the fake audio and are lip-synced to the background noise (i.e., closed mouth). • Real Audio and Fake Visual. Only the real visual frames corresponding to replacements and deletions are manipulated. To further increase data quality, the length of the fake visual segments is normalized to match the length of the real audio. As for the insertions, background noise is inserted for the corresponding fake visual segments." }, { "figure_ref": [], "heading": "Data Generation Pipeline", "publication_ref": [ "b14", "b47", "b41" ], "table_ref": [], "text": "The three-stage pipeline for generating content-driven deepfakes is illustrated in Figure 2. A subset of real videos from the Voxceleb2 [14] dataset is pre-processed to extract the audio using FFmpeg [47], followed by Whisper-based [41] real transcript generation." }, { "figure_ref": [], "heading": "Transcript Manipulation", "publication_ref": [ "b9" ], "table_ref": [], "text": "Manipulation Strategy. The first stage for generating content-driven deepfakes is transcript manipulation. We utilize ChatGPT for altering the real transcripts. 
Through LangChain [9] the output of ChatGPT is a structured JSON with four fields: 1) operation: This set contains replace, delete, and insert, which has been applied on the input; 2) old word: The word in the input to replace or delete; 3) new word: The word in the input to insert or replace; 4) index: The location of the operation in the input. The number of transcript modifications depends on the video length and is determined by the following equation M = ceil(t/10) where M is the number of modifications and t (sec) is the length of the video. We followed [3] and built a few-shot prompt template for ChatGPT.\nSystem: You are a helpful text modifier. Your target is to modify the provided text to invert its meaning to the opposite direction. Here is the transcript of the audio. Please use the provided operations to modify the transcript to change its meaning. The operation can be one of \"delete\", \"insert\" and \"replace\". This statistical comparison shows that the proposed LLMbased transcript manipulation strategy generates more diverse content compared to the rule-based strategy employed in LAV-DF. We further elaborate on the advantages of using an LLM in this step in the supplementary material." }, { "figure_ref": [], "heading": "Audio Generation", "publication_ref": [ "b17", "b24", "b4", "b28", "b5", "b62", "b30", "b7", "b50" ], "table_ref": [ "tab_3" ], "text": "Manipulation Strategy. The next stage is to generate highquality audio with the same style as the speaker. The audio is first separated into background noise and speech using Denoiser [17]. Zero-shot voice cloning methods such as SV2TTS [24] utilized by previous datasets [4,28] have low signal-to-noise ratio resulting in low-quality audio manipulations that are easily localized by BA-TFD [5] and UMMAFormer [62]. To increase the quality of the generated audio, we employ the identity-dependent text-tospeech method VITS [30] for a subset of the subjects. Further diversity in the audio generation was introduced by utilizing the identity-independent text-to-speech method YourTTS [8] for the rest of the subjects. Audio generation is slightly different for each of the manipulation strategies (i.e., replace, insert and delete). In the case of replace and insert, we need to generate new audio corresponding to new word(s). Generally, there are two ways to generate the new word(s): 1) Generate audio for the final fake transcript and crop it to get the audio for the required new word(s) and 2) Generate audio only for the new word(s). To bring further diversity and challenge, we use both strategies to generate audio for the new word(s). In the case of delete, only the background noise is retained. After the audio manipulation, we normalized the loudness of the fake audio segments to the original audio to add more realism. Finally, to keep the consistency with the environmental noise, we add the background noise previously separated to the final audio output. Analysis. We evaluated the quality of the audio generation following previous works [7,11] (note that for all datasets, we only evaluated the samples where the audio modality is modified). The results are shown in Table 2.\nThe first evaluation metric is speaker encoder cosine similarity (SECS) [50]. It measures the similarity of the speakers given a pair of audio in the range [-1, 1]. We also calculated the signal-to-noise ratio (SNR) for all fake audio and Fréchet audio distance (FAD) [29]. 
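For reference, SECS reduces to a cosine similarity in [-1, 1] between speaker embeddings of a real clip and the corresponding generated clip. The sketch below spells out this computation; the speaker encoder itself is left as a placeholder (embed_speaker), since the specific encoder behind the reported numbers is not assumed here.

```python
# Minimal sketch of the SECS computation described above: cosine similarity in
# [-1, 1] between speaker embeddings of a real and a generated audio clip.
# embed_speaker is a placeholder for whichever pretrained speaker encoder is used;
# the encoder choice affects the absolute numbers.
import numpy as np


def embed_speaker(waveform: np.ndarray, sample_rate: int) -> np.ndarray:
    """Placeholder: return a fixed-size speaker embedding for the waveform."""
    raise NotImplementedError("plug in a pretrained speaker encoder here")


def secs(real_wav: np.ndarray, fake_wav: np.ndarray, sample_rate: int = 16000) -> float:
    a = embed_speaker(real_wav, sample_rate)
    b = embed_speaker(fake_wav, sample_rate)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
```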
The results indicate that AV-Deepfake1M contains higher quality audio compared to other datasets." }, { "figure_ref": [], "heading": "Video Generation", "publication_ref": [ "b23", "b1", "b44", "b19", "b10", "b52", "b55", "b42", "b16", "b28", "b4" ], "table_ref": [ "tab_4" ], "text": "Manipulation Strategy. The final stage of the generation pipeline is visual content generation. After the audio is generated, the lip-synced visual frames are generated based on the subjects' original pose and the fake audio. We investigated several face reenactment strategies including EAMM [23], AVFR-GAN [2], DiffTalk [44], AD-NeRF [19] and ATVGnet [10] and concluded that these methods are not well suited for zero-shot lip-synced generation of unseen speakers. Thus, we use TalkLip [52] for visual content generation which is primarily designed for zero-shot lip-sync scenarios. LipTalk is 1) Identityindependent, 2) Lip-syncing only without generating new poses, 3) Fast, 4) State-of-the-art, and 5) Open-source. This way we avoid the weaknesses of the aforementioned face reenactment strategies. The pre-trained Talk-Lip model is used to generate fake visual frames that are lip-synchronized with the input audio and can be used for insertion, replacement, and deletion. Analysis. To evaluate the visual quality of the proposed dataset, we used peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) [55] and Fréchet inception distance (FID) [22] metrics as shown in Table 3. Note that for a fair comparison, we pre-processed the videos to a common format. The videos of FF++ [42] and DFDC [16] are 'in-the-wild', whereas FakeAVCeleb [28], LAV-DF [4] and AV-Deepfake1M are facial videos. Thus, we cropped the facial region for FF++ and DFDC for visual quality assessment. Since FakeAVCeleb, LAV-DF and AV-Deepfake1M are multimodal, for a fair comparison, we only used the samples with the visual modality modified to compute the metrics. The results indicate that AV-Deepfake1M is of higher visual quality compared to existing datasets." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Dataset Statistics", "publication_ref": [ "b4" ], "table_ref": [ "tab_6" ], "text": "We split the dataset into train, validation, and test sets. We first randomly select 1,657 subjects for the train set and 411 subjects for the test set without any overlap. The validation set is selected randomly from the train subset. The test set contains only samples with VITS-based identity-dependent audio. The variation in the number of subjects and videos in different sets is presented in Table 4 and Figure 4. Figure 5 illustrates the direct comparison of AV- The left three-row three-column histograms illustrate the fake segment absolute lengths (sec), the fake segment lengths proportion in videos (%) and the video lengths (sec) in the train, validation, and test sets. In the middle, the histograms illustrate the overall statistics for fake segment lengths, proportions and video lengths, compared with LAV-DF. For the fake segment lengths and proportions, the X-axis is in log scale and for video lengths, the X-axis is in linear scale. For all histograms, the Y-axis is in linear scale. The vertical dotted lines and numbers in histograms represent the mean value. On the right side, (a) The number of segments with different modifications and (b) The number of videos with different numbers of segments per video. Deepfake1M and LAV-DF [4]. 
The results indicate that AV-Deepfake1M is more diverse in terms of modifications, subjects, fake segment and video lengths, and a lower average proportion of fake segments, making the dataset a vital asset for building better deepfake localization methods." }, { "figure_ref": [], "heading": "Human Quality Assessment", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "To investigate if humans can detect the deepfakes in AV-Deepfake1M, we also conducted a user study with 30 participants with prior experience in video manipulation in the computer vision domain (note that the authors did not participate in the study). 200 random samples that contain 0 or 1 modification were selected for the study. Each participant was asked to classify 20 videos (5 real and 15 fake) as real or fake and propose the potential fake segment timestamps.\nThe user study results presented in Table 5 indicate that the deepfake content in AV-Deepfake1M is very challenging to detect for humans." }, { "figure_ref": [], "heading": "Computational Cost", "publication_ref": [ "b41", "b30" ], "table_ref": [], "text": "We spent around ∼600 GPU hours for speech recognition with Whisper [41], ∼2100 GPU hours for training VITS [30] (each of the 721 VITS models requires ∼3hrs), and ∼300 GPU hours for data generation. Overall, we needed ∼3000 GPU hours to generate AV-Deepfake1M with NVIDIA RTX6000 GPUs. " }, { "figure_ref": [], "heading": "Benchmarks and Metrics", "publication_ref": [], "table_ref": [], "text": "This section outlines the benchmark protocol for AV-Deepfake1M along with the used evaluation metrics. The goal is to detect and localize content-driven audio, visual, and audio-visual manipulations." }, { "figure_ref": [], "heading": "Data Partitioning", "publication_ref": [], "table_ref": [], "text": "The dataset is organized in train, validation, and test sets, as described in Section 3.2. The original test set (all modifications) is referred to as fullset in the rest of the text. For a fair comparison with visual-only and audio-only methods, we also prepared subset V (by excluding the videos with audioonly modifications from fullset) and subset A (by excluding the videos with visual-only modifications from fullset)." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b60", "b54", "b4", "b5", "b62", "b62" ], "table_ref": [], "text": "For benchmarking temporal deepfake localization, we consider the following state-of-the-art methods: Pyannote [39] is a pre-trained speaker diarization method. TriDet [45] and ActionFormer [60] are the state-of-the-art in the temporal action localization domain. Since these two methods require pre-trained features, we extracted the state-ofthe-art features VideoMAEv2 [53] and InternVideo [54] for benchmarking. BA-TFD [4], BA-TFD+ [5], and UM-MAFormer [62] are the state-of-the-art methods specifically designed for audio-visual temporal deepfake localization. We followed the original settings for BA-TFD and BA-TFD+. For UMMAFormer [62], we implemented it using Table 6. Temporal deepfake localization benchmark. Performance comparison of state-of-the-art methods on the proposed AV-Deepfake1M dataset. The results are significantly low, indicating that AV-Deepfake1M is an important benchmark for this task." }, { "figure_ref": [], "heading": "Set Method", "publication_ref": [ "b54", "b38", "b0", "b0", "b12", "b15", "b5", "b63", "b0", "b0", "b12", "b15", "b13", "b6", "b61", "b51", "b42" ], "table_ref": [], "text": "Mod. 
AP@0.5 AP@0.75 AP@0.9 AP@0.95 AR@50 AR@30 AR@20 AR@10 AR@5 the InternVideo [54] visual features and BYOL-A [38] audio features. For image-based classification methods, we consider Meso4 [1], MesoInception4 [1], Xception [12] and EfficientViT [15]. We followed the procedure used in previous works [5,63] to aggregate the frame-level predictions to segments for localization.\nFor benchmarking deepfake detection, we trained the image-based models Meso4 [1], MesoInception4 [1], Xception [12] and EfficientViT [15] with video frames along with the corresponding labels. For the segment-based methods MDS [13] and MARLIN [6], we used a sliding window to sample segments from the video for training and inference. During the inference stage, the frame-and segmentlevel predictions are aggregated to video-level by max voting. The aggregation strategy is discussed in Section 5. We also evaluated the zero-shot performance of several methods, including the LLM-based Video-LLaMA [61], audio pre-trained CLAP [57], and M2TR [51] pre-trained on FF++ [42]. For Video-LLaMA, we also evaluated 5 model ensembles (the majority vote of 5 model inferences). To investigate the impact of the level of label access, we designed 3 different label access levels for training: frame-level la-bels, segment-level labels only, and video-level labels only." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b4", "b20", "b16", "b42" ], "table_ref": [], "text": "Temporal Deepfake Localization. We use average precision (AP) and average recall (AR) as prior works [4,20]. Deepfake Detection. We use the standard evaluation protocol [16,42] and report video-level accuracy (Acc.) and area under the curve (AUC)." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "This section reports the performance of the state-of-the-art deepfake detection and localization methods described in Section 4.2 on AV-Deepfake1M. The reported performance is based on different subsets, described in Section 4.1, and different levels of label access during training, described in Section 4.2." }, { "figure_ref": [], "heading": "Audio-Visual Temporal Deepfake Localization", "publication_ref": [ "b4", "b20" ], "table_ref": [], "text": "The results of this benchmark are depicted in Table 6. All state-of-the-art methods achieve significantly lower performance compared to the performance reported on previous datasets [4,20]. This significant drop indicates that existing temporal deepfake localization methods are falling behind with the rapid advancements in content generation. In other words, we can claim that the highly realistic fake content in AV-Deepfake1M will open an avenue for further research on temporal deepfake localization methods." }, { "figure_ref": [], "heading": "Audio-Visual Deepfake Detection", "publication_ref": [ "b4", "b15", "b21", "b51", "b33" ], "table_ref": [ "tab_9", "tab_10" ], "text": "Similarly to temporal deepfake localization, the results of the classical deepfake detection benchmark are summarized in Table 7. Models that have access only to the video-level labels during training and the zero-shot models all perform poorly on this task. Providing the fine-grained segmentlevel and frame-level labels during training brings an improvement in performance. 
However, even with the framelevel labels provided during training, the AUC of the bestperforming methods is less than 70, due to the multimodal modifications present in AV-Deepfake1M.\nThe frame-and segment-based deepfake detection methods can only produce frame-and segment-level predictions. Thus, a suitable aggregation strategy is required to generate the video-level predictions. We investigated several popular aggregation strategies, such as max (e.g., [4]), average (e.g., [15,21,51]), and the average of the highest 5 scores (e.g., [33]) for video-level predictions. The results of the experiment are presented in Table 8. The results show that max is the optimal aggregation strategy on AV-Deepfake1M for the considered deepfake detection methods. " }, { "figure_ref": [], "heading": "Unimodal Deepfake Detection and Localization", "publication_ref": [], "table_ref": [], "text": "We also evaluated the performance on subset V and subset A, as described in Section 4.1. As expected, all visual-only methods consistently perform better on subset V compared to fullset for both temporal localization and detection. The same holds for subset A and audio-only methods." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper presents AV-Deepfake1M, the largest audiovisual dataset for temporal deepfake localization. The comprehensive benchmark of the dataset utilizing state-of-theart deepfake detection and localization methods indicates a significant drop in performance compared to previous datasets, indicating that the proposed dataset is an important asset for building the next-generation of deepfake localization methods.\nLimitations. Similarly to other deepfake datasets, AV-Deepfake1M exhibits a misbalance in terms of the number of fake and real videos. Broader Impact. Owing to the diversified and realistic, content-driven fake videos, AV-Deepfake1M will support the development of robust, generalized, audio-visual deepfake detection and localization models. Ethics Statement. We acknowledge that AV-Deepfake1M may raise ethical concerns such as the potential misuse of facial videos of celebrities, and even the data generation pipeline could have a potential negative impact. Misuse could include the creation of deepfake videos or other forms of exploitation. To avoid such issues, we have taken several measures such as distributing the data with a proper end-user license agreement, where we will impose certain restrictions on the usage of the data, such as the data generation technology and resulting content being restricted to research purposes only usage.\nAV-Deepfake1M: A Large-Scale LLM-Driven Audio-Visual Deepfake Dataset \nSupplementary Material" }, { "figure_ref": [ "fig_4" ], "heading": "Transcript Manipulation", "publication_ref": [ "b4" ], "table_ref": [], "text": "In addition to the quantitative comparison of transcript modifications in AV-Deepfake1M and LAV-DF [4] (see Section 3.1.1), here we also present a qualitative one. Figure 6 illustrates word clouds for old word(s) and new word(s) for both datasets. A comparison between the new words generated by the rule-based strategy utilized in LAV-DF and our LLM-driven generation further demonstrates that the latter results in more natural and diverse transcript manipulations." 
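As a reference for the aggregation study discussed in Section 5 (and Table 8), the three strategies — max, average, and the average of the highest five scores — all reduce per-frame or per-segment fake probabilities to a single video-level score. The snippet below is an illustrative sketch of these strategies, not the exact code used for the benchmark.

```python
import numpy as np

def aggregate_video_score(frame_scores, strategy: str = "max", top_k: int = 5) -> float:
    """Aggregate frame- or segment-level fake probabilities into one video-level score."""
    scores = np.asarray(frame_scores, dtype=np.float64)
    if strategy == "max":                      # best strategy on AV-Deepfake1M (Table 8)
        return float(scores.max())
    if strategy == "avg":
        return float(scores.mean())
    if strategy == "avg_top_k":                # average of the highest top_k scores
        k = min(top_k, scores.size)
        return float(np.sort(scores)[-k:].mean())
    raise ValueError(f"unknown strategy: {strategy}")

# Example: a mostly-real video with one short fake segment.
scores = np.array([0.05, 0.04, 0.91, 0.88, 0.06])
print(aggregate_video_score(scores, "max"))        # 0.91
print(aggregate_video_score(scores, "avg"))        # ~0.39
print(aggregate_video_score(scores, "avg_top_k"))  # equals the mean here (only 5 frames)
```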
}, { "figure_ref": [], "heading": "Human Quality Assessment", "publication_ref": [], "table_ref": [], "text": "Here we provide further details on the user study (see Section 3.3) that aims to evaluate humans' performance in detecting the highly realistic deepfake content in AV-Deepfake1M." }, { "figure_ref": [], "heading": "Data", "publication_ref": [], "table_ref": [], "text": "The data used in the user study are 200 videos randomly sampled from the test set of AV-Deepfake1M. Since each video presents content for a unique subject, the data used in the user study evaluate humans' performance in detecting deepfake content 200 out of the total 411 subjects in the test set. The videos include 50 Real videos, 50 Fake Audio and Fake Visual videos, 50 Fake Audio and Real Visual videos, and 50 Real Audio and Fake Visual videos (see Figure 7. Screenshot of the user study interface. On the top is the video with audio, the middle is the textual description of the task, and the bottom is the participant's controls to 1) Select whether the video is real or fake and 2) If the participant selects fake, use a slider to specify the begin and end of the fake segment.\nTable 9. User study results compared with the state-in-the-art in temporal deepfake localization." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Acc. AP@0.1 AP@0. Section 3)." }, { "figure_ref": [], "heading": "Participants", "publication_ref": [], "table_ref": [], "text": "We randomly group the participants into 10 groups where each group evaluates 10% of the videos (i.e., 20 videos including 5 Real videos, 5 Fake Audio and Fake Visual videos, 5 Fake Audio and Real Visual videos, and 5 Real Audio and Fake Visual videos). We utilize a random nonoverlapping selection of videos (and subjects) for each participant, meaning that each participant evaluates videos for 20 out of the 200 random subjects. After watching each video, the participants first answer whether the video is real or fake, and if they think the video is fake, the participants can choose the start and end timestamps for the fake segment. A screenshot of the developed user study interface based on the React 2 framework is shown in Figure 7." }, { "figure_ref": [ "fig_5" ], "heading": "Evaluation and Analysis", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "Among the 30 participants that took part in the user study, the binary deepfake detection/classification accuracy is 47.99%. This low performance indicates that the deepfake content in AV-Deepfake1M is very challenging for humans to detect. A similar pattern is observed for the temporal localization of fake segments. Similarly to Table 5, here we report and compare average precision (AP) and average recall (AR) scores in Table 9 and extend that comparison with the state-of-the-art methods using the same subset of videos. The AP score for 0.5 IoU is 00.08. Thus, we reduced the AP threshold to 0.1 IoU, improving the AP score to 05.53. Figure 8 illustrates a similar qualitative comparison. The low human performance in each aspect indicates 2 https://react.dev/ that to detect highly realistic deepfake content, we need more sophisticated detection and localization methods." }, { "figure_ref": [], "heading": "Audio and Video Generation", "publication_ref": [], "table_ref": [], "text": "Here we provide complete details on the manipulations in AV-Deepfake1M (see Section 3). 
Figure 9 provides visualizations corresponding to each of the three modifications and the resulting deepfake content. Please note that for example for Fake Audio and Real Visual in the cases of deletion and insertion, there are slight modifications in the visual signal as well. The reason we regard the visual signal as real is the fact that words were not inserted or deleted in that modality. Similarly for Real Audio and Fake Visual." }, { "figure_ref": [], "heading": "Replace", "publication_ref": [], "table_ref": [], "text": "Here, we show the audio-visual content manipulation strategy in three setups i.e. fake audio fake video, fake audio real video and real audio fake video. We believe that these three variations of fake content generation add more challenge in the temporal localization task." }, { "figure_ref": [], "heading": "Frame-level", "publication_ref": [ "b6", "b13", "b0", "b0" ], "table_ref": [], "text": "Segment-level Video-level\nFigure 10. Complete details on the label access for training. Green color represents the real and red color represents fake content. The top row represents the original frame-level labels in a video. The middle row represents the segment-and video-level labels based on whether the segment/video contains any fake frames. For fair comparison across different methods, the bottom row represents the mapped segment-and video-level labels to frame-level labels. • In the frame-level configuration, the models are trained using the ground truth labels for each frame in the video. • In the segment-level configuration, if the segment contains any fake frames, it is labelled as fake otherwise it is labelled as real. For the segment-based methods MAR-LIN [6] and MDS [13], we used the segment-level labels during training. For a fair comparison when training the frame-based methods Meso4 [1] and MesoInception4 [1] we mapped the segment-level labels to frame-level." }, { "figure_ref": [], "heading": "Label Access For Training", "publication_ref": [ "b0", "b0" ], "table_ref": [], "text": "• In the video-level configuration, if the video contains any fake frames, it is labelled as fake otherwise it is labelled as real. Similarly to the segment-level configuration, for a fair comparison when training the frame-based methods Meso4 [1] and MesoInception4 [1] we mapped the videolevel labels to frame-level." } ]
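The three label-access configurations above differ only in how the ground-truth frame labels are coarsened, and in how the coarse labels are mapped back to frames for the frame-based baselines. A minimal sketch of that mapping (the segment length and function names are illustrative assumptions, not the released preprocessing code):

```python
import numpy as np

def frames_to_segment_labels(frame_labels: np.ndarray, segment_len: int) -> np.ndarray:
    """A segment is fake (1) if it contains any fake frame, otherwise real (0)."""
    starts = range(0, len(frame_labels), segment_len)
    return np.array([int(frame_labels[s:s + segment_len].any()) for s in starts])

def segments_to_frame_labels(segment_labels: np.ndarray, segment_len: int, n_frames: int) -> np.ndarray:
    """Broadcast segment-level labels back to frames for the frame-based baselines."""
    return np.repeat(segment_labels, segment_len)[:n_frames]

def video_to_frame_labels(video_label: int, n_frames: int) -> np.ndarray:
    """A fake video propagates the fake label to every frame (video-level setting)."""
    return np.full(n_frames, video_label, dtype=int)

frames = np.array([0, 0, 1, 1, 0, 0, 0, 0])               # ground-truth frame labels
segs = frames_to_segment_labels(frames, segment_len=4)     # -> [1, 0]
back = segments_to_frame_labels(segs, 4, len(frames))      # -> [1, 1, 1, 1, 0, 0, 0, 0]
```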
Figure 1. AV-Deepfake1M is a large-scale content-driven deepfake dataset generated by utilizing a large language model. The dataset contains more than 2K subjects and 1M deepfake videos generated by employing different audio-visual content manipulation strategies. The left figure illustrates examples of word-level replacement, deletion, and insertion strategies to manipulate audio-visual content. The right figure illustrates a comparison between the proposed dataset and other publicly available datasets in terms of the number of subjects and the amount of real and fake videos.
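The word-level replacement, deletion, and insertion strategies named in the caption operate on a word-aligned transcript. The sketch below illustrates what such edits look like on a list of (word, start, end) entries; the data structure and helper functions are assumptions for illustration, not the released generation pipeline.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Word:
    text: str
    start: float  # seconds
    end: float

def replace_word(transcript: List[Word], idx: int, new_text: str) -> List[Word]:
    """Swap the word at `idx`; the fake segment spans the original word's timestamps."""
    edited = list(transcript)
    edited[idx] = Word(new_text, transcript[idx].start, transcript[idx].end)
    return edited

def delete_word(transcript: List[Word], idx: int) -> List[Word]:
    """Remove the word at `idx`; the surrounding audio/video must be re-synthesised to close the gap."""
    return transcript[:idx] + transcript[idx + 1:]

def insert_word(transcript: List[Word], idx: int, new_text: str, duration: float = 0.3) -> List[Word]:
    """Insert a new word before `idx`; in practice its timestamps come from the generated audio."""
    anchor = transcript[idx].start
    return transcript[:idx] + [Word(new_text, anchor, anchor + duration)] + transcript[idx:]
```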
[ { "figure_caption": "Figure 3 .3Figure 3. Comparison of transcript modifications in AV-Deepfake1M and LAV-DF.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Please generate output for the following input with {NUM} operations. {INPUT} Analysis. Figure 3 (a) illustrates a comparison of the frequencies of the top 20 words in AV-Deepfake1M and LAV-DF [4]. The results show that few words in LAV-DF have dominant frequencies (> 10%), whereas this issue is drastically reduced in AV-Deepfake1M. Owing to the contribution of ChatGPT, we also observed a significant increase in unique new words (27.7 times more) in the modified transcripts compared to LAV-DF, illustrated in Figure 3 (b).", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Data partitioning in AV-Deepfake1M. (a) The number of subjects in the train, validation, and test sets. (b) The number of videos in the train, validation, and test sets. (c) The number of videos with different audio generation methods in the train set. (d) The number of videos with different audio generation methods in the validation set. (e) The number of videos with different audio generation methods in the test set. In (c, d, e), F denotes audio generation for the full transcript and cropping of the new word(s) while W denotes audio generation only for the new word(s).", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure5. Comparison of AV-Deepfake1M and LAV-DF. The left three-row three-column histograms illustrate the fake segment absolute lengths (sec), the fake segment lengths proportion in videos (%) and the video lengths (sec) in the train, validation, and test sets. In the middle, the histograms illustrate the overall statistics for fake segment lengths, proportions and video lengths, compared with LAV-DF. For the fake segment lengths and proportions, the X-axis is in log scale and for video lengths, the X-axis is in linear scale. For all histograms, the Y-axis is in linear scale. The vertical dotted lines and numbers in histograms represent the mean value. On the right side, (a) The number of segments with different modifications and (b) The number of videos with different numbers of segments per video.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Qualitative comparison of transcript modifications in AV-Deepfake1M and LAV-DF. (a) The old words before the manipulations in AV-Deepfake1M. (b) The new words after the LLM-driven manipulations in AV-Deepfake1M. (c) The old words before manipulations in LAV-DF. (d) The new words after the rulebased manipulations in LAV-DF.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Examples of user study results and comparison with the state-in-the-art in temporal deepfake localization. Green color represents real segments and red color represents fake segments. GT: Ground truth.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "SampleCropnoiseto theFind and cropsamelengthAddOption 2", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Audio quality of AV-Deepfake1M. 
Quality of the generated audio in terms of SECS, SNR and FAD.", "figure_data": "MethodSECS(↑) SNR(↑) FAD(↓)FakeAVCeleb [28]0.5432.166.598LAV-DF [4]0.9847.830.306AV-Deepfake1M (Train)0.9919.400.091AV-Deepfake1M (Validation)0.9919.160.091AV-Deepfake1M (Test)0.9919.420.083AV-Deepfake1M (Overall)0.9919.390.088", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Visual quality of AV-Deepfake1M. Quality of the generated video in terms of PSNR, SSIM and FID.", "figure_data": "MethodPSNR(↑) SSIM(↑) FID(↓)FF++ [42]24.400.8121.06DFDC [16]--5.69FakeAVCeleb [28]29.820.9192.29LAV-DF [4]33.060.8981.92AV-Deepfake1M (Train)39.500.9770.50AV-Deepfake1M (Validation)39.540.9770.49AV-Deepfake1M (Test)39.480.9770.56AV-Deepfake1M (Overall)39.490.9770.49", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Number of subjects and videos in AV-Deepfake1M.", "figure_data": "Subset#Subjects #Real Videos #Fake Videos#VideosTrain Validation1,657186,666 14,235559,514 43,105746,180 54,730Test41185,820257,420343,240Overall2,068286,721860,0391,146,760", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "User study results for AV-Deepfake1M.", "figure_data": "User Study Acc. AP@0.1 AP@0.5 AR@1Human47.9905.5300.0800.22", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Deepfake detection benchmark. Performance comparison of state-of-the-art methods on the proposed AV-Deepfake1M dataset using different evaluation protocols. E5: Ensemble 5.", "figure_data": "Label Access MethodsMod.FullsetSubset VSubset AFor TrainingAUC Acc.AUC Acc.AUC Acc.Zero-ShotVideo-LLaMA (7B) [61]AV50.09 25.23 50.13 33.51 50.08 33.49Video-LLaMA (13B) [61]AV49.50 25.02 49.53 33.35 49.30 33.36Video-LLaMA (7B) E5 [61]AV49.97 25.32 50.01 33.57 49.98 33.62Video-LLaMA (13B) E5 [61]AV50.74 25.05 50.52 33.36 50.78 33.40CLAP [57]A50.83 31.99 50.91 37.83 50.67 37.54M2TR [51]V50.18 74.99 50.24 66.67 50.14 66.66Video-levelMeso4 [1]V50.22 75.00 50.31 66.66 50.17 66.66MesoInception4 [1]V50.05 75.00 50.01 66.66 50.06 66.66Segment-level Meso4 [1]V54.53 55.83 56.81 56.78 53.34 53.89MesoInception4 [1]V57.16 28.24 62.14 37.41 54.64 35.46MDS [13]AV56.57 59.44 54.21 53.70 56.92 58.88MARLIN [6]V58.03 29.01 61.57 38.28 56.23 35.99Frame-levelMeso4 [1]V63.05 49.51 76.30 64.62 56.27 47.82MesoInception4 [1]V64.04 54.13 80.67 69.88 56.28 51.73Xception [12]V68.68 61.33 81.97 81.39 63.19 57.45EfficientViT [15]V65.51 71.80 76.74 70.89 59.75 63.51", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Aggregation strategies. AUC scores on fullset for each method using different aggregation strategies.", "figure_data": "Method → Meso4 MesoInc4 Xception EfficientViT MARLINStrategy ↓[1][1][12][15][6]max63.0564.0468.6865.5158.03avg55.6154.0761.4458.7553.20avg of top562.3259.8268.8163.6056.39", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" } ]
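The AP@IoU and AR@N numbers reported in the localization tables above are obtained by matching predicted fake segments to ground-truth segments via temporal intersection-over-union. The sketch below shows the core matching step only; it is an illustrative assumption of how such a protocol is typically implemented, not the official evaluation code of the benchmark.

```python
from typing import List, Tuple

def temporal_iou(pred: Tuple[float, float], gt: Tuple[float, float]) -> float:
    """Intersection-over-union of two temporal segments (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def match_at_threshold(preds: List[Tuple[float, float, float]],
                       gts: List[Tuple[float, float]],
                       iou_thr: float) -> List[bool]:
    """Greedily match score-sorted predictions (start, end, score) to ground truth.

    Returns a true/false-positive flag per prediction; each ground-truth segment can be
    matched at most once. AP@iou_thr is the area under the precision-recall curve built
    from these flags, while AR@N averages recall over the top-N proposals per video.
    """
    flags, used = [], set()
    for start, end, _ in sorted(preds, key=lambda p: p[2], reverse=True):
        best_iou, best_j = 0.0, None
        for j, gt in enumerate(gts):
            if j in used:
                continue
            iou = temporal_iou((start, end), gt)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j is not None and best_iou >= iou_thr:
            used.add(best_j)
            flags.append(True)
        else:
            flags.append(False)
    return flags
```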
[ { "authors": "Darius Afchar; Vincent Nozick; Junichi Yamagishi; Isao Echizen", "journal": "", "ref_id": "b0", "title": "MesoNet: a Compact Facial Video Forgery Detection Network", "year": "2018" }, { "authors": "Madhav Agarwal; Rudrabha Mukhopadhyay; P Vinay; C V Namboodiri; Jawahar", "journal": "", "ref_id": "b1", "title": "Audio-Visual Face Reenactment", "year": "2023" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b2", "title": "Language Models are Few-Shot Learners", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b3", "title": "", "year": "2020" }, { "authors": "Zhixi Cai; Kalin Stefanov; Abhinav Dhall; Munawar Hayat", "journal": "", "ref_id": "b4", "title": "Do You Really Mean That? Content Driven Audio-Visual Deepfake Dataset and Multimodal Method for Temporal Forgery Localization", "year": "2022" }, { "authors": "Zhixi Cai; Shreya Ghosh; Abhinav Dhall; Tom Gedeon; Kalin Stefanov; Munawar Hayat", "journal": "Computer Vision and Image Understanding", "ref_id": "b5", "title": "Glitch in the matrix: A large scale benchmark for content driven audio-visual forgery detection and localization", "year": "2007" }, { "authors": "Zhixi Cai; Shreya Ghosh; Kalin Stefanov; Abhinav Dhall; Jianfei Cai; Hamid Rezatofighi; Reza Haffari; Munawar Hayat", "journal": "", "ref_id": "b6", "title": "MARLIN: Masked Autoencoder for Facial Video Representation LearnINg", "year": "2023" }, { "authors": "Edresson Casanova; Christopher Shulby; Eren Gölge; Nicolas Michael Müller; Frederico Santos De; Arnaldo Oliveira; Anderson Candido; Silva Da; Sandra Maria Soares; Moacir Aluisio; Ponti Antonelli", "journal": "ISCA", "ref_id": "b7", "title": "SC-GlowTTS: An Efficient Zero-Shot Multi-Speaker Text-To-Speech Model", "year": "2021" }, { "authors": "Edresson Casanova; Julian Weber; Christopher D Shulby; Candido Arnaldo; Eren Junior; Moacir A Gölge; Ponti", "journal": "PMLR", "ref_id": "b8", "title": "YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for Everyone", "year": "2022" }, { "authors": "Harrison Chase; Langchain", "journal": "", "ref_id": "b9", "title": "", "year": "2022" }, { "authors": "Lele Chen; Ross K Maddox; Zhiyao Duan; Chenliang Xu", "journal": "", "ref_id": "b10", "title": "Hierarchical Cross-Modal Talking Face Generation With Dynamic Pixel-Wise Loss", "year": "2019" }, { "authors": "Seungwoo Choi; Seungju Han; Dongyoung Kim; Sungjoo Ha", "journal": "ISCA", "ref_id": "b11", "title": "Attentron: Few-Shot Text-to-Speech Utilizing Attention-Based Variable-Length Embedding", "year": "2020" }, { "authors": "Francois Chollet", "journal": "", "ref_id": "b12", "title": "Xception: Deep Learning With Depthwise Separable Convolutions", "year": "2017" }, { "authors": "Komal Chugh; Parul Gupta; Abhinav Dhall; Ramanathan Subramanian", "journal": "", "ref_id": "b13", "title": "Not made for each other-Audio-Visual Dissonance-based Deepfake Detection and Localization", "year": "2020" }, { "authors": "Son Joon; Arsha Chung; Andrew Nagrani; Zisserman", "journal": "ISCA", "ref_id": "b14", "title": 
"VoxCeleb2: Deep Speaker Recognition", "year": "2018" }, { "authors": "Alessandro Davide; Nicola Coccomini; Claudio Messina; Fabrizio Gennaro; Falchi", "journal": "Springer International Publishing", "ref_id": "b15", "title": "Combining EfficientNet and Vision Transformers for Video Deepfake Detection", "year": "2022" }, { "authors": "Brian Dolhansky; Joanna Bitton; Ben Pflaum; Jikuo Lu; Russ Howes; Menglin Wang; Cristian Canton Ferrer", "journal": "", "ref_id": "b16", "title": "The DeepFake Detection Challenge (DFDC) Dataset", "year": "2020" }, { "authors": "Alexandre Défossez; Gabriel Synnaeve; Yossi Adi", "journal": "ISCA", "ref_id": "b17", "title": "Real Time Speech Enhancement in the Waveform Domain", "year": "2020" }, { "authors": "Songwei Ge; Thomas Hayes; Harry Yang; Xi Yin; Guan Pang; David Jacobs; Jia-Bin Huang; Devi Parikh", "journal": "Springer Nature Switzerland", "ref_id": "b18", "title": "Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive Transformer", "year": "2022" }, { "authors": "Yudong Guo; Keyu Chen; Sen Liang; Yong-Jin Liu; Hujun Bao; Juyong Zhang", "journal": "", "ref_id": "b19", "title": "AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis", "year": "2021" }, { "authors": "Yinan He; Bei Gan; Siyu Chen; Yichun Zhou; Guojun Yin; Luchuan Song; Lu Sheng; Jing Shao; Ziwei Liu", "journal": "", "ref_id": "b20", "title": "ForgeryNet: A Versatile Benchmark for Comprehensive Forgery Analysis", "year": "2021" }, { "authors": "Young-Jin Heo; Woon-Ha Yeo; Byung-Gyu Kim", "journal": "Applied Intelligence", "ref_id": "b21", "title": "Deep-Fake detection algorithm based on improved vision transformer", "year": "2023" }, { "authors": "Martin Heusel; Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "", "ref_id": "b22", "title": "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium", "year": "" }, { "authors": "Xinya Ji; Hang Zhou; Kaisiyuan Wang; Qianyi Wu; Wayne Wu; Feng Xu; Xun Cao", "journal": "Association for Computing Machinery", "ref_id": "b23", "title": "EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model", "year": "2022" }, { "authors": "Ye Jia; Yu Zhang; Ron J Weiss; Quan Wang; Jonathan Shen; Fei Ren; Zhifeng Chen; Patrick Nguyen; Ruoming Pang; Ignacio Lopez Moreno; Yonghui Wu", "journal": "", "ref_id": "b24", "title": "Transfer learning from speaker verification to multispeaker text-to-speech synthesis", "year": "2018" }, { "authors": "Liming Jiang; Ren Li; Wayne Wu; Chen Qian; Chen Change Loy", "journal": "", "ref_id": "b25", "title": "DeeperForensics-1.0: A Large-Scale Dataset for Real-World Face Forgery Detection", "year": "2020" }, { "authors": "Ziyue Jiang; Jinglin Liu; Yi Ren; Jinzheng He; Chen Zhang; Zhenhui Ye; Pengfei Wei; Chunfeng Wang; Xiang Yin; Zejun Ma; Zhou Zhao", "journal": "", "ref_id": "b26", "title": "Mega-TTS 2: Zero-Shot Textto-Speech with Arbitrary Length Speech Prompts", "year": "2023" }, { "authors": "Ziyue Jiang; Yi Ren; Zhenhui Ye; Jinglin Liu; Chen Zhang; Qian Yang; Shengpeng Ji; Rongjie Huang; Chunfeng Wang; Xiang Yin; Zejun Ma; Zhou Zhao", "journal": "", "ref_id": "b27", "title": "Mega-TTS: Zero-Shot Text-to-Speech at Scale with Intrinsic Inductive Bias", "year": "2023" }, { "authors": "Hasam Khalid; Shahroz Tariq; Simon S Woo", "journal": "", "ref_id": "b28", "title": "FakeAVCeleb: A Novel Audio-Video Multimodal Deepfake Dataset", "year": "2005" }, { "authors": "Kevin Kilgour; Mauricio Zuluaga; Dominik Roblek; Matthew 
Sharifi", "journal": "", "ref_id": "b29", "title": "Fr\\'echet Audio Distance: A Metric for Evaluating Music Enhancement Algorithms", "year": "2019" }, { "authors": "Jaehyeon Kim; Jungil Kong; Juhee Son", "journal": "PMLR", "ref_id": "b30", "title": "Conditional Variational Autoencoder with Adversarial Learning for Endto-End Text-to-Speech", "year": "2021" }, { "authors": "Pavel Korshunov; Sebastien Marcel", "journal": "", "ref_id": "b31", "title": "DeepFakes: a New Threat to Face Recognition? Assessment and Detection", "year": "2018" }, { "authors": "Patrick Kwon; Jaeseong You; Gyuhyeon Nam; Sungwoo Park; Gyeongsu Chae", "journal": "", "ref_id": "b32", "title": "KoDF: A Large-Scale Korean DeepFake Detection Dataset", "year": "2021" }, { "authors": "Yuezun Li; Siwei Lyu", "journal": "", "ref_id": "b33", "title": "Exposing DeepFake Videos By Detecting Face Warping Artifacts", "year": "2019" }, { "authors": "Yuezun Li; Xin Yang; Pu Sun; Honggang Qi; Siwei Lyu", "journal": "", "ref_id": "b34", "title": "Celeb-DF: A Large-Scale Challenging Dataset for DeepFake Forensics", "year": "2020" }, { "authors": "Xuechen Liu; Xin Wang; Md Sahidullah; Jose Patino; Héctor Delgado; Tomi Kinnunen; Massimiliano Todisco; Junichi Yamagishi; Nicholas Evans; Andreas Nautsch; Kong Aik; Lee ", "journal": "Conference Name: IEEE/ACM Transactions on Audio, Speech, and Language Processing", "ref_id": "b35", "title": "ASVspoof 2021: Towards Spoofed and Deepfake Speech Detection in the Wild", "year": "2023" }, { "authors": "Kartik Narayan; Harsh Agarwal; Kartik Thakral; Surbhi Mittal; Mayank Vatsa; Richa Singh", "journal": "", "ref_id": "b36", "title": "DF-Platter: Multi-Face Heterogeneous Deepfake Dataset", "year": "2023" }, { "authors": "Dufou Nick; Jigsaw Andrew", "journal": "", "ref_id": "b37", "title": "Contributing Data to Deepfake Detection Research", "year": "2019" }, { "authors": "Daisuke Niizumi; Daiki Takeuchi; Yasunori Ohishi; Noboru Harada; Kunio Kashino", "journal": "", "ref_id": "b38", "title": "BYOL for Audio: Self-Supervised Learning for General-Purpose Audio Representation", "year": "2021" }, { "authors": "Alexis Plaquet; Hervé Bredin", "journal": "ISCA", "ref_id": "b39", "title": "Powerset multi-class cross entropy loss for neural speaker diarization", "year": "2023" }, { "authors": "Rudrabha K R Prajwal; Mukhopadhyay; P Vinay; C V Namboodiri; Jawahar", "journal": "Association for Computing Machinery", "ref_id": "b40", "title": "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", "year": "2020" }, { "authors": "Alec Radford; Jong Wook Kim; Tao Xu; Greg Brockman; Christine Mcleavey; Ilya Sutskever", "journal": "PMLR", "ref_id": "b41", "title": "Robust Speech Recognition via Large-Scale Weak Supervision", "year": "2023" }, { "authors": "Andreas Rossler; Davide Cozzolino; Luisa Verdoliva; Christian Riess; Justus Thies; Matthias Niessner", "journal": "", "ref_id": "b42", "title": "FaceForen-sics++: Learning to Detect Manipulated Facial Images", "year": "2019" }, { "authors": "Kai Shen; Zeqian Ju; Xu Tan; Yanqing Liu; Yichong Leng; Lei He; Tao Qin; Sheng Zhao; Jiang Bian", "journal": "", "ref_id": "b43", "title": "NaturalSpeech 2: Latent Diffusion Models are Natural and Zero-Shot Speech and Singing Synthesizers", "year": "2023" }, { "authors": "Shuai Shen; Wenliang Zhao; Zibin Meng; Wanhua Li; Zheng Zhu; Jie Zhou; Jiwen Lu", "journal": "", "ref_id": "b44", "title": "DiffTalk: Crafting Diffusion Models for Generalized Audio-Driven Portraits Animation", "year": "2023" }, { "authors": 
"Dingfeng Shi; Yujie Zhong; Qiong Cao; Lin Ma; Jia Li; Dacheng Tao", "journal": "", "ref_id": "b45", "title": "TriDet: Temporal Action Detection With Relative Boundary Modeling", "year": "2023" }, { "authors": "Uriel Singer; Adam Polyak; Thomas Hayes; Xi Yin; Jie An; Songyang Zhang; Qiyuan Hu; Harry Yang; Oron Ashual; Oran Gafni; Devi Parikh; Sonal Gupta; Yaniv Taigman", "journal": "", "ref_id": "b46", "title": "Make-A-Video: Text-to-Video Generation without Text-Video Data", "year": "2022" }, { "authors": "Suramya Tomar", "journal": "Linux Journal", "ref_id": "b47", "title": "Converting video formats with FFmpeg", "year": "2006" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b48", "title": "LLaMA: Open and Efficient Foundation Language Models", "year": "2023" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale; Dan Bikel; Lukas Blecher; Cristian Canton Ferrer; Moya Chen; Guillem Cucurull; David Esiobu; Jude Fernandes; Jeremy Fu; Wenyin Fu; Brian Fuller; Cynthia Gao; Vedanuj Goswami; Naman Goyal; Anthony Hartshorn; Saghar Hosseini; Rui Hou; Hakan Inan; Marcin Kardas; Viktor Kerkez; Madian Khabsa; Isabel Kloumann; Artem Korenev; Punit Singh Koura; Marie-Anne Lachaux; Thibaut Lavril; Jenya Lee; Diana Liskovich; Yinghai Lu; Yuning Mao; Xavier Martinet; Todor Mihaylov; Pushkar Mishra; Igor Molybog; Yixin Nie; Andrew Poulton; Jeremy Reizenstein; Rashi Rungta; Kalyan Saladi; Alan Schelten; Ruan Silva; Eric Michael Smith; Ranjan Subramanian; Ellen Xiaoqing; Binh Tan; Ross Tang; Adina Taylor; Jian Williams; Puxin Xiang Kuan; Zheng Xu; Iliyan Yan; Yuchen Zarov; Angela Zhang; Melanie Fan; Sharan Kambadur; Aurelien Narang; Robert Rodriguez; Sergey Stojnic; Thomas Edunov; Scialom", "journal": "", "ref_id": "b49", "title": "Llama 2: Open Foundation and Fine-Tuned Chat Models", "year": "2023" }, { "authors": "Li Wan; Quan Wang; Alan Papir; Ignacio Lopez Moreno", "journal": "", "ref_id": "b50", "title": "Generalized End-to-End Loss for Speaker Verification", "year": "2018" }, { "authors": "Junke Wang; Zuxuan Wu; Wenhao Ouyang; Xintong Han; Jingjing Chen; Yu-Gang Jiang; Ser-Nam Li", "journal": "Association for Computing Machinery", "ref_id": "b51", "title": "M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection", "year": "2022" }, { "authors": "Jiadong Wang; Xinyuan Qian; Malu Zhang; Robby T Tan; Haizhou Li", "journal": "", "ref_id": "b52", "title": "Seeing What You Said: Talking Face Generation Guided by a Lip Reading Expert", "year": "2023" }, { "authors": "Limin Wang; Bingkun Huang; Zhiyu Zhao; Zhan Tong; Yinan He; Yi Wang; Yali Wang; Yu Qiao", "journal": "", "ref_id": "b53", "title": "VideoMAE V2: Scaling Video Masked Autoencoders With Dual Masking", "year": "2023" }, { "authors": "Yi Wang; Kunchang Li; Yizhuo Li; Yinan He; Bingkun Huang; Zhiyu Zhao; Hongjie Zhang; Jilan Xu; Yi Liu; Zun Wang; Sen Xing; Guo Chen; Junting Pan; Jiashuo Yu; Yali Wang; Limin Wang; Yu Qiao", "journal": "", "ref_id": "b54", "title": "InternVideo: General Video Foundation Models via Generative and Discriminative Learning", "year": "2022" }, { "authors": "A C Zhou Wang; H R Bovik; E P Sheikh; Simoncelli", "journal": "", "ref_id": "b55", "title": "Image quality assessment: from 
error visibility to structural similarity", "year": "2004" }, { "authors": "Jay Zhangjie Wu; Yixiao Ge; Xintao Wang; Stan Weixian Lei; Yuchao Gu; Yufei Shi; Wynne Hsu; Ying Shan; Xiaohu Qie; Mike Zheng Shou", "journal": "", "ref_id": "b56", "title": "Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation", "year": "2023" }, { "authors": "Yusong Wu; Ke Chen; Tianyu Zhang; Yuchen Hui; Taylor Berg-Kirkpatrick; Shlomo Dubnov", "journal": "", "ref_id": "b57", "title": "Large-Scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation", "year": "2023" }, { "authors": "Xin Yang; Yuezun Li; Siwei Lyu", "journal": "", "ref_id": "b58", "title": "Exposing Deep Fakes Using Inconsistent Head Poses", "year": "2019" }, { "authors": "Jiangyan Yi; Ruibo Fu; Jianhua Tao; Shuai Nie; Haoxin Ma; Chenglong Wang; Tao Wang; Zhengkun Tian; Ye Bai; Cunhang Fan; Shan Liang; Shiming Wang; Shuai Zhang; Xinrui Yan; Le Xu; Zhengqi Wen; Haizhou Li; Zheng Lian; Bin Liu", "journal": "", "ref_id": "b59", "title": "ADD 2022: the First Audio Deep Synthesis Detection Challenge", "year": "2022" }, { "authors": "Chen-Lin Zhang; Jianxin Wu; Yin Li", "journal": "Springer Nature Switzerland", "ref_id": "b60", "title": "ActionFormer: Localizing Moments of Actions with Transformers", "year": "2022" }, { "authors": "Hang Zhang; Xin Li; Lidong Bing", "journal": "", "ref_id": "b61", "title": "Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding", "year": "2023" }, { "authors": "Rui Zhang; Hongxia Wang; Mingshan Du; Hanqing Liu; Yang Zhou; Qiang Zeng", "journal": "Association for Computing Machinery", "ref_id": "b62", "title": "UMMAFormer: A Universal Multimodal-adaptive Transformer Framework for Temporal Forgery Localization", "year": "2023" }, { "authors": "Yue Zhao; Yuanjun Xiong; Limin Wang; Zhirong Wu; Xiaoou Tang; Dahua Lin", "journal": "", "ref_id": "b63", "title": "Temporal Action Detection With Structured Segment Networks", "year": "2017" }, { "authors": "Tianfei Zhou; Wenguan Wang; Zhiyuan Liang; Jianbing Shen", "journal": "", "ref_id": "b64", "title": "Face Forensics in the Wild", "year": "2021" }, { "authors": "Bojia Zi; Minghao Chang; Jingjing Chen; Xingjun Ma; Yu-Gang Jiang", "journal": "Association for Computing Machinery. 2 GT Human 1 Human 2 Human 3 EffecientViT BA-TFD BA-TFD+ UMMAFormer GT Human 1 Human 2 Human 3 EffecientViT BA-TFD BA-TFD+ UMMAFormer GT", "ref_id": "b65", "title": "WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection", "year": "2020" } ]
[]
10.18653/v1/2022.findings-emnlp.496
2023-11-26
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b47", "b36", "b30", "b7", "b35", "b26", "b2", "b42", "b3", "b14", "b1", "b22", "b33", "b36", "b34", "b23", "b24", "b11", "b13", "b2" ], "table_ref": [], "text": "Enhancing dialogue systems with emotional characteristics is a hotspot in building humanlike chatbots. Recently, empathetic dialogue and emotion support conversation have taken center stage within the research landscape, which plays essential roles in establishing and maintaining harmonious social connections. Due to the inherent complexity in dialogue understanding, several works concentrate on infusing commonsense knowledge to assist in comprehending implicit psychology information and exploring the potential causality (Zhou et al., 2023;Zhao et al., 2023), leading to more sensible and comprehensive responses (Tu et al., 2022;Peng et al., 2022). In other words, introducing commonsense knowledge into dialogue systems can be regarded as a kind of intermediate thought that rationally reasons the interlocutor's mental state and intent.\nThe remarkable capabilities of Large Language Models (LLMs) (Chowdhery et al., 2023;Touvron et al., 2023) in dialogue understanding and commonsense deducing have ignited a new zeitgeist for building a powerful dialogue system (OpenAI, 2022(OpenAI, , 2023)). These scaled powerful models are able to achieve outstanding performance without fine-tuning in specific tasks (Brown et al., 2020) and utilize intermediate thought to solve complex problems (Wei et al., 2023). Therefore, LLMs can also be enhanced by the explicit commonsense reasoning to respond reasonably (Chae et al., 2023). Wang et al. (2023a) proposes a novel framework that incorporates the user's status as an intermediate step in the reasoning process to instruct LLMs to generate responses.\nUnfortunately, current methods fall short of achieving empathetic responses and smooth emo-tional support in conversations. Earlier efforts mostly depend on commonsense Knowledge Bases (KBs) such as ATOMIC (Hwang et al., 2021) or neural KBs like COMET (Bosselut et al., 2019), which often produce contextually irrelevant inferences (Li et al., 2020;Sabour et al., 2021;Tu et al., 2022). This results in a struggle to comprehend implicit causality in multi-turn dialogues. Meanwhile, Shen et al. (2022) employs commonsense reasoning for a complete and static dialogue, which contains a complete context and even dialogue future. This limitation renders it unsuitable for realtime conversation analysis due to its inability to adapt to dynamic discussions. Moreover, Wang et al. (2023a) prompts LLMs to iteratively deduce the psychology and emotional state information of the interlocutor solely based on dialogue history as a chain of thoughts to improve the quality of their responses.\nGiven that these commonsense inferences are derived solely from dialogue history, contemporary approaches neglect to account for the potential trend of interlocutors' intent and possible future developments in the conversation. As demonstrated in Figure 1, the commonsense extracted by COMET is entirely inferred based on the Speaker's last utterance, mainly focusing on the word \"together\", which is irrelevant to the whole dialogue context and misunderstood the emotional state of the Speaker. Similarly, CICERO deduces disadvantaged commonsense inference which is unrelated to the response and even misunderstands the participants of the background information, leading to dull inferences. 
Delving into this phenomenon, we argue that the primary factors contributing to this issue stem from the boundless scope of commonsense inference, alongside the fact that a single dialogue context frequently has multiple distinct responses that can appropriately answer the given utterance (Liu et al., 2022), validating that the dialogue history might not encompass enough information to generate the intended response.\nTo this end, we introduce the Prophetic Commonsense Inference, a paradigm for the dynamic inference of commonsense knowledge that is closely aligned with the potential dialogue future. Considering potential responses in the future, we instruct models to deduce potential causal indicators from the prior dialogue history, as well as the mental state of interlocutors and possible intent shaping the forthcoming utterance. Specifically, we extract plausible commonsense knowledge from LLMs and instruct tunable Pre-trained Language Models (PLMs) deducing dialogue future solely based on previous dialogue. Fundamentally, these deduced inferences act as chain-of-thought prompts (CoT) that steer language models in comprehending intricate dialogue contexts. They furnish crucial information regarding potential emotional states, intentions, subsequent events, and the scope of dialogue context that can elicit the desired response in the conversation.\nExtensive experiments carried out on the EM-PATHETICDIALOGUES (Rashkin et al., 2019) and Emotion Support Conversation (Liu et al., 2021) datasets illustrate that this innovative dialogue commonsense inference paradigm effectively addresses the aforementioned issues, under multiple settings such as Parameter-efficient fine-tuning (He et al., 2022;Hu et al., 2022) and In-Context Learning (Brown et al., 2020). Our contributions are summarized as follows:\n• Our research addresses the insufficiency of commonsense inference, which results in the absence of foresight in empathetic response generation.\n• We propose a paradigm, that employs LLMs to empower a small tunable model to bridge the huge gap between mitigating dialogue history to dialogue replies.\n• We conduct extensive experiments and provide a detailed analysis to validate the effectiveness of our method under multiple settings, showing marked improvements in automated metrics as well as in evaluations by human and LLM assessors." }, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Overview", "publication_ref": [], "table_ref": [], "text": "For dialogue response generation, we use the notation θ to represent a dialogue model, while C = [u 1 , u 2 , ..., u t-1 ] stands for the context utterances and Z corresponds to commonsense knowledge. The objective is to forecast the forthcoming response u t for the dialogue context C in the t -1 turn: Incorporating both dialogue history and ground truth responses, the Prophet LLM deduces four categories of prophetic commonsense. These inferences serve as a guiding oracle, aiding LLaMA2 models in learning to infer from dialogue history alone. These trained models then assist in the commonsense inference process for generating content from data that has not been seen before.\nu t ∼ P θ (• | Z, C) (1)" }, { "figure_ref": [], "heading": "Categories of Commonsense Inference", "publication_ref": [ "b34", "b20", "b5", "b10", "b48", "b43" ], "table_ref": [], "text": "In our work, we utilize four categories of commonsense inferences for dialogue. 
The overarching goal is to enrich our understanding of the dialogue history and to provide a comprehensive forecast of some potential characteristics embedded within the generated response.\nCause What is the cause of the assistant to post the last utterance? We emphasize the crucial role of causality within the dialogue context. Similar to the approach outlined by Shen et al. (2022) and previous investigations (Li et al., 2022;Cheng et al., 2022), we delve into potential words or phrases that could lead to the desired response.\nSubsequent Event What will be the potential subsequent events involving the user that may occur after the assistant's last utterance? Conversations demonstrate a causal connection between past utterances to the ensuing responses. It is obvious that dialogues contain a cause-and-effect connection between the context and the target response. Following (Ghosal et al., 2022), we employ a language model to project potential scenarios that follow the dialogue history, which is a key factor in determining the assistant's response.\nEmotion reaction What is the emotional reaction of the assistant in their last utterance? Emotion is a fundamental element in human conversation (Zhou et al., 2018), acting as a natural means for individuals to express their feelings during dialogues. With explicit emotion traits, it is easier for chatbots to grasp a more profound understanding of the dialogue and anticipate the potential emotional content within the target response.\nIntention What is the assistant's intent to post the last utterance according to the emotion reaction of the user? Dialogue intention is a focal point in the realm of dialogue generation (Welivita and Pu, 2020). It comprises the underlying logic and objectives guiding the forthcoming conversation, thus forming a vital aspect in contextual understanding and response generation.\nThe above four categories of commonsense inference are all used in our paradigm, acting as intermediate reasoning steps for steering language models for better dialogue comprehension and more empathetic responses." }, { "figure_ref": [], "heading": "Proposed Paradigm", "publication_ref": [], "table_ref": [], "text": "In this section, we propose a novel paradigm named Prophetic Commonsense Inference, as demonstrated in Figure 2." }, { "figure_ref": [], "heading": "Prophetic Commonsense acquisition", "publication_ref": [], "table_ref": [], "text": "We carry out prophetic training by using ChatGPT (OpenAI, 2022) to generate prophetic inferences. The four categories of mentioned commonsense inference are initially deduced by powerful LLMs (in this case, ChatGPT) with foresight into the responses found in the training data.\nZ ′ ∼ LLM(C ′ ; u ′ t ) (2) In this context, C ′ = [u ′ 1 , u ′ 2 , ..., u ′ t-1 ]\nfunctions as an illustration of the dialogue history, with u ′ t indicating the response. The input template contains dialogue history and target response, the prompt template is shown in Appendix A.1. Z ′ indicates the four categories of commonsense inference generated by LLMs, C ′ , and u ′ t all come from training datasets." }, { "figure_ref": [], "heading": "Prophetic Commonsense Training", "publication_ref": [ "b13" ], "table_ref": [], "text": "In order to generate prophetic commonsense inferences based on dialogue context by ourselves, we subsequently utilize Supervised Fine-Tuning (SFT) to fine-tune dialogue prophets on tunable language models. 
Limited by computational resources, we apply LoRA-Tuning (Hu et al., 2022) with LLaMA2 7B models.
When conducting LoRA-Tuning, we carefully designed prompts as hints to guide these models to understand the purpose of performing commonsense inference. Similar to prompting LLMs to generate oracle commonsense inference, we first describe the aim of deducing a certain aspect of commonsense knowledge and give one example dialogue so that the tunable Language Models can implicitly grasp the required reasoning. Inspired by instruction tuning, the final input template consists of 1) task definition and instruction; 2) examples and answers; and 3) the dialogue context to be inferred. The detailed prompts are illustrated in Appendix A.2.
The training loss is the standard negative log-likelihood (NLL) loss on the commonsense knowledge inferred by LLMs:
$\mathcal{L}_{\mathrm{NLL}} = -\sum_{t=1}^{T} \log P(z'_t \mid C, z'_{<t})$ (3)
where $T$ is the length of the commonsense inference $Z' = [z'_1, \ldots, z'_T]$ generated by the powerful LLMs." }, { "figure_ref": [], "heading": "Commonsense Inference and Response Generation", "publication_ref": [ "b30" ], "table_ref": [], "text": "Following the training phase of the prophetic language models, we utilize these proficient Language Models for commonsense inference, represented as $\Psi$:
$Z_p \sim \Psi(C)$ (4)
where $C$ indicates the dialogue context and $Z_p$ denotes the prophetic commonsense inference generated by the LLaMA models trained in Sec. 3.2.
Notably, differing from the process outlined in Sec. 3.1, during both the training and inference phases these specialized Language Models are presented with input that encompasses solely the dialogue history. In other words, these tunable Language Models are trained to anticipate the dialogue's future, under the instruction of powerful LLMs that possess prior knowledge about the possible response. The template for the inference stage is consistent with the training phase.
With the prophetic commonsense inferences $Z_p$, we implement instruction-tuning (Wei et al., 2022) by appending the inferences deduced by the tunable Language Models to the dialogue context as a kind of Chain-of-Thought (CoT) prompt. This facilitates Language Models in conducting intermediate reasoning and generating responses. Specifically, we opt for LoRA-Tuning to carry out instruction-tuning with the LLaMA2-7B model. The prompt template is detailed in Appendix A.2. The target of the training stage is minimizing the NLL loss on the dialogue's ground-truth response:
$\mathcal{L}_{\mathrm{LoRA}} = -\sum_{g=1}^{G} \log P(y_g \mid X; Z_p, y_{<g})$ (5)
where $G$ stands for the length of the ground-truth response and $y_g$ specifies the $g$-th token in the target response $u_t$.
Furthermore, we leverage In-Context Learning to further explore the superiority of our paradigm in generating commonsense inference when fine-tuning is not feasible. Similar to supervised fine-tuning, we directly use the same template to prompt powerful LLMs to generate empathetic responses based on the dialogue context and the prophetic commonsense knowledge. As access to GPT-4 is limited for most individuals, including ourselves, we predominantly carry out our experiments using ChatGPT with the official OpenAI API."
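Both objectives above — Eq. (3) for prophet training and Eq. (5) for response generation — are token-level NLL losses computed only over the target tokens (the LLM-provided inference or the ground-truth response), with the prompt tokens masked out. A minimal PyTorch sketch of that masking, assuming Hugging Face-style label conventions and single-sequence tensors for brevity (not the authors' exact training code):

```python
import torch
import torch.nn.functional as F

def masked_nll(logits: torch.Tensor, input_ids: torch.Tensor, prompt_len: int) -> torch.Tensor:
    """NLL over target tokens only: positions belonging to the prompt are ignored.

    logits:    (seq_len, vocab_size) next-token predictions from the language model
    input_ids: (seq_len,) the concatenated prompt + target token ids
    """
    # Shift so that logits at position t predict token t+1.
    shift_logits = logits[:-1, :]
    shift_labels = input_ids[1:].clone()
    # Mask every label that still lies inside the prompt (Eqs. 3/5 sum over target tokens only).
    shift_labels[: prompt_len - 1] = -100
    return F.cross_entropy(shift_logits, shift_labels, ignore_index=-100)
```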
}, { "figure_ref": [], "heading": "Experimentals", "publication_ref": [ "b24", "b12" ], "table_ref": [], "text": "ESConv (Liu et al., 2021) comprises approximately 1,053 multi-turn dialogues, each with an average of 29.8 utterances, totaling 31,410 utterances. In the preprocessing stage, we segment the conversation examples every 10 utterances and subsequently perform a random dataset split into training, validation, and test sets at an 8:1:1 ratio. These dialogues involve two participants: a help seeker experiencing emotional distress and a professional supporter. It is required to smoothly provide emotion support following Hill (2009), with three stages to achieve emotion support: exploration, comfort, and action. The supporter initiates support by employing a specific strategy and subsequently responds.\nThe implementation details are demonstrated in Appendix B." }, { "figure_ref": [], "heading": "Baseline Methods", "publication_ref": [ "b37", "b19" ], "table_ref": [], "text": "To examine whether our proposed Prophetic Commonsense Inference (PCI) can improve the perfor-mance of LLMs, we compare the performance of LLaMA21 (7B) equipped with our Prophet Commonsense Inference with following strong baselines: CASE (Zhou et al., 2023): A model trained from scratch with the framework of vanilla transformers (Vaswani et al., 2017) on ED dataset. This work utilizes a conditional graph to represent all plausible causalities between the user's emotions and experience and probably in the future dialogue. MultiESC: Cheng et al. ( 2022) proposed a lookahead method to manage an optimal sequence of support strategies in the emoiton support scenarios. This model is a revised adaptation of the bart-large (Lewis et al., 2020). Explicit-CoT (Wang et al., 2023a): A step-by-step prompting mechanism to trace the status of users during the conversation, performing complex reasoning and planning before generating the final response. This method is primarily employed in In-Context-Learning. LLaMA2 Vanilla: In order to directly the effective- " }, { "figure_ref": [], "heading": "Automatic Evaluation of Generation Quality", "publication_ref": [ "b28", "b18", "b21", "b5" ], "table_ref": [ "tab_1", "tab_4", "tab_5", "tab_1", "tab_6" ], "text": "Our primary automatic metrics for evaluating generation quality include BLEU (Papineni et al., 2002), ROUGE-L (ROU-L.) (Kingma and Ba, 2015a), METEOR (MET) (Lavie and Agarwal, 2007), Distinct-n (Dist-n) (Li et al., 2016), andCIDEr (Vedantam et al., 2015). Additionally, we employ Embedding scores (both Average (Ave) and Extreme (Ext) Cosine Scores) scores to assess semantic similarity.\nAs illustrated in Table 1, integrating our proposed four Commonsense Inference into LLaMA2 outperforms utilizing other knowledge categories, excelling in most of the overlapped metrics for generating empathetic responses 2 . Moreover, our approach significantly boosts generation quality, surpassing all strong baselines by a considerable margin in average (Avg.), extreme (Ext.), and CIDEr scores. However, in terms of the ESConv dataset, MultiESC (Cheng et al., 2022) maintains superior- 2 The decline in the diversity score of PCI is attributed to the increased average length of the generated content. Longer sequences usually result in a higher frequency of repeating tokens within the content. ity in most overlapped scores. 
Although our proposed approach isn't notably dominant, it exhibits superior diversity compared to other forms of commonsense inference.\nSimilar to Supervised Fine-Tuning, we evaluate LLaMA2 models with varying knowledge categories under In-Context Learning (ICL) settings. The outcomes are detailed in Table 3 andTable 4. Our proposed approach exhibits similar diversity scores and significantly surpasses baseline models across all other metrics for empathetic conversations. Moreover, enhanced by our suggested commonsense inference, LLaMA2 models achieve comparable scores on a certain embedding-based (Ext.) and outstrip baselines on other diversitydriven and overlapping metrics. The superior performance observed under ICL settings demonstrates the effectiveness of our response-aligned paradigm and highlights the of employing commonsense knowledge as Chain-of-Thoughts in dialogue generation. Ablation Study. To assess the influence of different categories of commonsense knowledge on downstream generation, we systematically remove each of these four categories of commonsense knowledge to facilitate a performance comparison with CCI. The results of these comparisons can be found in Table 1 andTable 2. Excluding any of the four commonsense knowledge categories (without cause, without intent, without subs, and without emo) leads to a reduction in the quality of the generated response. Although some variants perform better than the complete method in particular metrics, the overall performance shows a notable decrease. As indicated in variants lacking a specific type of commonsense may excel in one metric, yet none of them manage to achieve a higher Dist-n score. This phenomenon demonstrates that while commonsense knowledge may enhance response diversity, it might concurrently result in lower similarity scores with ground truth (BLEU, ROUGE, METEOR, etc.), which is considered a trade-off in NLG tasks. Furthermore, the notable discrepancy between the variants (w/o intent) and our proposed complete method in both ED and ESConv datasets highlights the importance of predicting the potential intent of future responses, aligning with earlier studies (Chen et al., 2022;Wang et al., 2022). Additionally, we remove the prophetic perspective of the commonsense acquiring step in Sec. 3.1, and obtain a variant without prophetic ability. The overall performance sharply decreased in automatic metric, which reveals the significance of prophetic information in our paradigm." }, { "figure_ref": [], "heading": "Human Interactive Evaluation", "publication_ref": [ "b33" ], "table_ref": [ "tab_7" ], "text": "The human evaluation on the ED dataset adheres to methodologies established in prior studies (Sabour et al., 2021;Wang et al., 2022), conducting a human evaluation based on three aspects 1) Coherence (Coh.): which model's response is more coherent and relevant to the dialogue context? 2) Empathy (Emp.): which model has more appropriate emotional reactions, such as warmth, compassion, and concern? Informativeness (Inf.): which model's response incorporates more information related to the context? In the realm of ESConv, we consider four aspects: 1) Fluency (Flu.): Eval- uating the models based on the fluency of their responses. 2) Comforting (Com.): Assessing the models' skill in providing comfort. 3) Supportive (Sup.): Determining which model offers more supportive or helpful responses. 4) Overall (All.):\nAnalyzing which model provides more effective overall emotional support. 
We randomly select 100 dialogues and engage five professional annotators to evaluate the modelgenerated responses for both the ED and ESConv datasets. Considering the variation between different individuals, we conduct human A/B tests to directly compare our paradigm with other baselines. Annotators score the questionnaire of the response pairs to choose one of the responses in random order or select \"Tie\" when the quality of those provided sentences is difficult to distinguish. Fleiss's kappa is employed to analyze the evaluations. Table 5 clearly demonstrates CCI's significant advantage over CASE across all metrics. Compared to COMET and CICERO, CCI exhibits considerable progress, highlighting our approach's effectiveness in incorporating commonsense knowledge. These comparisons emphasize our paradigm's superior performance compared to the three baseline commonsense knowledge. Similarly, Table 6 results strongly highlight the effectiveness of PCI within emotion support scenarios. The considerable lead in the overall score over the baselines indicates a more substantial influence, demonstrating the greater supportiveness of the knowledge, acting as cues that guide LLaMA models to be more helpful.\nSpeaker: Hi, I feel so lonely sometimes because all my friends live in a different country.\nListener: Oh, I'm sure you are lonely. Maybe you can join some kind of club that lets you meet new friends?" }, { "figure_ref": [], "heading": "Dialogue Context", "publication_ref": [], "table_ref": [], "text": "Speaker: I was thinking about it! I wanted to join a group for local moms.\nResponse: That's a good idea! This way you can also meet friends for yourself, but also maybe meet new friends for your children to hang out with while you do with their moms! LLaMA2 + COMET That would be fab! Do you live in a big city or a small town? LLaMA2 + CICERO That would be great, I'm sure you will have great luck! Subsequent events: The listener is likely to suggest specific activities or events that the speaker can participate in to meet new friends, showing a proactive and helpful approach to the conversation." }, { "figure_ref": [], "heading": "Commonsense Knowledge", "publication_ref": [], "table_ref": [], "text": "Emoiton: The speaker feels hopeful and appreciates the listener's suggestion to join a group for local moms, as it aligns with their desire to meet new friends.\nCause: The listener is motivated by empathy and the desire to offer practical solutions, encouraging the speaker to pursue social connections .\nIntent: To provide encouragement to the speaker, acknowledging the potential benefits of joining a group for local moms and expressing hope that it will lead to positive outcomes for both the speaker and their children." }, { "figure_ref": [], "heading": "LLaMA2 + PCI (Ours)", "publication_ref": [], "table_ref": [], "text": "That would be a great idea. You can make friends for yourself and for your children.\nTable 7: A case containing LLaMA's generated responses that were enhanced through our inference approach and compared to standard baselines. The words relating to commonsense knowledge are highlighted in red, while phrases in red signify the connection with knowledge and dialogue history." }, { "figure_ref": [], "heading": "LLMs-based Evaluation", "publication_ref": [ "b25", "b6" ], "table_ref": [ "tab_10", "tab_11" ], "text": "We randomly selected 1000 data from both ED and ESConv datasets to perform G-Eval evaluation (Liu et al., 2023;Chiang and yi Lee, 2023). 
The details of the evaluation are provided in Appendix C.\nCalculating the average weighted score over the sampled data, the comparison results are shown in Table 8 and Table 9: our paradigm outperforms all strong baselines of commonsense inference in all aspects." }, { "figure_ref": [], "heading": "Case Study", "publication_ref": [], "table_ref": [], "text": "To better evaluate the performance of response generation, we selected examples generated by our proposed paradigm and the baselines for comparison. The example in Table 7 demonstrates that baseline models using COMET and CICERO knowledge struggled to identify the future direction of the dialogue, resulting in responses that lacked coherence and empathy. Our Prophetic commonsense knowledge, however, concentrates on crucial information, such as the possibility of the speaker having regular interactions with children. The Prophetic LLaMA's red-highlighted words accurately identify this detail, leading to a more sensible and suggestive response." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b33", "b22", "b1", "b45", "b10", "b42", "b46", "b8", "b15", "b3", "b31", "b0", "b23" ], "table_ref": [], "text": "Commonsense knowledge is widely used to build dialogue systems. Sabour et al. (2021); Li et al. (2020) adopt COMET (Bosselut et al., 2019), a pre-trained language model, to generate commonsense inferences that retrieve implicit information from the dialogue context. However, several works claim that the knowledge introduced into these models might trigger logical conflicts due to the absence of harmonious knowledge selection (Yang et al., 2022;Wang et al., 2022). Ghosal et al. (2022) train language models to generate context-aware commonsense knowledge by conducting natural language generation (NLG) and multi-choice answer selection (MCQ) tasks, pushing the use of commonsense knowledge in dialogue research further.\nThe concept of 'Chain-of-thought (CoT),' as detailed by Wei et al. (2023), is widely adopted in prompt engineering to stimulate Large Language Models (LLMs) to address complex problems by guiding them through the reasoning process. Various works have emerged with the objective of improving different aspects of the initial reasoning process (Zhang et al., 2022;Wang et al., 2023b;Diao et al., 2023). Simultaneously, several researchers focus on dialogue inference and bind commonsense-based inference with CoT, utilizing intermediate reasoning for better response generation (Ishii et al., 2023;Chae et al., 2023).\nProphetic information is a key factor for language modeling and dialogue generation (Qi et al., 2020;Bao et al., 2022). Liu et al. (2022) regard the possible next utterance of a conversation as a key factor that ignites the ability of dialogue systems to respond appropriately.\nIn this paper, we present an innovative paradigm named Prophetic Commonsense Inference for empathetic and emotion support conversation. We underline the significance of future information and address the insufficiency of current commonsense inference. Extensive experiments validate the effectiveness of our method under multiple settings.\nIn future work, we aim to explore the adaptability of our approach to various open-domain dialogue datasets and will provide detailed interpretability experiments and analyses of our proposed methodology. 
A Detailed Prompts" }, { "figure_ref": [], "heading": "A.1 Prompts for Prophetic Commonsense acquisition", "publication_ref": [], "table_ref": [], "text": "The template input for prompting Large Language Models generating prophetic commonsense inference is as follows:\nGiven a dyadic dialogue clip between a listener and a speaker, the objective is to comprehend the dialogue and make inferences to identify the underlying cause of the latest utterance stated by the listener (the reason contributing to the utterance stated by the listener). I will provide an example of a conversation clip and the explanation of causes, which is as follows:\n(1)Speaker: Job interviews always make me sweat bullets, makes me uncomfortable in general to be looked at under a microscope like that.\n(2)Listener: Don't be nervous. Just be prepared.\n(3)Speaker: I feel like getting prepared and then having a curve ball thrown at you throws you off.\n(4)Listener: Yes but if you stay calm it will be ok.\nWhat is the cause of speaker to post the last utterance? Please make inference based on the utterances before the last utterance of the conversation.\nPlease generate the answer like this: Answer:\nThe cause of the listener's last utterance is to reassure and encourage the speaker, emphasizing the importance of staying calm despite unexpected challenges during a job interview. Now, generate one concise and relevant inference (no more than 40 words) of the cause of the last utterance. The conversation clip is: {context} What is the cause of speaker to post the last utterance?\nAnswer:" }, { "figure_ref": [], "heading": "A.2 Prompts for Prophet Training and Inference", "publication_ref": [], "table_ref": [], "text": "The prompt we designed as hints to guide tunable models to understand the purpose of performing commonsense inference is as follows:\n1) Task Definition and instruction: You are an expert in the theory of empathy and conversational contextual reasoning. Given a dyadic dialogue clip between a listener and a speaker, the objective is to comprehend the dialogue and make inferences to identify the underlying cause of the latest utterance stated by the listener (the reason contributing to the utterance stated by the listener).\n2) Example and Answers: I will provide an example of a conversation clip and the explanation of causes, which is as follows:\n{example} What is the cause of the speaker to post the last utterance? Please make inferences based on the utterances before the last utterance of the conversation.\nPlease generate the answer like this: Answer: {example answer}.\n3) Dialogue context to be inferred: Now, generate one concise and relevant inference (no more than 40 words) of the cause of the last utterance.\nThe conversation clip is: {context} Answer:\nAt the training stage, we append the oracle commonsense inference generated by powerful LLMs to the prompt above." }, { "figure_ref": [], "heading": "B Implementation Details", "publication_ref": [ "b44" ], "table_ref": [], "text": "For the implementation of LoRA-Tuning LLaMA2 7B models, we utilize the open-source Hugging Face transformers (Wolf et al., 2020). In terms of LoRA-Tuning, the LoRA's rank is set as 8, the alpha is 16, the dropout rate of LoRA is assigned to 0.05, and the target modules are Q and V . We set the learning rate to 3e-5 and training batch size to 16, train up to 15 epochs, and select the best checkpoints based on performance on the validation set under LoRA-Tuning and fine-tuning both. 
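For concreteness, a minimal sketch of this LoRA-Tuning setup is given below, assuming the Hugging Face transformers and peft libraries; it is not the authors' released code. The checkpoint name is illustrative, and mapping the Q and V target modules to LLaMA2's "q_proj"/"v_proj" projections is an assumption.

```python
# Minimal sketch of the reported LoRA configuration (rank 8, alpha 16, dropout 0.05,
# Q/V target modules). Checkpoint name and module names are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base = "meta-llama/Llama-2-7b-chat-hf"  # assumed LLaMA2-chat-7B checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # LoRA rank
    lora_alpha=16,                         # LoRA alpha
    lora_dropout=0.05,                     # LoRA dropout
    target_modules=["q_proj", "v_proj"],   # Q and V projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Training then proceeds with the hyperparameters reported above
# (learning rate 3e-5, batch size 16, up to 15 epochs), e.g. via transformers.Trainer.
```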
The whole model is optimized with the Adam (Kingma and Ba, 2015b) algorithm. The decoding temperature is set to 1.0 and a sampling decoding strategy is used for response generation at the inference stage. All of the experiments are performed on a single NVIDIA A800 GPU." }, { "figure_ref": [], "heading": "C Details of LLMs-based evaluation", "publication_ref": [ "b9", "b25", "b25", "b6", "b25" ], "table_ref": [ "tab_10", "tab_11", "tab_10", "tab_11" ], "text": "The absence of labor-free and practical evaluation metrics has been a persistent challenge within the field of NLP research. Thanks to the rise of LLMs, several studies have explored the utilization of LLMs in assessing content generated by neural models. Fu et al. (2023) propose a direct approach, using LLMs as reference-free evaluators for Natural Language Generation (NLG), viewing the evaluation process as a probability calculation. Moreover, Liu et al. (2023) and Chiang and yi Lee (2023) introduce a prompt-based framework for LLMs, ensuring adherence to the generated instructions and offering a more detailed continuous score by adjusting the discrete scores based on their respective token probabilities. We apply G-Eval (Liu et al., 2023;Chiang and yi Lee, 2023) to assess the Naturalness (Nat.) and Coherence (Coh.) of responses from baseline models that utilize commonsense knowledge in diverse ways. For task-specific requirements, we compare Empathy (Emp.) in the context of EMPATHETICDIALOGUES and Supportiveness (Sup.) for ESConv. As the token probabilities of ChatGPT (OpenAI, 2022) are unavailable, we set 'n = 20, temperature = 1, top_p = 1' to sample 20 times to estimate the token probabilities.\nStrictly following the rating strategy (Liu et al., 2023), we prompt gpt3.5-turbo-0613 to discretely assign 1 to 3 points to these generated responses. Specifically, we require the LLM to rate 1 point when the generated response totally fails to meet a certain aspect. A rating of 2 points means the response is acceptable and meets the requirement to some extent. For responses that fully meet the desired demands, the LLM is asked to give a 3-point rating.\nThe results of the average weighted score are demonstrated in Table 8 and Table 9: PCI outperforms all strong baselines of commonsense inference in all aspects.
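The weighted-score computation described above can be read as follows: since ChatGPT's token probabilities are unavailable, the 1-3 rating is sampled 20 times per response and each discrete score is weighted by its empirical frequency. The sketch below is an illustrative reading of that procedure (function and variable names are ours, not the authors' script).

```python
# Estimate the G-Eval weighted score from sampled discrete ratings (1-3), then
# average over the evaluation set. Sampling n = 20 ratings per response is assumed.
from collections import Counter

def weighted_geval_score(sampled_ratings):
    """sampled_ratings: list of integer ratings in {1, 2, 3} drawn from n samples."""
    counts = Counter(sampled_ratings)
    n = len(sampled_ratings)
    # Expected score under the empirical distribution of the sampled ratings.
    return sum(score * (count / n) for score, count in counts.items())

def corpus_score(per_response_samples):
    """per_response_samples: one list of sampled ratings per evaluated response."""
    scores = [weighted_geval_score(s) for s in per_response_samples]
    return sum(scores) / len(scores)

# Example: one response rated over 20 samples.
print(weighted_geval_score([3, 2, 3, 3, 2, 3, 3, 3, 2, 3, 3, 3, 2, 3, 3, 3, 3, 2, 3, 3]))
```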
The interest in Empathetic and Emotional Support conversations among the public has significantly increased. To offer more sensitive and understanding responses, leveraging commonsense knowledge has become a common strategy to better understand psychological aspects and causality. However, such commonsense inferences can be out of context and unable to predict upcoming dialogue themes, resulting in responses that lack coherence and empathy. To remedy this issue, we present Prophetic Commonsense Inference, an innovative paradigm for inferring commonsense knowledge. By harnessing the capabilities of Large Language Models in understanding dialogue and making commonsense deductions, we train tunable models to bridge the gap between past and potential future dialogues. Extensive experiments conducted on EMPATHETICDIALOGUES and Emotion Support Conversation show that equipping dialogue agents with our proposed prophetic commonsense inference significantly enhances the quality of their responses.
Enhancing Empathetic and Emotion Support Dialogue Generation with Prophetic Commonsense Inference
[ { "figure_caption": "don't know what to do, just broke up with my girlfriend, we were 8 years together. 🤵 Sorry to hear! do you have any idea about the break up? Did you think about it ?🤵Yes we decided together with our minds, and know I come home and feel so distant from the world.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: An example from the EMPATHETICDIA-LOGUES dataset shows that the commonsense reasoning provided by COMET and CICERO is fraught with drawbacks. Our proposed Prophetic Commonsense Inference precisely grasps the potential progress of the conversation and the listener's intent in the upcoming response.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Prophe' c Figure 2 :c2Figure2: The overview of our proposed Prophetic Commonsense Inference paradigm. Incorporating both dialogue history and ground truth responses, the Prophet LLM deduces four categories of prophetic commonsense. These inferences serve as a guiding oracle, aiding LLaMA2 models in learning to infer from dialogue history alone. These trained models then assist in the commonsense inference process for generating content from data that has not been seen before.", "figure_data": "", "figure_id": "fig_2", "figure_label": "c2", "figure_type": "figure" }, { "figure_caption": "Automatic Evaluation results on EMPATHETICDIALOGUES dataset. The version of LLaMA2 in our experiments is LLaMA2-chat-7B. The best results are highlighted with bold. \"*\" denotes that the improvement to the best baseline is statistically significant (t-test with p-value < 0.01).", "figure_data": "4.1 Experimental Setup4.1.1 DatasetWe conduct our experiments on two datasets: EM-PATHETICDIALOGUES (ED) and Emotion SupportConversation (ESConv). ED (Rashkin et al., 2019)is a vast multi-turn dialogue dataset encompassing25,000 empathetic conversations between a speakerand a listener. The speaker begins with a particular", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The In-Context-Learning results of ED datasets. The best results are highlighted with bold.", "figure_data": "BLEU-2/3/4Dist-1/2/3ROU_L. MET. Ave.Ext. CIDErExplicit-CoT 5.03/1.89/0.92 6.32/30.97/55.7814.999.27 89.76 42.434.92Vanilla5.06/2.01/0.93 6.43/31.39/56.3814.868.590.14 41.94.01+ COMET5.06/1.99/0.91 5.98/29.56/52.8914.879.44 90.66 42.984.14+ CICERO4.95/1.82/0.80 6.42/31.14/54.2414.979.190.6 42.564.15+ PCI5.04/2.04/0.96 6.48/31.96/56.6014.999.6590.741.94.25", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The In-Context-Learning results of ESConv datasets. The best results are highlighted with bold.", "figure_data": "ness of our proposed paradigm, we apply LLaMA2models which only respond based on dialogue con-text.LLaMA2 + COEMT: A LLaMA2 model en-hanced by external knowledge comes fromCOMET (Bosselut et al., 2019) which makes infer-ences based on the last utterance of context.LLaMA2 + CICIERO: A LLaMA2 modelequipped with contextualized commonsense infer-ence obtained from Shen et al. (2022).", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "it is evident that", "figure_data": "ComparisonsAspects Win Lose TieCoh.53.25.4 41.4PCI vs. CASEEmp.42.2 10.4 47.4Inf.46.45.4 48.2Coh.17.81369.2PCI vs. 
VanillaEmp.3016.6 53.4Inf.21.8 21.4 56.8Coh.17.8 14.8 67.2PCI vs. COMETEmp.25.2 21.2 53.6Inf.24.9 23.8 51.3Coh.17.56.5 75.5PCI vs. CICEROEmp.49.5 28.5 21.5Inf.40.5 26.5 32.5Table 5: Human A/B test (%) of EMPATHETICDIA-LOGUES. The inter-annotator agreement is evaluated byFleiss's Kappa (denoted as κ), where 0.4 < κ < 0.6indicates moderate agreement.", "figure_id": "tab_6", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The human A/B test results for ESConv (%).", "figure_data": "ComparisonsAspects Win Lose TieFlu.29.5 15.3 55.2PCI vs. MultiESCCom. Sup.42.6 19.9 37.5 45.7 16.6 37.7All.50.3 18.6 31.1Flu.28.2 20.4 51.4PCI vs. VanillaCom. Sup.28.5 20.3 51.2 32.5 29.5 38All.36.7 30.2 33.1Flu.23.5 17.2 59.3PCI vs. COMETCom. Sup.31.9 24.3 43.8 31.3 28.6 40.1All.38.7 29.9 31.4Flu.13.51076.5PCI vs. CICEROCom. Sup.51.5 40.1 8.4 51.3 38.8 9.9All.56.4 37.2 6.4All κ values fall between 0.4 and 0.6, suggesting mod-erate agreement.", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 730-739. AAAI Press.", "figure_data": "Jinfeng Zhou, Chujie Zheng, Bo Wang, Zheng Zhang,and Minlie Huang. 2023. CASE: aligning coarse-to-fine cognition and affection for empathetic responsegeneration. In Proceedings of the 61st Annual Meet-ing of the Association for Computational Linguis-tics (Volume 1: Long Papers), ACL 2023, Toronto,Canada, July 9-14, 2023, pages 8223-8237. Associa-tion for Computational Linguistics.", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "LLMs based Evaluation results on EPATHET-ICDIALOGUES (ED) and ESConv dataset under Supervised Fine-Tuning.", "figure_data": "EDESConvNat. Emp. Coh.Nat.Sup. Coh.Vanilla2.19 2.171 2.192 1.838 1.983 1.713COMET2.188 2.176 2.188 1.842 1.979 1.712CICERO2.126 1.793 2.186 1.841 1.793 1.71Explicit-CoT 2.189 1.792 2.124 1.841 1.982 1.716PCI2.191 2.176 2.191 1.846 1.984 1.717", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "LLMs based Evaluation results on EPATHET-ICDIALOGUES (ED) and ESConv dataset under Supervised In-Context Learning.", "figure_data": "", "figure_id": "tab_11", "figure_label": "9", "figure_type": "table" } ]
Lanrui Wang; Jiangnan Li; Chenxu Yang; Zheng Lin; Weiping Wang
[ { "authors": "Junwei Bao; Yifan Wang; Ying Jiangyong; Yeyun Gong; Jing Zhao; Youzheng Wu; Xiaodong He", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "P3LM: probabilistically permuted prophet language modeling for generative pre-training", "year": "2022-12-07" }, { "authors": "Antoine Bosselut; Hannah Rashkin; Maarten Sap; Chaitanya Malaviya; Asli Celikyilmaz; Yejin Choi", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "COMET: commonsense transformers for automatic knowledge graph construction", "year": "2019-07-28" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Hyungjoo Chae; Yongho Song; Kai Tzu Iunn; Taeyoon Ong; Minjin Kwon; Youngjae Kim; Dongha Yu; Dongyeop Lee; Jinyoung Kang; Yeo", "journal": "", "ref_id": "b3", "title": "Dialogue chain-of-thought distillation for commonsense-aware conversational agents", "year": "2023" }, { "authors": "Siheng Mao Yan Chen; Yujiu Li; Yang", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Emphi: Generating empathetic responses with humanlike intents", "year": "2022-07-10" }, { "authors": "Yi Cheng; Wenge Liu; Wenjie Li; Jiashuo Wang; Ruihui Zhao; Bang Liu; Xiaodan Liang; Yefeng Zheng", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Improving multi-turn emotional support dialogue generation with lookahead strategy planning", "year": "2022-12-07" }, { "authors": "Cheng-Han Chiang; Hung Yi; Lee ", "journal": "", "ref_id": "b6", "title": "A closer look into automatic evaluation using large language models", "year": "2023" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh; Kensen Shi; Sasha Tsvyashchenko; Joshua Maynez; Abhishek Rao; Parker Barnes; Yi Tay; Noam Shazeer; Emily Vinodkumar Prabhakaran; Nan Reif; Ben Du; Reiner Hutchinson; James Pope; Jacob Bradbury; Michael Austin; Guy Isard; Pengcheng Gur-Ari; Toju Yin; Anselm Duke; Sanjay Levskaya; Sunipa Ghemawat; Henryk Dev; Xavier Michalewski; Vedant Garcia; Kevin Misra; Liam Robinson; Denny Fedus; Daphne Zhou; David Ippolito; Hyeontaek Luan; Barret Lim; Alexander Zoph; Ryan Spiridonov; David Sepassi; Shivani Dohan; Mark Agrawal; Andrew M Omernick; Thanumalayan Dai; Marie Sankaranarayana Pillai; Aitor Pellat; Erica Lewkowycz; Rewon Moreira; Oleksandr Child; Katherine Polozov; Zongwei Lee; Xuezhi Zhou; Brennan Wang; Mark Saeta; Orhan Diaz; Michele Firat; Jason Catasta; Kathy Wei; Douglas Meier-Hellstern; Jeff Eck; Slav Dean; Noah Petrov; Fiedel", "journal": "J. Mach. Learn. 
Res", "ref_id": "b7", "title": "Palm: Scaling language modeling with pathways", "year": "2023" }, { "authors": "Shizhe Diao; Pengcheng Wang; Yong Lin; Tong Zhang", "journal": "", "ref_id": "b8", "title": "Active prompting with chain-ofthought for large language models", "year": "2023" }, { "authors": "Jinlan Fu; See-Kiong Ng; Zhengbao Jiang; Pengfei Liu", "journal": "", "ref_id": "b9", "title": "Gptscore: Evaluate as you desire", "year": "2023" }, { "authors": "Deepanway Ghosal; Siqi Shen; Navonil Majumder; Rada Mihalcea; Soujanya Poria", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "CICERO: A dataset for contextualized commonsense inference in dialogues", "year": "2022-05-22" }, { "authors": "Junxian He; Chunting Zhou; Xuezhe Ma; Taylor Berg-Kirkpatrick; Graham Neubig", "journal": "", "ref_id": "b11", "title": "Towards a unified view of parameter-efficient transfer learning", "year": "2022-04-25" }, { "authors": "Clara E Hill", "journal": "American Psychological Association", "ref_id": "b12", "title": "Helping skills: Facilitating, exploration, insight, and action", "year": "2009" }, { "authors": "Edward J Hu; Yelong Shen; Phillip Wallis; Zeyuan Allen-Zhu; Yuanzhi Li; Shean Wang; Lu Wang; Weizhu Chen", "journal": "", "ref_id": "b13", "title": "Lora: Low-rank adaptation of large language models", "year": "2022-04-25" }, { "authors": "Jena D Hwang; Chandra Bhagavatula; Le Ronan; Jeff Bras; Keisuke Da; Antoine Sakaguchi; Yejin Bosselut; Choi", "journal": "AAAI Press", "ref_id": "b14", "title": "comet-) atomic 2020: On symbolic and neural commonsense knowledge graphs", "year": "2021-02-02" }, { "authors": "Etsuko Ishii; Yan Xu; Bryan Wilie; Ziwei Ji; Holy Lovenia; Willy Chung; Pascale Fung", "journal": "", "ref_id": "b15", "title": "Contrastive learning for inference in dialogue", "year": "2023" }, { "authors": "P Diederik; Jimmy Kingma; ; Ba", "journal": "", "ref_id": "b16", "title": "Adam: A method for stochastic optimization", "year": "2015-05-07" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b17", "title": "Adam: A method for stochastic optimization", "year": "2015-05-07" }, { "authors": "Alon Lavie; Abhaya Agarwal", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "METEOR: an automatic metric for MT evaluation with high levels of correlation with human judgments", "year": "2007-06-23" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020-07-05" }, { "authors": "Jiangnan Li; Fandong Meng; Zheng Lin; Rui Liu; Peng Fu; Yanan Cao; Weiping Wang; Jie Zhou", "journal": "", "ref_id": "b20", "title": "Neutral utterances are also causes: Enhancing conversational causal emotion entailment with social commonsense knowledge", "year": "2022-07-29" }, { "authors": "Jiwei Li; Michel Galley; Chris Brockett; Jianfeng Gao; Bill Dolan", "journal": "The Association for Computational Linguistics", "ref_id": "b21", "title": "A diversity-promoting objective function for neural conversation models", "year": "2016-06-12" }, { "authors": "Qintong Li; Piji Li; Zhumin Chen; Zhaochun Ren", "journal": "", "ref_id": "b22", "title": "Empathetic dialogue generation via knowledge enhancing and emotion dependency modeling", 
"year": "2020" }, { "authors": "Chang Liu; Xu Tan; Chongyang Tao; Zhenxin Fu; Dongyan Zhao; Tie-Yan Liu; Rui Yan", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Prophetchat: Enhancing dialogue generation with simulation of future conversation", "year": "2022-05-22" }, { "authors": "Siyang Liu; Chujie Zheng; Orianna Demasi; Sahand Sabour; Yu Li; Zhou Yu; Yong Jiang; Minlie Huang", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Towards emotional support dialog systems", "year": "2021-08-01" }, { "authors": "Yang Liu; Dan Iter; Yichong Xu; Shuohang Wang; Ruochen Xu; Chenguang Zhu", "journal": "", "ref_id": "b25", "title": "G-eval: NLG evaluation using GPT-4 with better human alignment", "year": "2023" }, { "authors": " Openai", "journal": "", "ref_id": "b26", "title": "Chatgpt: Optimizing language models for dialogue", "year": "2022-01-10" }, { "authors": " Openai", "journal": "", "ref_id": "b27", "title": "GPT-4 technical report", "year": "2023" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b28", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002-07-06" }, { "authors": " ", "journal": "", "ref_id": "b29", "title": "", "year": "" }, { "authors": "Wei Peng; Yue Hu; Luxi Xing; Yuqiang Xie; Yajing Sun; Yunpeng Li", "journal": "", "ref_id": "b30", "title": "Control globally, understand locally: A global-to-local hierarchical graph network for emotional support conversation", "year": "2022-07-29" }, { "authors": "Weizhen Qi; Yu Yan; Yeyun Gong; Dayiheng Liu; Nan Duan; Jiusheng Chen; Ruofei Zhang; Ming Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training", "year": "2020-11" }, { "authors": "Eric Michael Hannah Rashkin; Margaret Smith; Y-Lan Li; Boureau", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Towards empathetic opendomain conversation models: A new benchmark and dataset", "year": "2019-07-28" }, { "authors": "Sahand Sabour; Chujie Zheng; Minlie Huang", "journal": "", "ref_id": "b33", "title": "CEM: commonsense-aware empathetic response generation", "year": "2021" }, { "authors": "Siqi Shen; Deepanway Ghosal; Navonil Majumder; Henry Lim; Rada Mihalcea; Soujanya Poria", "journal": "", "ref_id": "b34", "title": "Multiview contextual commonsense inference: A new dataset and task", "year": "2022" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurélien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b35", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Quan Tu; Yanran Li; Jianwei Cui; Bin Wang; Ji-Rong Wen; Rui Yan", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "MISC: A mixed strategyaware model integrating COMET for emotional support conversation", "year": "2022-05-22" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b37", "title": "Attention is all you need", "year": "2017-09" }, { "authors": "C Lawrence Ramakrishna Vedantam; Devi Zitnick; Parikh", "journal": "IEEE Computer Society", "ref_id": "b38", "title": "Cider: 
Consensus-based image description evaluation", "year": "2015-06-07" }, { "authors": "Hongru Wang; Rui Wang; Fei Mi; Zezhong Wang; Ruifeng Xu; Kam-Fai Wong", "journal": "", "ref_id": "b39", "title": "Chain-ofthought prompting for responding to in-depth dialogue questions with LLM", "year": "2023" }, { "authors": "Lanrui Wang; Jiangnan Li; Zheng Lin; Fandong Meng; Chenxu Yang; Weiping Wang; Jie Zhou", "journal": "", "ref_id": "b40", "title": "Empathetic dialogue generation via sensitive emotion recognition and sensible knowledge selection", "year": "2022" }, { "authors": "Xuezhi Wang; Jason Wei; Dale Schuurmans; Quoc Le; Ed Chi; Sharan Narang; Aakanksha Chowdhery; Denny Zhou; Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b41", "title": "Self-consistency improves chain of thought reasoning in language models", "year": "2022-04-25" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Brian Ichter; Fei Xia; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b42", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2023" }, { "authors": "Anuradha Welivita; Pearl Pu", "journal": "International Committee on Computational Linguistics", "ref_id": "b43", "title": "A taxonomy of empathetic response intents in human social conversations", "year": "2020-12-08" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander M Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Transformers: State-of-the-art natural language processing", "year": "2020-11-16" }, { "authors": "Chenxu Yang; Zheng Lin; Jiangnan Li; Fandong Meng; Weiping Wang; Lanrui Wang; Jie Zhou", "journal": "International Committee on Computational Linguistics", "ref_id": "b45", "title": "TAKE: topic-shift aware knowledge selection for dialogue generation", "year": "2022-10-12" }, { "authors": "Zhuosheng Zhang; Aston Zhang; Mu Li; Alex Smola", "journal": "", "ref_id": "b46", "title": "Automatic chain of thought prompting in large language models", "year": "2022" }, { "authors": "Weixiang Zhao; Yanyan Zhao; Xin Lu; Bing Qin", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Don't lose yourself! empathetic response generation via explicit self-other awareness", "year": "2023-07-09" }, { "authors": "Hao Zhou; Minlie Huang; Tianyang Zhang; Xiaoyan Zhu; Bing Liu", "journal": "", "ref_id": "b48", "title": "Emotional chatting machine: Emotional conversation generation with internal and external memory", "year": "2018" } ]
[ { "formula_coordinates": [ 2, 376.43, 763.57, 148.71, 10.77 ], "formula_id": "formula_0", "formula_text": "u t ∼ P θ (• | Z, C) (1)" }, { "formula_coordinates": [ 3, 374.24, 759.62, 150.9, 15.64 ], "formula_id": "formula_1", "formula_text": "Z ′ ∼ LLM(C ′ ; u ′ t ) (2) In this context, C ′ = [u ′ 1 , u ′ 2 , ..., u ′ t-1 ]" }, { "formula_coordinates": [ 4, 105.46, 546.3, 184.4, 33.58 ], "formula_id": "formula_2", "formula_text": "L N LL = - T t=1 log(P (z ′ t |C, z ′ <t ))(3)" }, { "formula_coordinates": [ 4, 209.7, 602.32, 66.79, 15.63 ], "formula_id": "formula_3", "formula_text": "′ = [z ′ 1 , ..., z ′ T ]." }, { "formula_coordinates": [ 4, 153.85, 715.54, 136.02, 10.63 ], "formula_id": "formula_4", "formula_text": "Z p ∼ Ψ(C)(4)" }, { "formula_coordinates": [ 4, 322.14, 395.16, 203.01, 33.58 ], "formula_id": "formula_5", "formula_text": "L LoRA = - G g=1 log(P (y g |X; Z p , y <g )) (5)" } ]
2023-12-10
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b2", "b20", "b21", "b22", "b37", "b38", "b39", "b12", "b23", "b42", "b25", "b26", "b38", "b26", "b71", "b48", "b25", "b71", "b35", "b24", "b35", "b11", "b24", "b47", "b50" ], "table_ref": [], "text": "The ubiquitous Web is becoming the ultimate data repository, capable of linking a broad spectrum of objects to form gigantic and complex graphs. The prevalence of graph data enables a series of downstream tasks for Web applications, ranging from online page/article classification to friend recommendation in social networks. Modern approaches for graph analysis generally resort to graph representation learning including graph embedding and graph neural networks (GNNs). Earlier graph embedding approaches [3], [21], [22] usually embed nodes on the graph into a low-• Xingtong Yu is with the University of Science and Technology of China, Hefei, Anhui 230052, China, and also with Singapore Management University, Singapore 188065 (email: yxt95@mail.ustc.edu.cn). Work was done as a visiting student at Singapore Management University. • Zhenghao Liu is with the University of Science and Technology of China, Hefei, Anhui 230052, China (email: salzh@mail.ustc.edu.cn). • Yuan Fang is with Singapore Management University, Singapore 188065 (email: yfang@smu.edu.sg). • Zemin Liu is with the National University of Singapore, Singapore 119077 (e-mail: liu.zemin@hotmail.com). • Sihong Chen is with the Tecent AI, China, Shenzhen, Guangdong 518063 (email: cshwhale@sina.com). • Xinming Zhang is with the University of Science and Technology of China, Hefei, Anhui 230052, China (email: xinming@ustc.edu.cn).\ndimensional space, in which the structural information such as the proximity between nodes can be captured [23]. More recently, GNNs [38], [39], [40], [13] have emerged as the state of the art for graph representation learning. Their key idea boils down to a message-passing framework, in which each node derives its representation by receiving and aggregating messages from its neighboring nodes recursively [24].\nquickly updated through a lightweight fine-tuning step on a smaller number of task-specific labels. However, the \"pre-train, fine-tune\" paradigm suffers from the problem of inconsistent objectives between pretraining and downstream tasks, resulting in suboptimal performance [43]. On one hand, the pre-training step aims to preserve various intrinsic graph properties such as node/edge features [26], [27], node connectivity/links [39], [27], [72], and local/global patterns [49], [26], [72]. On the other hand, the fine-tuning step aims to reduce the task loss, i.e., to fit the ground truth of the downstream task. The discrepancy between the two steps can be quite large. For example, pre-training may focus on learning the connectivity pattern between two nodes (i.e., related to link prediction), whereas fine-tuning could be dealing with a node or graph property (i.e., node classification or graph classification task).\nPrior work. To narrow the gap between pre-training and downstream tasks, prompting [36] has first been proposed for language models, which is a natural language instruction designed for a specific downstream task to \"prompt out\" the semantic relevance between the task and the language model. Meanwhile, the parameters of the pre-trained language model are frozen without any fine-tuning, as the prompt can \"pull\" the task toward the pre-trained model. 
Thus, prompting is also more efficient than fine-tuning, especially when the pre-trained model is huge. Recently, prompting has also been introduced to graph pre-training in the GPPT approach [25]. While the pioneering work has proposed a sophisticated design of pre-training and prompting, it can only be employed for the node classification task, lacking a universal treatment that appeals to different downstream tasks such as both node classification and graph classification.\nResearch problem and challenges. To address the divergence between graph pre-training and various downstream tasks, in this paper we investigate the design of pre-training and prompting for graph neural networks. In particular, we aim for a unified design that can suit different downstream tasks flexibly. This problem is non-trivial due to the following two challenges.\nFirstly, to enable effective knowledge transfer from the pre-training to a downstream task, it is desirable that the pre-training step preserves graph properties that are compatible with the given task. However, since different downstream tasks often have different objectives, how do we unify pre-training with various downstream tasks on graphs, so that a single pre-trained model can universally support different tasks? That is, we try to convert the pre-training task and downstream tasks to follow the same \"template\". Using pre-trained language models as an analogy, both their pre-training and downstream tasks can be formulated as masked language modeling. Secondly, under the unification framework, it is still important to identify the distinction between different downstream tasks, in order to attain task-specific optima. For pretrained language models, prompts in the form of natural language tokens or learnable word vectors have been designed to give different hints to different tasks, but it is less apparent what form prompts on graphs should take. Hence, how do we design prompts on graphs, so that they can guide different downstream tasks to effectively make use of the pre-trained model? Present work: GRAPHPROMPT. To address these challenges, we propose a novel graph pre-training and prompting framework, called GRAPHPROMPT, aiming to unify the pre-training and downstream tasks for GNNs. Drawing inspiration from the prompting strategy for pre-trained language models, GRAPHPROMPT capitalizes on a unified template to define the objectives for both pre-training and downstream tasks, thus bridging their gap. 
We further equip GRAPHPROMPT with task-specific learnable prompts, which guides the downstream task to exploit relevant knowledge from the pre-trained GNN model. The unified approach endows GRAPHPROMPT with the ability of working on limited supervision such as few-shot learning tasks.\nMore specifically, to address the first challenge of unification, we focus on graph topology, which is a key enabler of graph models. In particular, subgraph is a universal structure that can be leveraged for both node-and graphlevel tasks. At the node level, the information of a node can be enriched and represented by its contextual subgraph, i.e., a subgraph where the node resides in [48], [51]; at the graph level, the information of a graph is naturally represented by the maximum subgraph (i.e., the graph itself). Consequently, we unify both the node-and graph-level tasks, whether in pre-training or downstream, into the same template: the similarity calculation of (sub)graph1 representations. In this work, we adopt link prediction as the self-supervised pre-training task, given that links are readily available in any graph without additional annotation cost. Meanwhile, we focus on the popular node classification and graph classification as downstream tasks, which are node-and graph-level tasks, respectively. All these tasks can be cast as instances of learning subgraph similarity. On one hand, the link prediction task in pre-training boils down to the similarity between the contextual subgraphs of two nodes, as shown in Fig. 1(a). On the other hand, the downstream node or graph classification task boils down to the similarity between the target instance (a node's contextual subgraph or the whole graph, resp.) and the class prototypical subgraphs constructed from labeled data, as illustrated in Figs. 1(b) and (c). The unified template bridges the gap between the pretraining and different downstream tasks.\nToward the second challenge, we distinguish different downstream tasks by way of the READOUT operation on subgraphs. The READOUT operation is essentially an aggregation function to fuse node representations in the subgraph into a single subgraph representation. For instance, sum pooling, which sums the representations of all nodes in the subgraph, is a practical and popular scheme for READOUT. However, different downstream tasks can benefit from different aggregation schemes for their READOUT. In particular, node classification tends to focus on features that can contribute to the representation of the target node, while graph classification tends to focus on features associated with the graph class. Motivated by such differences, we propose a novel task-specific learnable prompt to guide the READOUT operation of each downstream task with an appropriate aggregation scheme. As shown in Fig. 1, the learnable prompt serves as the parameters of the READOUT operation of downstream tasks, and thus enables different aggregation functions on the subgraphs of different tasks. Hence, GRAPHPROMPT not only unifies the pre-training and downstream tasks into the same template based on subgraph similarity, but also recognizes the differences between various downstream tasks to guide task-specific objectives." 
}, { "figure_ref": [], "heading": "Extension to GRAPHPROMPT+.", "publication_ref": [ "b78", "b31", "b78", "b64", "b31", "b48", "b37", "b38", "b39", "b40" ], "table_ref": [], "text": "Although GRAPHPROMPT bridges the gap between pre-training and downstream tasks, we further propose a more generalized extension called GRAPHPROMPT+ to enhance the versatility of the unification framework. Recall that in GRAPHPROMPT, during the pre-training stage, a simple link prediction-based task is employed that naturally fits the subgraph similaritybased task template. Concurrently, in the prompt-tuning stage, the model integrates a learnable prompt vector only in the READOUT layer of the pre-trained graph model. While these simple designs are effective, we take the opportunity to further raise two significant research questions.\nFirst, toward a universal \"pre-train, prompt\" paradigm, it is essential to extend beyond a basic link prediction task in pre-training. While our proposed template can unify link prediction with typical downstream tasks on graph, researchers have proposed many other more advanced pretraining tasks on graphs, such as DGI [79] and GraphCL [32], which are able to capture more complex patterns from the pre-training graphs. Thus, to improve the compatibility of our framework with alternative pre-training tasks, a natural question arises: How can we unify a broader array of pretraining tasks within our framework? In addressing this research question, we show that how a standard contrastive learningbased pre-training task on graphs can be generalized to fit our proposed task template, using a generalized pre-training loss. The generalization anchors on the core idea of subgraph similarity within our task template, while preserves the established sampling strategies of the original pre-training task. Hence, the uniqueness of each pre-training task is still retained while ensuring compatibility with our task template. We further give generalized variants of popular graph pre-training tasks including DGI [79], InfoGraph [65], GraphCL [32], GCC [49].\nSecond, graph neural networks [38], [39], [40], [41] typically employ a hierarchical architecture in which each layer learns distinct knowledge in a certain aspect or at some resolution. For instance, the initial layers primarily processes raw node features, thereby focusing on the intrinsic properties of individual nodes. However, as the number of layers in the graph encoder increases, the receptive field has been progressively enlarged to process more extensive neighborhood data, thereby shifting the focus toward subgraph or graph-level knowledge. Not surprisingly, different downstream tasks may prioritize the knowledge encoded at different layers of the graph encoder. Consequently, it is important to generalize our prompt design to leverage the layer-wise hierarchical knowledge from the pre-trained encoder, beyond just the READOUT layer in GRAPHPROMPT. Thus, a second research question becomes apparent: How do we design prompts that can adapt diverse downstream tasks to the hierarchical knowledge within multiple layers of the pretrained graph encoder? To solve this question, we extend GRAPH-PROMPT with a generalized prompt design. More specifically, instead of a single prompt vector applied to the READOUT layer, we propose layer-wise prompts, a series of learnable prompts that are integrated into each layer of the graph encoder in parallel. 
The series of prompts modifies all layers of the graph encoder (including the input and hidden layers), so that each prompt vector is able to locate the layer-specific knowledge most relevant to a downstream task, such as node-level characteristics, and local or global structural patterns." }, { "figure_ref": [], "heading": "Contributions.", "publication_ref": [ "b0", "b1", "b3", "b19" ], "table_ref": [], "text": "To summarize, our contributions are fourfold. (1) We recognize the gap between graph pretraining and downstream tasks, and propose a unification framework GRAPHPROMPT and its extension GRAPH-PROMPT+based on subgraph similarity for both pre-training and downstream tasks, including both node and graph classification tasks. (2) We propose a novel prompting strategy for GRAPHPROMPT, hinging on a learnable prompt to actively guide downstream tasks using task-specific aggregation in the READOUT layer, in order to drive the downstream tasks to exploit the pre-trained model in a taskspecific manner.\n(3) We extend GRAPHPROMPT to GRAPHPROMPT+2 , which further unifies existing popular graph pre-training tasks for compatibility with our task template, and generalizes the prompt design to capture the hierarchical knowledge within each layer of the pre-trained graph encoder. (4) We conduct extensive experiments on five public datasets, and the results demonstrate the superior performance of GRAPHPROMPT and GRAPHPROMPT+in comparison to the state-of-the-art approaches.\nA preliminary version of this manuscript has been published as a conference paper in The ACM Web Conference 2023 [20]. We highlight the major changes as follows.\n(1) Introduction: We reorganized Section 1 to highlight the motivation, challenges, and insights for the extension from GRAPHPROMPT to GRAPHPROMPT+. (2) Methodology: We proposed the extension GRAPHPROMPT+ to allow more general pre-training tasks and prompt tuning in Section 5. The proposed GRAPHPROMPT+ not only broadens the scope of our task template to accommodate any standard contrastive pre-training task on graphs, but also enhances the extraction of hierarchical knowledge from the pre-trained graph encoder with layer-wise prompts.\n(3) Experiments: We conducted additional experiments to evaluate the extended framework GRAPHPROMPT+ in Section 7, which demonstrate significant improvements over GRAPHPROMPT." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b43", "b44", "b45", "b46", "b60", "b25", "b26", "b83", "b38", "b26", "b71", "b48", "b25", "b71", "b78", "b64", "b31", "b33", "b67", "b71", "b72", "b35", "b35", "b28", "b30", "b24", "b7", "b8", "b6", "b76", "b84", "b16", "b37", "b72" ], "table_ref": [], "text": "Graph pre-training. Inspired by the application of pretraining models in language [44], [45] and vision [46], [47] domains, graph pre-training [61] emerges as a powerful paradigm that leverages self-supervision on labelfree graphs to learn intrinsic graph properties. While the pre-training learns a task-agnostic prior, a relatively lightweight fine-tuning step is further employed to update the pre-trained weights to fit a given downstream task. 
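To make the idea of layer-wise prompts concrete, the following sketch attaches one learnable vector to each layer of a frozen, pre-trained multi-layer graph encoder, so that only the prompts are updated during prompt tuning. The element-wise (feature-wise) modulation used here is one simple way to let a prompt re-weight the embeddings entering a layer; it is an illustrative assumption rather than the exact operator of GRAPHPROMPT+.

```python
# Layer-wise prompts over a frozen graph encoder (illustrative sketch, PyTorch).
import torch
import torch.nn as nn

class LayerwisePrompts(nn.Module):
    def __init__(self, layer_dims):
        super().__init__()
        # One learnable prompt vector per encoder layer; layer_dims[l] is the
        # dimension of the embeddings entering layer l (raw features for l = 0).
        self.prompts = nn.ParameterList(
            [nn.Parameter(torch.ones(d)) for d in layer_dims]
        )

    def forward(self, frozen_layers, x, adj):
        # frozen_layers: list of pre-trained GNN layers with requires_grad disabled.
        h = x
        for layer, prompt in zip(frozen_layers, self.prompts):
            h = h * prompt      # element-wise modulation of this layer's input
            h = layer(h, adj)   # frozen, pre-trained message-passing layer
        return h
```

During prompt tuning, only the parameters in `self.prompts` receive gradients, which keeps the adaptation lightweight while exposing the layer-specific knowledge of the frozen encoder to the downstream task.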
Different pre-training approaches design different self-supervised tasks based on various graph properties such as node features [26], [27], links [84], [39], [27], [72], local or global patterns [49], [26], [72], local-global consistency [79], [65], and their combinations [32], [34], [68].\nHowever, the above approaches do not consider the gap between pre-training and downstream objectives, which limits their generalization ability to handle different tasks. Some recent studies recognize the importance of narrowing this gap. L2P-GNN [72] capitalizes on meta-learning [73] to simulate the fine-tuning step during pre-training. However, since the downstream tasks can still differ from the simulation task, the problem is not fundamentally addressed.\nPrompt-based learning. In other fields, as an alternative to fine-tuning, researchers turn to prompting [36], in which a task-specific prompt is used to cue the downstream tasks. Prompts can be either handcrafted [36] or learnable [29], [31]. On graph data, the study of prompting is still limited. One recent work called GPPT [25] capitalizes on a sophisticated design of learnable prompts on graphs, but it only works with node classification, lacking a unification effort to accommodate other downstream tasks like graph classification. VNT [8] utilizes a pre-trained graph transformer as graph encoder and introduces virtual nodes as soft prompts within the embedding space. However, same as GPPT, its application is only focused on the node classification task. On the other hand, ProG [9] and SGL-PT [7] both utilize a specific pre-training method for prompt-based learning on graphs across multiple downstream tasks. Additionally, HGPrompt [77] extends GraphPrompt to address heterogeneous graph learning. Despite this, they neither explore the unification of different pre-training tasks, nor exploit the hierarchical knowledge inherent in graph encoder. Besides, there is a model also named as GraphPrompt [85], but it considers an NLP task (biomedical entity normalization) on text data, where graph is only auxiliary. It employs the standard text prompt unified by masked language modeling, assisted by a relational graph to generate text templates, which is distinct from our work. Graph prompt has also been applied in recommendation systems. PGPRec [17] implements a novel approach in the realm of cross-domain recommendation by leveraging graph prompts analogous to text prompts. The key idea is to guide the recommendation process through personalized prompts for each user. These prompts are meticulously crafted based on the items relevant to the user, referred to as \"neighbouring items\". For a given item, its neighbouring items encompass those in the target domain sharing identical attributes.\nComparison to other settings. Our few-shot setting is different from other paradigms that also deal with label scarcity, including semi-supervised learning [38] and meta-learning [73]. In particular, semi-supervised learning cannot cope with novel classes not seen in training, while meta-learning requires a large volume of labeled data in their base classes for a meta-training phase, before they can handle few-shot tasks in testing." }, { "figure_ref": [], "heading": "PRELIMINARIES", "publication_ref": [], "table_ref": [], "text": "In this section, we give the problem definition and introduce the background of GNNs." }, { "figure_ref": [], "heading": "Problem Definition", "publication_ref": [], "table_ref": [], "text": "Graph. 
A graph can be defined as G = (V, E), where V is the set of nodes and E is the set of edges. Equivalently, the graph can be represented by an adjacency matrix A, such as\nA ij = 1 iff (v i , v j ) ∈ E, for any v i , v j ∈ V .\nWe also assume an input feature matrix of the nodes, X ∈ R |V |×d , is available. Let x i ∈ R d denote the feature vector of node v i ∈ V . In addition, we denote a set of graphs as\nG = {G 1 , G 2 , . . . , G N }.\nProblem. In this paper, we investigate the problem of graph pre-training and prompting. For the downstream tasks, we consider the popular node classification and graph classification tasks. For node classification on a graph G = (V, E), let C be the set of node classes with ℓ i ∈ C denoting the class label of node v i ∈ V . For graph classification on a set of graphs G, let C be the set of graph labels with L i ∈ C denoting the class label of graph G i ∈ G.\nIn particular, the downstream tasks are given limited supervision in a few-shot setting: for each class in the two tasks, only k labeled samples (i.e., nodes or graphs) are provided, known as k-shot classification." }, { "figure_ref": [], "heading": "Graph Neural Networks", "publication_ref": [ "b23", "b37", "b38", "b39", "b40" ], "table_ref": [], "text": "The success of GNNs boils down to the message-passing mechanism [24], in which each node receives and aggregates messages (i.e., features or embeddings) from its neighboring nodes to generate its own representation. This operation of neighborhood aggregation can be stacked in multiple layers to enable recursive message passing. In the l-th GNN layer, the embedding of node v, denoted by h l v , is calculated based on the embeddings in the previous layer, as follows.\nh l v = AGGR(h l-1 v , {h l-1 u : u ∈ N v }; θ l ),(1)\nwhere N v is the set of neighboring nodes of v, θ l is the learnable GNN parameters in layer l. AGGR(•) is the neighborhood aggregation function and can take various forms, ranging from the simple mean pooling [38], [39] to advanced neural networks such as neural attention [40] or multi-layer perceptrons [41]. Note that in the first layer, the input node embedding h 0 v can be initialized as the node features in X. The total learnable GNN parameters can be denoted as Θ = {θ 1 , θ 2 , . . .}. For brevity, we simply denote the output node representations of the last layer as h v .\nFor brevity of notation, GNNs can also be described in a alternative, matrix-based format. Consider the embedding matrix at the l-th layer, denoted as H l , in which each row, h l i , denotes the embedding vector of node v i . The embedding matrix at the l-th layer is calculated based on the embedding matrix from the previous, i.e., (l -1)-th, layer:\nH l = AGGR(H l-1 , A; θ l ).(2)\nThe initial embedding matrix H 0 is set to be the same as the input feature matrix, i.e., H 0 = X. After encoding through all the GNN layers, we simply denote the output embedding matrix as H. For easy of reference, we further abstract the multi-layer encoding process as follows.\nH = GRAPHENCODER(X, A; Θ).(3)" }, { "figure_ref": [], "heading": "PROPOSED APPROACH: GRAPHPROMPT", "publication_ref": [], "table_ref": [], "text": "In this section, we present GRAPHPROMPT, starting with a unification framework for common graph tasks. Then, we introduce the pre-training and downstream phases." 
}, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_1" ], "heading": "Unification Framework", "publication_ref": [], "table_ref": [], "text": "We first introduce the overall framework of GRAPHPROMPT in Fig. 2. Our framework is deployed on a set of label-free graphs shown in Fig. 2(a), for pre-training in Fig. 2(b). The pre-training adopts a link prediction task, which is selfsupervised without requiring extra annotation. Afterward, in Fig. 2(c), we capitalize on a learnable prompt to guide each downstream task, namely, node classification or graph classification, for task-specific exploitation of the pre-trained model. we explain how the framework supports a unified view of pre-training and downstream tasks below." }, { "figure_ref": [ "fig_1" ], "heading": "Instances as subgraphs.", "publication_ref": [ "b49", "b85", "b86", "b4", "b47", "b50", "b51", "b40" ], "table_ref": [], "text": "The key to the unification of pretraining and downstream tasks lies in finding a common template for the tasks. The task-specific prompt can then be further fused with the template of each downstream task, to distinguish the varying characteristics of different tasks.\nIn comparison to other fields such as visual and language processing, graph learning is uniquely characterized by the exploitation of graph topology. In particular, subgraph is a universal structure capable of expressing both node-and graph-level instances. On one hand, at the node level, every node resides in a local neighborhood, which in turn contextualizes the node [50], [86], [87]. The local neighborhood of a node v on a graph G = (V, E) is usually defined by a contextual subgraph\nS v = (V (S v ), E(S v )),\nwhere its set of nodes and edges are respectively given by (5) where d(u, v) gives the shortest distance between nodes u and v on the graph G, and δ is a predetermined threshold. That is, S v consists of nodes within δ hops from the node v, and the edges between those nodes. Thus, the contextual subgraph S v embodies not only the self-information of the node v, but also rich contextual information to complement the self-information [48], [51]. On the other hand, at the graph level, the maximum subgraph of a graph G, denoted S G , is the graph itself, i.e., S G = G. The maximum subgraph S G spontaneously embodies all information of G. In summary, subgraphs can be used to represent both node-and graph-level instances: Given an instance x which can either be a node or a graph (e.g., x = v or x = G), the subgraph S x offers a unified access to the information associated with x. Unified task template. Based on the above subgraph definitions for both node-and graph-level instances, we are ready to unify different tasks to follow a common template. Specifically, the link prediction task in pre-training and the downstream node and graph classification tasks can all be redefined as subgraph similarity learning. Let s x be the vector representation of the subgraph S x , and sim(•, •) be the cosine similarity function. As illustrated in Figs. 2(b) and (c), the three tasks can be mapped to the computation of subgraph similarity, which is formalized below.\nV (S v ) = {d(u, v) ≤ δ | u ∈ V }, and(4)\nE(S v ) = {(u, u ′ ) ∈ E | u ∈ V (S v ), u ′ ∈ V (S v )},\n• Link prediction: This is a node-level task. 
Given a graph G = (V, E) and a triplet of nodes (v, a, b) such that (v, a) ∈ E and (v, b) ∉ E, we shall have
sim(s_v, s_a) > sim(s_v, s_b). (6)
Intuitively, the contextual subgraph of v shall be more similar to that of a node linked to v than to that of an unlinked node.
• Node classification: This is also a node-level task. Consider a graph G = (V, E) with a set of node classes C, and a set of labeled nodes D = {(v_1, ℓ_1), (v_2, ℓ_2), . . .} where v_i ∈ V and ℓ_i is the corresponding label of v_i. As we adopt a k-shot setting, there are exactly k pairs of (v_i, ℓ_i = c) ∈ D for every class c ∈ C. For each class c ∈ C, a node class prototypical subgraph, represented by the mean embedding vector s̃_c, is constructed as the mean representation of the contextual subgraphs of the labeled nodes in that class:
s̃_c = (1/k) Σ_{(v_i,ℓ_i)∈D, ℓ_i=c} s_{v_i}. (7)
Then, given a node v_j not in the labeled set D, its class label ℓ_j shall be
ℓ_j = arg max_{c∈C} sim(s_{v_j}, s̃_c). (8)
Intuitively, a node shall belong to the class whose prototypical subgraph is the most similar to the node's contextual subgraph.
• Graph classification: This is a graph-level task. Consider a set of graphs G with a set of graph classes C, and a set of labeled graphs D = {(G_1, L_1), (G_2, L_2), . . .} where G_i ∈ G and L_i is the corresponding label of G_i. In the k-shot setting, there are exactly k pairs of (G_i, L_i = c) ∈ D for every class c ∈ C. Similar to node classification, for each class c ∈ C, we define a graph class prototypical subgraph, also represented by the mean embedding vector of the (sub)graphs in c:
s̃_c = (1/k) Σ_{(G_i,L_i)∈D, L_i=c} s_{G_i}. (9)
Then, given a graph G_j not in the labeled set D, its class label L_j shall be
L_j = arg max_{c∈C} sim(s_{G_j}, s̃_c). (10)
Intuitively, a graph shall belong to the class whose prototypical subgraph is the most similar to the graph itself.
It is worth noting that node and graph classification can be further condensed into a single set of notations. Let (x, y) be an annotated instance of graph data, i.e., x is either a node or a graph, and y ∈ Y is the class label of x among the set of classes Y. Then,
y = arg max_{c∈Y} sim(s_x, s̃_c). (11)
Finally, to materialize the common task template, we discuss how to learn the subgraph embedding vector s_x for the subgraph S_x. Given node representations h_v generated by a GNN (see Sect. 3.2), a standard approach of computing s_x is to employ a READOUT operation that aggregates the representations of the nodes in the subgraph S_x. That is,
s_x = READOUT({h_v : v ∈ V(S_x)}). (12)
The choice of the aggregation scheme for READOUT is flexible, including sum pooling and more advanced techniques [52], [41]. In our work, we simply use sum pooling; a minimal code sketch of this template appears below. In summary, the unification framework is enabled by the common task template of subgraph similarity learning, which lays the foundation of our pre-training and prompting strategies, as we will introduce in the following parts." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Pre-Training Phase", "publication_ref": [ "b38", "b26", "b27", "b71", "b54", "b53", "b52", "b12" ], "table_ref": [], "text": "As discussed earlier, our pre-training phase employs the link prediction task. Link prediction/generation is a popular and natural choice for pre-training [39], [27], [28], [72], as a vast number of links are readily available on large-scale graph data without extra annotation. In other words, the link prediction objective can be optimized on label-free graphs, such as those shown in Fig. 2(a), in a self-supervised manner.
Based on the common template defined in Sect. 4.1, the link prediction task is anchored on the similarity of the contextual subgraphs of two candidate nodes. Generally, the subgraphs of two positive (i.e., linked) candidates shall be more similar than those of negative (i.e., non-linked) candidates, as illustrated in Fig. 2(b). Subsequently, the pre-trained prior on subgraph similarity can be naturally transferred to node classification downstream, which shares a similar intuition: the subgraphs of nodes in the same class shall be more similar than those of nodes from different classes. On the other hand, the prior can also support graph classification downstream, as graph similarity is consistent with subgraph similarity not only in letter (as a graph is technically always a subgraph of itself), but also in spirit.
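Before elaborating on this intuition, we give a minimal sketch of the unified template of Sect. 4.1: extracting the δ-hop contextual subgraph of Eqs. (4)-(5), the sum-pooling READOUT of Eq. (12), and prototype-based classification as in Eqs. (7)-(11). This is an illustrative NumPy sketch under our own naming, not the released implementation.

```python
from collections import deque
import numpy as np

def contextual_subgraph(A, v, delta=1):
    """Node set of S_v in Eq. (4): all nodes within delta hops of v
    (the edges of S_v in Eq. (5) are those induced among these nodes)."""
    dist, queue = {v: 0}, deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] < delta:
            for w in np.flatnonzero(A[u]):
                if int(w) not in dist:
                    dist[int(w)] = dist[u] + 1
                    queue.append(int(w))
    return sorted(dist)

def readout(H, nodes):
    """Sum-pooling READOUT of Eq. (12): subgraph embedding s_x."""
    return H[nodes].sum(axis=0)

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def class_prototypes(subgraph_embs, labels):
    """Class prototypical subgraphs (Eqs. (7)/(9)): mean embedding per class."""
    return {c: np.mean([s for s, y in zip(subgraph_embs, labels) if y == c], axis=0)
            for c in set(labels)}

def predict(s_x, prototypes):
    """Eq. (11): assign the class whose prototypical subgraph is most similar to s_x."""
    return max(prototypes, key=lambda c: cosine(s_x, prototypes[c]))
```

The same pieces serve pre-training: for a sampled triplet (v, a, b), the cosine similarities between the contextual-subgraph embeddings s_v, s_a and s_b are contrasted under the temperature-scaled loss given in Eq. (13) below.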
The \"spirit\" here refers to the tendency that graphs sharing similar subgraphs are likely to be similar themselves, which means graph similarity can be translated into the similarity of the containing subgraphs [55], [54], [53].\nFormally, given a node v on graph G, we randomly sample one positive node a from v's neighbors, and a negative node b from the graph that does not link to v, forming a triplet (v, a, b). Our objective is to increase the similarity between the contextual subgraphs S v and S a , while decreasing that between S v and S b . More generally, on a set of label-free graphs G, we sample a number of triplets from each graph to construct an overall training set T pre . Then, we define the following pre-training loss.\nL pre (Θ) = -(v,a,b)∈Tpre ln exp(sim(sv,sa)/τ ) u∈{a,b} exp(sim(sv,su)/τ ) , (13) where τ is a temperature hyperparameter to control the shape of the output distribution. Note that the loss is parameterized by Θ, which represents the GNN model weights.\nThe output of the pre-training phase is the optimal model parameters Θ 0 = arg min Θ L pre (Θ). Θ 0 can be used to initialize the GNN weights for downstream tasks, thus enabling the transfer of prior knowledge downstream." }, { "figure_ref": [ "fig_1" ], "heading": "Prompting for Downstream Tasks", "publication_ref": [ "b35", "b28", "b30", "b15" ], "table_ref": [], "text": "The unification of pre-training and downstream tasks enables more effective knowledge transfer as the tasks in the two phases are made more compatible by following a common template. However, it is still important to distinguish different downstream tasks, in order to capture task individuality and achieve task-specific optimum.\nTo cope with this challenge, we propose a novel taskspecific learnable prompt on graphs, inspired by prompting in natural language processing [36]. In language contexts, a prompt is initially a handcrafted instruction to guide the downstream task, which provides task-specific cues to extract relevant prior knowledge through a unified task template (typically, pre-training and downstream tasks are all mapped to masked language modeling). More recently, learnable prompts [29], [31] have been proposed as an alternative to handcrafted prompts, to alleviate the high engineering cost of the latter.\nPrompt design. Nevertheless, our proposal is distinctive from language-based prompting for two reasons. Firstly, we have a different task template from masked language modeling. Secondly, since our prompts are designed for graph structures, they are more abstract and cannot take the form of language-based instructions. Thus, they are virtually impossible to be handcrafted. Instead, they should be topology related to align with the core of graph learning. In particular, under the same task template of subgraph similarity learning, the READOUT operation (used to generate the subgraph representation) can be \"prompted\" differently for different downstream tasks. Intuitively, different tasks can benefit from different aggregation schemes for their READOUT. For instance, node classification pays more attention to features that are topically more relevant to the target node. In contrast, graph classification tends to focus on features that are correlated to the graph class. Moreover, the important features may also vary given different sets of instances or classes in a task.\nLet p t denote a learnable prompt vector for a downstream task t, as shown in Fig. 2(c). 
The prompt-assisted READOUT operation on a subgraph S x for task t is\ns t,x = READOUT({p t ⊙ h v : v ∈ V (S x )}),(14)\nwhere s t,x is the task t-specific subgraph representation, and ⊙ denotes the element-wise multiplication. That is, we perform a feature weighted summation of the node representations from the subgraph, where the prompt vector p t is a dimension-wise reweighting in order to extract the most relevant prior knowledge for the task t.\nNote that other prompt designs are also possible. We could consider a learnable prompt matrix P t , which applies a linear transformation to the node representations:\ns t,x = READOUT({P t h v : v ∈ V (S x )}).(15)\nMore complex prompts such as an attention layer is another alternative. However, one of the main motivation of prompting instead of fine-tuning is to reduce reliance on labeled data. In few-shot settings, given very limited supervision, prompts with fewer parameters are preferred to mitigate the risk of overfitting. Hence, the feature weighting scheme in Eq. ( 26) is adopted for our prompting as the prompt is a single vector of the same length as the node representation, which is typically a small number (e.g., 128).\nPrompt tuning. To optimize the learnable prompt, also known as prompt tuning, we formulate the loss based on the common template of subgraph similarity, using the promptassisted task-specific subgraph representations. Formally, consider a task t with a labeled training set T t = {(x 1 , y 1 ), (x 2 , y 2 ), . . .}, where x i is an instance (i.e., a node or a graph), and y i ∈ Y is the class label of x i among the set of classes Y . The loss for prompt tuning is defined as\nLprompt(pt) = -(x i ,y i )∈T t ln\nexp(sim(s t,x i ,s t,y i )/τ ) c∈Y exp(sim(s t,x i ,s t,c )/τ ) , (16) where the class prototypical subgraph for class c is represented by st,c , which is also generated by the prompt- assisted, task-specific READOUT.\nNote that, the prompt tuning loss is only parameterized by the learnable prompt vector p t , without the GNN weights. Instead, the pre-trained GNN weights Θ 0 are frozen for downstream tasks, as no fine-tuning is necessary. This significantly decreases the number of parameters to be updated downstream, thus not only improving the computational efficiency of task learning and inference, but also reducing the reliance on labeled data." }, { "figure_ref": [], "heading": "EXTENSION TO GRAPHPROMPT+", "publication_ref": [], "table_ref": [], "text": "Next, we generalize the pre-training tasks and prompts in GRAPHPROMPT, extending it to GRAPHPROMPT+." }, { "figure_ref": [], "heading": "Generalizing Pre-Training Tasks", "publication_ref": [ "b16" ], "table_ref": [], "text": "Although our proposed task template in GRAPHPROMPT easily accommodate the link prediction task, which is a simple, effective and popular pre-training task on graphs, it is not immediately apparent how the task template can fit other more advanced pre-training tasks on graphs. This presents a significant limitation in GRAPHPROMPT: Our \"pre-train, prompt\" framework is confined to only one specific pre-training task, thereby precluding other pretraining tasks that can potentially learn more comprehensive knowledge. To unify a broader range of pre-training tasks in GRAPHPROMPT+, we first show that any standard contrastive graph pre-training task can be generalized to leverage subgraph similarity, the pivotal element in our task template. 
Meanwhile, each contrastive task can still preserve its unique sampling approach for the positive and negative pairs. Next, we illustrate the generalization process using several mainstream contrastive pre-training tasks. Formally, for each target instance o ∈ T_pre with a set of positive instances Pos_o and a set of negative instances Neg_o, the generalized contrastive loss is
L(Θ) = -Σ_{o∈T_pre} ln [ Σ_{a∈Pos_o} exp(sim(s_a, s_o)/τ) / Σ_{b∈Neg_o} exp(sim(s_b, s_o)/τ) ], (17)
where s_o denotes the subgraph embedding of the target instance o, and s_a, s_b denote those of the positive and negative instances, respectively.
TABLE 1: Materializing the target, positive and negative instances for mainstream contrastive graph pre-training approaches for the generalized loss in Eq. (17). We also include link prediction (LP), which is used by GRAPHPROMPT.
LP [20]: target instance o = a node v; positive instance a = a node linked to v; negative instance b = a node not linked to v; loss = Eq. (13).
DGI [79]: target = a graph G; positive = a node in G; negative = a node in G′, a corrupted graph of G; loss = Eq. (18).
InfoGraph [65]: target = a graph G; positive = a node in G; negative = a node in G′ ≠ G; loss = Eq. (19).
GraphCL [32]: target = an augmented graph G_i from a graph G by strategy i; positive = an augmented graph G_j from G by strategy j; negative = an augmented graph G′_j from a graph G′ ≠ G by strategy j; loss = Eq. (20).
GCC [49]: target = a random-walk-induced subgraph G^r_v from a node v's r-egonet; positive = a random-walk-induced subgraph G̃^r_v ≠ G^r_v from v's r-egonet; negative = a random-walk-induced subgraph G^{r′}_v from v's r′-egonet with r′ ≠ r; loss = Eq. (21)." }, { "figure_ref": [], "heading": "Template for mainstream contrastive tasks", "publication_ref": [ "b19", "b78", "b64", "b31", "b48", "b78", "b64", "b31", "b48" ], "table_ref": [], "text": "We further illustrate how the generalized loss term can be materialized for several mainstream contrastive graph pre-training approaches, in order to fit the task template. In Table 1, we summarize how the definitions of instances in these mainstream approaches are materialized, as also detailed below. Note that the strategies for sampling positive and negative instances are preserved as originally established.
DGI [79] operates on the principle of mutual information maximization, where the primary objective is to maximize the consistency between the local node representations and the global graph representation. Given a target instance G ∈ T_pre, a positive instance is any node a in G, while a negative instance is any node b in G′, a corrupted version of G. Then, we can reformulate the pre-training loss of DGI as
L_DGI(Θ) = -Σ_{G∈T_pre} ln [ Σ_{a∈V(G)} exp(sim(s_a, s_G)/τ) / Σ_{b∈V(G′)} exp(sim(s_b, s_G)/τ) ], (18)
where V(G) denotes the set of nodes in G, and G′ is obtained by corrupting G.
InfoGraph [65] extends the concept of DGI: it also attempts to maximize the mutual information between the global graph representation and local node representations, but differs from DGI in the sampling of negative instances. Similar to DGI, given a target instance G ∈ T_pre, a positive instance is any node a in G. However, a negative instance is a node sampled from a different graph G′ ≠ G in the pre-training data, instead of a corrupted view of G as in DGI. Another difference from DGI is that the local representation of a node v is derived by fusing the embeddings of v from all layers of the graph encoder, an architectural choice that is not affected by our reformulated loss:
L_IG(Θ) = -Σ_{G∈T_pre} ln [ Σ_{a∈V(G)} exp(sim(s_a, s_G)/τ) / Σ_{G′∈T_pre, G′≠G} Σ_{b∈V(G′)} exp(sim(s_b, s_G)/τ) ]. (19)
GraphCL [32] aims to maximize the mutual information between distinct augmentations of the same graph, while reducing that between augmentations of different graphs.
Given a graph G ∈ T pre , we can obtain distinct augmentations of G, say G i by augmentation strategy i and G j by strategy j. One of them, say G i , can serve as a target instance, while the other, say G j , can serve as a positive instance w.r.t. G i . Meanwhile, we can obtain an augmentation of a different graph G ′ ̸ = G, say G ′ j , which can serve as a negative instance w.r.t. G i . Thus, we can reformulate the pre-training loss as\nL GCL (Θ) = -G∈Tpre ln exp(sim(s G j ,s G i )/τ ) G ′ ∈Tpre ,G ′ ̸ =G exp(sim(s G ′ j ,s G i )/τ ) . (20)\nGCC [49] employs a framework where the model learns to distinguish between subgraphs that are contextually similar or originated from the same root graph, and subgraphs that are derived from different root graphs. Given a node v ∈ T pre , a target instance G r v is a subgraph induced by performing random walk in the r-egonet of node v 3 . To sample a positive instance, another random walk is performed in the same r-egonet, resulting in another induced subgraph Gr v that is different from G r v . On the other hand, a negative instance can be sampled by random walk from the r ′ -egonet of v such that r ′ ̸ = r. Letting G r ′ v denote a set of random walk induced subgraphs from the r ′ -egonet of v for some r ′ ̸ = r, we can reformulate the pre-training loss of GCC as\nL GCC (Θ) = -v∈Tpre ln exp(sim(s Gr v ,s G r v )/τ ) G r ′ v ∈G r ′ v exp(sim(s G r ′ v ,s G r v )/τ ) . (21\n)" }, { "figure_ref": [ "fig_4" ], "heading": "Generalizing Prompts", "publication_ref": [], "table_ref": [], "text": "To adapt different downstream tasks more effectively, we propose a more generalized layer-wise prompt design to strategically utilize hierarchical knowledge across multiple layers of the pre-trained graph encoders. Layer-wise prompt design. Consider a pre-trained graph encoder comprising L layers. In GRAPHPROMPT+, let P be a set of L + 1 prompt vectors, with one prompt vector allocated for each layer, including the input layer (i.e., l = 0):\nP = {p 0 , p 1 , . . . , p L }.(22)\nThat is, p l is a learnable vector representing the prompt vector that modifies the l-th layer of the pre-trained encoder for the downstream task, for 0 ≤ l ≤ L. Note that in GRAPHPROMPT, there is only one prompt vector applied to the last layer l = L, which is then used by the READOUT layer to obtain subgraph-level representations. In contrast, in GRAPHPROMPT+, these L + 1 prompt vectors are applied to different layers in parallel in order to focus on different layers, as illustrated in Fig. 3. Specifically, let H p denote the output from the pretrained graph encoder after applying a single prompt vector p to a specific layer associated with p, as follows.\nH p = GRAPHENCODER p (X, A; Θ),(23)\n3. The r-egonet of node v is defined as a subgraph consisting of all nodes within r hops of v and all edges between these nodes where GRAPHENCODER p (•) indicates that a specific layer of the encoder has been modified by p. Taking the prompt vector p l as an example, it modifies the embedding matrix generated by the l-th layer of the graph encoder via element-wise multiplication, which gives p l ⊙ H l . Here p l is multiplied element-wise with each row of H l in a manner analogous to that in GRAPHPROMPT 4 . Hence, the embedding matrix of the next layer is generated as follows:\nH l+1 = AGGR(p l ⊙ H l , A; θ l+1 ),(24)\nwhile the calculation of other layers remains unchanged, resulting in the output embedding matrix H p l . 
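As a minimal illustration of Eqs. (23)-(24), the sketch below runs a frozen mean-aggregation encoder (in the spirit of the toy encoder shown earlier, not the GIN used in our experiments) while a single prompt vector p modifies the embedding matrix of one chosen layer; applying it at the last layer recovers the readout-level reweighting of GRAPHPROMPT. The function name and the encoder form are illustrative assumptions.

```python
import numpy as np

def encode_with_prompt(X, A, weights, p, layer):
    """GRAPHENCODER_p of Eq. (23): a frozen encoder whose layer-`layer` embedding
    matrix is reweighted as p ⊙ H^l (Eq. (24)) before the next aggregation step.

    weights : list of frozen per-layer matrices theta^1 .. theta^L from pre-training
    p       : prompt vector; must match the dimension of the layer it modifies
    layer   : index of the modified layer (0 = the input features X)
    """
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(axis=1, keepdims=True)
    H = X * p if layer == 0 else X               # prompt on the input layer, if chosen
    for l, W in enumerate(weights, start=1):
        H = np.maximum((A_hat / deg) @ H @ W, 0.0)   # one round of AGGR
        if l == layer:
            H = H * p                                # p ⊙ H^l, Eq. (24)
    return H                                         # this is H_{p^l}
```

Running this once per prompt vector in P yields the matrices H_{p^0}, . . . , H_{p^L} that the next paragraph fuses with the learnable coefficients w_l.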
Finally, we apply each prompt p^l ∈ P to the pre-trained graph encoder in parallel, and obtain a series of embedding matrices {H_{p^0}, H_{p^1}, . . . , H_{p^L}}. We further fuse these \"post-prompt\" embedding matrices into a final output embedding matrix H_P, which will be employed to calculate the downstream task loss. In particular, we adopt a learnable coefficient w_l to weigh H_{p^l} in the fused output, i.e.,
H_P = Σ_{l=0}^{L} w_l H_{p^l}, (25)
as different downstream tasks may depend on information from diverse layers with varying degrees of importance. Note that, compared to GRAPHPROMPT, we incur an L-fold increase in the number of learnable parameters during prompt tuning in GRAPHPROMPT+. However, a typical graph encoder adopts a shallow message-passing architecture with a small number of layers, e.g., L = 3, so this does not present a significant overhead.
Prompt tuning. Given a specific downstream task t, we have a set of task t-specific prompts P_t = {p^0_t, p^1_t, . . . , p^L_t}, corresponding to the L + 1 layers of the graph encoder. After applying P_t to the pre-trained encoder, we obtain the embedding of the subgraph S_x as
s_{t,x} = READOUT({h_{P_t,v} : v ∈ V(S_x)}), (26)
where s_{t,x} is the task t-specific subgraph representation after applying the series of prompts P_t, and h_{P_t,v} is the row of H_{P_t} that corresponds to node v.
To optimize the generalized prompt vectors, we adopt the same loss function as in GRAPHPROMPT, given by Eq. (16). That is, we again leverage the task-specific subgraph representations after applying the generalized prompts, s_{t,x}, in the prompt tuning loss." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct extensive experiments, including node classification and graph classification as downstream tasks on five benchmark datasets, to evaluate the proposed GRAPHPROMPT." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b73", "b77", "b75", "b74", "b75", "b37", "b38", "b39", "b40", "b78", "b64", "b31", "b24", "b81", "b80", "b72" ], "table_ref": [ "tab_2" ], "text": "Datasets. We employ five benchmark datasets for evaluation. (1) Flickr [74] is an image sharing network collected by SNAP 5 . (2) PROTEINS [78] is a collection of protein graphs which include the amino acid sequence, conformation, structure, and features such as the active sites of the proteins. (3) COX2 [76] is a dataset of molecular structures including 467 cyclooxygenase-2 inhibitors. (4) ENZYMES [75] is a dataset of 600 enzymes collected from the BRENDA enzyme database. (5) BZR [76] is a collection of 405 ligands for the benzodiazepine receptor.
4. Hence, a prompt vector must adopt the same dimensions as the layer it applies to.
5. https://snap.stanford.edu/data/
We summarize these datasets in Table 2. Note that the \"Task\" column indicates the type of downstream task performed on each dataset: \"N\" for node classification and \"G\" for graph classification.
Baselines. We evaluate GRAPHPROMPT against state-of-the-art approaches from three main categories. (1) End-to-end graph neural networks: GCN [38], GraphSAGE [39], GAT [40] and GIN [41]. They capitalize on the key operation of neighborhood aggregation to recursively aggregate messages from the neighbors, and work in an end-to-end manner. (2) Graph pre-training models: DGI [79], InfoGraph [65], and GraphCL [32]. They work in the \"pre-train, fine-tune\" paradigm.
In particular, they pre-train the GNN models to preserve the intrinsic graph properties, and fine-tune the pre-trained weights on downstream tasks to fit task labels.\n(3) Graph prompt models: GPPT [25]. GPPT utilizes a link prediction task for pre-training, and resorts to a learnable prompt for the node classification task, which is mapped to a link prediction task.\nOther few-shot learning baselines on graphs, such as Meta-GNN [82] and RALE [81], often adopt a meta-learning paradigm [73]. They cannot be used in our setting, as they require a large volume of labeled data in their base classes for the meta-training phase. In our approach, only label-free graphs are utilized for pre-training." }, { "figure_ref": [], "heading": "Settings and parameters.", "publication_ref": [ "b79", "b80", "b19" ], "table_ref": [], "text": "To evaluate the goal of our GRAPHPROMPT in realizing a unified design that can suit different downstream tasks flexibly, we consider two typical types of downstream tasks, i.e., node classification and graph classification. In particular, for the datasets which are suitable for both of these two tasks, i.e., PROTEINS and ENZYMES, we only pre-train the GNN model once on each dataset, and utilize the same pre-trained model for the two downstream tasks with their task-specific prompting.\nThe downstream tasks follow a k-shot classification setting. For each type of downstream task, we construct a series of k-shot classification tasks. The details of task construction will be elaborated later when reporting the results in Sect. 6.2. For task evaluation, as the k-shot tasks are balanced classification, we employ accuracy as the evaluation metric following earlier work [80], [81].\nFor all the baselines, based on the authors' code and default settings, we further tune their hyper-parameters to optimize their performance. More implementation details of the baselines and our approach can be found in Appendix D of GraphPrompt [20]." }, { "figure_ref": [], "heading": "Performance Evaluation", "publication_ref": [ "b81", "b79", "b80" ], "table_ref": [ "tab_3", "tab_4" ], "text": "As discussed, we perform two types of downstream task, namely, node classification and graph classification in fewshot settings. We first evaluate on a fixed-shot setting, and then vary the shot numbers to see the performance trend.\nFew-shot node classification. We conduct this node-level task on three datasets, i.e., Flickr, PROTEINS, and EN-ZYMES. Following a typical k-shot setup [82], [80], [81], we generate a series of few-shot tasks for model training and validation. In particular, for PROTEINS and ENZYMES, on each graph we randomly generate ten 1-shot node classification tasks (i.e., in each task, we randomly sample 1 node per class) for training and validation, respectively. Each training task is paired with a validation task, and the remaining nodes not sampled by the pair of training and validation tasks will be used for testing. For Flickr, as it contains a large number of very sparse node features, selecting very few shots for training may result in inferior performance for all the methods. Therefore, we randomly generate ten 50-shot node classifcation tasks, for training and validation, respectively. On Flickr, 50 shots are still considered few, accounting for less than 0.06% of all nodes on the graph.\nTable 3 illustrates the results of few-shot node classification. We have the following observations. 
First, our proposed GRAPHPROMPT outperforms all the baselines across the three datasets, demonstrating the effectiveness of GRAPHPROMPT in transferring knowledge from pre-training to the downstream tasks. In particular, by virtue of the unification framework and the prompt-based task-specific aggregation in the READOUT function, GRAPHPROMPT is able to close the gap between pre-training and downstream tasks, and guide the downstream tasks to exploit the pre-trained model in a task-specific manner. Second, compared to graph pre-training models, GNN models usually achieve comparable or even slightly better performance. This implies that the discrepancy between the pre-training and downstream tasks in these pre-training models obstructs the transfer of knowledge from the former to the latter. Even with sophisticated pre-training, they cannot effectively promote the performance of downstream tasks. Third, the graph prompt model GPPT is only comparable to, or even worse than, the other baselines, despite also using prompts. A potential reason is that GPPT requires many more learnable parameters in its prompts than ours, which may not work well given very few shots (e.g., 1-shot).
Few-shot graph classification. We further conduct few-shot graph classification on four datasets, i.e., PROTEINS, COX2, ENZYMES, and BZR. For each dataset, we randomly generate 100 5-shot classification tasks for training and validation, following a process similar to that for the node classification tasks.
We illustrate the results of few-shot graph classification in Table 4, and have the following observations. First, our proposed GRAPHPROMPT significantly outperforms the baselines on these four datasets. This again demonstrates the necessity of unifying pre-training and downstream tasks, and the effectiveness of prompt-assisted task-specific aggregation for READOUT. Second, as both node and graph classification tasks share the same pre-trained model on PROTEINS and ENZYMES, the superior performance of GRAPHPROMPT on both types of tasks further demonstrates that the gap between different tasks is well addressed by virtue of our unification framework. Third, the graph pre-training models generally achieve better performance than the end-to-end GNN models. This is because both InfoGraph and GraphCL capitalize on graph-level tasks for pre-training, which are naturally close to the downstream graph classification task.
Performance with different shots. We further vary the number of shots for the two few-shot classification tasks to evaluate its influence on performance. In particular, for few-shot node classification, we vary the number of shots in {1, 2, 3, 4, 5, 10}, and employ the most competitive baselines (i.e., GIN, DGI, GraphCL, and GPPT) for comparison. Similarly, for few-shot graph classification, we vary the number of shots in {1, 3, 5, 8, 10, 20}, and also employ the most competitive baselines (i.e., GIN, InfoGraph, and GraphCL) for comparison. The number of tasks is identical to the above settings of few-shot node classification and graph classification. We conduct experiments on two datasets, i.e., PROTEINS and ENZYMES, for the two tasks.
We illustrate the comparison in Figs. 4 and 5 for node and graph classification, respectively, and have the following observations. First, for few-shot node classification, our proposed GRAPHPROMPT generally outperforms the baselines across different shots.
The only exception occurs on PROTEINS with 10 shots, possibly because 10-shot labeled data is already sufficient for GIN to work well in an end-to-end manner. Second, for few-shot graph classification, our proposed GRAPHPROMPT outperforms the baselines when very limited labeled data is given (e.g., 1, 3, or 5 shots), and might be surpassed by some competitive baselines when fed with adequate labeled data (e.g., GraphCL with 8, 10, or 20 shots). " }, { "figure_ref": [ "fig_6" ], "heading": "Ablation Study", "publication_ref": [ "b14" ], "table_ref": [], "text": "To evaluate the contribution of each component, we conduct an ablation study by comparing GRAPHPROMPT with different prompting strategies: (1) no prompt: for downstream tasks, we remove the prompt vector, and conduct classification by employing a classifier on the subgraph representations obtained by a direct sum-based READOUT. (2) lin. prompt: we replace the prompt vector with a linear transformation matrix as in Eq. (15).
We conduct the ablation study on three datasets each for node classification (Flickr, PROTEINS, and ENZYMES) and graph classification (COX2, ENZYMES, and BZR), and illustrate the comparison in Fig. 6. We have the following observations. (1) Without the prompt vector, no prompt usually performs the worst among the compared models, showing the necessity of prompting the READOUT operation differently for different downstream tasks. (2) Converting the prompt vector into a linear transformation matrix also hurts the performance, as the matrix involves more parameters, thus increasing the reliance on labeled data." }, { "figure_ref": [], "heading": "EXPERIMENTS ON GRAPHPROMPT+", "publication_ref": [], "table_ref": [], "text": "In this section, we further conduct experiments to evaluate our extended approach GRAPHPROMPT+ in comparison to the vanilla GRAPHPROMPT.
We follow the same experimental setup as GRAPHPROMPT, as detailed in Sect. 6.1. In particular, we evaluate the same few-shot node classification and graph classification tasks on the same five datasets. Consistent with GRAPHPROMPT, in the implementation of GRAPHPROMPT+, we utilize a three-layer GIN as the backbone, and set the hidden dimension to 32. We also set δ = 1 to construct 1-hop subgraphs for the nodes, the same as in GRAPHPROMPT." }, { "figure_ref": [], "heading": "Effect of Generalized Layer-wise Prompts", "publication_ref": [], "table_ref": [ "tab_5", "tab_2", "tab_6" ], "text": "We first investigate the impact of layer-wise prompts to evaluate their ability to extract hierarchical pre-trained knowledge. To isolate the effect of prompts and be comparable to GRAPHPROMPT, we fix the pre-training task in GRAPHPROMPT+ to link prediction; further experiments on alternative pre-training tasks will be presented in Sect. 7.2. Furthermore, we compare to three variants of GRAPHPROMPT+, namely, GRAPHPROMPT+/0, GRAPHPROMPT+/1 and GRAPHPROMPT+/2. Here, GRAPHPROMPT+/l denotes the variant where only one prompt vector p^l is applied to modify the l-th layer of the pre-trained encoder. As we have a total of L = 3 layers, GRAPHPROMPT is equivalent to GRAPHPROMPT+/3. In the following, we perform few-shot node and graph classification as the downstream tasks, and analyze the results of GRAPHPROMPT, GRAPHPROMPT+ and the variants.
Few-shot node classification. The results of node classification are reported in Table 5. We observe the following.
First, GRAPHPROMPT+ outperforms all variants and GRAPHPROMPT.
Except GRAPHPROMPT+, only a single prompt vector is added to a specific layer of the graph encoder. This indicates the benefit of leveraging multiple prompt vectors in a layer-wise manner to extract hierarchical knowledge within the pre-trained graph encoder.\nSecond, the application of prompts at different layers leads to varying performance. This observation confirms the nuanced distinctions across the layers of the graph encoder, suggesting the existence of hierarchical structures within pre-trained knowledge.\nThird, the significance of prompt vectors at different layers varies across datasets. Specifically, for the Flickr dataset, applying the prompt to a deeper layer generally yields progressively better performance. In contrast, for the PRO-TEINS and ENZYMES datasets, implementing the prompt at the last layer (i.e., GRAPHPROMPT) is generally not as good compared to its application at the shallow layers. This discrepancy is attributed to the notable difference in the nature and size of the graphs. Note that the Flickr graph is a relatively large image sharing network, while the graphs in the PROTEINS and ENZYMES datasets describe small chemical structures, as detailed in Table 2. In smaller graphs, knowledge from shallower layers could be adequate, given the relative size of the receptive field to the graph. Nevertheless, GRAPHPROMPT+ automatically selects the most relevant layers and achieves the best performance.\nFew-shot graph classification. The results for graph classification on four datasets, namely, PROTEINS, COX2, EN-ZYMES and BZR, are presented in Table 6. We first observe that GRAPHPROMPT+ continues to outperform all variants and GRAPHPROMPT on graph classification, similar to the results on node classification tasks. This consistency demonstrates the robustness of our layer-wise prompt design on different types of graph tasks. Moreover, compared to node classification, GRAPHPROMPT+ shows a more pronounced improvement over the variants. This difference stems from the nature of graph classification as a graph-level task, which emphsizes the need for more global knowledge from all layers of the graph encoder, unlike a node-level task. Thus, effective integration of hierarchical knowledge across these layers is more important to graph classification, which can enhance performance more significantly." }, { "figure_ref": [], "heading": "Compatibility with Generalized Pre-training Tasks", "publication_ref": [ "b78", "b31" ], "table_ref": [ "tab_7", "tab_8" ], "text": "Finally, we conduct experiments using alternative contrastive learning approaches for pre-training, beyond the simple link prediction task. Specifically, we select the two most popular contrastive pre-training task on graphs, i.e., DGI [79] and GraphCL [32], and implement the generalized pre-training loss for each of them as discussed in Sect. 5.1.\nFor each pre-training task, we compare among GRAPH-PROMPT+, GRAPHPROMPT, and the original architecture in their paper without prompt tuning (denoted as \"Original\"). The results of node and graph classification tasks are reported in Tables 7 and8, respectively. It is evident that both GRAPHPROMPT+ and GRAPHPROMPT exhibit superior performance compared to the original versions of DGI and GraphCL. The results imply that our proposed promptbased framework can flexibly incorporate well-known contrastive pre-training models like DGI and GraphCL, overcoming the limitation of a singular pre-training approach based on link prediction. 
Furthermore, empirical results also reveal a consistent trend where GRAPHPROMPT+ demonstrates superior performance over GRAPHPROMPT. This trend further corroborates the effectiveness of layer-wise prompts under alternative pre-training tasks, extending the analysis in Sect. 7.1." }, { "figure_ref": [], "heading": "CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "In this paper, we studied the research problem of prompting on graphs and proposed GRAPHPROMPT, in order to overcome the limitations of graph neural networks in the supervised or \"pre-train, fine-tune\" paradigms. In particular, to narrow the gap between pre-training and downstream objectives on graphs, we introduced a unification framework by mapping different tasks to a common task template. Moreover, to distinguish task individuality and achieve task-specific optima, we proposed a learnable task-specific prompt vector that guides each downstream task to make full use of the pre-trained model. We further extended GRAPHPROMPT into GRAPHPROMPT+ by enhancing both the pre-training and prompt tuning stages. Finally, we conducted extensive experiments on five public datasets, and showed that GRAPHPROMPT and GRAPHPROMPT+ significantly outperform various state-of-the-art baselines." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This research / project is supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 2 (Proposal ID: T2EP20122-0041). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore. This work is also supported in part by the National Key Research and Development Program of China under Grant 2020YFB2103803. The first author extends his heartfelt gratitude to Ms. Mengzhuo Fang for her invaluable support and assistance during challenging periods." } ]
Graphs can model complex relationships between objects, enabling a myriad of Web applications such as online page/article classification and social recommendation. While graph neural networks (GNNs) have emerged as a powerful tool for graph representation learning, in an end-to-end supervised setting, their performance heavily relies on a large amount of task-specific supervision. To reduce labeling requirement, the "pre-train, fine-tune" and "pre-train, prompt" paradigms have become increasingly common. In particular, prompting is a popular alternative to fine-tuning in natural language processing, which is designed to narrow the gap between pre-training and downstream objectives in a task-specific manner. However, existing study of prompting on graphs is still limited, lacking a universal treatment to appeal to different downstream tasks. In this paper, we propose GRAPHPROMPT, a novel pre-training and prompting framework on graphs. GRAPHPROMPT not only unifies pre-training and downstream tasks into a common task template, but also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model in a task-specific manner. In particular, GRAPHPROMPT adopts simple yet effective designs in both pre-training and prompt tuning: During pre-training, a link prediction-based task is used to materialize the task template; during prompt tuning, a learnable prompt vector is applied to the READOUT layer of the graph encoder. To further enhance GRAPHPROMPT in these two stages, we extend it into GRAPHPROMPT+ with two major enhancements. First, we generalize a few popular graph pre-training tasks beyond simple link prediction to broaden the compatibility with our task template. Second, we propose a more generalized prompt design that incorporates a series of prompt vectors within every layer of the pre-trained graph encoder, in order to capitalize on the hierarchical information across different layers beyond just the readout layer. Finally, we conduct extensive experiments on five public datasets to evaluate and analyze GRAPHPROMPT and GRAPHPROMPT+.
Generalized Graph Prompt: Toward a Unification of Pre-Training and Downstream Tasks on Graphs
[ { "figure_caption": "2 Fig. 1 :21Fig. 1: Illustration of the motivation. (a) Pre-training on graphs. (b/c) Downstream node/graph classification.", "figure_data": "", "figure_id": "fig_0", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: Overall framework of GRAPHPROMPT.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "•Graph classification: This is a graph-level task. Consider a set of graphs G with a set of graph classes C, and a set of labeled graphs D = {(G 1 , L 1 ), (G 2 , L 2 ), . . .} where G i ∈ G and L i is the corresponding label of G i . In the k-shot setting, there are exactly k pairs of (G i , L i = c) ∈ D for every class c ∈ C. Similar to node classification, for each class c ∈ C, we define a graph class prototypical subgraph, also represented by the mean embedding vector of the (sub)graphs in c:", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "rUnification of graph contrastive tasks. We formulate a general loss term for any standard contrastive pre-training task on graphs. The generalized loss is compatible with our proposed task template, i.e., it is based on the core idea of subgraph similarity. The rationale for this generalization centers around two reasons.First, the key idea of contrastive learning boils down to bringing positively related instances closer, while pushing away negatively related ones in their latent space[10]. To encapsulate this objective in a generalized loss, the definition of instances needs to be unified for diverse contrastive tasks on graphs, so that the distance or proximity between the instances can also be standardized. Not surprisingly, subgraphs can serve as a unified definition for a comprehensive suite of instances on graphs, including nodes[79],[32], subgraphs[72],[11] or the whole graphs[65],[11], which are common forms of instance in contrastive learning on graphs. By treating these instances as subgraphs, the contrastive objective can be accomplished by calculating a similarity score between the subgraphs, so as to maximize the similarity between a positive pair of subgraphs whilst minimizing that between a negative pair.Second, the general loss term should be flexible to preserve the unique characteristics of different contrastive tasks, thereby enabling the capture of diverse forms of knowledge through pre-training. Specifically, various contrastive approaches diverge in the sampling or generation of positive and negative pairs of instances. Consequently, in our generalized loss, the set of positive or negative pairs can be materialized differently to accommodate the requirements of each contrastive approach. Given the above considerations, we first define a standard contrastive graph pre-training task. Consider a set of pre-training data T pre , which consists of the target instances (or elements that can derive the target instances). For each target instance o ∈ T pre , a contrastive task samples or constructs a set of positive instances P os o , as well as a set of negative instances N eg o , both w.r.t. o. Hence, {(o, a) : a ∈ P os o } is the set of positive pairs involving the target instance o, such that the objective is to maximize the similarity between each pair of o and a. On the other hand, {(o, b) : b ∈ N eg o } is the set of negative pairs, such that the objective is to minimize the similarity between each pair of o and b. 
Then, we propose the following generalized loss that can be used for any standard contrastive task.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig.3: Layer-wise prompt tuning for a pre-trained graph encoder with L layers. GRAPHENCODER l represents the l-th layer of the graph encoder, and p l represents the prompt vector that modifies the l-th layer.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :Fig. 5 :45Fig. 4: Impact of shots on few-shot node", "figure_data": "", "figure_id": "fig_5", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig. 6: Ablation study.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "This is also a node-level task. Consider a graph G = (V, E) with a set of node classes C, and a set of labeled nodes D = {(v 1 , ℓ 1 ), (v 2 , ℓ 2 ), . . .} where v i ∈ V and ℓ i is the corresponding label of v i . As we adopt a k-shot setting, there are exactly k pairs of (v i , ℓ", "figure_data": "GNN Encoder𝛿 1READOUTLearnable node classification promptLearnable graph classification prompt𝑣𝑣𝐩𝐩𝑣𝑣𝐬𝐬𝐬𝐺 𝐺𝑣 𝑣𝑣𝑣 𝑣 𝑣 𝑣𝑣 𝑣 𝑣Link prediction triplets𝑣 𝑣 𝑣𝐬 𝐬Sim Sim Sim𝑣 𝑣 𝑣𝐬 𝐬𝑣 𝑣 𝑣𝐬 𝐬𝑣𝐬 ,Sim?𝑣 𝑣READOUT 𝑣 𝑣 𝑣 𝑣𝐺READOUT Graph class Sim? 𝐺 𝐺𝐬 ,𝐺𝑣𝑣𝑣prototypical subgraph Node classprototypical subgraph𝑣(a) Toy graphs(b) Pre-training(c) Prompting for node classification (left) or graph classification (right)", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Summary of datasets.", "figure_data": "GraphsGraph classesAvg. nodesAvg. edgesNode featuresNode classesTask (N/G)Flickr1-89,250 899,7565007NPROTEINS1,113239.0672.8213N, GCOX2467241.2243.453-GENZYMES600632.6362.14183N, GBZR405235.7538.363-G", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Accuracy evaluation on node classification. Results are in percent, with best bolded and runner-up underlined. ± 9.49 59.60 ± 12.44 61.49 ± 12.87 GRAPHSAGE 13.52 ± 11.28 59.12 ± 12.14 61.81 ± 13.19 GAT 16.02 ± 12.72 58.14 ± 12.05 60.77 ± 13.21 GIN 10.18 ± 5.41 60.53 ± 12.19 63.81 ± 11.28 DGI 17.71 ± 1.09 54.92 ± 18.46 63.33 ± 18.13 GRAPHCL 18.37 ± 1.72 52.00 ± 15.83 58.73 ± 16.47 GPPT 18.95 ± 1.92 50.83 ± 16.56 53.79 ± 17.46 GRAPHPROMPT 20.21 ± 11.52 63.03 ± 12.14 67.04 ± 11.48", "figure_data": "MethodsFlickr 50-shotPROTEINS 1-shotENZYMES 1-shotGCN9.22", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Accuracy evaluation on graph classification. 
± 11.20 51.37 ± 11.06 20.37 ± 5.24 56.16 ± 11.07 GRAPHSAGE 52.99 ± 10.57 52.87 ± 11.46 18.31 ± 6.22 57.23 ± 10.95 GAT 48.78 ± 18.46 51.20 ± 27.93 15.90 ± 4.13 53.19 ± 20.61 GIN 58.17 ± 8.58 51.89 ± 8.71 20.34 ± 5.01 57.45 ± 10.54 INFOGRAPH 54.12 ± 8.20 54.04 ± 9.45 20.90 ± 3.32 57.57 ± 9.93 GRAPHCL 56.38 ± 7.24 55.40 ± 12.04 28.11 ± 4.00 59.22 ± 7.42", "figure_data": "MethodsPROTEINS 5-shotCOX2 5-shotENZYMES 5-shotBZR 5-shotGCN54.87", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Effect of layer-wise prompts (node classification).", "figure_data": "MethodsFlickr 50-shotPROTEINS 1-shotENZYMES 1-shotGRAPHPROMPT+/017.18 ± 11.4962.64 ± 13.3168.32 ± 10.54GRAPHPROMPT+/118.14 ± 11.8263.02 ± 12.0468.52 ± 10.77GRAPHPROMPT+/220.11 ± 12.5863.61 ± 11.8969.09 ± 10.19GRAPHPROMPT20.21 ± 11.5263.03 ± 12.1467.04 ± 11.48GRAPHPROMPT+20.55 ± 11.9763.64 ± 11.6969.28 ± 10.74(↑ vs. GRAPHPROMPT)(+1.68%)(+0.97%)(+3.34%)", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Effect of layer-wise prompts (graph classification).", "figure_data": "MethodsPROTEINS 5-shotCOX2 5-shotENZYMES 5-shotBZR 5-shotGRAPHPROMPT+/060.80 ± 2.9651.14 ± 12.9344.93 ± 3.9153.93 ± 5.37GRAPHPROMPT+/156.49 ± 8.7953.53 ± 6.7335.91 ± 4.6448.50 ± 1.80GRAPHPROMPT+/261.63 ± 6.8658.16 ± 6.9734.36 ± 5.1648.50 ± 2.43GRAPHPROMPT64.42 ± 4.3759.21 ± 6.8231.45 ± 4.3261.63 ± 7.68GRAPHPROMPT+67.71 ± 7.0965.23 ± 5.5945.35 ± 4.1568.61 ± 3.99(↑ vs. GRAPHPROMPT)(+5.11%)(+10.17%)(+44.20%)(+11.33%)", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Compatibility with popular contrastive pretraining on graphs, for downstream node classification.", "figure_data": "Pre-trainingMethodsFlickr 50-shotPROTEINS 1-shotENZYMES 1-shotOriginal17.71 ± 1.0954.92 ± 18.4663.33 ± 18.13DGIGRAPHPROMPT17.78 ± 4.9860.79 ± 12.0066.46 ± 11.39GRAPHPROMPT+20.98 ± 11.6065.24 ± 12.5168.92 ± 10.77Original18.37 ± 1.7252.00 ± 15.8358.73 ± 16.47GraphCLGRAPHPROMPT19.33 ± 4.1160.15 ± 13.3163.14 ± 11.02GRAPHPROMPT+19.95 ± 12.4862.55 ± 12.6368.17 ± 11.30", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Compatibility with popular contrastive pretraining on graphs, for downstream graph classification.", "figure_data": "Pre-trainingMethodsPROTEINS 5-shotCOX2 5-shotENZYMES 5-shotBZR 5-shotOriginal54.12 ± 8.2054.04 ± 9.4520.90± 3.3257.57± 9.93DGIGRAPHPROMPT54.32 ± 0.6154.60 ± 5.9627.69 ± 2.2358.53 ± 7.41GRAPHPROMPT+56.16 ± 0.6754.30 ± 6.5841.40 ± 3.7762.83 ± 8.97Original56.38 ± 7.2455.40 ± 12.0428.11 ± 4.0059.22± 7.42GraphCLGRAPHPROMPT55.30 ± 1.0557.87 ± 1.5229.82 ± 2.8761.35 ± 7.54GRAPHPROMPT+55.59 ± 2.6359.32 ± 17.0041.42 ± 3.7361.75 ± 7.56prompts under alternative pre-training tasks, extending theanalysis in Sect. 7.1.", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" } ]
Xingtong Yu; Zhenghao Liu; Yuan Fang; Zemin Liu; Sihong Chen; Xinming Zhang
[ { "authors": "L Page; S Brin; R Motwani; T Winograd", "journal": "Tech. Rep", "ref_id": "b0", "title": "The pagerank citation ranking: Bringing order to the web", "year": "1999" }, { "authors": "G Jeh; J Widom", "journal": "", "ref_id": "b1", "title": "Simrank: a measure of structural-context similarity", "year": "2002" }, { "authors": "B Perozzi; R Al-Rfou; S Skiena", "journal": "", "ref_id": "b2", "title": "DeepWalk: Online learning of social representations", "year": "2014" }, { "authors": "S Yun; M Jeong; R Kim; J Kang; H J Kim", "journal": "NeurIPS", "ref_id": "b3", "title": "Graph transformer networks", "year": "2019" }, { "authors": "Z Hu; Y Dong; K Wang; Y Sun", "journal": "WWW", "ref_id": "b4", "title": "Heterogeneous graph transformer", "year": "2020" }, { "authors": "C Ying; T Cai; S Luo; S Zheng; G Ke; D He; Y Shen; T.-Y Liu", "journal": "NeurIPS", "ref_id": "b5", "title": "Do transformers really perform badly for graph representation?", "year": "2021" }, { "authors": "Y Zhu; J Guo; S Tang", "journal": "", "ref_id": "b6", "title": "Sgl-pt: A strong graph learner with graph prompt tuning", "year": "2023" }, { "authors": "Z Tan; R Guo; K Ding; H Liu", "journal": "", "ref_id": "b7", "title": "Virtual node tuning for fewshot node classification", "year": "2023" }, { "authors": "X Sun; H Cheng; J Li; B Liu; J Guan", "journal": "", "ref_id": "b8", "title": "All in one: Multi-task prompting for graph neural networks", "year": "2023" }, { "authors": "A Jaiswal; A R Babu; M Z Zadeh; D Banerjee; F Makedon", "journal": "Technologies", "ref_id": "b9", "title": "A survey on contrastive self-supervised learning", "year": "2020" }, { "authors": "M Xu; H Wang; B Ni; H Guo; J Tang", "journal": "", "ref_id": "b10", "title": "Self-supervised graph-level representation learning with local and global structure", "year": "2021" }, { "authors": "Y Zhu; X Zhou; J Qiang; Y Li; Y Yuan; X Wu", "journal": "", "ref_id": "b11", "title": "Prompt-learning for short text classification", "year": "2022" }, { "authors": "D Zhang; W Feng; Y Wang; Z Qi; Y Shan; J Tang", "journal": "IEEE TKDE", "ref_id": "b12", "title": "Dropconn: Dropout connection based random gnns for molecular property prediction", "year": "2023" }, { "authors": "L Hu; Z Liu; Z Zhao; L Hou; L Nie; J Li", "journal": "IEEE TKDE", "ref_id": "b13", "title": "A survey of knowledge enhanced pre-trained language models", "year": "2023" }, { "authors": "X Xu; F Zhou; K Zhang; S Liu", "journal": "IEEE TKDE", "ref_id": "b14", "title": "Ccgl: Contrastive cascade graph learning", "year": "2022" }, { "authors": "X Liu; F Zhang; Z Hou; L Mian; Z Wang; J Zhang; J Tang", "journal": "IEEE TKDE", "ref_id": "b15", "title": "Self-supervised learning: Generative or contrastive", "year": "2021" }, { "authors": "Z Yi; I Ounis; C Macdonald", "journal": "ACM TOIS", "ref_id": "b16", "title": "Contrastive graph prompttuning for cross-domain recommendation", "year": "2023" }, { "authors": "N Keriven", "journal": "NeurIPS", "ref_id": "b17", "title": "Not too little, not too much: a theoretical analysis of graph (over) smoothing", "year": "2022" }, { "authors": "Q Li; Z Han; X.-M Wu", "journal": "AAAI", "ref_id": "b18", "title": "Deeper insights into graph convolutional networks for semi-supervised learning", "year": "2018" }, { "authors": "Z Liu; X Yu; Y Fang; X Zhang", "journal": "WWW", "ref_id": "b19", "title": "GraphPrompt: Unifying pre-training and downstream tasks for graph neural networks", "year": "2023" }, { "authors": "J Tang; M Qu; M Wang; M Zhang; J Yan; Q 
Mei", "journal": "WWW", "ref_id": "b20", "title": "LINE: Large-scale information network embedding", "year": "2015" }, { "authors": "A Grover; J Leskovec", "journal": "", "ref_id": "b21", "title": "node2vec: Scalable feature learning for networks", "year": "2016" }, { "authors": "H Cai; V W Zheng; K C ; -C Chang", "journal": "TKDE", "ref_id": "b22", "title": "A comprehensive survey of graph embedding: Problems, techniques, and applications", "year": "2018" }, { "authors": "Z Wu; S Pan; F Chen; G Long; C Zhang; S Y Philip", "journal": "TNNLS", "ref_id": "b23", "title": "A comprehensive survey on graph neural networks", "year": "2020" }, { "authors": "M Sun; K Zhou; X He; Y Wang; X Wang", "journal": "", "ref_id": "b24", "title": "Gppt: Graph pretraining and prompt tuning to generalize graph neural networks", "year": "2022" }, { "authors": "W Hu; B Liu; J Gomes; M Zitnik; P Liang; V Pande; J Leskovec", "journal": "", "ref_id": "b25", "title": "Strategies for pre-training graph neural networks", "year": "2020" }, { "authors": "Z Hu; Y Dong; K Wang; K.-W Chang; Y Sun", "journal": "", "ref_id": "b26", "title": "GPT-GNN: Generative pre-training of graph neural networks", "year": "2020" }, { "authors": "D Hwang; J Park; S Kwon; K Kim; J.-W Ha; H J Kim", "journal": "NeurIPS", "ref_id": "b27", "title": "Selfsupervised auxiliary learning with meta-paths for heterogeneous graphs", "year": "2020" }, { "authors": "X Liu; Y Zheng; Z Du; M Ding; Y Qian; Z Yang; J Tang", "journal": "", "ref_id": "b28", "title": "GPT understands, too", "year": "2021" }, { "authors": "Z Wen; Y Fang", "journal": "", "ref_id": "b29", "title": "TREND: temporal event and node dynamics for graph representation learning", "year": "2022" }, { "authors": "B Lester; R Al-Rfou; N Constant", "journal": "", "ref_id": "b30", "title": "The power of scale for parameter-efficient prompt tuning", "year": "2021" }, { "authors": "Y You; T Chen; Y Sui; T Chen; Z Wang; Y Shen", "journal": "NeurIPS", "ref_id": "b31", "title": "Graph contrastive learning with augmentations", "year": "2020" }, { "authors": "Y Zhu; Y Xu; F Yu; Q Liu; S Wu; L Wang", "journal": "WWW", "ref_id": "b32", "title": "Graph contrastive learning with adaptive augmentation", "year": "2021" }, { "authors": "Y You; T Chen; Y Shen; Z Wang", "journal": "", "ref_id": "b33", "title": "Graph contrastive learning automated", "year": "2021" }, { "authors": "D Xu; W Cheng; D Luo; H Chen; X Zhang", "journal": "NeurIPS", "ref_id": "b34", "title": "Infogcl: Information-aware graph contrastive learning", "year": "2021" }, { "authors": "T Brown; B Mann; N Ryder; M Subbiah; J D Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell", "journal": "NeurIPS", "ref_id": "b35", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b36", "title": "Bert: Pretraining of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "T N Kipf; M Welling", "journal": "", "ref_id": "b37", "title": "Semi-supervised classification with graph convolutional networks", "year": "2017" }, { "authors": "W Hamilton; Z Ying; J Leskovec", "journal": "NeurIPS", "ref_id": "b38", "title": "Inductive representation learning on large graphs", "year": "2017" }, { "authors": "P Veličković; G Cucurull; A Casanova; A Romero; P Lio; Y Bengio", "journal": "ICLR", "ref_id": "b39", "title": "Graph attention networks", "year": "2018" }, { "authors": "K Xu; W Hu; J Leskovec; S Jegelka", "journal": 
"", "ref_id": "b40", "title": "How powerful are graph neural networks?", "year": "2019" }, { "authors": "J Snell; K Swersky; R Zemel", "journal": "NeurIPS", "ref_id": "b41", "title": "Prototypical networks for fewshot learning", "year": "2017" }, { "authors": "P Liu; W Yuan; J Fu; Z Jiang; H Hayashi; G Neubig", "journal": "", "ref_id": "b42", "title": "Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing", "year": "2021" }, { "authors": "L Dong; N Yang; W Wang; F Wei; X Liu; Y Wang; J Gao; M Zhou; H.-W Hon", "journal": "NeurIPS", "ref_id": "b43", "title": "Unified language model pre-training for natural language understanding and generation", "year": "2019" }, { "authors": "I Beltagy; K Lo; A Cohan", "journal": "", "ref_id": "b44", "title": "Scibert: A pretrained language model for scientific text", "year": "2019" }, { "authors": "J Lu; D Batra; D Parikh; S Lee", "journal": "NeurIPS", "ref_id": "b45", "title": "ViLBERT: Pretraining taskagnostic visiolinguistic representations for vision-and-language tasks", "year": "2019" }, { "authors": "H Bao; L Dong; S Piao; F Wei", "journal": "", "ref_id": "b46", "title": "BEit: BERT pre-training of image transformers", "year": "2022" }, { "authors": "M Zhang; Y Chen", "journal": "NeurIPS", "ref_id": "b47", "title": "Link prediction based on graph neural networks", "year": "2018" }, { "authors": "J Qiu; Q Chen; Y Dong; J Zhang; H Yang; M Ding; K Wang; J Tang", "journal": "", "ref_id": "b48", "title": "GCC: Graph contrastive coding for graph neural network pre-training", "year": "2020" }, { "authors": "Z Liu; W Zhang; Y Fang; X Zhang; S C H Hoi", "journal": "", "ref_id": "b49", "title": "Towards locality-aware meta-learning of tail node embeddings on networks", "year": "2020" }, { "authors": "K Huang; M Zitnik", "journal": "NeurIPS", "ref_id": "b50", "title": "Graph meta learning via local subgraphs", "year": "2020" }, { "authors": "Z Ying; J You; C Morris; X Ren; W Hamilton; J Leskovec", "journal": "NeurIPS", "ref_id": "b51", "title": "Hierarchical graph representation learning with differentiable pooling", "year": "2018" }, { "authors": "M Togninalli; E Ghisu; F Llinares-L Ópez; B Rieck; K Borgwardt", "journal": "NeurIPS", "ref_id": "b52", "title": "Wasserstein weisfeiler-lehman graph kernels", "year": "2019" }, { "authors": "M Zhang; Z Cui; M Neumann; Y Chen", "journal": "AAAI", "ref_id": "b53", "title": "An end-to-end deep learning architecture for graph classification", "year": "2018" }, { "authors": "N Shervashidze; P Schweitzer; E J Van Leeuwen; K Mehlhorn; K M Borgwardt", "journal": "JMLR", "ref_id": "b54", "title": "Weisfeiler-lehman graph kernels", "year": "2011" }, { "authors": "J Lee; I Lee; J Kang", "journal": "", "ref_id": "b55", "title": "Self-attention graph pooling", "year": "2019" }, { "authors": "Y Ma; S Wang; C C Aggarwal; J Tang", "journal": "", "ref_id": "b56", "title": "Graph convolutional networks with eigenpooling", "year": "2019" }, { "authors": "H Gao; S Ji", "journal": "", "ref_id": "b57", "title": "Graph u-nets", "year": "2019" }, { "authors": "D K Duvenaud; D Maclaurin; J Iparraguirre; R Bombarell; T Hirzel; A Aspuru-Guzik; R P Adams", "journal": "NeurIPS", "ref_id": "b58", "title": "Convolutional networks on graphs for learning molecular fingerprints", "year": "2015" }, { "authors": "J Gilmer; S S Schoenholz; P F Riley; O Vinyals; G E Dahl", "journal": "", "ref_id": "b59", "title": "Neural message passing for quantum chemistry", "year": "2017" }, { "authors": "J Xia; Y 
Zhu; Y Du; S Z Li", "journal": "", "ref_id": "b60", "title": "A survey of pretraining on graphs: Taxonomy, methods, and applications", "year": "2022" }, { "authors": "Z Zhang; Q Liu; H Wang; C Lu; C.-K Lee", "journal": "NeurIPS", "ref_id": "b61", "title": "Motif-based graph self-supervised learning for molecular property prediction", "year": "2021" }, { "authors": "Y Rong; Y Bian; T Xu; W Xie; Y Wei; W Huang; J Huang", "journal": "NeurIPS", "ref_id": "b62", "title": "Self-supervised graph transformer on large-scale molecular data", "year": "2020" }, { "authors": "R D Hjelm; A Fedorov; S Lavoie-Marchildon; K Grewal; P Bachman; A Trischler; Y Bengio", "journal": "", "ref_id": "b63", "title": "Learning deep representations by mutual information estimation and maximization", "year": "2018" }, { "authors": "F.-Y Sun; J Hoffman; V Verma; J Tang", "journal": "", "ref_id": "b64", "title": "Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization", "year": "2020" }, { "authors": "Z Peng; W Huang; M Luo; Q Zheng; Y Rong; T Xu; J Huang", "journal": "WWW", "ref_id": "b65", "title": "Graph representation learning via graphical mutual information maximization", "year": "2020" }, { "authors": "K Hassani; A H Khasahmadi", "journal": "", "ref_id": "b66", "title": "Contrastive multi-view representation learning on graphs", "year": "2020" }, { "authors": "S Suresh; P Li; C Hao; J Neville", "journal": "NeurIPS", "ref_id": "b67", "title": "Adversarial graph augmentation to improve graph contrastive learning", "year": "2021" }, { "authors": "H Zhang; Q Wu; J Yan; D Wipf; P S Yu", "journal": "NeurIPS", "ref_id": "b68", "title": "From canonical correlation analysis to self-supervised graph neural networks", "year": "2021" }, { "authors": "Y You; T Chen; Z Wang; Y Shen", "journal": "", "ref_id": "b69", "title": "Bringing your own view: Graph contrastive learning without prefabricated data augmentations", "year": "2022" }, { "authors": "J Xia; L Wu; J Chen; B Hu; S Z Li", "journal": "WWW", "ref_id": "b70", "title": "Simgrace: A simple framework for graph contrastive learning without data augmentation", "year": "2022" }, { "authors": "Y Lu; X Jiang; Y Fang; C Shi", "journal": "", "ref_id": "b71", "title": "Learning to pre-train graph neural networks", "year": "2021" }, { "authors": "C Finn; P Abbeel; S Levine", "journal": "", "ref_id": "b72", "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "year": "2017" }, { "authors": "Z Wen; Y Fang; Z Liu", "journal": "", "ref_id": "b73", "title": "Meta-inductive node classification across graphs", "year": "2021" }, { "authors": "S Wang; Y Dong; X Huang; C Chen; J Li", "journal": "", "ref_id": "b74", "title": "FAITH: Few-shot graph classification with hierarchical task graphs", "year": "2022" }, { "authors": "R A Rossi; N K Ahmed", "journal": "", "ref_id": "b75", "title": "The network data repository with interactive graph analytics and visualization", "year": "2015" }, { "authors": "X Yu; Z Liu; Y Fang; X Zhang", "journal": "", "ref_id": "b76", "title": "Hgprompt: Bridging homogeneous and heterogeneous graphs for few-shot prompt learning", "year": "2023" }, { "authors": "K M Borgwardt; C S Ong; S Sch Önauer; S Vishwanathan; A J Smola; H.-P Kriegel", "journal": "Bioinformatics", "ref_id": "b77", "title": "Protein function prediction via graph kernels", "year": "2005" }, { "authors": "P Velickovic; W Fedus; W L Hamilton; P Li Ò; Y Bengio; R D Hjelm", "journal": "ICLR", "ref_id": "b78", 
"title": "Deep graph infomax", "year": "2019" }, { "authors": "N Wang; M Luo; K Ding; L Zhang; J Li; Q Zheng", "journal": "", "ref_id": "b79", "title": "Graph few-shot learning with attribute matching", "year": "2020" }, { "authors": "Z Liu; Y Fang; C Liu; S C Hoi", "journal": "", "ref_id": "b80", "title": "Relative and absolute location embedding for few-shot node classification on graph", "year": "2021" }, { "authors": "F Zhou; C Cao; K Zhang; G Trajcevski; T Zhong; J Geng", "journal": "", "ref_id": "b81", "title": "Meta-GNN: On few-shot node classification in graph metalearning", "year": "2019" }, { "authors": "D Chen; Y Lin; W Li; P Li; J Zhou; X Sun", "journal": "", "ref_id": "b82", "title": "Measuring and relieving the over-smoothing problem for graph neural networks from the topological view", "year": "2020" }, { "authors": "T N Kipf; M Welling", "journal": "", "ref_id": "b83", "title": "Variational graph auto-encoders", "year": "2016" }, { "authors": "J Zhang; Z Wang; S Zhang; M M Bhalerao; Y Liu; D Zhu; S Wang", "journal": "", "ref_id": "b84", "title": "Graphprompt: Biomedical entity normalization using graph-based prompt templates", "year": "2021" }, { "authors": "Z Liu; Y Fang; C Liu; S C Hoi", "journal": "", "ref_id": "b85", "title": "Node-wise localization of graph neural networks", "year": "2021" }, { "authors": "Z Liu; T.-K Nguyen; Y Fang", "journal": "", "ref_id": "b86", "title": "Tail-gnn: Tail-node graph neural networks", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 324.92, 572.89, 200.89, 9.68 ], "formula_id": "formula_0", "formula_text": "A ij = 1 iff (v i , v j ) ∈ E, for any v i , v j ∈ V ." }, { "formula_coordinates": [ 4, 312, 619.24, 98.7, 9.65 ], "formula_id": "formula_1", "formula_text": "G = {G 1 , G 2 , . . . , G N }." }, { "formula_coordinates": [ 5, 90.99, 194.7, 209.01, 12.69 ], "formula_id": "formula_2", "formula_text": "h l v = AGGR(h l-1 v , {h l-1 u : u ∈ N v }; θ l ),(1)" }, { "formula_coordinates": [ 5, 119.24, 404.05, 180.77, 11.56 ], "formula_id": "formula_3", "formula_text": "H l = AGGR(H l-1 , A; θ l ).(2)" }, { "formula_coordinates": [ 5, 103.71, 488.43, 196.29, 9.51 ], "formula_id": "formula_4", "formula_text": "H = GRAPHENCODER(X, A; Θ).(3)" }, { "formula_coordinates": [ 5, 466.2, 160.32, 97.81, 9.65 ], "formula_id": "formula_5", "formula_text": "S v = (V (S v ), E(S v ))," }, { "formula_coordinates": [ 5, 336.19, 189.85, 227.81, 9.65 ], "formula_id": "formula_6", "formula_text": "V (S v ) = {d(u, v) ≤ δ | u ∈ V }, and(4)" }, { "formula_coordinates": [ 5, 336.28, 202.21, 203.53, 11.72 ], "formula_id": "formula_7", "formula_text": "E(S v ) = {(u, u ′ ) ∈ E | u ∈ V (S v ), u ′ ∈ V (S v )}," }, { "formula_coordinates": [ 5, 320.83, 515.38, 243.17, 39.18 ], "formula_id": "formula_8", "formula_text": "(v, a, b) such that (v, a) ∈ E and (v, b) / ∈ E, we shall have sim(s v , s a ) > sim(s v , s b ).(6)" }, { "formula_coordinates": [ 6, 116.14, 302.29, 183.86, 9.68 ], "formula_id": "formula_9", "formula_text": "ℓ j = arg max c∈C sim(s vj , sc ).(8)" }, { "formula_coordinates": [ 6, 118.48, 468.06, 181.52, 13.47 ], "formula_id": "formula_10", "formula_text": "sc = 1 k (Gi,Li)∈D,Li=c s Gi .(9)" }, { "formula_coordinates": [ 6, 114.48, 522.12, 185.52, 9.68 ], "formula_id": "formula_11", "formula_text": "L j = arg max c∈C sim(s Gj , sc ).(10)" }, { "formula_coordinates": [ 6, 114.77, 635.03, 185.23, 9.68 ], "formula_id": "formula_12", "formula_text": "y = arg max c∈Y sim(s x , sc ).(11)" }, { "formula_coordinates": [ 6, 98.46, 733.88, 197.59, 9.68 ], "formula_id": "formula_13", "formula_text": "s x = READOUT({h v : v ∈ V (S x )}). (12" }, { "formula_coordinates": [ 6, 296.04, 734.25, 3.96, 9.14 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 7, 84.74, 662.04, 215.26, 9.68 ], "formula_id": "formula_15", "formula_text": "s t,x = READOUT({p t ⊙ h v : v ∈ V (S x )}),(14)" }, { "formula_coordinates": [ 7, 354.1, 87.66, 209.9, 9.68 ], "formula_id": "formula_16", "formula_text": "s t,x = READOUT({P t h v : v ∈ V (S x )}).(15)" }, { "formula_coordinates": [ 7, 320.55, 310.77, 119.56, 11.11 ], "formula_id": "formula_17", "formula_text": "Lprompt(pt) = -(x i ,y i )∈T t ln" }, { "formula_coordinates": [ 8, 275.45, 139.28, 189.42, 9.15 ], "formula_id": "formula_18", "formula_text": "Gr v ̸ = G r v from v's r-egonet G r ′ v from v's r ′ -egonet, r ′ ̸ =" }, { "formula_coordinates": [ 8, 326, 594.54, 234.04, 17.63 ], "formula_id": "formula_20", "formula_text": "L DGI (Θ) = -G∈Tpre ln a∈V (G) exp(sim(sa,s G )/τ ) b∈V (G ′ ) exp(sim(s b ,s G )/τ ) , (18" }, { "formula_coordinates": [ 8, 560.04, 599.23, 3.96, 9.14 ], "formula_id": "formula_21", "formula_text": ")" }, { "formula_coordinates": [ 9, 50.97, 70.55, 246.05, 27.31 ], "formula_id": "formula_22", "formula_text": "L IG (Θ) = -G∈Tpre ln a∈V (G) exp(sim(sa,s G )/τ ) G ′ ∈Tpre,G ′ ̸ =G b∈V (G ′ ) exp(sim(s b ,s G )/τ ) . 
(19" }, { "formula_coordinates": [ 9, 296.25, 89.21, 3.75, 8.66 ], "formula_id": "formula_23", "formula_text": ")" }, { "formula_coordinates": [ 9, 53.91, 220.12, 246.09, 19.72 ], "formula_id": "formula_24", "formula_text": "L GCL (Θ) = -G∈Tpre ln exp(sim(s G j ,s G i )/τ ) G ′ ∈Tpre ,G ′ ̸ =G exp(sim(s G ′ j ,s G i )/τ ) . (20)" }, { "formula_coordinates": [ 9, 54.82, 400.05, 241.23, 21.75 ], "formula_id": "formula_25", "formula_text": "L GCC (Θ) = -v∈Tpre ln exp(sim(s Gr v ,s G r v )/τ ) G r ′ v ∈G r ′ v exp(sim(s G r ′ v ,s G r v )/τ ) . (21" }, { "formula_coordinates": [ 9, 296.04, 405.65, 3.96, 9.14 ], "formula_id": "formula_26", "formula_text": ")" }, { "formula_coordinates": [ 9, 126.84, 545.85, 173.16, 11.06 ], "formula_id": "formula_27", "formula_text": "P = {p 0 , p 1 , . . . , p L }.(22)" }, { "formula_coordinates": [ 9, 98.2, 708.61, 201.8, 9.68 ], "formula_id": "formula_28", "formula_text": "H p = GRAPHENCODER p (X, A; Θ),(23)" }, { "formula_coordinates": [ 9, 367.45, 142.06, 196.55, 11.56 ], "formula_id": "formula_29", "formula_text": "H l+1 = AGGR(p l ⊙ H l , A; θ l+1 ),(24)" }, { "formula_coordinates": [ 9, 395.8, 271.96, 168.2, 13.65 ], "formula_id": "formula_30", "formula_text": "H P = L l=0 w l H p l ,(25)" }, { "formula_coordinates": [ 9, 354.11, 452.57, 209.89, 9.68 ], "formula_id": "formula_31", "formula_text": "s t,x = READOUT({h Pt,v : v ∈ V (S x )}),(26)" } ]
2023-11-26
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b13", "b23", "b33", "b23", "b13", "b0", "b3", "b36", "b3" ], "table_ref": [], "text": "Adaptive resource allocation problems with multiple resource types (e.g., fire trucks, ambulances, police vehicles) are ubiquitous in the real world [13,23,32]. One example is allocating security resources and emergency response vehicles to different areas depending on incidents [23]. There are many other similar problems in aggregation systems for mobility/transportation, logistics etc [13]. In this paper, we are interested in addressing the multiple difficult challenges present in such adaptive resource allocation problems: (a) Combinatorial action space, as the number of resource allocations is combinatorial; (b) Categorical action space, as there is no ordering of resource allocations with respect to the overall objective (as increasing or decreasing different resource types at different locations can have different impact on overall objective) ; (c) Constraints on feasible allocations and switching between allocations; and (d) Finally, uncertainty in demand for resources.\nExisting research in RL for constrained action spaces has considered resource allocation problems with a single type of resource, thereby introducing order in action space [1]. In such ordered action spaces, actions can be converted into continuous space and this allows for the usage of continuous action RL methods (e.g., DDPG). In problems of interest in this paper, we are interested in problems with multiple resource types (e.g., fire truck, ambulance, police). These problems have a large action space that is discrete and unordered (categorical) and there are constraints on feasible allocations (e.g., no two police vehicles can be more than 3 km away, cannot move vehicles too far away from time step to time step). Furthermore, such constraints are easy to validate by a validity oracle given any allocation action, but are hard to represent as mathematical constraints on the support of the distribution of actions (in each state) as they often require exponentially many inequalities [4,35]. An important consideration in allocation problems is randomized (stochastic) allocation arising from issues of fair division of indivisible resources so that an allottee is not starved of resources forever [4]. Thus, we aim to output stochastic optimal policies, if one exists.\nTowards addressing such resource allocation problems at scale, we propose to employ generative policies in RL. Specifically, we propose a new approach that incorporates discrete normalizing flow policy in an actor-critic framework to explore and learn in the aforementioned constrained, categorical and adaptive resource allocation problems. Prior RL approaches for large discrete multidimensional action space include ones that assume a factored action space with independent dimensions, which we call as the factored or marginal approach, since independence implies that any joint distribution over actions can be represented as the product of marginal distributions over each action dimension. Other approaches convert the selection of actions in multiple dimensions into a sequential selection approach. Both these approaches are fundamentally limited in expressivity, which we reveal in detail in our experiments. 
Next, we provide a formal description of the problem that we tackle.\nProblem Statement: A Markov Decision Process (MDP) is represented by the tuple ⟨S, A, P, r, γ, b 0 ⟩, where an agent can be in any state s t ∈ S at a given time t. The agent takes an action a t ∈ A, causing the environment to transition to a new state s t+1 with a probability P : S × A × S → [0, 1]. Subsequently, the agent receives a reward r : S × A → R. In the infinite-horizon setting, the discounted factor is 0 < γ < 1. The distribution of the initial state is b 0 .\nIn our work, we focus on a categorical action space, A. Categorical action spaces consist of discrete, unordered actions with no inherent numerical relationship between them. We assume that for any state s, there is a set of valid actions, denoted by C(s) ⊆ A. There is an oracle to answer whether an action a ∈ C(s), but the complex constraint over categorical space cannot be expressed succinctly using closed form mathematical formula. Note that the constraint is not the same for every state. Our objective is to learn a stochastic policy, π(•|s), which generates a probability distribution over actions for state s with support only over the valid actions C(s) in state s. We call the set of such stochastic policies as valid policies Π C given the per state constraint C. We aim to maximize the long-term expected rewards over valid policies, that is, max π∈Π C J(π), where J is as follows:\nJ(π) = E s∼b0 [V (s; π)] where V (s; π) = E ∞ t=0 γ t r (s t , a t ) |s 0 = s; π(1)\nIn addition, we also consider settings with partial observability where the agent observes o t ∈ O at time t where the observation arises from the state s t with probability O : O × S → [0, 1]. In this case, the optimal policy is a stochastic policy, π(•|h), where h is the history of observations till current time. For partial observability, we consider an unconstrained setting, as the lack of knowledge of the true state results in uncertainty about which constraint to enforce, which diverges from the focus in this work but is an interesting future research direction to explore. Thus, with partial observability, we search over stochastic policies Π that maximize the long term return, that is, max π∈Π J(π). J(π) is the same as stated in Equation 1 but where the expectation is also over the observation probability distribution in addition to the standard transition and stochastic policy probability distributions." }, { "figure_ref": [ "fig_8" ], "heading": "Contribution:", "publication_ref": [ "b26", "b16" ], "table_ref": [], "text": "We propose two key innovations to address the problem above. First, we present a conditional Normalizing Flow-based [26] Policy Network, which leverages Argmax Flow [16] to create a minimal representation of the policy for policy gradient algorithms. To the best of our knowledge, this is the first use of discrete normalizing flow in RL. Second, we demonstrate how to train the flow policies within the A2C framework. In particular, we need an estimate of the log probability of the action sampled from the stochastic policy but Argmax Flow provides only a biased lower bound via the evidence lower bound (ELBO). Thus, we design an effective sandwich estimator for the log probability that is sandwiched between the ELBO lower bound and an upper bound based on χ 2 divergence. Third, we propose a policy gradient approach which is able to reject invalid actions (that do not satisfy constraints) referred to as Invalid Action Rejection Advantage Actor-Critic (IAR-A2C). 
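Before detailing these components, the sketch below makes the oracle-only access to C(s) concrete: validity can only be queried one action at a time, and valid actions are obtained by rejection sampling from the current stochastic policy, which is the mechanism formalized later as IAR-A2C. The class and function names (ValidityOracle, sample_valid_action) and the batch size are illustrative assumptions, not part of the environments used in the experiments.

```python
import random
from typing import Callable, List, Tuple


class ValidityOracle:
    """Answers membership queries a in C(s); C(s) itself is never written in closed form."""

    def __init__(self, is_valid_fn: Callable[[object, object], bool]):
        self._is_valid = is_valid_fn          # black-box check supplied by the environment

    def is_valid(self, state, action) -> bool:
        return self._is_valid(state, action)


def sample_valid_action(policy_sample: Callable[[object], object],
                        oracle: ValidityOracle,
                        state,
                        batch_size: int = 64,
                        max_tries: int = 10) -> Tuple[object, List[object]]:
    """Draw batches from the stochastic policy and keep only oracle-validated actions."""
    for _ in range(max_tries):
        candidates = [policy_sample(state) for _ in range(batch_size)]
        valid = [a for a in candidates if oracle.is_valid(state, a)]
        if valid:
            # one executed action is picked uniformly from the valid samples
            return random.choice(valid), valid
    raise RuntimeError("no valid action sampled; increase batch_size or max_tries")
```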
IAR-A2C queries the constraint oracle and ensures validity of actions in every state (in fully observable setting) by rejecting all invalid actions. We derive a new policy gradient estimator for IAR-A2C. Figure 1 provides an overview of our architecture. Finally, our extensive experimental results reveal that our approach outperforms prior baselines in different environments and settings." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b26", "b16", "b7" ], "table_ref": [], "text": "Normalizing Flows: Normalizing flows are a family of generative models that can provide both efficient sampling and density estimation. Their main idea is to construct a series of invertible and differentiable mappings that allow transforming a simple probability distribution into a more complex one. Given V = Z = R d with densities p V and p Z respectively, normalizing flows [26] aim to learn a bijective and differentiable transformation f : Z → V. This deterministic transformation allows us to evaluate the density at any point v ∈ V based on the density of z ∈ Z, as follows:\np V (v) = p Z (z) • det dz dv , v = f (z)(2)\nIn this context, p Z can be any density, though it is typically chosen as a standard Gaussian and f is represented by a neural network. Consequently, normalizing flows offer a powerful tractable framework for learning complex density functions. However, the density estimation presented in Equation 2 is limited to continuous probability distributions. To enable the learning of probability mass functions (P ) on categorical discrete data, such as natural language, Argmax Flow [16] proposed to apply the argmax operation on the output of continuous flows. Let's consider v ∈ R D×M and x ∈ {1, . . . , M } D . The argmax operation is interpreted as a surjective flow layer v → x, which is deterministic in one direction (x d = arg max m v d , written compactly as x = arg max v) and stochastic in the other (v ∼ q(•|x)). With this interpretation, the argmax operation can be considered a probabilistic right-inverse in the latent variable model expressed by:\nP (x) = P (x|v)p(v)dv, P (x|v) = δ(x = argmax(v))(3)\nwhere argmax is applied in the last dimension of v. In this scenario, the density model p(v) is modeled using a normalizing flow. The learning process involves introducing a variational distribution q(v|x), which models the probabilistic right-inverse for the argmax surjection, and optimizing the evidence lower bound (ELBO), which is the RHS of the following inequality:\nlog P (x) ≥ E v∼q(•|x) [log P (x|v)+log p(v)-log q(v|x)] = E v∼q(•|x) [log p(v)-log q(v|x)] = L\nThe last equality holds under the constraint that the support of q(v|x) is enforced to be only over the region S = {v ∈ R D×M : x = arg max v} which ensures that P (x|v) = 1. From standard variational inference results, log P (x) -L = KL(q(v|x)||p(v|x)), which also approaches 0 as the approximate posterior q(v|x) comes closer to the true posterior p(v|x) over the training time.\nχ 2 Upper Bound: Variational inference involves proposing a family of approximating distributions and finding the family member that is closest to the posterior. Typically, the Kullback-Leibler (KL) divergence KL(q||p) is employed to measure closeness, where q(v|x) represents a variational family. This approach yields ELBO of the evidence log P (x) as described above.\nInstead of using KL divergence, the authors in [8] suggest an alternative of χ 2 -divergence to measure the closeness. 
As a result, they derived an upper bound of the evidence, known as CUBO:\nlog P (x) ≤ 1 2 log E v∼q(•|x) p(x,v) q(v|x)" }, { "figure_ref": [], "heading": "2", "publication_ref": [], "table_ref": [], "text": ". Similar to the ELBO in Argmax Flow, CUBO can be further simplified under the constraint that the support of q(•|x) is restricted to the region S:\nlog P (x) ≤ 1 2 log E v∼q(•|x) p(v) q(v|x) 2 = L χ 2 (4) Also, L χ 2 -log P (x) = 1 2 log(1 + D χ 2 (p(v|x))||q(v|x)))\n, hence the gap between the ELBO and CUBO approaches 0 as the approximate posterior q(•|x) becomes closer to the true posterior p(•|x)." }, { "figure_ref": [ "fig_8" ], "heading": "Flow-based Policy Gradient Algorithm with Invalid Action Rejection", "publication_ref": [], "table_ref": [], "text": "We first present the Flow-based Policy Network, which leverages Argmax Flow to create a minimal representation of the policy for policy gradient algorithms, and then construct flow policies within Figure 1: Our IAR-A2C framework. At each time step, an initial batch of action samples, along with their log probabilities, are generated using the Flow Policy. Invalid actions from this batch are rejected using an oracle. A single action is then uniformly sampled from the remaining valid ones, and executed. This selected action and the valid action set are stored along with the resulting state and reward. This collective experience is subsequently utilized to update the Flow Policy. the A2C framework. Our exposition will present policies conditioned on state, but the framework works for partial observability also by using the sequence of past observations as state. After this, for the fully observable setting only, we introduce our novel policy gradient algorithm called IAR-A2C that enforces state dependent constraints." }, { "figure_ref": [ "fig_0" ], "heading": "Flow-based Policy Network", "publication_ref": [ "b16", "b12", "b15", "b29", "b29", "b26" ], "table_ref": [], "text": "In policy gradient algorithms for categorical actions, the standard approach is to model the entire policy, denoted as π(•|s), which allows us to sample actions and obtain their corresponding probabilities. The policy is parameterized using a neural network where the network generates logits for each action, and then converts these logits into probabilities. The size of the output layer matches that of the action space, which can be prohibitively large for resource allocation problems of interest in this paper. We posit that we can have a better policy representation, as it is sufficient to require samples from the support set of the policy, represented by a (i) ∼ π(•|s) and probability value π(a (i) |s) > 0. Based on this observation, our first contribution is a compact policy representation using Argmax Flow [16], which we refer to as the Flow Policy. Argmax Flow is a state-of-theart discrete normalizing flow model and it has shown great capability of learning categorical data such as in sentence generation. In our context, Flow Policy will output the action (a sample of the flow policy) and its probability, instead of explicitly outputting the entire distribution and sampling from it as in prior work. Once trained, we can sample from the Argmax Flow and, more importantly, estimate the probability of the sample. A normalizing flow model transforms a base distribution, given by a random variable z 0 , to the desired distribution. In our approach, the desired distribution is the distribution given by policy π. 
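Both bounds above will later be used to estimate log-probabilities of actions produced by the flow policy, so it is useful to see them side by side on a toy argmax model where the exact evidence is known. In the sketch below, p(v) is a standard normal over R^2, the action is x = argmax(v) (so P(x = 0) = 0.5 exactly), and q(v|x = 0) is a hand-picked distribution supported on {v0 > v1}; the specific q and the sample size are illustrative choices, not the learned posterior of the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N, scale = 200_000, np.sqrt(2.0)

# Toy Argmax-Flow setup: v in R^2 with p(v) = N(0, I), discrete "action" x = argmax(v).
# Exact evidence: P(x = 0) = P(v0 > v1) = 0.5, so log P(x = 0) = log 0.5 ~ -0.693.
# Variational posterior q(v | x = 0) supported on {v0 > v1}:
#   v1 ~ N(0, 2),  v0 = v1 + |e| with e ~ N(0, 2)   (the gap is half-normal)
v1 = scale * rng.standard_normal(N)
gap = np.abs(scale * rng.standard_normal(N))
v0 = v1 + gap

log_p = norm.logpdf(v0) + norm.logpdf(v1)                   # log p(v) under the base density
log_q = (norm.logpdf(v1, scale=scale) + np.log(2.0)
         + norm.logpdf(gap, scale=scale))                   # log q(v | x = 0)
log_w = log_p - log_q                                       # log importance weights

elbo = log_w.mean()                                         # lower bound (ELBO)
m = log_w.max()                                             # log-sum-exp stabilisation
cubo = 0.5 * (np.log(np.mean(np.exp(2.0 * (log_w - m)))) + 2.0 * m)  # chi^2 upper bound (Eq. 4)

print(f"ELBO {elbo:.3f}  <=  log P(x=0) {np.log(0.5):.3f}  <=  CUBO {cubo:.3f}")
```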
We adapt Argmax Flow approach for learning a stochastic policy.\nBefore diving into the specifics of the Flow Policy, we first discuss our rationale for choosing the normalizing flow model over other contemporary deep generative models, such as Generative Adversarial Networks (GANs) [12] and Diffusion Models [15,29]. GANs, categorized as implicit generative models, do not allow for an estimation of the data's log probability. While prior research has successfully devised an approach for exact log probability estimation with Diffusion Models, these Algorithm 1: ELBO Optimization\n1 Input: Invertible flow F w = f w,k • . . . • f w,1 , State encoder E ρ , Posterior q ψ , rollout τ 2 Sample States = {(s (i) )} B i=1 ∼ τ 3 for s ∈ States do 4 for j ← 1 : n_elbo_steps do 5 z (j) = F w (z (j) 0 ), where z (j) 0 ∼ E ρ (s) 6 a (j) = argmax z (j) 7 for j ← 1 : n_elbo_steps do 8 z ′ (jn) = threshold T (u (jn) ), where {(u (jn) )} N n=1 ∼ q ψ (•|a (j) , s) 9 z ′(jn) 0 = F -1 w (z ′(jn) )10\nTake a gradient ascending step of ELBO w.r.t. ρ, w and\nψ 11 L = 1 N N i=1 log p ρ (z ′ (jn) 0 |s) - K k=1 log det ∂f w,k ∂z ′(jn) k-1 -log q ψ (z ′ (jn) |a (j) , s)\nmodels encounter the problem of slow sampling and expensive probability estimation, necessitating the resolution of an ordinary differential equation [29]. In contrast, normalizing flow has low cost for sampling and probability evaluation in our case, as we find the simple flow model is sufficient to learn a good policy. So normalizing flow is a natural choice to construct our policy.\nWe demonstrate how to construct the flow policy within the A2C framework. An illustration is provided in Figure 2. To define the policy, π(a|s), we first encode the state that is then used to define the base distribution z 0 . In accordance with standard practice, we select z 0 as a Gaussian distribution with parameters µ, σ defined by the state encoder (a state-dependent neural network):\nz 0 = µ ρ (s) + σ ρ (s)\n• ϵ where ϵ ∼ N (0, 1). We write this as z 0 ∼ E ρ (s), where ρ denotes the weights of the state encoder. We then apply a series of invertible transformations given as functions f k that define the flow, followed by the argmax operation (as defined in background on Argmax flow). Consequently, the final sampled action is given by a [26] and we use F w = f w,K • . . . • f w,1 to denote the composed function, where w, k is the weight of the network representing f k . Thus, sampled action a = argmax (F w (z 0 )) for z 0 ∼ E ρ (s). We use shorthand θ = (ρ, w) when needed. Also, we use z k = f w,k (z k-1 ) to denote the output of f w,k and p ρ (z 0 |s) to denote the probability density of z 0 ∼ N (µ ρ , σ ρ ).\n= argmax (f K • . . . • f 1 (z 0 )). Each f k is an invertible neural network\nWe use a variational distribution q ψ for the reverse mapping from a discrete action (conditional on state s) to a continuous output of F w (z 0 ). The corresponding estimate of the log probability of the sampled action a, denoted by lπ , is the evidence lower bound ELBO, computed as follows:\nlπ (a|s) = L = E z K ∼q ψ (•|a,s) log p ρ (z 0 |s) - K k=1 log det ∂f k ∂z k-1 -log q ψ (z K |a, s)(5)\nTo ensure that our approximation closely approximates the evidence, we optimize ELBO progressively, following the training scheme of Argmax flow, as shown in the subroutine Algorithm 1 used within the outer A2C framework in Algorithm 2. 
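For concreteness, the following is a minimal sketch of this sampling path (state encoder, then invertible flow layers, then argmax), with simple elementwise affine bijections standing in for the flow layers actually used; layer sizes are illustrative, and the variational posterior q_ψ used for the log-probability estimate of Equation 5 is omitted here.

```python
import torch
import torch.nn as nn


class FlowPolicy(nn.Module):
    """Sample a categorical action a = argmax(F_w(z0)) with z0 ~ N(mu(s), sigma(s))."""

    def __init__(self, state_dim: int, n_dims: int, n_cats: int, hidden: int = 64):
        super().__init__()
        self.n_dims, self.n_cats = n_dims, n_cats
        # State encoder E_rho: produces the base-distribution parameters (mu, log sigma).
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, 2 * n_dims * n_cats))
        # A stack of invertible elementwise affine layers F_w (scale-and-shift bijections).
        self.log_scales = nn.ParameterList([nn.Parameter(torch.zeros(n_dims * n_cats)) for _ in range(2)])
        self.shifts = nn.ParameterList([nn.Parameter(torch.zeros(n_dims * n_cats)) for _ in range(2)])

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        mu, log_sigma = self.encoder(state).chunk(2, dim=-1)
        z = mu + log_sigma.exp() * torch.randn_like(mu)      # z0 ~ E_rho(s)
        for log_a, b in zip(self.log_scales, self.shifts):   # z_k = f_k(z_{k-1})
            z = z * log_a.exp() + b
        v = z.view(-1, self.n_dims, self.n_cats)
        return v.argmax(dim=-1)                              # one category per action dimension


policy = FlowPolicy(state_dim=8, n_dims=3, n_cats=9)         # e.g. 3 resources over 9 districts
print(policy(torch.randn(1, 8)))                             # tensor of shape (1, 3)
```

In the full method, the log probability of the sampled action is not read off this forward pass; it is estimated separately from the variational posterior q_ψ via the ELBO of Equation 5.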
The target distribution in this subroutine is given by actions a (j) sampled from the current policy (line 6); thus, this subroutine aims to update the flow network components (ρ, w) to make the distribution of overall policy be closer to this target distribution and to improve the posterior q ψ estimate (ELBO update in line 11), which gets used in estimating the log probability of action required by the outer A2C framework (shown later in Algorithm 2). The various quantities needed for ELBO update are obtained in lines 8, 9 by passing a (j) back through the invertible flow layers (using q ψ for the discrete to continuous inverse map)." }, { "figure_ref": [ "fig_8" ], "heading": "Policy Gradient with Invalid Action Rejection", "publication_ref": [], "table_ref": [], "text": "We aim to use the flow-based policy within an A2C framework, illustrated in Figure 1; here we describe the same along with how to enforce arbitrary state based constraint on actions. We propose a sandwich estimator which combines ELBO and CUBO to obtain a low-bias estimation of log π(a|s), thus improving the convergence of policy gradient learning. Moreover, to tackle the challenge of aforementioned complex constraints, our algorithm proceeds by sampling a number of actions from the flow-based policy and then utilizing the constraint oracle to filter the invalid actions. We then provide theoretical results showing the adaptation of the policy gradient computation accordingly." }, { "figure_ref": [], "heading": "Sandwich estimator:", "publication_ref": [ "b4", "b34", "b5", "b20", "b24", "b22" ], "table_ref": [], "text": "The ELBO lπ is a biased estimator of log π(a|s), but it is known in literature [5] that lπ is a consistent estimator (i.e., converges to log π(a|s) in the limit) and there are generative methods based on this consistency property, such as [33]. However, we aim to use lπ in the policy gradient and stochastic gradient descent typically uses an unbiased gradient estimate (but not always, see [6]). Therefore, we propose to use a new technique which combines ELBO and CUBO to reduce the bias, improving the convergence of our policy gradient based learning process. In particular, we estimate an upper bound of log π(a|s) using the following CUBO:\nlu π (a|s) = L χ 2 = 1 2 log E z K ∼q ψ (z K |a,s) p ρ (z 0 |s) K k=1 det ∂f w,k ∂z k-1 q ψ (z K |a, s) 2(6)\nWe then use a weighted average of the upper and lower bounds as a low-bias estimate of log π(a|s), denoted by log p θ,ψ = α lπ + (1 -α) lu π where α is a hyperparameter. We call this the sandwich estimator of log probability. We observed in our experiments that an adaptive α( lπ , lu π ) as a function of the two bounds provides better results than a static α = 1 2 (see Appendix B.1 for more details). Constraints: Since the agent only has oracle access to the constraints, existing safe exploration approaches [20,24] are not directly applicable to this setting. If the agent queries the validity of all actions, then it gains complete knowledge of the valid action set C(s). However, querying action validity for the entire action space at each timestep can be time-consuming, particularly when the action space is large, e.g., 1000 categories. We demonstrate this issue in our experiments.\nTo address this challenge of complex state-dependent constraints in the full observation setting, we propose a new policy gradient algorithm called Invalid Action Rejection Advantage Actor-Critic (IAR-A2C). 
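Returning to the sandwich estimator introduced above, the snippet below shows the weighted combination of the two bounds. The gap-aware weighting function is a hypothetical example of an adaptive α; the specific adaptive scheme used in the experiments is described in Appendix B.1 and is not reproduced here.

```python
import torch


def sandwich_log_prob(elbo: torch.Tensor, cubo: torch.Tensor, alpha=0.5) -> torch.Tensor:
    """Low-bias estimate of log pi(a|s): alpha * ELBO + (1 - alpha) * CUBO.

    alpha can be a constant in [0, 1] or a callable alpha(elbo, cubo) implementing an
    adaptive weight conditioned on the two bounds.
    """
    if callable(alpha):
        alpha = alpha(elbo, cubo)
    return alpha * elbo + (1.0 - alpha) * cubo


# Hypothetical adaptive weight: approaches the midpoint 0.5 as the bounds tighten and
# leans more on the ELBO when the gap is large (for illustration only).
adaptive = lambda lo, up: torch.sigmoid((up - lo).detach())
print(sandwich_log_prob(torch.tensor(-2.0), torch.tensor(-0.4), alpha=adaptive))
```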
IAR-A2C operates by sampling a set of actions from the current flow-based policy and then leveraging the constraint oracle to reject all invalid actions, enabling the agent to explore safely with only valid actions. We then derive the policy gradient estimator for this algorithm. Recall that C(s) is the set of valid actions in state s and let C I (s) be the set of invalid actions. We sample an action from π θ (a|s) (recall θ = (ρ, w) when using Flow Policy), rejecting any invalid action till a valid action is obtained. Clearly, the effective policy π ′ induced by this rejection produces only valid actions. In fact, by renormalizing we obtain π ′ θ (a|s) = π θ (a|s)\na i ∈C(s) π θ (ai|s) for a ∈ C(s). For the purpose of policy gradient, we need to obtain the gradient of the long term reward with π ′ : J(θ) = E s∼b0 [V (s; π ′ )]. We show that:\nTheorem 1 With π ′ defined as above, ∇ θ J(θ) = E π ′ Q π ′ (s, a)∇ θ log π θ (a|s) -Q π ′ (s, a) a∈C(s) ∇ θ π θ (a|s) a∈C(s) π θ (a|s)(7)\nIn practice, analogous to standard approach in literature [22] to reduce variance, we use the TD error (an approximation of advantage) form instead of the Q function in Theorem 1. Then, given the network π θ , the empirical estimate of the first gradient term is readily obtainable from trajectory samples of π ′ . To estimate the second term, for every state s in the trajectories, we sample S actions from π θ (a|s) and reject any a ∈ C I (s) to get l ≤ S valid actions a 1 , ..., a l . We then apply Lemma 1:\nLemma 1 1 l j∈[l]\n∇ θ log π θ (a j |s) and l S are unbiased estimates of a∈C(s) ∇ θ π θ (a|s) and a∈C(s) π θ (a|s) respectively.\nThe full approach is shown in Algorithm 2, with the changes from standard A2C highlighted in red. Recall that, θ = (ρ, w) is the parameter of the flow-based policy network, ϕ is the parameter of the critic, and ψ is the parameter of the variational distribution network q ψ for ELBO." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b10", "b31", "b6", "b3", "b36", "b35", "b37", "b11", "b14", "b11", "b1", "b17", "b20", "b8", "b24", "b27", "b18", "b21", "b34", "b21", "b34", "b18" ], "table_ref": [], "text": "Large discrete action space. Dulac-Arnold et al. [10] attempt to solve this problem by embedding discrete actions into a continuous space; however, this approach does not work for our unordered Algorithm 2: IAR-A2C \n+ (R -V ϕ (s i )) ∇ θ,ψ log p θ,ψ (a i |s i ) - Sj l 2 j j∈[l] ∇ θ,ψ log p θ,ψ (a ij |s i ) 15 Accumulate gradients w.r.t. ϕ: dϕ ← dϕ + ∂(R -V ϕ (s i )) 2 /∂ϕ16\nPerform update of θ using dθ, of ψ using dψ and of ϕ using dϕ Execute ELBO updating by running Algorithm 1 with τ as input\n18 E ← E + 1 19 until E > E max\ndiscrete space as we reveal by thorough comparison in our experiments. Another type of approach relies on being able to represent a joint probability distribution over multi-dimensional discrete actions using marginal probability distribution over each dimension (also called factored representation). Tang and Agrawal [30] apply this approach to discretized continuous control problems to decrease the learning complexity. Similarly, Delalleau et al. [7] assumes independence among dimensions to model each dimension independently. However, it is well-known in optimization and randomized allocation literature [4,35] that dimension independence or marginal representation is not valid in the presence of constraints on the support of the probability distribution. 
Another type of approach [34,36] converts the choice of multi-dimensional discrete action into a sequential choice of action across the dimensions at each time step (using a LSTM based policy network), where the choice of action in any dimension is conditioned on actions chosen for prior dimensions.\nSafe actions. Our approach falls into the field of constrained action space in RL. Situated in the broader safe RL literature [11,14], these works aim to constrain actions at each step for various purposes, including safety. Notably, we do not have an auxiliary cost that we wish to bound, thus, frameworks based on Constrained MDP [11] cannot solve our problem. Also, our state dependent constraints if written as inequalities can be extremely large and hence Lagrangian methods [2] that pull constraints into the objective are infeasible. Most of the methods that we know for constrained actions need the constraints to be written as mathematical inequalities and even then cannot handle state dependent constraints. Although action masking [17] does offer a solution for state-dependent constraints, it necessitates supplementary information in the form of masks. Thus, they do not readily apply to our problem [20]. Some of these methods [9,24] aim at differentiating through an optimization; these methods are slow due to the presence of optimization in the forward pass and some approaches to make them faster [27] work only with linear constraints; all these approaches scale poorly in the number of inequality constraints.\nNormalizing flow policy. The application of normalizing flows in RL has been an emerging field of study with a focus on complex policy modeling, efficient exploration, and control stability [18,21,33]. Normalizing flow was integrated into the Soft Actor-Critic framework to enhance the modelling expressivity beyond conventional conditional Gaussian policies and achieve more efficient exploration and higher rewards [21,33]. In robotic manipulation, Khader et al. [18] proposed to improve the control stability by a novel normalizing-flow control structure. However, these works primarily focus on continuous actions. To the best of our knowledge, ours is the first work that has successfully incorporated a discrete flow, extending the scope of flow policies into discrete action spaces." }, { "figure_ref": [ "fig_2" ], "heading": "Experiments", "publication_ref": [ "b2", "b32", "b10", "b22", "b17" ], "table_ref": [], "text": "Through our experiments, we aim to address two main research questions related to our primary contributions: (1) Is the flow-based policy effective in representing categorical actions? (2) Does IAR-A2C offer advantages in constrained action space with oracle constraints? We address these two questions in Sections 5.1 and 5.2 accordingly. Following these, we present an ablation study that examines the effectiveness of various modules in our approach. We first describe our set-up. We evaluate IAR-A2C against prior works across a diverse set of environments, including lowdimensional discrete control tasks such as CartPole and Acrobot [3], the visually challenging Pistonball task [31] with high-dimensional image inputs and an extremely large action space (upto 59, 049 categories), and an emergency resource allocation simulator in a city, referred to as Emergency Resource Allocation (ERA). CartPole and Acrobot are well-established environments. 
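To make the size of these joint action spaces concrete, the short sketch below shows one way to flatten per-piston moves into a single categorical index (and back) when each of N pistons chooses among 3 discrete moves, which is how a 10-piston configuration yields 3^10 = 59,049 categories; the digit ordering is an arbitrary illustrative choice rather than the encoding used in the code release.

```python
def decode_joint_action(index: int, n_pistons: int, n_moves: int = 3) -> list:
    """Map one categorical index in [0, n_moves**n_pistons) to a per-piston move list."""
    moves = []
    for _ in range(n_pistons):
        index, move = divmod(index, n_moves)
        moves.append(move)
    return moves


def encode_joint_action(moves: list, n_moves: int = 3) -> int:
    """Inverse map: per-piston moves back to a single categorical index."""
    index = 0
    for move in reversed(moves):
        index = index * n_moves + move
    return index


# 10 pistons with 3 moves each give 3**10 = 59,049 joint categorical actions.
assert encode_joint_action(decode_joint_action(12345, 10)) == 12345
print(decode_joint_action(12345, 10))
```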
Pistonball is also a standard environment where a series of pistons on a line needs to move (up, down, or stay) to move a ball from right to left (Figure 3). While the Pistonball environment was originally designed for a multi-agent setting with each piston controlled by a distinct agent, we reconfigure this as a single-agent control of all pistons. This modification presents a challenging task for the central controller as the action space is exponentially large. We show results for Acrobot and three versions of Pistonball: v1 with 3 5 , v2 with 3 8 , and v3 with 3 10 actions. Results for CartPole are in the appendix.\nFinally, our custom environment, named ERA, simulates a city that is divided into different districts, represented by a graph where nodes denote districts and edges signify connectivity between them. An action is to allocate a limited number of resources to the nodes of the graph. A tuple including graph, current allocation, and the emergency events is the state of the underlying MDP. The allocations change every time step but an allocation action is subject to constraints, namely that a resource can only move to a neighboring node and resources must be located in proximity (e.g. within 2 hops on the graph) as they collaborate to perform tasks. Emergency events arise at random on the nodes, and the decision maker aims to minimize the costs associated with such events by attending to them as quickly as possible. Moreover, we explore a partially observable scenario (with no constraints) in which the optimal allocation is randomized, thus, the next node for a resource is sampled from the probability distribution over neighboring nodes that the stochastic policy represents (see appendix for set-up details). We show results for five versions of ERA: ERA-Partial with 9 actions and partial observability in unconstrained scenario, while v2 with 7 3 actions, v3 with 8 3 actions, v4 with 9 3 actions, and v5 with 10 3 actions in constrained scenario. The results for ERA-v1 are in Appendix C.\nBenchmarks: In the unconstrained setting, we compare our approach to Wol-DDPG [10], A2C [22], factored representation (Factored, discussed in Section 4), and autoregressive approach (AR, discussed in Section 4). Wol-DDPG is chosen as it is designed to handle large discrete action spaces without relying on the dimension independence assumption. In the oracle constraints setting, we compare our method to action masking (MASK) [17], which determines the action mask by querying all actions within the action space. As far as we know action masking is currently the only existing approach for constraining action with oracle constraints, we also include a comparison with IAR-augmented AR (AR+IAR) which is able to handle the constraints by utilizing our invalid action rejection technique, as well as a comparison with Wol-DDPG to demonstrate the performance of a method that cannot enforce the constraints." }, { "figure_ref": [ "fig_3", "fig_5" ], "heading": "Learning in Categorical Action Space without Constraints", "publication_ref": [ "b10", "b28" ], "table_ref": [], "text": "In the Acrobot environments, we use A2C as the benchmark because this task is relatively simple and A2C can easily learn the optimal policy. For the Pistonball environment, we consider both Wol-DDPG and A2C as benchmarks. The results are displayed in Figure 4. We also conduct experiments on CartPole and ERA-v5 with all constraints removed, which can be found in Appendix C. 
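The ERA constraints above (each resource may only move to an adjacent district, and the resources must stay within a small number of hops of one another) are exactly the kind of state-dependent rule that is easy to check but awkward to write as inequalities. A minimal sketch of such a validity oracle is given below; the function names, the 2-hop limit, and the 5-node line graph are illustrative, and the actual graphs and cost matrices are defined in the released ERA configuration files (Appendix E).

```python
from collections import deque
from itertools import combinations


def hop_distance(adj, src, dst):
    """BFS shortest-path distance (in hops) between two nodes of the district graph."""
    if src == dst:
        return 0
    seen, frontier, d = {src}, deque([src]), 0
    while frontier:
        d += 1
        for _ in range(len(frontier)):
            u = frontier.popleft()
            for v, connected in enumerate(adj[u]):
                if connected and v not in seen:
                    if v == dst:
                        return d
                    seen.add(v)
                    frontier.append(v)
    return float("inf")


def is_valid_allocation(adj, current, proposed, max_hops=2):
    """Oracle C(s): each resource stays put or moves to an adjacent district, and every
    pair of resources ends up within max_hops of each other."""
    for a, b in zip(current, proposed):
        if a != b and not adj[a][b]:
            return False
    return all(hop_distance(adj, a, b) <= max_hops for a, b in combinations(proposed, 2))


# 5 districts on a line: 0-1-2-3-4, three resources currently at (0, 1, 2).
adj = [[0, 1, 0, 0, 0], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0], [0, 0, 1, 0, 1], [0, 0, 0, 1, 0]]
print(is_valid_allocation(adj, (0, 1, 2), (1, 2, 3)))  # True: neighbour moves, all within 2 hops
print(is_valid_allocation(adj, (0, 1, 2), (0, 1, 4)))  # False: 2 -> 4 is not an adjacent move
```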
The results of Acrobot demonstrate that the flow-based policy exhibits comparable performance to the optimal policy (A2C), which also highlights the effectiveness of our sandwich estimator for log π(a|s). In more challenging environments with highdimensional image observations and extremely large action spaces, our model has comparable performance to Factored and AR, while significantly outperforming A2C and Wol-DDPG. Interestingly, we observe that Wol-DDPG struggles to learn even in the simplest Pistonball environment, despite being designed for large discrete action spaces. We hypothesize that Wol-DDPG might function properly if the actions are discrete yet ordered, as demonstrated in the quantized continuous control task presented in the Wol-DDPG paper [10].\nOn ERA-Partial, we demonstrate the known fact that the optimal policy in environments with partial observability may be a stochastic one [28]. We compare with the factored approach (Figure 6). In this environment, the optimal stochastic policy is a joint distribution that is not factorizable into a product of independent distributions over each dimension. Thus, the factored approach cannot effectively represent the joint distribution due to its independence assumption among dimensions. The experiment results show our approach significantly outperforms the factored approach. We further present a smaller example in the appendix which shows that the inability to represent an arbitrary joint distribution makes the factored approach extremely suboptimal in partial observable settings." }, { "figure_ref": [ "fig_4", "fig_4", "fig_6" ], "heading": "Learning in Categorical Action Space with State-Dependent Constraints", "publication_ref": [], "table_ref": [], "text": "We now address the second question about constraint enforcement by IAR-A2C. The results are in Figure 5. We observe that our approach demonstrates better or comparable performance compared to the benchmarks. AR is known as a strong baseline for environments with large discrete action space, but surprisingly, it performs poorly. We hypothesize this is due to the case that the autoregressive model does not have a sense of the constraints of the remaining dimensions when it outputs the action for the first dimension, thereby producing first dimension actions that may be optimal without constraints but are suboptimal with constraints. Detailed analysis and experimental evidence to support our hypothesis are provided in Appendix D. Also, action masking achieves its performance by querying all actions of the entire action space, whereas our approach only requires querying a batch of actions, which is substantially smaller(e.g., 64 versus 1, 000 for ERA-v5). Thus, while Figure 5 shows IAR-A2C taking more iteration for convergence, the same figure when drawn with the x-axis as wall clock time in Figure 7 (shown only for v4 here, others are in appendix C) shows an order of magnitude faster convergence in wall clock time. Another critical property of our approach is the guaranteed absence of constraint violations, similar to the action masking method. However, while action masking demands the full knowledge of the validity of all actions, our method merely requires the validity of the sampled actions within a batch. Note that Wol-DDPG can and does violate constraints during the learning process. Further details regarding the action violation of Wol-DDPG are provided in the appendix C. 
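As a sanity check on the corrected policy gradient of Theorem 1 that underlies these results, the short autograd snippet below verifies, on a small softmax policy with an arbitrary valid set, that the gradient of log π′(a|s) equals the two-term expression inside the expectation; the policy size, valid set, and action index are arbitrary.

```python
import torch

torch.manual_seed(0)
K = 6                                              # number of categorical actions
valid = torch.tensor([0, 2, 3])                    # indices of C(s) for some fixed state s
logits = torch.randn(K, requires_grad=True)
pi = torch.softmax(logits, dim=0)                  # pi_theta(.|s)
a = 2                                              # the executed valid action

# Left-hand side: grad of log pi'(a|s) = log[ pi(a|s) / sum_{a' in C(s)} pi(a'|s) ]
log_pi_prime = torch.log(pi[a]) - torch.log(pi[valid].sum())
lhs = torch.autograd.grad(log_pi_prime, logits, retain_graph=True)[0]

# Right-hand side: the per-sample integrand of Theorem 1,
#   grad log pi(a|s) - sum_{a' in C(s)} grad pi(a'|s) / sum_{a' in C(s)} pi(a'|s)
g_log = torch.autograd.grad(torch.log(pi[a]), logits, retain_graph=True)[0]
g_sum = torch.autograd.grad(pi[valid].sum(), logits)[0]
rhs = g_log - g_sum / pi[valid].sum().detach()

print(torch.allclose(lhs, rhs, atol=1e-6))         # True: the two gradient forms coincide
```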
We conduct ablation studies of IAR-A2C on ERA-v4 to investigate the effect of various choices of modules and estimators in our approach." }, { "figure_ref": [ "fig_7", "fig_7", "fig_7" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "Policy Gradient: We compare the performance of approaches using the policy gradient provided by Theorem 1 (gradient correction) and the original policy gradient of A2C (standard policy gradient), while still applying invalid action rejection. We observe in Figure 8a that the number of valid actions in the batch decreases rapidly, and the program may crash if no valid actions are available.\nSandwich Estimator: We examine the effects if we use only the ELBO estimator for log probability of action instead of our sandwich estimator. We find that the ELBO estimator is also prone to a reduction in valid actions in Figure 8a and unstable learning as a consequence, similar to the observation when using the standard policy gradient.\nPosterior Type: The posterior q(z|a, s) can be modeled by a conditional Gaussian or a normalizing flow. In our experiments, we discover that modelling q with a flow posterior is crucial for learning, as it can approximate the true posterior more effectively than a Gaussian, as seen in Figure 8b." }, { "figure_ref": [], "heading": "Conclusion and Limitations", "publication_ref": [], "table_ref": [], "text": "We introduced a novel discrete normalizing flow based architecture and an action rejection approach to enforce constraints on actions in order to handle categorical action spaces with state dependent oracle constraints in RL. Our approach shows superior performance compared to available baselines, and we analyzed the importance of critical modules of our approach. A limitation of our method is in scenarios where the fraction of valid actions out of all actions is very small, and hence our sampling based rejection will need a lot of samples to be effective, making training slower. This motivates future work on improved sampling; further, better estimation of the log probability of actions is also a promising research direction for improved performance." }, { "figure_ref": [], "heading": "A Proofs and Derivation", "publication_ref": [], "table_ref": [], "text": "A.1 Proof for Theorem 1\nThe following sequence of equations show that proof, relying on the fact that π ′ (a|s) = π θ (a|s) a i ∈C(s) π θ (ai|s) . We start with the standard policy gradient for any policy π ′ , shown in the first line below, and then replace π ′ (a|s) = π θ (a|s) a i ∈C(s) π θ (ai|s) in the second line, followed by standard manipulation of the log function.\n∇ θ J(θ) = E π ′ Q π ′ (s, a)∇ θ log π ′ (a|s)(8)\n= E π ′ Q π ′ (s, a)∇ θ log π θ (a|s) ai∈C(s) π θ (a i |s)(9)\n= E π ′ Q π ′ (s, a)∇ θ log π θ (a|s) -Q π ′ (s, a)∇ θ log ai∈C(s) π θ (a i |s)(10)\n= E π ′ Q π ′ (s, a)∇ θ log π θ (a|s) -Q π ′ (s, a) ai∈C(s) ∇ θ π θ (a i |s) ai∈C(s) π θ (a i |s)(11)\nA.2 Proof for Lemma 1\nFix state s and consider a function\nF (a) = ∇ θ π θ (a|s) π θ (a|s) for a ∈ C(s) 0 otherwise . Then, E π [F (a)] = a∈C(s) ∇ θ π θ (a|s)\nThus, if we obtain a sample average estimate for E π [F (a)] then it is an unbiased estimate for a∈C(s) ∇ θ π θ (a|s). For S samples from π with l being valid samples, the sample average estimate for\nE π [F (a)] is 1 l j∈[l] ∇ θ log π θ (a j |s).\nSimilarly, for the next estimate, consider a function G(a) = 1 for a ∈ C(s) 0 otherwise . 
Clearly, then\nE π [G(a)] = a∈C(s) π θ (a|s) and a sample avergae estimate of E π [G(a)] is l S ." }, { "figure_ref": [], "heading": "A.3 Soft Threshold Function", "publication_ref": [ "b16" ], "table_ref": [], "text": "In Argmax Flow [16], a threshold function was introduced to enforce the argmax constraints, i.e. the variational distribution q(v|x) should have support limited to S(x) = {v ∈ R D×K : x = arg max v}. The thresholding-based q(v|x) was defined by Alg. 3 in Argmax Flow. However, the formula to evaluate det dv/du is not given, which is essential when estimating ELBO lπ and CUBO lu π . We derive it here. Let's follow the notations in Alg. 3 of Argmax Flow. Suppose index i is the one that we want to be largest (i is a fixed index). The soft threshold function is given by\nv j = u i -log(1 + e ui-uj )\nNote that the threshold T = v x = u x (we cannot use v x to define v itself, so T is u x ). Then, α is a trainable parameter, updated by the policy gradient Adaptive α( lπ , lu π ) α is conditioned on the ELBO and the CUBO det dv/du is a K × K determinant, where only the elements on the diagonal and on the column i is non-zero, other elements are zero. We can unfold the determinant by the i-th row. Finally, we have\n• if j = i then v j = u i , ∂vj ∂u k = 1 • if k ̸ = j or k ̸ = i then ∂vj ∂u k = 0 • if k = i then ∂vj ∂u k = 1 - 1 1+e u i -u j × e ui-uj • if k = j then ∂vj ∂u k = 1 1+e u i -u j × e ui-uj\ndet dv/du = K j=1,j̸ =i ∂v j ∂u j = K j=1,j̸ =i sigmoid(u i -u j )\nB Experimental Details " }, { "figure_ref": [], "heading": "B.2 Effect of Invertible Functions F w", "publication_ref": [ "b16", "b1" ], "table_ref": [], "text": "We have explored different types of invertible function F w (called latent flow model) in our study, including affine coupling bijections2 , as well as more advanced models such as AR Flow and Coupling Flow, as described in section B.1 of Argmax Flow [16]. The AR Flow and Coupling Flow methods offer enhanced capabilities for modeling complex distributions, and they have been successfully applied to language modeling tasks within the Argmax Flow framework. However, through our experiments, we have observed that even a simple latent flow model is sufficient for achieving good performance and exhibits faster convergence. We attribute this finding to the fact that the increased parameterization in AR Flow requires a larger amount of training data to effectively learn. otherwise specified. When evaluating the model's performance at a specific timestep with a specific seed, we employ a separate set of 10 testing environments and report the mean return over these environments. Further details can be found in Tables A. 2. Note that n_envs denotes the number of environments running in parallel, lr denotes the learning rate, and batch size refers to the batch size when we execute ELBO updating (Algorithm 1 in the main paper). Furthermore, we will make the code used to reproduce these results publicly available." }, { "figure_ref": [], "heading": "B.4 Range of Considered hyperparameters", "publication_ref": [], "table_ref": [], "text": "We conducted experiments varying the number of samples used for estimating log π(a|s), specifically considering the values {1, 2, 4, 8}, as well as the inclusion of reward normalization. We find that using 2 or 4 samples generally leads to good performance across most of our experiments." 
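Returning to the soft threshold map of Appendix A.3, the snippet below implements it and checks the closed-form Jacobian determinant, the product of sigmoid(u_i - u_j) over j != i, against an autograd Jacobian; the dimension K = 5 and the index i are arbitrary.

```python
import torch


def soft_threshold(u: torch.Tensor, i: int) -> torch.Tensor:
    """v_i = u_i and v_j = u_i - log(1 + exp(u_i - u_j)) for j != i, so argmax(v) = i."""
    mask = torch.zeros_like(u, dtype=torch.bool)
    mask[i] = True
    soft = u[i] - torch.nn.functional.softplus(u[i] - u)   # softplus(x) = log(1 + e^x)
    return torch.where(mask, u, soft)


torch.manual_seed(0)
u, i = torch.randn(5), 2
jac = torch.autograd.functional.jacobian(lambda x: soft_threshold(x, i), u)
det_autograd = torch.det(jac)
others = torch.cat([u[:i], u[i + 1:]])
det_formula = torch.prod(torch.sigmoid(u[i] - others))
print(torch.allclose(det_autograd, det_formula, atol=1e-5))   # True
```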
}, { "figure_ref": [], "heading": "B.5 Network structure", "publication_ref": [ "b25", "b19" ], "table_ref": [], "text": "Our implementation is built on Stable-Baseline3 [25]. In different environments, different state encoders were exploited. We used MLP encoder for discrete control tasks and CNN encoder for Pistonball task. In ERA environment, a customized state encoder was applied to handle the graph state based on the implementation from [19]." }, { "figure_ref": [], "heading": "B.6 Computational resources", "publication_ref": [], "table_ref": [], "text": "Experiments were run on NVIDIA Quadro RTX 6000 GPUs, CUDA 11.0 with Python version 3.8.13 in Pytorch 1.11." }, { "figure_ref": [ "fig_7", "fig_8" ], "heading": "C Additional Experiment", "publication_ref": [ "b28" ], "table_ref": [ "tab_2" ], "text": "In this section, we present additional experimental results obtained from our study. In the non-constrained setting, we conduct experiments on CartPole, ERA-v5, and a toy environment with partial observability -Toy-Partial. For ERA-v5, we remove all constraints, allowing for the allocation of resources in any nodes. Figure A.2a illustrates that our approach achieves comparable performance to the best benchmark when the action space is not excessively large. Toy-Partial is built exactly on the example described in Section 3.1 of [28], in which the optimal stochastic policy can be arbitrarily better than any deterministic policy. We only modify the setting by augmenting the dimensionality of actions. There are two actions A and B in their example, while we use a 2-dimensional representation ((0, 1) representing action A, (1, 0) representing action B, (1, 1) and (0, 1) staying at the current state.) to simulate the case in multi-dimensional action space. Figure A.3 shows that our approach can perform significantly better than Factored approach in Toy-Partial.\nIn the constrained setting, we perform additional experiments on ERA-v1, Pistonball-v1, and Pistonball-v2. In the Pistonball environment, we introduce a constraint that restricts the upward movement of pistons on the left side to ensure the ball continues rolling to the left. Our experiments demonstrate that this constraint presents significant difficulty when the action space is large.\nRegarding ERA-v1, our model outperforms the benchmark, as depicted in For Pistonball, our model demonstrates comparable performance in the smaller environment (Pistonball-v1) and superior performance in the larger environment (Pistonball-v2) where the benchmarks struggle to effectively learn. These findings are presented in Figure A.5, which includes the average return over timesteps in the top plot and the best-till-now returns in the bottom plot. By best-till-now we mean the best evaluated return till the current timestamp, which is a commonly used metric particularly when the return done not increase monotonically over time steps and hence the best model might be an intermediate one.\nWe also observed that our approach can learn a stochastic optimal policy in ERA environment, which corresponds to our motivation that a stochastic policy is preferred in many resource allocation problems. 
For example, we have observed that in ERA-v4, a stochastic optimal allocation was learned by our approach in a given state (shown as a distribution of 200 sampled actions (Table A.3).\nFinally, we investigate the constraint violation of Wol-DDPG across ERA-v1 to v5 in Figure A.6.\nOur observations reveal that the constraint violation of Wol-DDPG can be significant, with a valid action ratio reaching only 15% in the worst-case scenario. Note that the valid action ratio mentioned here specifically pertains to the ratio of valid actions generated by the agent (the action output by the policy at each timestep) to all the actions output in a single episode, which differs from the valid action rate discussed in the ablation study in the main paper. In the ablation study shown in Figure 8a, the valid action rate refers to the fraction (l/S) of valid actions l within the S samples at a particular timestep in IAR framework shown in Figure 1. Thus, valid action rate is metric specific to our framework; note that valid action ratio for our framework is always 1 as IAR always outputs valid actions." }, { "figure_ref": [], "heading": "D AR in Constrained Scenario", "publication_ref": [], "table_ref": [ "tab_2", "tab_2" ], "text": "We have observed that AR approach does not perform well in the constrained scenario. We give our analysis and experimental evidence here.\nFirst, we describe our specific ERA set-up. We are aiming to allocate 3 resources to 9 areas with the 9 areas lying on a graph (we do not show the graph here). The constraints are that in any allocation, R2 and R3 must be within 2 hops (inclusive) of each other.\nEach allocation of resource R1, R2, R3 is given as a vector, e.g., (4, 0, 3) is the allocation of R1 to area 4, R2 to area 0, R3 to area 3. In a AR approach there is a dimension dependency in the policy network. Here allocation of R2 (output by a neural network, which we call R2 network) depends on allocation of R1 and allocation of R3 (output of R3 network) depends on allocation of R2 and R1. The R1 network, which outputs location of R1, is conditionally independent (note that R1, R2, R3 networks use the shared weights).\nShown below is an optimal allocation learned by our approach in state 1 (current allocation (8, 2, 8)), shown as a distribution of 200 sampled actions (Table A.3). This shows that R3 network has learned a good policy, but the R1 network is unable to independently produce the area 4 that could trigger the optimal output. We conjecture this could be due to fact that R1 is not aware of constraints since the constraint can be solely enforced by R2 network and R3 network. Hence, R1 might be outputting areas that can be optimal if there were no constraints. We have further evidence of the same happening in another state:\nShown below is an optimal allocation learned by our approach in state 2 (current allocation (1, 7, 5)), shown as a distribution of 200 sampled actions (Table A While this is not a complex example and may be resolved through another ordering, we wish to highlight that constraints can also be complex and they will always introduce issues with ordering dependent approaches. To handle such issues, one may need knowledge on the dimension dependency, or make efforts on trying different orders of generating the allocations. " }, { "figure_ref": [], "heading": "E Details of Emergency Resource Allocation Environment", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We describe the details of our custom environment ERA in this section. 
This environment simulates a city that is divided into different districts, represented by a graph where nodes denote districts and edges signify connectivity between them. An action is to allocate a limited number of resources to the nodes of the graph. A tuple including graph, current allocation, and the emergency events is the state of the underlying MDP. The allocations change every time step but an allocation action is subject to constraints, namely that a resource can only move to a neighboring node and resources must be located in proximity (e.g. within 2 hops on the graph) as they collaborate to perform tasks.\nEmergency events arise at random on the nodes and the decision maker aims to minimize the costs associated with such events by attending to them as quickly as possible. Finally, the optimal allocation is randomized, thus, the next node for a resource is sampled from the probability distribution over neighboring nodes that the stochastic policy represents.\nEach version of the ERA environment is characterized by an adjacency matrix that defines the connectivity of districts within the simulated city and a cost matrix that quantifies the expenses associated with traversing from one node to another. The agent's performance is evaluated based on the successful resolution of emergency events, leading to rewards, while penalties are incurred for failure to address the emergency. The agent's utility at each timestep encompasses the reward (or penalty) and the negative moving costs. To increase the complexity of the task, we introduce different types of emergency events. Each event type follows a fixed distribution over the nodes, and the event type itself is determined by a categorical distribution. For example, a distribution of [0.3, 0.7] means that 30% of the events are of type 1 and 70% are of type 2. The specifics of each version are presented in Table A.9, where the columns # rsc, # nodes, and hops represent the number of resources, the number of nodes in the graph, and the maximal allocation distance between two resources, respectively.\nERA-Partial is one setting with partial observability, which has three states but one possible observation. The RL agent is encouraged to change its (unobserved) state by obtaining a reward, otherwise it obtains a penalty. To perform well, the RL agent needs to perform stochastically. The ERA implementation, including configuration file that consists of the adjacency matrix, cost matrix and other relevant parameters, will be made available for reproducibility purposes." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This research is supported by the National Research Foundation Singapore under its AI Singapore Programme (Award Number: AISG2-RP-2020-016). Dr. Nguyen was supported by grant W911NF-20-1-0344 from the US Army Research Office." } ]
Many problems in Reinforcement Learning (RL) seek an optimal policy with large discrete multidimensional yet unordered action spaces; these include problems in randomized allocation of resources such as placements of multiple security resources and emergency response units, etc. A challenge in this setting is that the underlying action space is categorical (discrete and unordered) and large, for which existing RL methods do not perform well. Moreover, these problems require validity of the realized action (allocation); this validity constraint is often difficult to express compactly in a closed mathematical form. The allocation nature of the problem also prefers stochastic optimal policies, if one exists. In this work, we address these challenges by (1) applying a (state) conditional normalizing flow to compactly represent the stochastic policy -the compactness arises due to the network only producing one sampled action and the corresponding log probability of the action, which is then used by an actor-critic method; and (2) employing an invalid action rejection method (via a valid action oracle) to update the base policy. The action rejection is enabled by a modified policy gradient that we derive. Finally, we conduct extensive experiments to show the scalability of our approach compared to prior methods and the ability to enforce arbitrary state-conditional constraints on the support of the distribution of actions in any state 1 .
Generative Modelling of Stochastic Actions with Arbitrary Constraints in Reinforcement Learning
[ { "figure_caption": "Figure 2 :2Figure 2: Composition of a conditional flow p ρ,w (z K |s) and argmax transformation resulting in the policy π ρ,w (a|s). The flow maps from a base distribution p ρ (z 0 |s) by using a bijection F w . The diagram is adapted from Hoogeboom et al. [16].", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "1717", "figure_data": "", "figure_id": "fig_1", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Pistonball, goal is to move the ball to the left border by operating the pistons.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Learning curves over time in (a) Acrobot and (b) Pistonball-v[1-3] (without constraints). All approaches are trained with 5 seeds, except for Wol-DDPG, trained with 3 seeds due to its expensive training cost on Pistonball.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Learning curves in ERA-v[2-5] (with constraints). All settings are trained with 5 seeds.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Learning curves in ERA-Partial. Our approach obtains higher return than Factored approach.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Learning curves for wall clock time. Our approach converges much faster than action masking.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: (a) Ablation of gradient correction and sandwich estimator (b) Ablation of posterior type.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "B. 11Figure A.1: Performance with adaptive α and static α in ERA-v4.", "figure_data": "", "figure_id": "fig_8", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure A. 2 :2Figure A.2: (a) Learning curves in CartPole and ERA-v5 (without constraints); (b) Learning curves in ERA-v1 (with constraints).", "figure_data": "", "figure_id": "fig_9", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure A. 3 :3Figure A.3: Learning curves in Toy-Partial.", "figure_data": "", "figure_id": "fig_10", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure A. 4 :4Figure A.4: Learning curves for wall clock time.", "figure_data": "", "figure_id": "fig_11", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure A. 5 :5Figure A.5: Top: Learning curves in Pistonball-v1 and v2 (with constraints); Bottom: Best-till-now returns in Pistonball-v1 and v2.", "figure_data": "", "figure_id": "fig_12", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure A. 6 :6Figure A.6: Valid action ratio of Wol-DDPG during training.", "figure_data": "", "figure_id": "fig_13", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure A.2b. 
Additionally, we analyze the average return over wall clock time for ERA-v1, v2, v3, and v5, and our model exhibits an order of magnitude faster convergence, as illustrated in Figure A.4.", "figure_data": "", "figure_id": "fig_14", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Initialize step count t ← 1, Initialize episode counter E ← 1 2 repeat", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Reset gradients: d(θ, ψ) ← 0, dϕ ← 0, Initialize experience set τ , t start = t, get state s t Execute a t according to policy π ′ θ (a t |s t ) // Defined in Section 3.2 Get a number of valid and sampled actions l t and S t Add new experience τ ← τ ∪ {s t , a t , s t+1 , r t }, and update t ← t + 1 9 until terminal s t or t -t start = t max (a i |s i ) = α lπ (a i |s i ) + (1 -α) lu π (a i |s i ) // Sandwich estimator", "figure_data": "4repeat567Receive reward r t and new state s t+1810R =0 V ϕ (s t )for terminal s t for non-terminal s t // Bootstrap from last state11for i ∈ {t -1, ..., t start } do12R ← r i + γR13 log p θ,ψ 14 Accumulate gradients w.r.t. (θ, ψ): d(θ, ψ) ←d(θ, ψ)", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "1: Various Types of Alpha α", "figure_data": "Types of αRemarkStatic αα is fixed to be 0.5Trainable α", "figure_id": "tab_2", "figure_label": "A", "figure_type": "table" }, { "figure_caption": "The total timesteps of training for each environment are determined based on the convergence of our model and benchmarks. Typically, we train each setting using 5 different random seeds, unless", "figure_data": ".2: Optimization detailsEnvironmentn_envs lrOptimizer batch sizeCartPole83e-4 RMSprop 256Acrobot83e-4 RMSprop 256ERA-v5 w/o cstr643e-4 RMSprop 512Pistonball-v[1-3] w/o cstr 643e-4 RMSprop 512ERA-v[1-5] w/ cstr643e-4 RMSprop 256Pistonball-v[1-2] w/ cstr643e-4 RMSprop 512B.3 Optimization Details", "figure_id": "tab_3", "figure_label": "A", "figure_type": "table" }, { "figure_caption": "3: Optimal policy (ours) in state 1. Showing actions with top-3 probability. However, the above is not optimal in state 1. But, if we fix R1 to be in area 4 and R2 to be in area 0 and provide that as forced input to the R3 network, then we get the TableA.5 from the AR network.", "figure_data": "ActionNumber Distribution(4, 0, 3) 1000.5(4, 0, 6) 1000.5(0, 0, 0) 00.0.........Shown below is the best allocation learned by the AR approach in state 1, shown as a distribution of200 sampled actions (Table A.4).Table A.4: AR policy in state 1.ActionNumber Distribution(3, 6, 6) 1730.865(3, 3, 6) 170.085(6, 3, 6) 50.025.........Table A.5: AR policy in state 1 after fixing R1 in area 4 and R2 in area 0.ActionNumber Distribution(4, 0, 6) 1640.82(4, 0, 3) 360.18(0, 0, 0) 00.00.........", "figure_id": "tab_4", "figure_label": "A", "figure_type": "table" }, { "figure_caption": ".6).Shown in TableA.7 is the best allocation learned by the AR approach in state 2, shown as a distribution of 200 sampled actions. Again, if we fix R1 to be in area 4 and R2 to be in area 0 and provide that as forced input to the R3 network, then we get the TableA.8 from the AR network. 
This supports our claim that the R1 network finds it difficult to reason about constraints.", "figure_data": "Table A.6: Optimal policy (ours) in state 2.ActionNumber Distribution(4, 0, 3) 2001.0(4, 0, 6) 00.0(0, 0, 0) 00.0.........", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "7: AR policy in state 2.", "figure_data": "ActionNumber Distribution(3, 6, 3) 810.405(3, 6, 6) 810.405(6, 6, 3) 160.008.........Table A.8: AR policy in state 2.ActionNumber Distribution(4, 0, 3) 1970.985(4, 0, 1) 20.010(4, 0, 2) 10.005.........", "figure_id": "tab_6", "figure_label": "A", "figure_type": "table" } ]
Chen Changyu; Ramesha Karunasena; Thanh Hong Nguyen; Arunesh Sinha; Pradeep Varakantham
[ { "authors": "Abhinav Bhatia; Pradeep Varakantham; Akshat Kumar", "journal": "", "ref_id": "b0", "title": "Resource constrained deep reinforcement learning", "year": "2019" }, { "authors": "Steven Bohez; Abbas Abdolmaleki; Michael Neunert; Jonas Buchli; Nicolas Heess; Raia Hadsell", "journal": "", "ref_id": "b1", "title": "Value constrained model-free continuous control", "year": "2019" }, { "authors": "Greg Brockman; Vicki Cheung; Ludwig Pettersson; Jonas Schneider; John Schulman; Jie Tang; Wojciech Zaremba", "journal": "", "ref_id": "b2", "title": "Openai gym", "year": "2016" }, { "authors": "Eric Budish; Yeon-Koo Che; Fuhito Kojima; Paul Milgrom", "journal": "American economic review", "ref_id": "b3", "title": "Designing random allocation mechanisms: Theory and applications", "year": "2013" }, { "authors": "Yuri Burda; Roger B Grosse; Ruslan Salakhutdinov", "journal": "", "ref_id": "b4", "title": "Importance weighted autoencoders", "year": "2016" }, { "authors": "Jie Chen; Ronny Luss", "journal": "", "ref_id": "b5", "title": "Stochastic gradient descent with biased but consistent gradient estimators", "year": "2018" }, { "authors": "Olivier Delalleau; Maxim Peter; Eloi Alonso; Adrien Logut", "journal": "", "ref_id": "b6", "title": "Discrete and continuous action representation for practical rl in video games", "year": "2019" }, { "authors": "Adji Bousso; Dieng ; Dustin Tran; Rajesh Ranganath; John Paisley; David Blei", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b7", "title": "Variational inference via chi upper bound minimization", "year": "2017" }, { "authors": "L Priya; David Donti; J Zico Rolnick; Kolter", "journal": "", "ref_id": "b8", "title": "DC3: A learning method for optimization with hard constraints", "year": "2021" }, { "authors": " Openreview", "journal": "", "ref_id": "b9", "title": "", "year": "2021" }, { "authors": "Gabriel Dulac-Arnold; Richard Evans; Hado Van Hasselt; Peter Sunehag; Timothy Lillicrap; Jonathan Hunt; Timothy Mann; Theophane Weber; Thomas Degris; Ben Coppin", "journal": "", "ref_id": "b10", "title": "Deep reinforcement learning in large discrete action spaces", "year": "2015" }, { "authors": "Javier Garcıa; Fernando Fernández", "journal": "Journal of Machine Learning Research", "ref_id": "b11", "title": "A comprehensive survey on safe reinforcement learning", "year": "2015" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "Communications of the ACM", "ref_id": "b12", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "Sergei Pavlovich Grachev; Aleksei Aleksandrovich Zhilyaev; Vladimir Borisovich Laryukhin; Dmitrii Evgen'evich Novichkov; Vladimir Andreevich Galuzin; Elena V Simonova; Peter O Maiyorov; Skobelev", "journal": "Automation and Remote Control", "ref_id": "b13", "title": "Methods and tools for developing intelligent systems for solving complex real-time adaptive resource management problems", "year": "2021" }, { "authors": "Shangding Gu; Long Yang; Yali Du; Guang Chen; Florian Walter; Jun Wang; Yaodong Yang; Alois Knoll", "journal": "", "ref_id": "b14", "title": "A review of safe reinforcement learning: Methods, theory and applications", "year": "2022" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "", "ref_id": "b15", "title": "Denoising diffusion probabilistic models", "year": "2020-12-06" }, { "authors": "Emiel Hoogeboom; Didrik Nielsen; Priyank Jaini; Patrick 
Forré; Max Welling", "journal": "", "ref_id": "b16", "title": "Argmax flows and multinomial diffusion: Learning categorical distributions", "year": "2021-12-06" }, { "authors": "Shengyi Huang; Santiago Ontañón", "journal": "", "ref_id": "b17", "title": "A closer look at invalid action masking in policy gradient algorithms", "year": "2022" }, { "authors": "Abdul Shahbaz; Hang Khader; Pietro Yin; Danica Falco; Kragic", "journal": "IEEE", "ref_id": "b18", "title": "Learning stable normalizingflow control for robotic manipulation", "year": "2021" }, { "authors": "Dexun Li; Meghna Lowalekar; Pradeep Varakantham", "journal": "PMLR", "ref_id": "b19", "title": "Claim: Curriculum learning policy for influence maximization in unknown social networks", "year": "2021" }, { "authors": "Jyun-Li Lin; Wei Hung; Shang-Hsuan; Ping-Chun Yang; Xi Hsieh; Liu", "journal": "AUAI Press", "ref_id": "b20", "title": "Escaping from zero gradient: Revisiting action-constrained reinforcement learning via frank-wolfe policy optimization", "year": "2021-07-30" }, { "authors": "Bogdan Mazoure; Thang Doan; Audrey Durand; Joelle Pineau; Devon Hjelm", "journal": "PMLR", "ref_id": "b21", "title": "Leveraging exploration in off-policy algorithms via normalizing flows", "year": "2020" }, { "authors": "Volodymyr Mnih; Adrià Puigdomènech Badia; Mehdi Mirza; Alex Graves; Timothy P Lillicrap; Tim Harley; David Silver; Koray Kavukcuoglu", "journal": "", "ref_id": "b22", "title": "Asynchronous methods for deep reinforcement learning", "year": "2016" }, { "authors": "Ayan Mukhopadhyay; Geoffrey Pettet; Mohsen Sayyed; Di Vazirizade; Alejandro Lu; Said El Jaimes; Hiba Said; Yevgeniy Baroud; Mykel Vorobeychik; Abhishek Kochenderfer; Dubey", "journal": "Accident Analysis & Prevention", "ref_id": "b23", "title": "A review of incident prediction, resource allocation, and dispatch models for emergency management", "year": "2022" }, { "authors": "Tu-Hoa Pham; Giovanni De Magistris; Ryuki Tachibana", "journal": "IEEE", "ref_id": "b24", "title": "Optlayer-practical constrained optimization for deep reinforcement learning in the real world", "year": "2018" }, { "authors": "Antonin Raffin; Ashley Hill; Adam Gleave; Anssi Kanervisto; Maximilian Ernestus; Noah Dormann", "journal": "Journal of Machine Learning Research", "ref_id": "b25", "title": "Stable-baselines3: Reliable reinforcement learning implementations", "year": "2021" }, { "authors": "Danilo Jimenez; Rezende ; Shakir Mohamed", "journal": "", "ref_id": "b26", "title": "Variational inference with normalizing flows", "year": "2015-07-11" }, { "authors": "Sanket Shah; Arunesh Sinha; Pradeep Varakantham; Andrew Perrault; Milind Tambe", "journal": "AAAI Press", "ref_id": "b27", "title": "Solving online threat screening games using constrained action space reinforcement learning", "year": "2020" }, { "authors": "P Satinder; Tommi Singh; Michael I Jaakkola; Jordan", "journal": "Elsevier", "ref_id": "b28", "title": "Learning without state-estimation in partially observable markovian decision processes", "year": "1994" }, { "authors": "Yang Song; Jascha Sohl-Dickstein; Diederik P Kingma; Abhishek Kumar; Stefano Ermon; Ben Poole", "journal": "", "ref_id": "b29", "title": "Score-based generative modeling through stochastic differential equations", "year": "2021" }, { "authors": " Openreview", "journal": "", "ref_id": "b30", "title": "", "year": "2021" }, { "authors": "Yunhao Tang; Shipra Agrawal", "journal": "AAAI Press", "ref_id": "b31", "title": "Discretizing continuous action space for on-policy 
optimization", "year": "2020" }, { "authors": "Justin K Terry; Benjamin Black; Nathaniel Grammel; Mario Jayakumar; Ananth Hari; Ryan Sullivan; Luis S Santos; Clemens Dieffendahl; Caroline Horsch; Rodrigo Perez-Vicente; Niall L Williams; Yashas Lokesh; Praveen Ravi", "journal": "", "ref_id": "b32", "title": "Pettingzoo: Gym for multi-agent reinforcement learning", "year": "2021-12-06" }, { "authors": " Ivan B Vermeulen; Sylvia G Sander M Bohte; Han Elkhuizen; Piet Jm Lameris; Han Bakker; Poutré La", "journal": "Artificial intelligence in medicine", "ref_id": "b33", "title": "Adaptive resource allocation for efficient patient scheduling", "year": "2009" }, { "authors": "Patrick Nadeem; Ward ; Ariella Smofsky; Avishek Joey Bose", "journal": "", "ref_id": "b34", "title": "Improving exploration in soft-actor-critic with normalizing flows policies", "year": "2019" }, { "authors": "Mason Wright; Yongzhao Wang; Michael P Wellman", "journal": "EC", "ref_id": "b35", "title": "Iterated deep reinforcement learning in games: History-aware training for improved stability", "year": "2019" }, { "authors": "Haifeng Xu", "journal": "", "ref_id": "b36", "title": "The mysteries of security games: Equilibrium computation becomes combinatorial algorithm design", "year": "2016" }, { "authors": "Yiming Zhang; Quan Ho Vuong; Kenny Song; Xiao-Yue Gong; Keith W Ross", "journal": "", "ref_id": "b37", "title": "Efficient entropy for policy gradient with multidimensional action space", "year": "2018" } ]
[ { "formula_coordinates": [ 2, 154.93, 459.49, 349.74, 20.09 ], "formula_id": "formula_0", "formula_text": "J(π) = E s∼b0 [V (s; π)] where V (s; π) = E ∞ t=0 γ t r (s t , a t ) |s 0 = s; π(1)" }, { "formula_coordinates": [ 3, 224.81, 206, 279.86, 22.31 ], "formula_id": "formula_1", "formula_text": "p V (v) = p Z (z) • det dz dv , v = f (z)(2)" }, { "formula_coordinates": [ 3, 183.27, 351.51, 321.4, 8.96 ], "formula_id": "formula_2", "formula_text": "P (x) = P (x|v)p(v)dv, P (x|v) = δ(x = argmax(v))(3)" }, { "formula_coordinates": [ 3, 108, 421.49, 396, 9.96 ], "formula_id": "formula_3", "formula_text": "log P (x) ≥ E v∼q(•|x) [log P (x|v)+log p(v)-log q(v|x)] = E v∼q(•|x) [log p(v)-log q(v|x)] = L" }, { "formula_coordinates": [ 3, 109.2, 547.28, 394.8, 29.52 ], "formula_id": "formula_4", "formula_text": "log P (x) ≤ 1 2 log E v∼q(•|x) p(x,v) q(v|x)" }, { "formula_coordinates": [ 3, 107.64, 601.76, 397.03, 50.54 ], "formula_id": "formula_5", "formula_text": "log P (x) ≤ 1 2 log E v∼q(•|x) p(v) q(v|x) 2 = L χ 2 (4) Also, L χ 2 -log P (x) = 1 2 log(1 + D χ 2 (p(v|x))||q(v|x)))" }, { "formula_coordinates": [ 5, 106.81, 92.34, 359.52, 123.2 ], "formula_id": "formula_6", "formula_text": "1 Input: Invertible flow F w = f w,k • . . . • f w,1 , State encoder E ρ , Posterior q ψ , rollout τ 2 Sample States = {(s (i) )} B i=1 ∼ τ 3 for s ∈ States do 4 for j ← 1 : n_elbo_steps do 5 z (j) = F w (z (j) 0 ), where z (j) 0 ∼ E ρ (s) 6 a (j) = argmax z (j) 7 for j ← 1 : n_elbo_steps do 8 z ′ (jn) = threshold T (u (jn) ), where {(u (jn) )} N n=1 ∼ q ψ (•|a (j) , s) 9 z ′(jn) 0 = F -1 w (z ′(jn) )10" }, { "formula_coordinates": [ 5, 106.81, 207.28, 368.91, 31.4 ], "formula_id": "formula_7", "formula_text": "ψ 11 L = 1 N N i=1 log p ρ (z ′ (jn) 0 |s) - K k=1 log det ∂f w,k ∂z ′(jn) k-1 -log q ψ (z ′ (jn) |a (j) , s)" }, { "formula_coordinates": [ 5, 108, 368.82, 84.12, 9.65 ], "formula_id": "formula_8", "formula_text": "z 0 = µ ρ (s) + σ ρ (s)" }, { "formula_coordinates": [ 5, 108, 401.55, 396, 19.87 ], "formula_id": "formula_9", "formula_text": "= argmax (f K • . . . • f 1 (z 0 )). Each f k is an invertible neural network" }, { "formula_coordinates": [ 5, 113.97, 503.26, 390.7, 23.22 ], "formula_id": "formula_10", "formula_text": "lπ (a|s) = L = E z K ∼q ψ (•|a,s) log p ρ (z 0 |s) - K k=1 log det ∂f k ∂z k-1 -log q ψ (z K |a, s)(5)" }, { "formula_coordinates": [ 6, 152.47, 189.42, 352.2, 28.08 ], "formula_id": "formula_11", "formula_text": "lu π (a|s) = L χ 2 = 1 2 log E z K ∼q ψ (z K |a,s) p ρ (z 0 |s) K k=1 det ∂f w,k ∂z k-1 q ψ (z K |a, s) 2(6)" }, { "formula_coordinates": [ 6, 108, 473.91, 396.67, 44.74 ], "formula_id": "formula_12", "formula_text": "Theorem 1 With π ′ defined as above, ∇ θ J(θ) = E π ′ Q π ′ (s, a)∇ θ log π θ (a|s) -Q π ′ (s, a) a∈C(s) ∇ θ π θ (a|s) a∈C(s) π θ (a|s)(7)" }, { "formula_coordinates": [ 6, 108, 593.65, 79.78, 13.47 ], "formula_id": "formula_13", "formula_text": "Lemma 1 1 l j∈[l]" }, { "formula_coordinates": [ 7, 106.41, 267.78, 367.37, 42.24 ], "formula_id": "formula_14", "formula_text": "+ (R -V ϕ (s i )) ∇ θ,ψ log p θ,ψ (a i |s i ) - Sj l 2 j j∈[l] ∇ θ,ψ log p θ,ψ (a ij |s i ) 15 Accumulate gradients w.r.t. 
ϕ: dϕ ← dϕ + ∂(R -V ϕ (s i )) 2 /∂ϕ16" }, { "formula_coordinates": [ 7, 106.01, 323.57, 78.86, 21.56 ], "formula_id": "formula_15", "formula_text": "18 E ← E + 1 19 until E > E max" }, { "formula_coordinates": [ 14, 151.23, 196.54, 353.44, 13.37 ], "formula_id": "formula_16", "formula_text": "∇ θ J(θ) = E π ′ Q π ′ (s, a)∇ θ log π ′ (a|s)(8)" }, { "formula_coordinates": [ 14, 185.99, 218.19, 318.68, 24.72 ], "formula_id": "formula_17", "formula_text": "= E π ′ Q π ′ (s, a)∇ θ log π θ (a|s) ai∈C(s) π θ (a i |s)(9)" }, { "formula_coordinates": [ 14, 185.99, 248.22, 318.68, 24.25 ], "formula_id": "formula_18", "formula_text": "= E π ′ Q π ′ (s, a)∇ θ log π θ (a|s) -Q π ′ (s, a)∇ θ log ai∈C(s) π θ (a i |s)(10)" }, { "formula_coordinates": [ 14, 185.99, 278.5, 318.68, 26.6 ], "formula_id": "formula_19", "formula_text": "= E π ′ Q π ′ (s, a)∇ θ log π θ (a|s) -Q π ′ (s, a) ai∈C(s) ∇ θ π θ (a i |s) ai∈C(s) π θ (a i |s)(11)" }, { "formula_coordinates": [ 14, 245.43, 338.22, 174.56, 60.18 ], "formula_id": "formula_20", "formula_text": "F (a) = ∇ θ π θ (a|s) π θ (a|s) for a ∈ C(s) 0 otherwise . Then, E π [F (a)] = a∈C(s) ∇ θ π θ (a|s)" }, { "formula_coordinates": [ 14, 122.11, 430.98, 153.18, 13.47 ], "formula_id": "formula_21", "formula_text": "E π [F (a)] is 1 l j∈[l] ∇ θ log π θ (a j |s)." }, { "formula_coordinates": [ 14, 108, 478.13, 315.96, 13.47 ], "formula_id": "formula_22", "formula_text": "E π [G(a)] = a∈C(s) π θ (a|s) and a sample avergae estimate of E π [G(a)] is l S ." }, { "formula_coordinates": [ 14, 251.53, 616.4, 108.94, 11.72 ], "formula_id": "formula_23", "formula_text": "v j = u i -log(1 + e ui-uj )" }, { "formula_coordinates": [ 14, 112.98, 657.86, 185.48, 67.65 ], "formula_id": "formula_24", "formula_text": "• if j = i then v j = u i , ∂vj ∂u k = 1 • if k ̸ = j or k ̸ = i then ∂vj ∂u k = 0 • if k = i then ∂vj ∂u k = 1 - 1 1+e u i -u j × e ui-uj • if k = j then ∂vj ∂u k = 1 1+e u i -u j × e ui-uj" }, { "formula_coordinates": [ 15, 194.97, 196.16, 222.06, 30.55 ], "formula_id": "formula_25", "formula_text": "det dv/du = K j=1,j̸ =i ∂v j ∂u j = K j=1,j̸ =i sigmoid(u i -u j )" } ]
2023-11-26
[ { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Introduction", "publication_ref": [ "b17", "b2", "b5", "b40", "b40", "b7", "b39", "b47", "b48", "b49", "b46", "b39", "b49", "b47", "b48", "b39", "b49", "b39", "b47", "b49", "b47", "b49", "b39", "b39", "b49", "b33", "b38", "b50", "b39", "b47", "b49", "b27", "b7", "b39", "b47", "b48", "b49", "b40", "b49" ], "table_ref": [], "text": "Video anomaly detection (VAD) [18,23] aims to detect and locate abnormal events in videos, which is of great importance in various real-world applications, such as intelligent surveillance [33] and autonomous driving [6]. Yet, collecting a large-scale dataset with detailed temporal annotations of abnormal events is labor-intensive and time-consuming, which hinders the development of VAD. In recent years, * Corresponding author. [41] Figure 1. The illustration of the abnormality ratio distribution of test sets. The abnormality ratio varies across different videos, especially in XD-Violence [41] with higher abnormality ratios.\nweakly supervised video anomaly detection (WVAD), requiring solely video-level labels denoting the presence or absence of abnormal events, has attracted increasing attention [7,8,16,35,40,46,[48][49][50] and outperformed unsupervised methods [44,47] by a large margin.\nIn such a weakly supervised fashion, the principal challenge of WVAD stems from the lack of temporal annotations for abnormal events. To address this challenge, existing methods resort to certain abnormality criteria, such as feature magnitude [35,40], or attention [50], to identify topk potential abnormal snippets in labeled abnormal videos. These selected snippets also serve as pseudo temporal annotations [48,49], which are expected to provide supervision for distinguishing abnormal events from normal ones. Rather than directly training on pseudo temporal annotations, existing methods [35,40,50] draw inspiration from Multi-Instance Learning (MIL) [3,17] to improve the tolerance to the presence of mislabeled snippets. Specifically, the selected abnormal snippets are gathered as the positive bag and paired with a negative bag constructed from normal videos to train the anomaly classifier.\nAlthough existing methods [35,40,48,50] have demonstrated promising performance, they still suffer from three primary limitations. 1) Unreliable abnormality criteria. Previous abnormality criteria primarily rely on some as- sumptions [35] or black-box models [48,50], leading to less reliable pseudo temporal annotations. For instance, the widely employed feature magnitude [35,40] is based on a plausible assumption that abnormal snippets exhibit a larger feature magnitude than normal snippets. However, the mere reliance on large feature magnitudes does not guarantee sufficient discrimination for abnormal snippets. 2) Limitation of sample-level selection strategy. Previous methods [35,40,50] select top-k potential abnormal snippets for each video, without considering the varying abnormality ratio across different videos as shown in Fig. 1, where the abnormality ratio is defined as the proportion of abnormal snippets in each video. Uniformly selecting potential abnormal snippets in each video may neglect significant abnormal snippets in videos with higher abnormality ratios, thus missing instructive supervision for anomaly recognition. 3) Sensitivity to the misselection in abnormal video. Misselection of abnormal snippets is inevitable in WVAD, introducing label noise in pseudo temporal annotations. 
Despite the adoption of MIL, the anomaly classifier remains susceptible to label noise, trapped in the dilemma of recognizing mislabeled 'abnormal' snippets. By introducing the statistical principles underlying BatchNorm [14], we propose a novel BatchNorm-based WVAD model, dubbed BN-WVAD, to tackle the above limitations. From the statistical perspective, we observe that temporal features of abnormal events often exhibit characteristics of outliers [34,39] concerning the mean vector computed by BatchNorm, which predominantly captures the normality of the feature distribution [51]. In other words, the mean vector of BatchNorm can be regarded as a reference to distinguish potential abnormal snippets from normal ones. Accordingly, our BN-WVAD introduces the Divergence of Feature from Mean (DFM) as a novel abnormality criterion to supersede existing ones [35,40,48,50], discerning reliable potential abnormal snippets. Furthermore, we propose a Mean-based Pull-Push (MPP) loss to enhance the separation of DFM for abnormal features compared to normal features, as illustrated in Fig. 2.\nTo overcome the limitation of sample-level top-k selection, we draw inspiration from the focus of BatchNorm and introduce a Batch-level Selection (BLS) strategy to filter more potential abnormal snippets in the video with a higher occurrence of abnormal events. A Sample-Batch Selection (SBS) strategy is further devised to combine the advantages of sample-level and batch-level selection strategies. To enhance the tolerance to mislabeled abnormal snippets, we only train the vulnerable anomaly classifier in our BN-WVAD on certainly normal snippets from normal videos, mitigating confusion induced by label noise. Additionally, the proposed DFM criterion serves as the other discrimination criterion, which is acquired in the dense feature space and proves more resilient to the misselection [28,42]. As shown in Fig. 3, the final anomaly scores in the proposed BN-WVAD are calculated by aggregating the DFM scores with the prediction of the anomaly classifier.\nOur BN-WVAD is a straightforward yet effective model, surpassing existing methods [7,8,16,35,40,46,[48][49][50] and achieving SOTA performance on UCF-Crime [33] and XD-Violence [41]. Notably, the flexibility of incorporating our DFM criterion and BLS strategy [35,50] The insight that dense feature space exhibits increased robustness to misselection is also instructive for future research. Our main contributions are summarized as follows:\n• We introduce a novel BatchNorm-based WVAD model termed BN-WVAD, where our DFM criterion plays a crucial role in screening reliable abnormal snippets. The MPP loss is further proposed to gather normal features and enlarge DFM of potential abnormal features. • Inspired by the introduction of BatchNorm, we devise a sample-batch selection strategy to fully exploit instructive abnormal snippets within abnormal videos. • Our BN-WVAD calculates the final anomaly scores by aggregating the DFM scores with the prediction of the anomaly classifier, where the proposed DFM criterion is discriminative and more resilient to label noise." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b17", "b18", "b23", "b25", "b30", "b46", "b10", "b21", "b25", "b10", "b18", "b25", "b49", "b50", "b49" ], "table_ref": [], "text": "Unsupervised video anomaly detection. 
Restricted to the difficulty of collecting and annotating large-scale abnormal videos [33], unsupervised video anomaly detection (UVAD) [18] has been widely studied in the early years. Due to the only availability of normal videos, UVAD methods mainly focus on learning the normality and detecting abnormal events by identifying the deviations from normality, which is also deemed as the one-class classification problem [30]. The representative methods can be roughly divided into two categories: reconstruction-based methods [19,21,24,26,31,44,47] and regression-based methods [2, 11,13,22,25,26]. The former focuses on learning normal video representations by reconstruction, and the latter uses self-training [11,25] [19,26,50] explicitly model the prototypes of normality into the additional memory module. Although there is no explicit normality modeling in our method, Batch-Norm [14] serves as a simple memory module to store the normality of the feature distribution. Specifically, the mean vector computed by BatchNorm is statistically proved to be a good representation of normality [51], due to the overwhelming majority of normal snippets in videos. In particular, BatchNorm also spontaneously gathers the normal features in abnormal videos, which are neglected by previous methods [50]. Therefore, the mean vector of BatchNorm can be regarded as a statistical reference to separate potential abnormal and normal snippets." }, { "figure_ref": [ "fig_2" ], "heading": "The Proposed BN-WVAD Model", "publication_ref": [ "b3", "b19", "b35", "b49" ], "table_ref": [], "text": "In weakly supervised anomaly detection (WVAD), a training set consists of N untrimmed videos V = {V i } N i=1 , where each video V i is associated with a video-level label Y i ∈ {0, 1} denoting the absence or presence of abnormal events. Correspondingly, the training set can be divided into two subsets: a normal set V n ={V n i } N n i=1 and an abnormal set V a ={V a i } N a i=1 , where N n + N a =N . In practice, the raw videos {V i } N i=1 are beforehand encoded to snippet features {X i } N i=1 using pre-trained backbones [4,20,36,37,43]. A feature enhancer [50] is applied to enhance the feature representation, resulting in X e . Our method operates based on these enhanced features, and the overall framework of our BN-WVAD model is depicted in Fig. 3.\nWe first address the underestimated significance of BatchNorm in WVAD in Sec. 3.1, which motivates us to propose our novel BatchNorm-based WVAD model. Subsequently, we elaborate on the key components of our BN-WVAD model, including the DFM criterion in Sec. 3.2, the SBS strategy in Sec. 3.3, and our specific anomaly score calculation in Sec. 3.4. Finally, we present the overall training objective in Sec. 3.5." }, { "figure_ref": [], "heading": "The Significance of BatchNorm in WVAD", "publication_ref": [ "b39", "b47", "b49", "b50", "b18", "b49", "b28" ], "table_ref": [], "text": "Besides the well-known effect of BatchNorm [14] in improving training stability and model generalization, its inherent superiority in statistical modeling of normality is underestimated in WVAD. Consider a mini-batch of B videos, the hidden features X h ∈ R B×T ×C are fed into the Batch-Norm layer, where C denotes the dimension of X h . 
During training, the mean vector $\mu \in \mathbb{R}^{C}$ is automatically computed by BatchNorm as follows:
$\mu = \mathbb{E}(X_h) = \frac{1}{B \times T} \sum_{b=1}^{B} \sum_{t=1}^{T} X_h[b, t]$,  (1)
where $\mathbb{E}(\cdot)$ denotes the expectation operator and $X_h[b, t]$ denotes the $t$-th snippet of the $b$-th video in the mini-batch.
In typical WVAD implementations [35,40,48,50], each mini-batch is constructed with an equal number of normal and abnormal videos, ensuring that normal snippets form the majority. Therefore, the mean vector $\mu$ is primarily determined by the abundant normal representations; in other words, it captures the normality of the feature distribution [51]. Importantly, BatchNorm naturally aggregates normal features even from abnormal videos, which are neglected by previous explicit memory modules [19,50]. Furthermore, $\mu$ is derived from $B \times T$ snippet features, so by the Central Limit Theorem (CLT) [29] its distribution over mini-batches is well approximated by a normal distribution. From this perspective, the features of abnormal snippets are more likely to be outliers and to exhibit a notable divergence from the mean vector. This observation motivates the introduction of the DFM criterion, which plays a central role in our BN-WVAD model." }, { "figure_ref": [], "heading": "BatchNorm-based Abnormality Criterion", "publication_ref": [ "b39", "b47", "b49", "b11", "b4", "b0", "b26", "b8" ], "table_ref": [], "text": "Following our insight into the statistical normality modeling of BatchNorm in WVAD, the mean vector $\mu$ can be regarded as a statistical representation of normality to distinguish potential abnormal snippets from normal ones. As a result, our BN-WVAD utilizes the Divergence of Feature from the Mean (DFM) vector of BatchNorm as a novel abnormality criterion to substitute the plausible ones [35,40,48,50], improving the reliability of abnormal snippet selection. (Figure note: the visualized features are sorted by the DFM criterion (Eq. 2) in descending order to illustrate the different selection strategies adopted in normal and abnormal videos; only one hidden feature $X_h$ is visualized for better illustration.)
Accommodating the anisotropic Gaussian distribution among the different dimensions of the multivariate feature space [12], we employ the Mahalanobis distance [5] to quantify the divergence between the hidden features $X_h$ and the mean vector $\mu$ computed by BatchNorm. Specifically, the proposed DFM is formulated as follows:
$\mathrm{DFM}(X_h[b, t], \mu, \sigma^{2}) = (X_h[b, t] - \mu)^{\top} \Sigma^{-1} (X_h[b, t] - \mu)$,  (2)
where $\Sigma \in \mathbb{R}^{C \times C}$ is the covariance matrix of the hidden features $X_h$, taken as $\Sigma = \mathrm{diag}(\sigma^{2})$ with $\sigma^{2} \in \mathbb{R}^{C}$ being the per-dimension variances. Notably, following common practice [1,27], the running mean vector $\hat{\mu}$ and variance vector $\hat{\sigma}^{2}$ are updated by an exponential moving average (EMA) [9] with momentum $\alpha = 0.1$ as follows:
$\hat{\mu} = (1 - \alpha)\hat{\mu} + \alpha\mu$,  (3)
$\hat{\sigma}^{2} = (1 - \alpha)\hat{\sigma}^{2} + \alpha\sigma^{2}$.  (4)
The EMA-based statistics $\hat{\mu}$ and $\hat{\sigma}^{2}$ capture the long-term statistics of the feature distribution, avoiding the potential bias that may arise from statistics calculated within a single mini-batch. Compared with mini-batch statistics, the running statistics are more representative of normality and more robust to the presence of abnormal features.
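As a minimal PyTorch sketch (not the authors' released code), Eq. (2) with a diagonal covariance can be computed directly from the running statistics that a BatchNorm1d layer already maintains via Eqs. (3)-(4); the function name and the small epsilon added for numerical stability are assumptions.

```python
import torch
import torch.nn as nn

def dfm_scores(x: torch.Tensor, bn: nn.BatchNorm1d, eps: float = 1e-6) -> torch.Tensor:
    """Divergence of Feature from Mean for snippet features x of shape (B, T, C).

    Uses the EMA running mean/variance stored by the BatchNorm layer as the
    normality statistics, so no separate memory module is required.
    """
    mu = bn.running_mean             # (C,), updated as in Eq. (3)
    var = bn.running_var             # (C,), updated as in Eq. (4)
    diff = x - mu                    # broadcast over the batch and time axes
    # Quadratic Mahalanobis form with Sigma = diag(var): sum_c diff_c^2 / var_c
    return (diff.pow(2) / (var + eps)).sum(dim=-1)   # (B, T) snippet-level scores
```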
Furthermore, utilizing $\hat{\mu}$ and $\hat{\sigma}^{2}$ maintains consistency between training and testing.
To encourage divergence between the potential abnormal features and the mean vector $\hat{\mu}$, and to gather the normal features, we propose a Mean-based Pull-Push (MPP) loss for optimization. In particular, according to our DFM criterion, the $K$ potential abnormal features in abnormal videos and the $K$ normal features in normal videos with the largest DFM scores are selected from $X_h$ and denoted as $X^{a}_{dfm} \in \mathbb{R}^{K \times C}$ and $X^{n}_{dfm} \in \mathbb{R}^{K \times C}$, respectively. Borrowing the intuition of the Triplet loss [38], we treat the mean vector $\hat{\mu}$ as the only anchor, the selected normal features $X^{n}_{dfm}$ as positives, and the selected abnormal features $X^{a}_{dfm}$ as negatives. Correspondingly, based on the DFM criterion (Eq. 2), the proposed MPP loss is formulated as follows:
$L_{mpp}(X^{n}_{dfm}, X^{a}_{dfm}, \hat{\mu}, \hat{\sigma}^{2}) = \frac{1}{K} \sum_{k=1}^{K} \big[ m + \mathrm{DFM}(X^{n}_{dfm}[k], \hat{\mu}, \hat{\sigma}^{2}) - \mathrm{DFM}(X^{a}_{dfm}[k], \hat{\mu}, \hat{\sigma}^{2}) \big]$,  (5)
where $m$ is the margin, set to 1 in our BN-WVAD implementation, to enlarge the separation between $X^{n}_{dfm}$ and $X^{a}_{dfm}$." }, { "figure_ref": [], "heading": "Sample-Batch Selection (SBS) Strategy", "publication_ref": [ "b39", "b47", "b49" ], "table_ref": [], "text": "In addition to the prevalent sample-level selection (SLS) strategy [35,40,48,50] in WVAD, our BN-WVAD also incorporates the statistical notion of BatchNorm into abnormal snippet selection, introducing a batch-level selection (BLS) strategy. Drawing inspiration from our insight into the statistical modeling of BatchNorm, we conjecture that, despite the varying abnormality ratio across different videos, the overall abnormality ratio of the entire mini-batch is relatively stable. Hence, the proposed BLS strategy screens the potential abnormal snippets within each mini-batch rather than within each video, which better accommodates the uneven distribution of abnormality ratios. Specifically, two selection ratios, denoted as $\rho_s$ and $\rho_b$, are introduced to regulate the proportion of selected abnormal snippets within each video and within each mini-batch, respectively. For an intuitive illustration, we assume in Fig. 4 that the mini-batch is composed of B = 4 abnormal videos with T = 5 snippets each, and that both $\rho_s$ and $\rho_b$ are set to 40%. When only the SLS strategy is adopted, the abnormal snippets with large abnormality scores (0.8 and 0.7) in the 4th video are ignored, as illustrated in Fig. 4a. In contrast, the proposed BLS strategy filters abnormal snippets from a statistical perspective and successfully captures all 4 potential abnormal snippets in the 4th video. However, when facing the inconspicuous abnormal snippets with relatively small abnormality scores (0.3 and 0.4) in the 1st video, the BLS strategy fails to discern them.
We further devise a Sample-Batch Selection (SBS) strategy to compensate for the disadvantages of SLS (insufficient selection) and BLS (insensitivity to hard abnormal snippets). As depicted in Fig. 4c, our SBS strategy takes the union of the snippets selected by SLS and BLS as the final selection. In particular, our BN-WVAD model adopts the introduced SBS strategy in abnormal videos, while exclusively applying the SLS strategy to normal videos. The overall number of snippets selected in normal videos is set equal to the number of abnormal snippets selected by the SBS strategy, enabling the computation of our pairwise MPP loss (Eq. 5)."
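A compact PyTorch sketch of the selection logic described above is given below; it operates on the DFM scores of the abnormal half of a mini-batch and returns the union of the sample-level and batch-level top-k selections. The function name is illustrative, the default ratios follow the UCF-Crime setting reported in the implementation details, and tie handling simply follows torch.topk.

```python
import torch

def sbs_mask(dfm: torch.Tensor, rho_s: float = 0.1, rho_b: float = 0.2) -> torch.Tensor:
    """Sample-Batch Selection over abnormal videos.

    dfm: DFM scores of shape (B, T) for the abnormal videos in the mini-batch.
    Returns a (B, T) boolean mask marking the selected potential abnormal snippets.
    """
    B, T = dfm.shape
    k_s = max(1, int(rho_s * T))          # per-video budget (SLS)
    k_b = max(1, int(rho_b * B * T))      # per-mini-batch budget (BLS)

    # Sample-level selection: top-k snippets inside every video.
    sls = torch.zeros_like(dfm, dtype=torch.bool)
    rows = torch.arange(B, device=dfm.device).unsqueeze(1)
    sls[rows, dfm.topk(k_s, dim=1).indices] = True

    # Batch-level selection: top-k snippets over the flattened mini-batch.
    bls = torch.zeros(B * T, dtype=torch.bool, device=dfm.device)
    bls[dfm.flatten().topk(k_b).indices] = True

    return sls | bls.view(B, T)           # union of the two selections
```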
}, { "figure_ref": [], "heading": "Anomaly Score Calculation", "publication_ref": [ "b39", "b47", "b49", "b39", "b49", "b27" ], "table_ref": [], "text": "In existing methods [35,40,48,50], the ultimate anomaly scores are directly derived from the predictions of the anomaly classifier C(•). However, the anomaly classifier remains susceptible to the misselection of abnormal snippets, even with the adoption of MIL, resulting in potential misclassifications of normal snippets as abnormal instances. In our BN-WAVD, the anomaly classifier is solely trained on the certainly normal snippets from normal videos, eliminating the confusion induced by label noise. Instead of using bag-level binary cross-entropy loss, we employ a snippetlevel regression loss to supervise the anomaly classifier C(•) with hidden features X n ∈ R B 2 ×T ×C of normal videos, formulated as follows:\nL nor (X n ; C) = B/2 b=1 ∥C(ReLU(BN(X n [b])))∥ 2 , (6)\nwhere ∥ • ∥ 2 denotes the L 2 norm, and X n [b] denotes the hidden feature of the b-th video in the normal mini-batch. Notably, predicted scores of all T snippets are used for supervision, without the need for hard sample mining like previous methods [35,40,50].\nTo enhance the discriminative capacity further, we introduce our DFM criterion (Eq. 2) as an additional discrimination criterion, which is acquired in dense feature space and exhibits increased robustness to label noise [28]. The final anomaly scores are calculated by combining DFM scores and the prediction of the anomaly classifier as follows:\nScore = C(ReLU(BN(X h ))) * DFM(X h , μ, σ2 ), (7)\nwhere the DFM scores and the prediction of the anomaly classifier are aggregated by element-wise multiplication ' * '." }, { "figure_ref": [], "heading": "The Training Objective", "publication_ref": [], "table_ref": [], "text": "As for implementation, we utilize two Conv1d layers to obtain the hidden features X h1 and X h2 , followed by Batch-Norm and ReLU. Both X h1 and X h2 are supervised by the proposed MPP loss (Eq. 5), and their DFM criterion (Eq. 2) values are summed up for the final anomaly score calculation (Eq. 7). The overall loss objective of our method is formulated as follows:\nL = L nor + λ 1 L mpp 1 + λ 2 L mpp 2 ,(8)\nwhere L mpp 1 and L mpp 2 denote the MPP losses calculated on X h1 and X h2 , respectively. λ 1 and λ 2 are the hyperparameters to balance the loss terms." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets and Evaluate Protocols", "publication_ref": [ "b40" ], "table_ref": [], "text": "Datasets. We evaluate our proposed BN-WVAD on two prominent WVAD datasets: UCF-Crime [33] and XD-Violence [41], where video-level labels are accessible.\nUCF-Crime collects 1900 real-world surveillance videos annotating 13 types of anomalous events, e.g., abuse, robbery, explosion, and road accidents. In the training set with video-level labels only, there are 800 normal and 810 abnormal videos. The testing set comprises 140 normal and 150 abnormal videos with temporal annotations for the evaluation of frame-level anomaly detection.\nXD-Violence is a multisource dataset, collected from movies, surveillance cameras, etc. It is the largest WVAD dataset with video-level labels available, composed of 4754 Evaluation protocols. We adhere to established evaluation protocols to ensure fair comparisons with previous methods. 
Specifically, we utilize the area under the curve (AUC) of the frame-level receiver operating characteristic (ROC) curve as the primary metric for UCF-Crime. On XD-Violence, frame-level average precision (AP) is the key metric for assessment." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b49", "b3", "b49", "b14" ], "table_ref": [], "text": "Following the previous SOTA UR-DMU [50], our BN-WVAD employs the I3D [4] to extract the snippet features.\nFor the XD-Violence, raw audio is embedded as audio features through VGGish [10]. The untrimmed video features are linearly interpolated to a standardized length of 200 snippets. We leverage a Transformer-based enhancer [50] to enhance feature representation, with an output dimension of 512. Upon the enhanced features, our BN-WVAD utilizes two Conv1d layers to obtain the hidden features X h1 and X h2 , followed by BatchNorm and ReLU. The kernel size of two Conv1d layers is set to 1, and the output dimension is 32 and 16. The hyper-parameters λ 1 and λ 2 are set to 5 and 20, respectively. Two selection ratios are set according to the abnormality distribution of datasets, i.e., ρ s = 0.1 and ρ b = 0.2 on UCF-Crime, and ρ s = 0.2 and ρ b = 0.4 on XD-Violence. We utilize Adam [15] as the optimizer with a learning rate of 0.0001 and a weight decay " }, { "figure_ref": [ "fig_3", "fig_4", "fig_3" ], "heading": "Ablation Study", "publication_ref": [ "b50", "b49", "b40", "b40", "b40" ], "table_ref": [ "tab_4", "tab_5", "tab_5", "tab_6" ], "text": "Effectiveness of key components. To comprehensively assess the effectiveness of key components in our BN-WVAD, we report both AP and AUC scores on UCF-Crime and XD-Violence datasets by incrementally integrating each component, as presented in Table 3. To emphasize the underestimated significance of BatchNorm in WVAD, we substitute BatchNorm with Dropout [32] as an alternative to alleviate overfitting. When solely supervised by the normal loss L nor , our BN-WVAD model with Dropout exhibits poor performance. The incorporation of BatchNorm significantly improves the performance to be comparable to some existing methods [33, 41, 49], supporting our insight into the normality modeling of BatchNorm. In this case, despite the absence of an explicit loss for abnormal videos, the gradients derived from L nor are attached with knowledge of abnormal representations when back-propagating through Batch-Norm [51], facilitating recognition of abnormal events. The addition of the proposed DFM criterion for selection and MPP loss for optimization further enhances our BN-WVAD, making it comparable to the SOTA method UR-DMU [50] on both datasets. Finally, the introduction of the BLS strategy further boosts the performance of the proposed BN-WVAD to be SOTA, especially on XD-Violence, demonstrating an impressive improvement of 1.6% AP. Additionally, AUC abn and AP abn , calculated on abnormal videos only, are consistently improved with the integration of each component, demonstrating the effectiveness of the proposed components in our BN-WVAD. Applicability of DFM criterion. To highlight the superiority of the proposed DFM criterion compared to the widely used Feature Magnitude (FM) [35], we integrate the FM criterion and the proposed DFM criterion into RTFM [35] and our BN-WAVD. As reported in Table 4, regardless of adopting the SLS or SBS strategy, replacing the FM criterion with our DFM criterion in RTFM leads to a significant improvement in performance. 
This demonstrates the applicability of our DFM criterion to enhance existing methods. Conversely, substituting our DFM criterion with the FM criterion in our BN-WVAD results in a notable performance decline, with a 1.4% decrease in AUC on UCF-Crime and a 2.94% drop in AP on XD-Violence. The performance drops further underscore the superiority of our proposed DFM criterion as the selection foundation in WVAD. Effectiveness of SBS strategy. The versatility of the proposed SBS strategy extends beyond our BN-WVAD, making it adaptable to other existing methods. As reported in Table 4, the introduction of the SBS strategy into RTFM [35] with the FM criterion consistently enhances performance, yielding improvements of 0.25% AUC on UCF-Crime and 1.27% AP on XD-Violence. The perfor-mance gains of our SBS strategy are more remarkable when incorporated with our DFM criterion, improving the performance by 0.63% AUC and 2.52% AP on two datasets, respectively. The improvement divergence derived from different criteria reaffirms the efficacy of our DFM criterion in measuring the abnormality from the statistical perspective. The visualization in Fig. 5 offers an intuitive insight into the selection results of SLS, BLS, and SBS strategies based on the DFM scores. For clarity, we fabricate a mini-batch by selecting two abnormal videos with distinct abnormality ratios from XD-Violence. In alignment with our earlier analysis in Sec. 3.3, the SLS strategy exposes its limitation by choosing partial abnormal snippets in the second video with a substantial abnormality ratio. Meanwhile, the BLS strategy struggles to identify the inconspicuous abnormal snippets in the first video. By combining these two strategies, the proposed SBS strategy successfully mitigates the limitations of individual strategies, capturing all potential abnormal snippets in both videos. However, our SBS strategy fails to overcome the misselection of normal snippets in the video with a low abnormality ratio, which is inevitable in WVAD with only video-level labels accessible. Ablation of selection ratios. We also conduct ablation studies on the selection ratios ρ s and ρ b within the SBS strategy on two datasets with distinct abnormality ratio distributions, as depicted in Fig. 6. Generally, performance improves within a certain range as both selection ratios increase, after which it experiences a decline when these ratios become excessively large. For UCF-Crime [33], optimal performance is attained when ρ b =20% and ρ s =10%, with the batch-level selection ratio closely aligning with the overall abnormality ratio in the testing set (18.2%). Differently, XD-Violence [41], characterized by a larger overall abnormality ratio (49.8%), requires a larger selection ratio to capture potential abnormal snippets, leading to the best performance when ρ b =40% and ρ s =20%. Despite differing optimal selection ratios, these values are relative to the overall abnormality ratio in the respective testing sets, providing instructive insights for practical application. Notably, the individual adoption of the BLS strategy (ρ s =0%) results in a significant performance drop on both datasets. The absence of the SLS strategy induces our BN-WAVD to be trained on the abnormal snippets from major abnormal events (e.g. Fighting). However, it fails to address the challenging abnormal snippets associated with less frequent events such as Abuse in XD-Violence [41]. This observation is validated by the visualization of DFM scores for Fighting and Abuse in Fig. 5. 
Ablation of loss terms. The proposed BN-WVAD resorts to two loss terms: L nor to supervise the anomaly classifier C(•) and L mpp to separate normal and abnormal features. As reported in Table 5, when solely supervised by the normal loss L nor , our DFM criterion still demonstrates considerable discrimination, achieving an AUC of 81.8% on UCF-Crime. The discriminative ability of our DFM criterion is significantly boosted by incorporating the proposed MPP loss, reaching 85.6% AUC on UCF-Crime and 81.6% AP on XD-Violence. The t-SNE visualization in Fig. 7 provides an intuitive illustration of the effectiveness of our MPP loss in enhancing feature discrimination. The performance derived from the DFM criterion is further elevated to SOTA by incorporating L nor and L mpp simultaneously. The supervision of L nor is also beneficial to the representation learning of normality. The performance derived from the DFM criterion is further improved to SOTA by incorporating L nor and L mpp simultaneously, where the supervision of L nor is also beneficial to the representation learning of normality. When aggregating prediction (Pred.) and our DFM criterion, the performance of BN-WVAD is better than individual scores, demonstrating the effectiveness of our anomaly score calculation strategy. Additionally, when incorporating the abnormal loss L abn to supervise classifier in our BN-WVAD, the performance degrades significantly, especially on XD-Violence [41] of 6.0% AP decrease, even with L mpp . This observation is consistent with our earlier analysis in Sec. 3.3, where the classifier is susceptible to label noise." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we revisited the BatchNorm and introduced its statistical capacity to WVAD, presenting a novel BatchNorm-based model (BN-WVAD). The DFM criterion was introduced to assess the abnormality of snippets, providing a statistical perspective on anomaly detection. Moreover, we proposed an SBS strategy, inspired by BatchNorm considerations, to address the limitation within the SLS strategy. All components introduced in our method have demonstrated effectiveness and flexibility in WVAD." }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "Besides the experimental results reported in the main paper, we provide more experiments and analysis on our BN-WVAD in this supplementary material. Firstly, we evaluate the proposed BN-WVAD on the other video anomaly detection dataset ShanghaiTech [21] with video-level labels available during training, as demonstrated in Sec. A. To further investigate the effectiveness of our BN-WAVD, we conduct more ablation studies in Sec. B, including the effect of different metrics in DFM calculation, the effect of momentum in BatchNorm, the effect of batch size, and comprehensive empirical analysis on the limitation of BLS." }, { "figure_ref": [ "fig_6", "fig_6" ], "heading": "A. Comparison on ShanghaiTech", "publication_ref": [ "b40", "b48", "b7", "b39", "b48", "b7", "b39", "b48", "b49", "b39", "b40" ], "table_ref": [ "tab_7" ], "text": "ShanghaiTech [21] is a medium-scale video anomaly detection dataset compared with UCF-Crime [33] and XD-Violence [41]. It collects 437 videos from fixed-angle street video surveillance, including 307 normal videos and 130 anomaly videos. This dataset was initially published targeting unsupervised video anomaly detection, where only normal videos are accessible during training. 
Zhong while the frame-level labels are not provided. Specifically, 238 videos are used for training and 199 videos are used for testing. Both training and testing sets contain all 13 abnormal classes. We leverage AUC as the evaluation metric following [49] and compare our BN-WVAD with previous methods [8,16,33,35,40,46,49]. In particular, the sample-level selection ratio ρ s and batch-level ratio ρ b are set to 0.3 and 0.4, respectively, positively correlated with the abnormality ratio of the ShanghaiTech dataset, i.e., 46.6%, as presented in Fig. 8a. With these selection ratio settings, our BN-WVAD achieves the best performance on Shang-haiTech [21], as illustrated in Fig 8b.\nWe report the empirical comparison on Shang-haiTech [21] in Table 6. Consistently, our BN-WVAD outperforms previous methods [8,16,33,35,40,46,49,50], demonstrating the effectiveness and generalization of our proposed method in the weakly supervised setting. Although the performance gap between our BN-WVAD and the previous SOTA method S3R [40] is not as significant as that on UCF-Crime [33] and XD-Violence [41], our BN-WVAD still achieves the best performance without finetuning the hyper-parameters on ShanghaiTech " }, { "figure_ref": [], "heading": "B. More Ablation Studies", "publication_ref": [], "table_ref": [], "text": "In this section, we provide more ablation studies on the proposed BN-WVAD. Specifically, we investigate the effect of different metrics within DFM calculation, the effect of varying momentum settings in BatchNorm, and the effect of different batch size settings. Additionally, we comprehensively analyze the limitation of BLS in our BN-WVAD by reporting the AP of each abnormal class on XD-Violence. " }, { "figure_ref": [], "heading": "B.1. Different Metric of DFM calculation", "publication_ref": [ "b4", "b40" ], "table_ref": [], "text": "Besides the Mahalanobis distance [5] used in the main paper, we also investigate the effect of other metrics in DFM calculation, including common Euclidean distance and cosine similarity. The results are reported in Table 7.\nWe can observe that the Mahalanobis distance achieves the best performance, which is consistent with the results reported in the main paper. When employing the Euclidean distance, the performance is slightly worse than the Mahalanobis distance, which is because the Euclidean distance is a special case of the Mahalanobis distance under the assumption that the Gaussian distributions of different features are independent and scale-invariant, sharing the same variance of 1. The Cosine Similarity performs the worst compared to the other two metrics, with an AUC of 85.33% on UCF-Crime [33] and an AP of 81.82% on XD-Violence [41]. We conjecture the inferior performance derived from the Cosine Similarity is because the magnitude of the feature vectors is not considered in the calculation. However, the divergence of feature magnitude is also significant in distinguishing abnormal snippets from normal snippets , which even independently serves as an abnormality criterion in RTFM [35]." }, { "figure_ref": [], "heading": "B.2. 
The Effect of Momentum in BatchNorm", "publication_ref": [ "b26", "b40" ], "table_ref": [], "text": "The momentum in BatchNorm is a hyper-parameter that controls the contribution of the current batch statistics to the running mean and variance, which works as an exponential moving average (EMA) update as follows:\nμ = (1 -α)μ + αµ,(9)\nσ2 = (1 -α)σ 2 + ασ 2 ,(10)\nwhere μ and σ2 are the running mean and variance, respectively, and µ and σ 2 are the mean and variance of the current batch, respectively. The momentum α is set to 0.1 by default in PyTorch [27], which is also adopted in the proposed BN-WVAD.\nIn this section, we investigate the effect of different momentum α settings in BatchNorm. The results are reported in Table 8 with momentum α ∈ {0.01, 0.1, 0.2, 0.5, 1}. When α is set to 0.01, the performance is slightly worse than the default setting of α=0.1, which is because the running mean and variance are updated too sluggishly to cap- ture the normality representation of the current mini-batch.\nIncreasing the momentum α to be larger than 0.1, the performance drops gradually, especially when α is set to 1, the performance degrades significantly to 81.64% AUC on UCF-Crime [33] and 68.69% AP on XD-Violence [41]. In this specific case of α=1, the running mean μ and variance σ2 are not updated at all, where the statistics of each mini-batch are used to normalize the features of the whole training process. The absence of the EMA update in Batch-Norm leads to the overfitting to the training data of each mini-batch, resulting in a dramatic performance drop. This observation is consistent with our earlier analysis in the main text, motivating the introduction of the momentum in BatchNorm." }, { "figure_ref": [ "fig_8" ], "heading": "B.3. The Effect of Batch Size", "publication_ref": [ "b40", "b49" ], "table_ref": [], "text": "The essential motivation of the proposed BN-WVAD is to leverage the statistics captured by BatchNorm to distinguish abnormal snippets from normal snippets. Despite the EMA update in BatchNorm to capture the normality representation of the whole training set, the statistics of each minibatch still play a crucial role in the training process. On the one hand, the statistics of each mini-batch are used to normalize the features of the whole training process. On the other hand, the ratios of normal and abnormal input videos within each mini-batch also determine the statistics captured by BatchNorm. Therefore, we investigate the effect of different batch size settings on the performance of our BN-WVAD by varying the batch size of normal and abnormal videos in each mini-batch, respectively. The results are reported in Fig. 9, where the batch size of normal videos b nor and the batch size of abnormal videos b abn are set to be 16, 32, 64, and 128, respectively. We only vary the batch size of normal videos and abnormal videos, while keeping other training hyper-parameters fixed to the default settings in the main paper. In particular, due to the demand for pairwise MPP loss calculation, b abn can only be set to 16, 32 when b nor =16, and 16, 32, 64 when b nor =32, respectively. We can observe that the performance on both UCF-Crime [33] and XD-Violence [41] batch size setting is consistent with the default settings of UR-DMU [50], where our BN-WVAD implementation is heavily based. 
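To make Eqs. 9 and 10 and the DFM criterion they support concrete before continuing the batch-size analysis, a minimal PyTorch sketch is given below. It is not the authors' implementation: it relies on the per-channel running statistics tracked by nn.BatchNorm1d, so the Mahalanobis distance of Eq. 2 is approximated with a diagonal covariance, and the feature dimension, momentum, and epsilon are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DFMScorer(nn.Module):
    """Score snippets by their divergence from the running mean tracked by
    BatchNorm (EMA update of Eqs. 9-10), using a diagonal Mahalanobis
    distance built from the running variance."""

    def __init__(self, dim: int, momentum: float = 0.1):
        super().__init__()
        # In PyTorch, momentum is the weight alpha of the current batch:
        # running_mean = (1 - alpha) * running_mean + alpha * batch_mean.
        self.bn = nn.BatchNorm1d(dim, momentum=momentum)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, C) snippet features; BatchNorm1d expects (B, C, T).
        _ = self.bn(x.transpose(1, 2))   # side effect: updates running stats in train mode
        mu = self.bn.running_mean        # (C,)
        var = self.bn.running_var        # (C,)
        diff = x - mu                    # broadcasts over (B, T, C)
        # Diagonal Mahalanobis distance per snippet.
        return torch.sqrt(((diff ** 2) / (var + 1e-6)).sum(dim=-1))   # (B, T)

# Hypothetical usage; larger batches yield more stable statistics (cf. Sec. B.3).
scorer = DFMScorer(dim=512, momentum=0.1).train()
feats = torch.randn(64, 200, 512)        # 64 videos x 200 snippets x 512-d features
dfm_scores = scorer(feats)               # (64, 200) abnormality scores
```

Setting momentum to 1 in this sketch reproduces the no-EMA case discussed above, where the statistics of each mini-batch overwrite the running mean and variance.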
When concurrently changing b nor and b abn to be smaller than 64, i.e., 32 and 16, the performance degrades significantly, which is because the statistics captured by Batch-Norm are partial and prone to overfitting to the training data of each mini-batch. When b nor and b abn are enlarged to be 128, the performance on both datasets also slightly drops, which is because the statistics captured by BatchNorm are diluted by the enlarged batch size, leading to a less discriminative representation of normality. Furthermore, when b nor is larger than b abn , the captured statistics are dominated by the normal videos, motivating a more discriminative representation of normality. However, the training focus on the abnormal snippets is reduced, leading to a performance drop. We infer that tuning the training weight on normal and abnormal snippets may mitigate this issue and achieve better performance. On the other hand, when b nor is smaller than b abn , the performance is even worse than the case when b nor is larger than b abn . We conjecture the main reason is that the statistics computed by BatchNorm are distracted by multiple abnormal snippets, failing to capture a prototypical representation of normality." }, { "figure_ref": [ "fig_9", "fig_9" ], "heading": "B.4. The Limitation of BLS Strategy", "publication_ref": [ "b40" ], "table_ref": [ "tab_11" ], "text": "To comprehensively analyze the limitation of BLS strategy in our BN-WVAD, we report the AP of each abnormal class on XD-Violence [41] in Table 9. The sample-level selection ratio ρ s and batch-level selection ratio ρ b are set to 0.2 and 0.4, respectively. We can observe that individually adopting the BLS strategy performs worse than the SLS strategy in all abnormal classes, especially for the abnormal classes with low abnormality ratios, such as Abuse, Car Accident, and Explosion, as illustrated in Fig. 10. This observation is consistent with our earlier analysis in the main text, where the BLS strategy may overlook the inconspicuous abnormal snippets in videos with a low abnormality ratio. Notably, the SLS strategy even performs better than the BLS strat- egy on the abnormal class, Riot, which is characterized by a high abnormality ratio. We conjecture the main reason for this counterintuitive observation is that this specific abnormal class is long-lasting but stationary, short snippets are sufficient to capture the abnormality. When combining the SLS and BLS strategies, the proposed SBS strategy mitigates the limitations of individual strategies, achieving the best overall performance. However, on the abnormal classes, Abuse and Explosion, the SBS strategy is still inferior to the SLS strategy, which is because the incorporation of the BLS strategy reduces the training focus on the abnormal snippets from these two abnormal classes, whose abnormality ratios are relatively low as illustrated in Fig. 10. " } ]
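Since the contrast between SLS, BLS, and SBS is central to the analysis above, the following minimal sketch compares the three selections on the toy scores from the Fig. 4 illustration (B=4 abnormal videos, T=5 snippets, both ratios 40%). Combining the two masks with a union is an assumption made here for illustrating SBS, as the exact composition rule is not restated in this appendix.

```python
import torch

def select_snippets(scores: torch.Tensor, rho_s: float = 0.2, rho_b: float = 0.4):
    """Sketch of snippet selection over DFM scores of abnormal videos.
    scores: (B, T). SLS keeps the top rho_s*T snippets per video; BLS keeps
    the top rho_b*B*T snippets over the whole mini-batch."""
    B, T = scores.shape

    # Sample-Level Selection: top-k within each video.
    k_s = max(1, int(rho_s * T))
    sls = torch.zeros_like(scores, dtype=torch.bool)
    sls[torch.arange(B).unsqueeze(1), scores.topk(k_s, dim=1).indices] = True

    # Batch-Level Selection: top-k over all snippets in the mini-batch.
    k_b = max(1, int(rho_b * B * T))
    bls = torch.zeros(B * T, dtype=torch.bool, device=scores.device)
    bls[scores.flatten().topk(k_b).indices] = True
    bls = bls.view(B, T)

    return sls, bls, sls | bls   # SLS, BLS, and the combined SBS mask (union assumed)

# Toy batch mirroring Fig. 4: B=4 abnormal videos, T=5 snippets, both ratios 40%.
scores = torch.tensor([[0.1, 0.2, 0.2, 0.3, 0.4],
                       [0.4, 0.6, 0.8, 0.9, 0.2],
                       [0.2, 0.1, 0.1, 0.8, 0.1],
                       [0.3, 0.8, 0.8, 0.7, 0.9]])
sls, bls, sbs = select_snippets(scores, rho_s=0.4, rho_b=0.4)
```

On this toy batch, SLS forces two selections per video even when a video has no conspicuous snippet, while BLS concentrates on the globally largest scores; the combined mask recovers both behaviours, which mirrors the class-wise observations discussed above.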
In weakly supervised video anomaly detection (WVAD), where only video-level labels indicating the presence or absence of abnormal events are available, the primary challenge arises from the inherent ambiguity in temporal annotations of abnormal occurrences. Inspired by the statistical insight that temporal features of abnormal events often exhibit outlier characteristics, we propose a novel method, BN-WVAD, which incorporates BatchNorm into WVAD. In the proposed BN-WVAD, we leverage the Divergence of Feature from Mean vector (DFM) of BatchNorm as a reliable abnormality criterion to discern potential abnormal snippets in abnormal videos. The proposed DFM criterion is also discriminative for anomaly recognition and more resilient to label noise, serving as an additional anomaly score to amend the prediction of the anomaly classifier, which is susceptible to noisy labels. Moreover, a batch-level selection strategy is devised to filter more abnormal snippets in videos where more abnormal events occur. The proposed BN-WVAD model demonstrates state-of-the-art performance on UCF-Crime with an AUC of 87.24% and on XD-Violence with an AP of 84.93%.
BatchNorm-based Weakly Supervised Video Anomaly Detection
[ { "figure_caption": "Figure 2 .2Figure 2. Intuition of the proposed DFM criterion and Mean-based Pull-Push (MPP) loss. The mean vector of BatchNorm is regarded as a statistical reference to separate potential abnormal and normal snippets, and MPP Loss encourages their separation.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure3. The overall framework of our proposed BN-WVAD model. The input mini-batch is composed of half normal videos and half abnormal videos and embedded by a frozen I3D[4] followed by a Transformer-based enhancer[50], yielding enhanced features X e . In particular, the visualized features are sorted by the DFM criterion (Eq. 2) in descending order, for the convenience of illustrating the different selection strategies adopted in normal and abnormal videos. Only one hidden feature X h is visualized here for better illustration.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Visualization of DFM scores and selection results of SLS, BLS, and SBS strategies on two abnormal videos in XD-Violence[41] with different abnormality ratios, i.e., 6.9% and 77.7%. Snippet-level labels are denoted by the color of the background, ■ for normal snippets and ■ for abnormal snippets.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. The ablation of sample-level selection ratio ρs and batchlevel selection ratio ρ b on UCF-Crime [33] and XD-Violence [41]. When one of the ratios is set to 0, the corresponding selection strategy is disabled. The best results are framed by red boxes.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure7. The t-SNE visualization of hidden features with or without the supervision of MPP loss L mpp on XD-Violence[41].", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. (a) The illustration of abnormality ratio distribution on ShanghaiTech [21]. (b) The ablation of sample-level selection ratio ρs and batch-level selection ratio ρ b on ShanghaiTech [21]. When one of the ratios is set to 0, the corresponding selection strategy is disabled. The best results are framed by red boxes.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "[21]. On the other hand, due to the limited number of training data in ShanghaiTech [21], the statistics captured by BatchNorm in our BN-WVAD are prone to overfitting to the training data, leading to a performance drop compared to the results on the other two large-scale datasets [33, 41].", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. The illustration of batch size settings on UCF-Crime [33] and XD-Violence [41]. The batch size of normal videos bnor and the batch size of abnormal videos babn are set to be 16, 32, 64, and 128, respectively. The best results are framed by blue boxes.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. 
The illustration of abnormality ratio distribution of different abnormal classes on XD-Violence [41].", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "to grasp the normality. However, the performance of UVAD methods is suboptimal due to the absence of abnormal videos during training.", "figure_data": "Weakly supervised video anomaly detection. Althoughthe fine-grained temporal annotations are impractical to ob-tain, the video-level labels are relatively feasible to anno-tate [33]. With the practical accessibility of video-level la-bels [33, 41], weakly supervised video anomaly detection(WVAD) has gained increasing attention in recent years.Relying solely on video-level labels, existing WVAD meth-ods [8, 16, 33, 35, 40, 41, 50] often employ Multi-InstanceLearning (MIL) [3, 17]. They train anomaly classifiers us-ing positive (abnormal) and negative (normal) bags, gen-erated based on tailored abnormality criteria such as fea-ture magnitude [35]. Despite the absence of MIL in othermethods [46, 48, 49], they still heavily depend on abnor-mality criteria for pseudo temporal annotation generation.Although these methods have achieved promising results,they grapple with the unreliability of abnormality criteriaand the limitation of top-k selection strategy. Introducingthe statistical notion of BatchNorm, we propose the novelDFM criterion to measure the abnormality of snippets and abatch-level selection strategy to address the shortcoming oftop-k selection strategy in overlooking abnormal snippetsfrom videos with high abnormality ratios.Normality modeling. In WVAD, since the ambiguity oftemporal labels of abnormal events, normality modeling ofdefinitely normal features in normal videos is of great im-portance. Besides most methods [35, 47] embedding theknowledge of normality into the anomaly classifier, somemethods", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "=5 snippets. Values in boxes denote the abnormality criterion score of each snippet. Both sample-level selection ratio ρs and batch-level selection ratio ρ b are set to 40%.", "figure_data": "TTT0.1 0.2 0.2 0.3 0.40.1 0.2 0.2 0.3 0.40.1 0.2 0.2 0.3 0.4B0.4 0.6 0.8 0.9 0.2 0.2 0.1 0.1 0.8 0.1B0.4 0.6 0.8 0.9 0.2 0.2 0.1 0.1 0.8 0.1B0.4 0.6 0.8 0.9 0.2 0.2 0.1 0.1 0.8 0.10.3 0.8 0.8 0.7 0.90.3 0.8 0.8 0.7 0.90.3 0.8 0.8 0.7 0.9(a) Sample-level Selec-(b) Batch-level Selec-(c) Sample-Batch Selec-tiontiontionFigure 4. Illustration of different selection strategies adopted inB=4 abnormal videos with T", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of AUC (%) on UCF-Crime [33]. The methods are divided into two categories: unsupervised (Un.) and weakly supervised (Weakly). ' †' denotes the reproduced results of open-source code [50] by ourselves.", "figure_data": "MethodVenueFeatureAUC (%)Un.GCL [47] FPDM [44]CVPR ′ 23 ResNeXt ICCV ′ 23 Image74.20 74.70Sultani et al. [33] CVPR ′ 18C3D75.41Sultani et al. [33] CVPR ′ 18I3D76.21GCN [49]CVPR ′ 19TSN82.12HL-Net [41]ECCV ′ 20I3D82.44WeaklyCLAWS [46] MIST [8] RTFM [35] MSL [16]ECCV ′ 20 CVPR ′ 21 ICCV ′ 21 AAAI ′ 22C3D I3D I3D I3D83.03 82.30 84.30 85.30S3R [40]ECCV ′ 22I3D85.99SAS [7]arXiv ′ 23I3D86.19CU-Net [48]CVPR ′ 23I3D86.22UR-DMU [50]AAAI ′ 23I3D86.97UR-DMU † [50]AAAI ′ 23I3D86.23BN-WVAD (Ours)I3D87.24", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of AP (%) on XD-Violence[41]. 
'+VGGish' refers to the methods with audio features as additional inputs.of 0.00005. The model is trained for 3000 iterations with the mini-batch of 64 normal and abnormal videos. During inference, the multi-crop aggregation is adopted to obtain the final anomaly scores, where the number of crops is set to 10 for UCF-Crime and 5 for XD-Violence.", "figure_data": "MethodVenueFeatureAP (%)Sultani et al. [33] CVPR ′ 18I3D73.20HL-Net [41]ECCV ′ 20I3D73.67HL-Net [41]ECCV ′ 20 I3D+VGGish78.64RTFM [35]ICCV ′ 21I3D77.81MSL [16]AAAI ′ 22I3D78.28S3R [40]ECCV ′ 22I3D80.26CU-Net [48]CVPR ′ 23I3D78.74CU-Net [48]CVPR ′ 23 I3D+VGGish81.43UR-DMU [50]AAAI ′ 23I3D81.66UR-DMU [50]AAAI ′ 23 I3D+VGGish81.77MACIL-SD [45]MM ′ 22I3D+VGGish83.40SAS [7]arXiv ′ 23I3D83.59BN-WVAD (Ours)I3D84.93BN-WVAD (Ours)I3D+VGGish85.26methodsunder unsupervised [44, 47] and weakly supervised [8, 33,35, 40, 41, 46, 48-50] fashions, as reported in Table 1.Leveraging video-level labels in WVAD proves advanta-geous, leading to a significant performance gap comparedto UAD methods. Compared with previous weakly super-vised methods, the proposed BN-WVAD further improvesthe AUC score to 87.24%. Despite achieving a modest im-provement of 0.27% compared to the reported results ofUR-DMU [50], our performance gain is commendable, es-pecially when contrasted with the result (86.23%) of ourreproduction based on the official code 1 .XD-Violence. Table 2 showcases the AP scores of video-only methods [7, 16, 35, 40, 48, 50] and audio-visual meth-ods [41, 45] on this multi-modal dataset [41]. This chal-lenging dataset contains more videos with high abnormalityratios, as illustrated in Fig. 1b. Our batch-level selectionstrategy demonstrates its effectiveness in capturing poten-tial abnormal snippets, boosting the proposed BN-WVADto achieve an impressive AP score of 84.93% AP when onlytrained on video features. Notably, our BN-WVAD outper-forms the previous video-only methods [7, 16, 33, 35, 40]by a large margin, even surpassing the audio-visual SOTA", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The ablation of our proposed components on UCF-Crime [33] and XD-Violence[41]. Both AUC and AP scores are reported, and AUC abn and AP abn denote the AUC and AP scores calculated on the subset of abnormal videos, respectively.", "figure_data": "ModuleUCF-CrimeXD-ViolenceNormal Loss Dropout BatchNorm DFM+MPP BLS AUCAPAUC abn AP abn AUCAPAUC abn AP abn✓✓65.21 23.7955.6226.11 61.96 61.5455.9964.94✓✓82.97 25.1859.4028.08 90.74 72.9974.1374.91✓✓✓86.44 35.9470.8736.67 94.57 83.3383.1684.60✓✓✓✓87.24 36.2671.7138.13 94.71 84.9383.5985.450% 5% 10% 15% 20% 25% BLS Ratio B0% 25% 30% 35% 40% 45% BLS Ratio B0%82.4 82.7 82.6 82.3 82.30%78.2 77.9 78.1 78.5 76.15%86.0 86.3 86.5 84.7 85.1 85.110%81.5 82.7 82.7 83.2 82.7 82.2SSSLS Ratio8% 10% 12%85.7 86.0 86.5 86.7 86.9 85.7 86.0 86.1 85.9 86.5 87.2 86.5 85.4 86.0 86.1 85.8 85.9 84.7SLS Ratio15% 20% 25%82.7 83.5 83.8 84.3 84.2 82.8 83.3 83.5 84.2 84.9 84.9 83.1 82.5 83.7 84.2 84.3 84.9 83.715%85.9 85.7 85.8 86.4 86.0 85.830%80.0 81.1 82.8 83.9 83.2 83.3(a) UCF-Crime [33]", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The superiority of our DFM criterion to the widely used FM criterion[35] and the applicability of DFM criterion SBS strategy. ' †' denotes the reproduced results by ourselves. L nor L abn L mpp Pred. DFM Mul. Pred. 
DFM Mul.", "figure_data": "MethodCriterion Selection UCF (AUC) XD (AP)RTFM † [35]FMSLS84.1174.80RTFM † [35]FMSBS84.3676.07RTFM † [35]DFMSLS85.5880.10RTFM † [35]DFMSBS86.2182.62BN-WVAD (Ours)FMSBS85.8481.99BN-WVAD (Ours)DFMSBS87.2484.93", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "et al. [49] reorganized the dataset by introducing a subset of anomaly videos into the training set, satisfying the weakly supervised setting. The video-level labels are available during training, Comparison of AUC (%) on ShanghaiTech [21]. ' †' denotes the reproduction of open-source code [50] by ourselves.", "figure_data": "MethodVenueFeature AUC (%)Sultani et al. [50] CVPR ′ 18C3D86.30GCN [49]CVPR ′ 19TSN84.44CLAWS [46]ECCV ′ 20C3D89.67MIST [8]CVPR ′ 21I3D94.83RTFM [35]ICCV ′ 21I3D97.32MSL [16]AAAI ′ 22I3D97.32S3R [40]ECCV ′ 22I3D97.48UR-DMU † [50]AAAI ′ 23I3D96.90BN-WVAD (Ours)I3D97.61", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "achieve the best when b nor and b abn are both set to be 64. This optimal", "figure_data": "16 32 64 128 b nor1685.2 86.0 84.1 83.732 64 b abn84.9 86.2 84.7 84.3 86.1 87.2 85.212886.9 86.8", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Selection Abu. C.A. Expl. Figt. Riot Shoot All SLS 43.50 37.48 55.71 79.15 95.69 57.84 83.55 BLS 30.71 33.28 51.83 75.09 92.17 56.69 78.55 SBS 41.90 39.18 54.74 84.90 96.18 58.52 84.93 Class-wise AP (%) of different selection strategies on XD-Violence [41]. 'Abu.' denotes Abuse, 'C.A.' denotes Car Accident, 'Expl.' denotes Explosion, 'Figt.' denotes Fighting, and 'Shoot' denotes Shooting.", "figure_data": "", "figure_id": "tab_11", "figure_label": "9", "figure_type": "table" } ]
Yixuan Zhou; Yi Qu; Xing Xu; Fumin Shen; Jingkuan Song; Hengtao Shen
[ { "authors": "Martín Abadi; Paul Barham; Jianmin Chen; Zhifeng Chen; Andy Davis; Jeffrey Dean; Matthieu Devin; Sanjay Ghemawat; Geoffrey Irving; Michael Isard", "journal": "", "ref_id": "b0", "title": "Tensorflow: a system for large-scale machine learning", "year": "2016" }, { "authors": "Davide Abati; Angelo Porrello; Simone Calderara; Rita Cucchiara", "journal": "", "ref_id": "b1", "title": "Latent space autoregression for novelty detection", "year": "2019" }, { "authors": "Stuart Andrews; Ioannis Tsochantaridis; Thomas Hofmann", "journal": "NeurIPS", "ref_id": "b2", "title": "Support vector machines for multiple-instance learning", "year": "2002" }, { "authors": "Joao Carreira; Andrew Zisserman", "journal": "", "ref_id": "b3", "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "year": "2017" }, { "authors": "Roy De Maesschalck; Delphine Jouan-Rimbaud; Désiré L Massart", "journal": "Chemometrics and Intelligent Laboratory Systems", "ref_id": "b4", "title": "The mahalanobis distance", "year": "2000" }, { "authors": "Giancarlo Di Biase; Hermann Blum; Roland Siegwart; Cesar Cadena", "journal": "", "ref_id": "b5", "title": "Pixel-wise anomaly detection in complex driving scenes", "year": "2021" }, { "authors": "Yidan Fan; Yongxin Yu; Wenhuan Lu; Yahong Han", "journal": "", "ref_id": "b6", "title": "Weakly-supervised video anomaly detection with snippet anomalous attention", "year": "2023" }, { "authors": "Jiachang Feng; Fating Hong; Weishi Zheng", "journal": "", "ref_id": "b7", "title": "Mist: Multiple instance self-training framework for video anomaly detection", "year": "2009" }, { "authors": "Everette S Gardner", "journal": "Journal of forecasting", "ref_id": "b8", "title": "Exponential smoothing: The state of the art", "year": "1985" }, { "authors": " Jort F Gemmeke; P W Daniel; Dylan Ellis; Aren Freedman; Wade Jansen; R Channing Lawrence; Manoj Moore; Marvin Plakal; Ritter", "journal": "IEEE", "ref_id": "b9", "title": "Audio set: An ontology and humanlabeled dataset for audio events", "year": "2017" }, { "authors": "Mariana-Iuliana Georgescu; Antonio Barbalau; Tudor Radu; Fahad Ionescu; Marius Shahbaz Khan; Mubarak Popescu; Shah", "journal": "", "ref_id": "b10", "title": "Anomaly detection in video via selfsupervised and multi-task learning", "year": "2021" }, { "authors": "Hamid Ghorbani", "journal": "Facta Universitatis, Series: Mathematics and Informatics", "ref_id": "b11", "title": "Mahalanobis distance and its application for detecting multivariate outliers", "year": "2019" }, { "authors": "Or Hirschorn; Shai Avidan", "journal": "", "ref_id": "b12", "title": "Normalizing flows for human pose anomaly detection", "year": "2023" }, { "authors": "Sergey Ioffe; Christian Szegedy", "journal": "", "ref_id": "b13", "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "year": "2015" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b14", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Shuo Li; Fang Liu; Licheng Jiao", "journal": "", "ref_id": "b15", "title": "Self-training multisequence learning with transformer for weakly supervised video anomaly detection", "year": "2009" }, { "authors": "Weixin Li; Nuno Vasconcelos", "journal": "", "ref_id": "b16", "title": "Multiple instance learning for soft bags via top instances", "year": "2015" }, { "authors": "Weixin Li; Vijay Mahadevan; Nuno Vasconcelos", "journal": "PAMI", "ref_id": "b17", "title": 
"Anomaly detection and localization in crowded scenes", "year": "2013" }, { "authors": "Zhian Liu; Yongwei Nie; Chengjiang Long; Qing Zhang; Guiqing Li", "journal": "", "ref_id": "b18", "title": "A hybrid video anomaly detection framework via memory-augmented flow reconstruction and flow-guided frame prediction", "year": "2021" }, { "authors": "Ze Liu; Jia Ning; Yue Cao; Yixuan Wei; Zheng Zhang; Stephen Lin; Han Hu", "journal": "", "ref_id": "b19", "title": "Video swin transformer", "year": "2021" }, { "authors": "Weixin Luo; Wen Liu; Shenghua Gao", "journal": "", "ref_id": "b20", "title": "A revisit of sparse coding based anomaly detection in stacked rnn framework", "year": "2017" }, { "authors": "Amir Markovitz; Gilad Sharir; Itamar Friedman; Lihi Zelnik-Manor; Shai Avidan", "journal": "", "ref_id": "b21", "title": "Graph embedded pose clustering for anomaly detection", "year": "2020" }, { "authors": "Ramin Mehran; Alexis Oyama; Mubarak Shah", "journal": "IEEE", "ref_id": "b22", "title": "Abnormal crowd behavior detection using social force model", "year": "2009" }, { "authors": "Trong-Nguyen Nguyen; Jean Meunier", "journal": "", "ref_id": "b23", "title": "Anomaly detection in video sequence with appearance-motion correspondence", "year": "2019" }, { "authors": "Guansong Pang; Cheng Yan; Chunhua Shen; Anton Van Den; Xiao Hengel; Bai", "journal": "", "ref_id": "b24", "title": "Self-trained deep ordinal regression for end-to-end video anomaly detection", "year": "2020" }, { "authors": "Hyunjong Park; Jongyoun Noh; Bumsub Ham", "journal": "", "ref_id": "b25", "title": "Learning memory-guided normality for anomaly detection", "year": "2020" }, { "authors": "Adam Paszke; Sam Gross; Soumith Chintala; Gregory Chanan; Edward Yang; Zachary Devito; Zeming Lin; Alban Desmaison; Luca Antiga; Adam Lerer", "journal": "", "ref_id": "b26", "title": "Automatic differentiation in pytorch", "year": "2017" }, { "authors": "Yuntao Qu; Shasha Mo; Jianwei Niu", "journal": "", "ref_id": "b27", "title": "Dat: Training deep networks robust to label-noise by matching the feature distributions", "year": "2021" }, { "authors": "Murray Rosenblatt", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b28", "title": "A central limit theorem and a strong mixing condition", "year": "1956" }, { "authors": "Lukas Ruff; Robert Vandermeulen; Nico Goernitz; Lucas Deecke; Ahmed Shoaib; Alexander Siddiqui; Emmanuel Binder; Marius Müller; Kloft", "journal": "PMLR", "ref_id": "b29", "title": "Deep one-class classification", "year": "2018" }, { "authors": "Mohammad Sabokrou; Mohammad Khalooei; Mahmood Fathy; Ehsan Adeli", "journal": "", "ref_id": "b30", "title": "Adversarially learned one-class classifier for novelty detection", "year": "2018" }, { "authors": "Nitish Srivastava; Geoffrey Hinton; Alex Krizhevsky; Ilya Sutskever; Ruslan Salakhutdinov", "journal": "JMLR", "ref_id": "b31", "title": "Dropout: a simple way to prevent neural networks from overfitting", "year": "2014" }, { "authors": "Waqas Sultani; Chen Chen; Mubarak Shah", "journal": "", "ref_id": "b32", "title": "Real-world anomaly detection in surveillance videos", "year": "2018" }, { "authors": "Yiyou Sun; Chuan Guo; Yixuan Li", "journal": "NeurIPS", "ref_id": "b33", "title": "React: Out-ofdistribution detection with rectified activations", "year": "2021" }, { "authors": "Yu Tian; Guangsong Pang; Yuanhong Chen; Rajvinder Singh; Johan W Verjans; Gustavo Carneiro", "journal": "", "ref_id": "b34", "title": "Weaklysupervised video anomaly detection 
with robust temporal feature magnitude learning", "year": "2021" }, { "authors": "Du Tran; Lubomir D Bourdev; Rob Fergus; Lorenzo Torresani; Manohar Paluri", "journal": "", "ref_id": "b35", "title": "Learning spatiotemporal features with 3d convolutional networks", "year": "2015" }, { "authors": "Limin Wang; Yuanjun Xiong; Zhe Wang; Yu Qiao; Dahua Lin; Xiaoou Tang; Luc Van Gool", "journal": "PAMI", "ref_id": "b36", "title": "Temporal segment networks for action recognition in videos", "year": "2018" }, { "authors": "Q Kilian; Lawrence K Weinberger; Saul", "journal": "JMLR", "ref_id": "b37", "title": "Distance metric learning for large margin nearest neighbor classification", "year": "2009" }, { "authors": "Samuel Wilson; Tobias Fischer; Feras Dayoub; Dimity Miller; Niko Sünderhauf", "journal": "", "ref_id": "b38", "title": "Safe: Sensitivity-aware features for out-of-distribution object detection", "year": "2023" }, { "authors": "Jhih-Ciang Wu; He-Yen Hsieh; Ding-Jie Chen; Chiou-Shann Fuh; Tyng-Luh Liu", "journal": "Springer", "ref_id": "b39", "title": "Self-supervised sparse representation for video anomaly detection", "year": "2009" }, { "authors": "Peng Wu; Jing Liu; Yujia Shi; Fangtao Shao; Zhapyang Wu; Zhiwei Yang", "journal": "", "ref_id": "b40", "title": "Not only look, but also listen: Learning multimodal violence detection under weak supervision", "year": "2020" }, { "authors": "Pengxiang Wu; Songzhu Zheng; Mayank Goswami; Dimitris Metaxas; Chao Chen", "journal": "NeurIPS", "ref_id": "b41", "title": "A topological filter for learning with label noise", "year": "2020" }, { "authors": "Saining Xie; Ross Girshick; Piotr Dollár; Zhuowen Tu; Kaiming He", "journal": "", "ref_id": "b42", "title": "Aggregated residual transformations for deep neural networks", "year": "2017" }, { "authors": "Cheng Yan; Shiyu Zhang; Yang Liu; Guansong Pang; Wenjun Wang", "journal": "", "ref_id": "b43", "title": "Feature prediction diffusion model for video anomaly detection", "year": "2023" }, { "authors": "Jiashuo Yu; Jinyu Liu; Ying Cheng; Rui Feng; Yuejie Zhang", "journal": "", "ref_id": "b44", "title": "Modality-aware contrastive instance learning with self-distillation for weakly-supervised audio-visual violence detection", "year": "2022" }, { "authors": "Muhammad Zaigham Zaheer; Arif Mahmood; Marcella Astrid; Seung-Ik Lee", "journal": "", "ref_id": "b45", "title": "Claws: Clustering assisted weakly supervised learning with normalcy suppression for anomalous event detection", "year": "2009" }, { "authors": "Muhammad Zaigham Zaheer; Arif Mahmood; Muhannad Haris Khan; Mattia Segu; Fisher Yu; Seung-Ik Lee", "journal": "", "ref_id": "b46", "title": "Generative cooperative learning for unsupervised video anomaly detection", "year": "2022" }, { "authors": "Chen Zhang; Guorong Li; Yuankai Qi; Shuhui Wang; Laiyun Qing; Qingming Huang; Ming-Hsuan Yang", "journal": "", "ref_id": "b47", "title": "Exploiting completeness and uncertainty of pseudo labels for weakly supervised video anomaly detection", "year": "2006" }, { "authors": "Jiaxing Zhong; Nannan Li; Weijie Kong; Shan Liu; Thomas H Li; Ge Li", "journal": "", "ref_id": "b48", "title": "Graph convolutional label noise cleaner: Train a plug-and-play action classifier for anomaly detection", "year": "2009" }, { "authors": "Hang Zhou; Junqing Yu; Wei Yang", "journal": "", "ref_id": "b49", "title": "Dual memory units with uncertainty regulation for weakly supervised video anomaly detection", "year": "2023" }, { "authors": "Yixuan Zhou; Peiyu Yang; Yi Qu; Xing 
Xu; Fumin Shen; Heng Tao Shen", "journal": "", "ref_id": "b50", "title": "Anoonly: Semi-supervised anomaly detection without loss on normal data", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 345.51, 372.38, 199.6, 30.55 ], "formula_id": "formula_0", "formula_text": "µ = E(X h ) = 1 B × T B b=1 T t=1 X h [b, t],(1)" }, { "formula_coordinates": [ 4, 66.58, 438.36, 219.79, 32.52 ], "formula_id": "formula_1", "formula_text": "DFM(X h [b, t], µ, σ 2 ) = (X h [b, t] -µ) T Σ -1 (X h [b, t] -µ),(2)" }, { "formula_coordinates": [ 4, 123.35, 580.2, 159.14, 8.99 ], "formula_id": "formula_2", "formula_text": "μ = (1 -α)μ + αµ, (3" }, { "formula_coordinates": [ 4, 118.49, 580.55, 167.87, 25.16 ], "formula_id": "formula_3", "formula_text": ") σ2 = (1 -α)σ 2 + ασ 2 . (4" }, { "formula_coordinates": [ 4, 282.49, 597.07, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 4, 330.08, 472.51, 215.03, 48.34 ], "formula_id": "formula_5", "formula_text": "L mpp (X n dfm , X a dfm , μ,σ 2 ) = 1 K K k=1 [m + DFM(X n dfm [k], μ, σ2 )(5)" }, { "formula_coordinates": [ 4, 418.5, 524.29, 105.39, 13.11 ], "formula_id": "formula_6", "formula_text": "-DFM(X a dfm [k], μ, σ2 )]," }, { "formula_coordinates": [ 5, 332.21, 106.2, 212.91, 31.41 ], "formula_id": "formula_7", "formula_text": "L nor (X n ; C) = B/2 b=1 ∥C(ReLU(BN(X n [b])))∥ 2 , (6)" }, { "formula_coordinates": [ 5, 319.64, 287.47, 225.47, 11.29 ], "formula_id": "formula_8", "formula_text": "Score = C(ReLU(BN(X h ))) * DFM(X h , μ, σ2 ), (7)" }, { "formula_coordinates": [ 5, 364.97, 451.84, 180.15, 13.49 ], "formula_id": "formula_9", "formula_text": "L = L nor + λ 1 L mpp 1 + λ 2 L mpp 2 ,(8)" }, { "formula_coordinates": [ 10, 123.35, 545.84, 163.01, 8.99 ], "formula_id": "formula_10", "formula_text": "μ = (1 -α)μ + αµ,(9)" }, { "formula_coordinates": [ 10, 118.49, 559.97, 167.87, 11.37 ], "formula_id": "formula_11", "formula_text": "σ2 = (1 -α)σ 2 + ασ 2 ,(10)" } ]
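The extracted loss and scoring formulas above (the MPP loss of Eq. 5 and the score aggregation of Eq. 7) can be sketched as follows. The clamp at zero on the margin term and the top-K pairing over flattened scores are assumptions made here for illustration, and all shapes are hypothetical.

```python
import torch

def mpp_loss(dfm_normal: torch.Tensor, dfm_abnormal: torch.Tensor,
             k: int = 10, margin: float = 1.0) -> torch.Tensor:
    """Mean-based Pull-Push loss in the spirit of Eq. 5: pull the largest normal
    DFM scores toward the mean while pushing the largest abnormal DFM scores
    away by at least a margin. The clamp at zero is an assumption; the
    extracted formula does not show it explicitly."""
    d_n = dfm_normal.flatten().topk(k).values    # most divergent normal snippets
    d_a = dfm_abnormal.flatten().topk(k).values  # most divergent abnormal snippets
    return torch.clamp(margin + d_n - d_a, min=0).mean()

def anomaly_score(pred: torch.Tensor, dfm: torch.Tensor) -> torch.Tensor:
    """Eq. 7: the final score multiplies the classifier prediction with the DFM
    criterion, so the two signals amend each other."""
    return pred * dfm

# Hypothetical shapes: DFM scores of the normal and abnormal halves of a batch.
dfm_n, dfm_a = torch.rand(32, 200), torch.rand(32, 200) + 0.5
loss = mpp_loss(dfm_n, dfm_a, k=10, margin=1.0)
```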
2023-11-26
[ { "figure_ref": [], "heading": "Abstract", "publication_ref": [], "table_ref": [], "text": "Video inpainting has been challenged by complex scenarios like large movements and low-light conditions. Current methods, including emerging diffusion models, face limitations in quality and efficiency. This paper introduces the Flow-Guided Diffusion model for Video Inpainting (FGDVI), a novel approach that significantly enhances temporal consistency and inpainting quality by reusing an off-the-shelf image generation diffusion model. We employ optical flow for precise one-step latent propagation and introduce a model-agnostic flow-guided latent interpolation technique. This technique expedites denoising, seamlessly integrating with any Video Diffusion Model (VDM) without additional training. Our FGDVI demonstrates a remarkable 10% improvement in flow warping error E warp over existing state-of-the-art methods. Our comprehensive experiments validate the superior performance of FGDVI, offering a promising direction for advanced video inpainting. The code and detailed results will be publicly available at https://github.com/NevSNev/FGDVI." 
}, { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6", "b42", "b33", "b22", "b46", "b49", "b23", "b34", "b44", "b45", "b47", "b29", "b26", "b36", "b54", "b9", "b50", "b3", "b11", "b29", "b16", "b18" ], "table_ref": [], "text": "Video inpainting aims at guaranteeing the integrity of content within each frames, while handle the inter-frame temporal dynamics, which plays an essential role in computer vision such as object removal [7], logo removal [43], video restoration [34], and watermark removal [23].\nThe objective of video inpainting method is to shift pixels across frames while hallucinating deficient [25,47] pixels. Existing approaches [18,50,54] adopt end-to-end transformers within optical flow module for temporal consistent. However, they often result in blurred or mosaic-like outcomes for deficient pixels, even with the use of a discriminator supervising [20,49]. video inpainting remains a persistent problem that may benefit from stronger solutions, such as leveraging image models as a generative prior.\nWhile image inpainting has seen impressive advances [24,35,45,46,48], particularly with diffusion models. Diffusion model [11,30,33] is capable of generating realistic content [27,37,55]. The process of iterative sampling allows for easier integration of control signals [10,51] and the reconstruction of more fine-grained details. However, the extra time dimension in video inpainting demands preserving temporal consistency and accounting for the complex motion, which is different from image inpainting. The embarrassment lies in the extensive inference time of diffusion models in multi-step reasoning, which remains inefficient for video despite advancements in existing acceleration techniques [5,22,33]. Therefore, employing a well-trained diffusion model as a prior is a non-trivial challenge.\nTo address the issues mentioned, we hypothesize that adjacent video frames should share a similar sampling knowledge. In this paper, we take one step further by using optical flow to propagate latent-level features, reducing the number of frames that require denoising while maintaining temporal consistency. Specifically, we propose a novel flow-guided latent interpolation approach for diffusion-based denoising. Instead of inferring the latent codes for each frame at every time step [4,12,32], we aim to infer a subset of latent codes and then propagate these as the latent codes for the remaining via optical flow warping.\nBy equipping a pre-trained unconditional image generation diffusion model with optical flow, we present a holistic framework, the Flow-Guided Diffusion model for Video Inpainting (FGDVI). In particular, we adopt the latent diffusion model [30] and design series of modules to harness optical flow. To process optical flow from masked frame inputs, we utilize an decoupled flow completion module to predict and mend the flow. Moreover, a dedicated one-step latent propagation module is designed to inpaint corrupted video frames under with guidance of the reconstructed flow. The completed flow also plays a role in latent interpolation to efficiently propagate information through a simple yet effective warping operation. To unlock the capabilities of the pretrained image diffusion model for video, we also incorporate spatiotemporal attention networks into its U-Net architecture. We carry out extensive experiments for object removal and free-form video inpainting in terms of both quantitative and qualitative evaluations. 
In summary, our paper contributes significantly in the following ways:\n• We are the first to reveal the effectiveness of a diffusionbased method in video inpainting, achieving comparable performance with state-of-the-art methods. Our proposed FGDVI leverages optical flow to notably improve inpainting quality and temporal consistency, especially achieving a large margin of 10% enhancement in flow warping error E warp [17]. • We propose a model-agnostic flow-guided latent interpolation method to accelerate denoising sampling, which can be integrated into any video diffusion model (VDM).\nCompared to the vanilla diffusion, our approach significantly boosts inference speed by approximately 29%. Existing and concurrent diffusion-based studies, such as M3DDM [8], also employ pre-trained LDM for video outpainting task but are hindered by the high cost of training. In contrast, our FGDVI, has been trained using just three GPUs. Additionally, two video editing methods, Mag-icEdit [19] and VideoComposer [38], have demonstrated proficiency in text-guided video completion. However, these approaches are not specialized in video inpainting and do not yet present state-of-the-art results." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "LDMs leverage a pretrained Variational Autoencoder (VAE) to operate in the latent space instead of pixel space. The diffusion forward process is imposing nosie on a clean latent z 0 for T times. A property of the forward process is that it admit sampling z t at random timestep t:\nq(z t |z 0 ) = Q(z 0 , t) = N (z t ; √ α t z 0 , (1 -α t )I),(1)\nwhere α t = t s=1 1β s , β s is the variance schedule for the timestep s, and we use Q(•, •) to represent this one-step noising process. The backward process applies a trained U-Net ϵ θ for denoising: p θ (z t-1 |z t ) = N (z t-1 ; µ θ (z t , t), Σ θ (z t , t)), where distribution parameters µ θ and Σ θ are computed by the denoising model θ. To train a conditional LDM, the objective is given by:\narg min θ E z,ϵ∼N (0,1),t,c ∥ϵ -ϵ θ (z t , t, c)∥ 2 2 ,(2)\nwhere ϵ θ (z t , t, c) is the predicted noise based on z t , the time step t and the condition c. Once trained, we could leverage the deterministic sampling of DDIM [11] to denoise z t :\nz t-1 = √ α t-1 ẑt→0 predicted 'z0' + 1 -α t-1 -σ 2 t ϵ θ (z t , t, c) direction pointing to zt + σ t ϵ t random noise ,(3)\nwhere σ t are hyperparameters. The term z t t→0 represents the predicted z 0 at time step t, which is characterized through the operation P(•, •), as delineated in the equation below. For conciseness and to circumvent any potential confusion with the concept of optical flow, we subsequently refer to ẑt→0 as ẑ0 . The precise formulation is as follows:\nẑ0 = P(z t , ϵ θ ) = (z t - √ 1 -α t ϵ θ (z t , t, c))/ √ α t ,(4)" }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Given a corrupted video sequence represented as x = {x 0 , x 1 , . . . , x N } with dimensions R N ×3×H×W , consisting of N frames, we process this input alongside its associated binary mask sequence\nm = {m 0 , m 1 , . . . , m N } in R N ×1×H×W .\nThe corruption in x is modeled by the Hadamard product (⊙) of the original video y and the mask m, resulting in x = y ⊙ m. Our FGDVI aims to generate a set of spatio-temporally consistent inpainted video frames." 
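As a reference for the preliminaries above, here is a minimal sketch of the forward noising Q(., .) of Eq. 1, the clean-latent prediction P(., .) of Eq. 4, and the deterministic DDIM update of Eq. 3. The linear noise schedule, the omission of the condition c and of the U-Net call, and the plain timestep indexing are simplifying assumptions rather than the paper's exact setup.

```python
import torch

# A placeholder linear schedule purely for illustration.
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(z0: torch.Tensor, t: int, a_bar: torch.Tensor) -> torch.Tensor:
    """Q(z0, t) of Eq. 1: z_t = sqrt(a_t) z0 + sqrt(1 - a_t) eps."""
    a_t = a_bar[t]
    return a_t.sqrt() * z0 + (1.0 - a_t).sqrt() * torch.randn_like(z0)

def predict_z0(z_t: torch.Tensor, eps: torch.Tensor, t: int, a_bar: torch.Tensor) -> torch.Tensor:
    """P(z_t, eps) of Eq. 4: the clean-latent estimate later used for warping."""
    a_t = a_bar[t]
    return (z_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()

def ddim_step(z_t, eps, t, t_prev, a_bar, sigma_t: float = 0.0):
    """Deterministic DDIM update of Eq. 3 (sigma_t = 0 removes the random term)."""
    z0_hat = predict_z0(z_t, eps, t, a_bar)
    a_prev = a_bar[t_prev]
    direction = (1.0 - a_prev - sigma_t ** 2).sqrt() * eps
    noise = sigma_t * torch.randn_like(z_t)
    return a_prev.sqrt() * z0_hat + direction + noise
```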
}, { "figure_ref": [ "fig_2", "fig_1" ], "heading": "Turning Static Latents into Video Inpainter", "publication_ref": [ "b28" ], "table_ref": [], "text": "Our pipeline initiates by encoding the masked video into frame-wise latent representations, aimed at reducing the inference burden. This process is denoted as\nE(x) = z ϕ ∈ R N ×C×H ↓ ×W ↓\n, where the encoded latent reduces dimensionality, and (H ↓ , W ↓ ) denotes the spatial dimensions\n( H 4 , W4\n) of the encoded latent space. C represents the number of latent channels. In this context, and E refers to the VAE encoder, and the decoder D serves as its left inverse.\nOur key insight for efficiently training a video inpainting model is to re-use a pre-traiend, fixed unconditional LDM. We adopt θ ψ to represent spatial self-attention layers of LDM, which is parameterized by parameter θ. However, due to a lack of temporal modeling, while the model can produce high-quality individual frames, using it to directly render a video with T consecutive frames fails. We therefore introduce spatiotemporal attention neural network layers θ τ , to supplant spatial self-attention layers θ ψ of U-Net. These are designed to learn coherent spatiotemporal transformations for filling in missing regions. Our temporal LDM circumvents the bulky 3D convolution layers. For visualization, see the right part of Figure 3. Specifically, the original spatial LDM treats the video as a collection of independent images, tokenizing them into n patches with a window size of (h,w) prior to the spatial attention layers. In contrast, our spatiotemporal layers shift the temporal axis into the patch dimension and then reshape it back into the video dimension as follows:\nz ← rearrange(z, (b t) n (h w) → b n (t h w)) z ← θ τ (z) z ← rearrange(z, b n (t h w) → (b t) n (h w)),\nwhere we employ rearrange from the einops notation [29] to denote dimension transposition. For clarity, we add a batch dimension b and designate t to represent the time dimension.\nTo get seamless content for masked image, blend noisy latent with unmasked region [3] at each denoising steps is straightforward for image generation model. But it falls short in lacks of sptial awareness of uncorrupted area while generation. We concatenated video latent z ϕ and binary mask m with stochastic latent code z 0 along channel axis as input of LDM, as in Fig. 2. Despite this concatenated input no longer fits the original distribution of LDM, we find that its intrinsic pattern can be revealed by finetuning.\nWe fix the VAE modules and train the temporal LDM using the same noise schedule as the base image model. Our denoising criterion, as shown in Equation 2, is termed L diff . Additionally, we impose an L1 loss at the latent level of E(I) for reconstruction, denoted as L rec = ∥ẑ t→0 , E(I)∥ 1 , where ẑt→0 represents the estimated z 0 at an intermediate timestep as in Equation 4. The overall diffusion loss L inpaint is given as follows:\nL inpaint = L diff + L rec .\n(5)" }, { "figure_ref": [], "heading": "Flow Completion and Latent Propagation", "publication_ref": [ "b41" ], "table_ref": [], "text": "In video inpainting, it's simpler to fill masked regions using optical flow rather than hallucinating RGB pixels from scratch. And employing flow for pixel propagation aids in maintaining natural temporal consistency [42]. To achieve this goal, we partition the flow process into two parts as described in Sec. 4.2.1 and Sec. 
4.2.2, which acquiring a complete flow field for corrupted videos and imposing propagation to decrease the pressure of video inpainting." }, { "figure_ref": [], "heading": "Decoupled Optical Flow Completion", "publication_ref": [ "b30", "b27" ], "table_ref": [], "text": "To represent the varying motion direction and velocity of objects over time for masked videos, previous methods [18,31] have trained flow completion networks together with inpainting-oriented loss functions. However, they may lead to a suboptimal learning process and result in less precise completed flows [54]. Therefore, we decouple the stages of optical flow completion and inpainting in our methodology. We utilize a swift (< 0.01s/flow) model for flow estimation, executed end-to-end, and initialized with the pretrained SpyNet [28] checkpoint. Prior to predicting flow, we downscale the corrupted frames x, to a quarter resolution, aligning with the latent code z dimensions. For refining the model towards flow completion, which entails generating bidirectional completed optical flow, we conduct training on the same dataset with diffusion. The optical flow loss, inspired by prior work [18], is defined as:\nE i∈I,j∈J ∥ fi,i+1 -f i,i+1 ∥ 1 + ∥ fj,j-1 -f j,j-1 ∥ 1 ,(6)\nwhere I = {1, . . . , N -1} and J = {2, . . . , N } signify the index sets for forward and backward temporal directions, respectively. Here, fi,i+1 and f i,i+1 are the predicted and true forward flows between consecutive frames, while fj,j-1 and f j,j-1 denote the backward flows. Further details will be elaborated in the experimental section 5." }, { "figure_ref": [ "fig_1", "fig_3" ], "heading": "One-step Latent Propagation", "publication_ref": [ "b41", "b49", "b41", "b49", "b6" ], "table_ref": [], "text": "Although content can now be propagated using complete flows in image [54] or feature spaces [18], the repeated process of aggregating flows across frames [42,50] is timeconsuming. Alternatively mechanism like E 2 FGVI [18] and ProPainter [54] performs propagation at the feature level between adjacent frames, but that is only suitable for end-toend workflows. It is not compatible with diffusion models, which require iterating a U-Net over T timesteps (see Figure 2), rendering these existing methods computationally expensive. To address this, we propose a one-step latent propagation that shifts information in the latent space. This approach enhances encoded frames z ϕ prior to feeding the U-Net, thereby reducing the need to just a single propagation instead of T . Differing from previous methods [18,42,50,54], our approach strikes efficiency for diffusion models while maintaining flow coherence.\nAs illustrated in Figure 4, for adjacent frame latent codes z i , z j , we initially warp z j using the complete optical flow fi,j to align it with the i-th frame, yielding the warped backward propagation latent. We concatenate it with the i-th frame's latent code z i , mask m i , and flow fi,j . Subsequently, we apply a series of convolutions to compute the offset o i→j and modulation weight w i→j : o i→j , w i→j = Conv(W(z j , fi,j ), fi,j , z i , m i ), (7) where W denotes the warping operation. In line with the approaches [18,54], our method also incorporates flowguided deformable convolution D(•) to enhance alignment during latent propagation:\nẑi = Conv(D(ẑ j |o i→j , w i→j + fi,j ), z i , m i ),(8)\nwhere ẑi represents the enhanced latent code for the i-th frame. 
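A minimal sketch of the warping operation W(.) and the one-step propagation of Eqs. 7 and 8 follows. The latent channel count, the 3x3 kernel, the sigmoid modulation, and the interleaved (dy, dx) offset layout assumed for torchvision's deformable convolution are all illustrative choices, so this shows the mechanism rather than reproducing the paper's module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d

def flow_warp(feat: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp features with optical flow, i.e. the W(.) operation.
    feat: (N, C, H, W); flow: (N, 2, H, W) in pixels, assumed stored as (dx, dy)."""
    n, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(feat)   # (2, H, W), (x, y) order
    coords = base.unsqueeze(0) + flow                      # (N, 2, H, W)
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0          # normalize to [-1, 1]
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                   # (N, H, W, 2)
    return F.grid_sample(feat, grid, align_corners=True)

class LatentPropagation(nn.Module):
    """One-step propagation sketch: predict offsets/modulation from the warped
    neighbour latent, flow, current latent and mask (Eq. 7), then align the
    neighbour with a flow-guided deformable convolution and fuse (Eq. 8)."""

    def __init__(self, c: int = 4):
        super().__init__()
        self.pred = nn.Conv2d(2 * c + 2 + 1, 18 + 9, 3, padding=1)  # 18 offsets + 9 masks
        self.dcn = DeformConv2d(c, c, 3, padding=1)
        self.fuse = nn.Conv2d(2 * c + 1, c, 3, padding=1)

    def forward(self, z_i, z_j, flow_ij, mask_i):
        warped = flow_warp(z_j, flow_ij)
        pred = self.pred(torch.cat([warped, flow_ij, z_i, mask_i], dim=1))
        offset, mod = pred[:, :18], torch.sigmoid(pred[:, 18:])
        # Flow-guided offsets: add the completed flow to the learned residual,
        # assuming the (dy, dx) interleaved layout expected by deform_conv2d.
        offset = offset + flow_ij.flip(1).repeat(1, 9, 1, 1)
        aligned = self.dcn(z_j, offset, mod)
        return self.fuse(torch.cat([aligned, z_i, mask_i], dim=1))
```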
The mask condition m i is explicitly concatenated in the convolution blocks Conv(•) to improve the precision of alignment during latent code propagation." }, { "figure_ref": [ "fig_2" ], "heading": "Flow-guided Latent Interpolation", "publication_ref": [ "b15", "b12" ], "table_ref": [], "text": "The inference in vanilla diffusion models is inefficient, which becomes even more challenging in the video domain where multiple frames need to be processed. To alleviate this issue, we propose a hypothesis: in diffusion-based video inpainting, adjacent frames share similar latent, aggregating them provides only sparse information, making it exceedingly uneconomical to infer noise for each frame at every time step. As a solution, we propose a novel flowbased latent interpolation that tailored for the VDM to release the pressure of the memory and computation burden.\nAs shown in Fig. 3 (along with Algorithm 1), the noisy latent code z from corrupted video frames is divided into two subsets by parity. Specifically, the process entails a two-step alternating loop: even-indexed frame latents undergo denoising, whereas odd-indexed frame latents z i are obtained by interpolation using bidrectional optical flow f i,i+1 and f i,i-1 , instead of denoising. In the next step, only the interpolated latents are inputted into the LDM U-Net. Owing to the negligible time cost of warping latents, the duration of diffusion denoising is halved when only half of the frame latents are processed at each sampling timestep. Notably, we solely apply adjacent frames for wrapping due to optical flow fails at long-range distances.\nIdeally, we could iterate this process until latent becomes clean, but the significant artifacts arise from using downsampled flow, which blurs spatial details, leading to poor warping outcomes. To circumvent this, we limit the interpolation to the initial S denoising steps, during which the overarching structure of the image is shaped [16]. Further more, to counter potential occlusion issues in the flow warpping [13], we propose to perform the warping operation at the z 0 stage (as per Equation 4), supplemented by a corrective frame-wise mask (see Algorithm 1 line 10). These strategies ensure that when occlusions are either partially or \na 0 = m i * W(â 0 , f ) + (1 -m i ) * z ϕ 12:\na t-1 = Q(a 0 , t -1) (Eqn. 1)\n13:\nz t-1 ← a t-1 ∪ a t-1\n14:\ni ← odd if i == even else even 15: end for Figure 6. Ablation study of the optimal speeding step S." }, { "figure_ref": [ "fig_1" ], "heading": "Experiment", "publication_ref": [ "b5", "b25", "b8", "b16", "b35", "b39", "b14" ], "table_ref": [], "text": "Datasets and Metrics. We utilize YouTube-VOS [41], comprising 3,471 and 474 video clips for training and validation, generating random shape masks with diverse motion. For evaluation, we use MOSE [6] and DAVIS [26], assessing 50 and 48 test clips, respectively. Performance is gauged using official object masks and custom large free masks, simulating complex scenarios. All video frames are resized to 256 × 256 for both training and evaluation. This resizing standardizes the input data, ensuring consistency across various testing conditions. In line with prior research, we apply PSNR, SSIM [39], and LPIPS [52] for assessing reconstruction quality, alongside flow warping error E warp [17] and VFID [36] to evaluate temporal consistency. 
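Returning to Sec. 4.3, the alternating loop of Algorithm 1 can be condensed into the sketch below. It reuses the flow_warp, predict_z0, q_sample, and ddim_step helpers sketched earlier, treats unet as a stand-in for the temporal LDM, and assumes flows[i] holds the completed flow from frame i to the denoised neighbour used for warping; it therefore illustrates the control flow rather than the exact implementation.

```python
import torch

@torch.no_grad()
def flow_guided_interpolation(z_t, z_phi, masks, flows, unet, a_bar, T: int, S: int):
    """Denoise only every other frame for the first S of T steps; fill the rest
    by warping the neighbours' predicted z0 and re-noising (Algorithm 1)."""
    n = z_t.shape[0]
    keep = torch.arange(0, n, 2)    # frames denoised at this step
    skip = torch.arange(1, n, 2)    # frames filled by interpolation
    for t in reversed(range(T - S, T)):
        # 1) Denoise the kept frames, conditioned on [z_phi; m].
        eps = unet(z_t[keep], t, torch.cat([z_phi[keep], masks[keep]], dim=1))
        z_prev_keep = ddim_step(z_t[keep], eps, t, t - 1, a_bar)
        z0_hat = predict_z0(z_t[keep], eps, t, a_bar)
        # 2) Interpolate the skipped frames: warp the nearest denoised
        #    neighbour's z0, keep known content from z_phi (Alg. 1, l.11),
        #    then re-noise to step t-1 (Alg. 1, l.12).
        z0_full = torch.zeros_like(z_t)
        z0_full[keep] = z0_hat
        nbr = torch.where(skip > 0, skip - 1, skip + 1)
        z0_skip = flow_warp(z0_full[nbr], flows[skip])
        z0_skip = masks[skip] * z0_skip + (1 - masks[skip]) * z_phi[skip]
        z_prev_skip = q_sample(z0_skip, t - 1, a_bar)
        # 3) Reassemble and swap parity so the other half is denoised next.
        z_next = torch.empty_like(z_t)
        z_next[keep], z_next[skip] = z_prev_keep, z_prev_skip
        z_t, keep, skip = z_next, skip, keep
    return z_t   # the remaining T - S steps denoise every frame as usual
```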
These metrics collectively provide a comprehensive evaluation of the performance of our model, covering both spatial and temporal aspects of video processing.\nImplementation details. For the decoupled optical flow completion module, we train it on the 256x256 resolution and input a flow sequence of length 10, while running for 70K iterations on two TITAN XP GPU(12G) cards with a batch size of 5. We adopt the Ranger optimizer [40] with initial learning rate of 0.00005. As shown in Fig. 2, during training, the latent propagation latent is jointly trained with diffusion model, where flow completion module is fixed. We set the input to be video clips of length 4 concatenated with 5 reference frames, and only local video clips are improved by propagation. Besides, we leverage the Adam [15] optimizer with initial learning rate of 0.0001 while running 70K iterations on three Tesla V100 GPU(32G) cards with a batch size of 1. Considering the DDIM sampling, we set S = 5, T = 10 for all our experiments. This training approach, with its distinct phase separation and resource allocation, ensures optimal learning and efficiency." }, { "figure_ref": [ "fig_15" ], "heading": "Comparison", "publication_ref": [ "b49", "b5", "b25", "b49", "b49" ], "table_ref": [ "tab_1", "tab_1" ], "text": "Quantitative comparisons. We conducted a comprehensive comparison of our FGDVI method with six leadingedge approaches, namely Propainter [54], E 2 FGVI [18], FGT [50], FuseFormer The evaluations were performed on the MOSE [6] and DAVIS [26] datasets. To realistically represent scenarios such as object removal, our analysis initially focused on using the official object masks from MOSE and DAVIS. Additionally, we introduced stationary, extensive free masks to replicate more complex situations, as detailed in Table 1.\nThe quantitative assessments were executed under identical conditions, employing a neighbor window of size 5 and a reference distance of 12. Despite the inherent disadvantage of resizing other state-of-the-art methods, which were trained at a resolution of 432×240, to 256×256, our findings indicate that FGDVI outperforms these methods significantly. As evidenced in Table 1, FGDVI exhibits superior performance in PSNR in object removal scenarios and yields impressive results in settings involving large masks. Moreover, FGDVI consistently excels in LPIPS across all comparative analyses. Regarding temporal consistency, FGDVI demonstrates enhanced outcomes in E warp , marking a substantial improvement of 10%. These results underscore FGDVI's exceptional proficiency in video inpainting, achieving higher quality and improved consistency. Qualitative comparison. For the quantitative comparison, we compare FGDVI with three flow-guided frameworks that based on transformer, including Propainter [54], E 2 FGVI [18] and FGT [50]. The qualitative comparisons are conducted under the same setting for inference. As shown in Fig. 5 and Fig. 11, FGT [50] leads to enormous artifacts within the the missing region under condition of large free masks at line 2 nd . While E 2 FGVI [18] and Propainter [54] generates the blurry results, which lack enough details at lines 3 rd and 6 th . Besides, they fails to accomplish inpainting both under complex situations and object removal at lines 2 nd and 4 th . In contrast, FGDVI synthesizes more realistic results regardless of the conditions, which verifies the superiority of it over SOAT solutions. 
Specifically, FGDVI recoveries the crowd at line 2 nd as much as possible without pronounced artifacts under the large free mask setting. Meanwhile, it also manages to produce the convincing results under the object removal setting, such as the leg of the horse at lines 1 st , vivid texture at line 3 th , and iron railings at line 4 th , where it seems difficult for other SOTA methods. For further examples, see appendix video demos." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "Speeding Steps. In order to find the optimal value of latent interpolation step S, we conducted experiments on PSNR and VFID to illustrate the performance's variation when S changes from 0 to T . When S = 0, there is no latent interpolation involved, and in contrast, S = T means we expedite the whole diffusion sampling process. As the Fig. 6 exhibits, PSNR will rise steadily to the peak till it comes to the 80% of T , while the VFID also shows the sim- " }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this work, we are the first to reveal the effectiveness of a diffusion-based method in video inpainting, Our proposed ilar tendency. Therefore, we choose S = T /2, especially S = 5, T = 10 as the basic setting for all our experiments. As shown in Fig. 7, under this condition, flow-guided interpolation has a positive effect in terms of refinement. The qualitative and quantitative results both demonstrate latent interpolation achieving two birds with one stone: when it accelerates the denosing process, the performance on both inpainting quality and temporal consistency of FGDVI are naturally improved.\nEfficiency analysis. In order to demonstrate the effectiveness of our proposed flow-guided latent interpolation method for the diffusion model, we calculate the per-frame sampling time under the speeding steps S = T /2 from T = 10 to 50. As displayed in Fig. 8, when T = 50, compared to the vanilla diffusion baseline, our approach significantly boosts inference speed by approximately 29%, which can be seamlessly integrated into any various video diffusion model applications without any training. Study of spatiotemporal attention mechanism. For purpose of unlocking the capabilities of the pretrained image diffusion model for video, we extend vanilla attentions to the spatiotemporal domain, the results in Tab. 3 verify its huge improvement on the inpainting quality (PSNR) and temporal consistency (VFID)." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [ "b0" ], "table_ref": [], "text": "In this work, we are the first to reveal the effectiveness of a diffusion-based method in video inpainting, Our proposed FGDVI leverages optical flow to notably improve inpainting quality and temporal consistency. Besides, we introduce model-agnostic flow-guided latent interpolation method to expedite denoising sampling process, which can be seamlessly integrated into any other Video Diffusion Model applications without any training. As a baseline of video inpainting diffusion model, extensive experiments show our method's superiority in complex situations compared to SOTA methods. For sake of the traditional video inpainting, we temporarily employ a pre-trained LDM instead of the Stable Diffusion's [1], where U-Net contain cross attention layer for text input. But in the future, we aims at adding text modal as input with more powerful SD as diffusion model. 
Moreover, for better temporal consistency, we currently leverage adjacent frames for flow-based interpolation. In the future, we plan to design a more capable algorithm that relies on fewer key frames while bringing greater improvements. " } ]
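The warping error E warp used above as the temporal-consistency metric is not defined in this excerpt. As a reference point, the sketch below shows one standard way to compute a flow-warping error: warp frame t+1 back to frame t along the optical flow and average the (optionally masked) squared difference. The tensor layout, the helper names, and the optional occlusion mask are assumptions made only for illustration, not the authors' evaluation code; any off-the-shelf flow estimator can supply `flows`.

```python
# Minimal sketch of a flow-warping error (E_warp-style) metric.
# Assumptions (not from the paper's code): `frames` are float tensors in [0, 1]
# shaped (T, 3, H, W); `flows[t]` is the optical flow from frame t to frame t+1,
# shaped (T-1, 2, H, W), produced by any off-the-shelf estimator.
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    """Warp `img` (B, C, H, W) back to the previous frame using `flow` (B, 2, H, W)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device, dtype=img.dtype),
        torch.arange(w, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow           # absolute sampling positions
    grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0                   # normalise x to [-1, 1]
    grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0                   # normalise y to [-1, 1]
    grid = torch.stack((grid_x, grid_y), dim=-1)                      # (B, H, W, 2), x first
    return F.grid_sample(img, grid, mode="bilinear", padding_mode="border", align_corners=True)

def warping_error(frames, flows, masks=None):
    """Average per-pixel MSE between frame t and frame t+1 warped back to t."""
    errors = []
    for t in range(frames.shape[0] - 1):
        warped = backward_warp(frames[t + 1 : t + 2], flows[t : t + 1])
        diff = (warped - frames[t : t + 1]) ** 2
        if masks is not None:                                         # optional non-occluded mask (B, 1, H, W)
            m = masks[t : t + 1]
            errors.append((diff * m).sum() / (m.sum() * diff.shape[1] + 1e-8))
        else:
            errors.append(diff.mean())
    return torch.stack(errors).mean()
```

In practice the error is usually averaged over all clips of a dataset, which matches how the roughly 10% improvement reported above would be aggregated.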
Figure 1. Vivid inpainting results in huge movement and dark situations.
Flow-Guided Diffusion for Video Inpainting
[ { "figure_caption": "Figure 1 .1Figure 1. Our FGDVI utilizes flow-guided diffusion for video inpainting, excelling in scenarios with substantial motion and darkness.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Overview of the flow-guided diffusion model. The VAE and flow completion module are fixed during training.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Process of the Flow-based Interpolation (left), temporal LDM and unidirectional interpolation (right).", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. The process of the Flow-based Propagation.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Algorithm 11Flow-guided Latent Interpolation Input: Stochastic latent z T , encoded video latent z ϕ and mask sequence m with N frames, completed flow f , diffusion timesteps T , truncation timestamp S, U-Net of temporal LDM θ. Output: z S 1: odd := {1, 3, • • • , N -1} 2: even := {0, 2, • • • , N } 3: i ← odd if T mod 2 = 0 else even 4: i := {0, 1, • • • , N }\\i 5: z ϕ := z ϕ,i 6: for t = T to S do7:", "figure_data": "", "figure_id": "fig_4", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "a t := z t i 9 :9a t-1 , ϵ θ ← denoise(θ; a t , t, [z ϕ ; m]) 10: â0 = P(a t , ϵ θ ) (Eqn. 4) 11:", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 8 .T = 10 to 50 .Figure 7 .Figure 8 .85078Figure 8. The visual comparison of the latent interpolation. sampling time under the speeding steps S = T/2 from", "figure_data": "", "figure_id": "fig_7", "figure_label": "85078", "figure_type": "figure" }, { "figure_caption": "[20], DSTT [49], and STTN [49].", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 .T = 10 to 50 .Figure 9 .8509Figure 8. The visual comparison of the latent interpolation. sampling time under the speeding steps S = T/2 from", "figure_data": "", "figure_id": "fig_9", "figure_label": "8509", "figure_type": "figure" }, { "figure_caption": "413 432 T = 10 to 50 .Figure 10 .4325010Fig. 6 exhibits, PSNR will rise steadily to the peak till it", "figure_data": "", "figure_id": "fig_10", "figure_label": "4325010", "figure_type": "figure" }, { "figure_caption": "Fig. 10, under the enhancement of the optical flows, FGDVI has a better performance in processing temporal cues when large movements occur. The comparison between the 2 st and the 4 st line of Tab. 2 also indicates the superiority of our latent propagation module.", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "FGDVIFigure 10 .10Figure 10. Ablation studies of latent propagation module.", "figure_data": "", "figure_id": "fig_12", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Study of decoupled flow completion module. In Tab. 2, we compare different flow conditions for latent propagation, which evidences the effectiveness of our decoupled flow completion module. 
Besides, Fig.9displays the representative results under different flow conditions, which also prove the strength of flow completion module.Study of latent propagation module.To examine the improvement of the one-step latent propagation module, we directly use the origin masked video latent z ϕ without latent propagation to concatenate with the binary masks m and stochastic latent code z 0 as the LDM's input. As shown in Fig.10, under the enhancement of the optical flows, FGDVI has a better performance in processing temporal cues when large movements occur. The comparison between the 2 st and the 4 st line of Tab. 2 also indicates the superiority of our latent propagation module.", "figure_data": "", "figure_id": "fig_13", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 11. More qualitative comparison with SOTA video inpainting methods. Please zoom in for better view.", "figure_data": "", "figure_id": "fig_14", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. More qualitative comparison with SOTA video inpainting methods. Please zoom in for better view.2 Figure 11. More qualitative comparison with SOTA video inpainting methods. Please zoom in for better view.", "figure_data": "", "figure_id": "fig_15", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Quantitative comparisons with SOTA methods on MOSE[6] and DAVIS[26] under object removal and large free masks settings. The best two results are highlighted in bold and underline. ↑ indicates higher is better. ↓ indicates lower is better. Ewarp * denotes Ewarp × 10 -2 .", "figure_data": "Masked FramesOursPropainterE 2 FGVIFGTSTTN [49]23.080.81440.1883.3122.540.80690.1693.37↓DSTT [49]25.170.86550.1953.7424.610.86390.1533.82Masked Frames FuseFormer [20] FGT [50] E2FGVI [18] Propainter [54]25.59 24.62 26.17 25.75Ours 0.8770 0.8628 0.8855 0.88180.190 0.176 0.163 0.176Propainter 3.76 3.60 3.31 3.5225.02 24.94 25.57 25.46E 2 FGVI 0.8761 0.8713 0.8850 0.88530.145 0.106 0.117 0.1113.87 FGT 3.38 3.41 3.40Ours25.900.87320.1503.0325.570.88040.0873.03Object Seg. MaskMOSE [6]DAVIS [26]STTN [49] DSTT [49] FuseFormer [20] FGT [50] E2FGVI [18] Propainter [54]22.05 24.37 24.38 24.18 24.53 24.250.7956 0.8543 0.8547 0.8469 0.8526 0.84890.188 0.199 0.199 0.187 0.173 0.1893.15 3.72 3.73 3.61 3.25 3.5721.06 22.40 22.30 22.49 22.46 22.370.7518 0.8040 0.8018 0.8039 0.7987 0.79890.171 0.160 0.159 0.132 0.105 0.1343.38 3.79 3.84 3.48 2.97 3.42↓Ours24.580.84910.154102.9322.600.80060.1052.99fully encompassed by the inpainting mask, subsequent steps are optimally leveraged to enhance the final image quality.", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "CVPR #2741CVPR Free Large Mask STTN [47]23.08MOSE [6] 0.8144 0.1883.3122.54DAVIS [26] 0.8069 0.169⇤ # 3.37CVPR #2741DSTT [47]25.170.86550.1953.7424.610.86390.1533.82FuseFormer [20]25.590.87700.1903.7625.020.87610.1453.87FGT [48] E2FGVI [18] Propainter [52]24.62 26.17 25.750.8628 0.8855 0.88180.176 0.163 0.1763.60 3.31 3.5224.94 25.57 25.460.8713 0.8850 0.88530.106 0.117 0.1113.38 3.41 3.40Ours25.900.87320.1503.0325.570.88040.0873.03Object Seg. 
Mask STTN [47] DSTT [47] FuseFormer [20] FGT [48] E2FGVI [18] Propainter [52]22.05 24.37 24.38 24.18 24.53 24.25MOSE [6] 0.7956 0.188 0.8543 0.199 0.199 0.8547 0.8469 0.187 0.8526 0.173 0.8489 0.1893.15 3.72 3.73 3.61 3.25 3.5721.06 22.40 22.30 22.49 22.46 22.37DAVIS [26] 0.7518 0.171 0.8040 0.160 0.8018 0.159 0.8039 0.132 0.7987 0.105 0.7989 0.134⇤ # 3.38 3.79 3.84 3.48 2.97 3.42Ours24.580.84910.1542.9322.600.80060.1052.99", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Quantitative", "figure_data": "", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative comparisons with SOTA methods on MOSE[6] and DAVIS[26] under object removal and large free masks settings.", "figure_data": "CVPR #2741", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Effectiveness of flow completion module.", "figure_data": "Case w/o optical flow w/ corrupted flow w/ completed flow w/ gt flowPSNR↑ SSIM↑ VID ↓ 25.16 0.8687 0.700 25.42 0.8778 0.690 26.11 0.8948 0.588 26.12 0.8950 0.563", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study of the attention mechanism.", "figure_data": "Case LDM w/ spatial attention w/o latent propagation FGDVIPSNR \" SSIM \" VID # 20.13 0.7440 1.188 22.96 0.8107 0.895 25.16 0.8687 0.700 26.11 0.8948 0.588", "figure_id": "tab_7", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation study of the attention mechanism.", "figure_data": "Case LDM w/ spatial attention w/o latent propagation FGDVIPSNR ↑ SSIM ↑ VID ↓ 20.13 0.7440 1.188 22.96 0.8107 0.895 25.16 0.8687 0.700 26.11 0.8948 0.588", "figure_id": "tab_8", "figure_label": "3", "figure_type": "table" } ]
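The figure material above includes the pseudocode of Algorithm 1 (flow-guided latent interpolation). A possible Python rendering of that control flow is sketched below; `denoise`, `predict_z0`, and `flow_warp` are assumed stand-ins for one DDIM step of the temporal LDM, the ẑ0 prediction of Eqn. (4), and the flow-based warping operator W, and the split into kept versus skipped frames follows one plausible reading of the pseudocode rather than the released implementation.

```python
# Sketch of Algorithm 1 (flow-guided latent interpolation), following the
# pseudocode in the figure captions. Only one half of the frames is denoised
# from step T down to S; the other half is filled by warping the predicted
# clean latents along the completed flow and blending with the known latents.
import torch

def flow_guided_latent_interpolation(z_T, z_phi, masks, flow,
                                     denoise, predict_z0, flow_warp, T, S):
    n = z_T.shape[0]                        # number of frames
    odd = list(range(1, n, 2))
    even = list(range(0, n, 2))
    kept = odd if T % 2 == 0 else even      # frames that are actually denoised
    skipped = [i for i in range(n) if i not in kept]

    z_t = z_T
    for t in range(T, S, -1):
        a_t = z_t[kept]                                         # latents of the kept frames
        cond = torch.cat([z_phi[kept], masks[kept]], dim=1)     # [z_phi; m] conditioning
        a_prev, eps = denoise(a_t, t, cond)                     # one DDIM step of the temporal LDM
        a0_hat = predict_z0(a_t, eps, t)                        # Eqn. (4)
        # Warp the clean estimate to the skipped frames and keep the known regions.
        warped = flow_warp(a0_hat, flow, src=kept, dst=skipped)
        a0_skip = masks[skipped] * warped + (1 - masks[skipped]) * z_phi[skipped]
        z_next = z_t.clone()
        z_next[kept] = a_prev
        z_next[skipped] = a0_skip                               # interpolated latents
        z_t = z_next
    return z_t                                                  # z_S, handed to the remaining DDIM steps
```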
Bohai Gu; Yongsheng Yu; Heng Fan; Libo Zhang
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Stable Diffusion", "year": "2022" }, { "authors": "Jie An; Songyang Zhang; Harry Yang; Sonal Gupta; Jia-Bin Huang; Jiebo Luo; Xi Yin", "journal": "", "ref_id": "b1", "title": "Latent-shift: Latent diffusion with temporal shift for efficient text-to-video generation", "year": "2023" }, { "authors": "Omri Avrahami; Ohad Fried; Dani Lischinski", "journal": "ACM Trans. Graph", "ref_id": "b2", "title": "Blended latent diffusion", "year": "2023" }, { "authors": "Andreas Blattmann; Robin Rombach; Huan Ling; Tim Dockhorn; Seung Wook Kim; Sanja Fidler; Karsten Kreis", "journal": "IEEE", "ref_id": "b3", "title": "Align your latents: High-resolution video synthesis with latent diffusion models", "year": "2023" }, { "authors": "Hyungjin Chung; Byeongsu Sim; Jong Chul; Ye ", "journal": "", "ref_id": "b4", "title": "Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction", "year": "2022" }, { "authors": "Henghui Ding; Chang Liu; Shuting He; Xudong Jiang; H S Philip; Song Torr; Bai", "journal": "", "ref_id": "b5", "title": "MOSE: A new dataset for video object segmentation in complex scenes", "year": "2023" }, { "authors": "Mounira Ebdelli; Olivier Le Meur; Christine Guillemot", "journal": "IEEE Trans. Image Process", "ref_id": "b6", "title": "Video inpainting with short-term windows: Application to object removal and error concealment", "year": "2015" }, { "authors": "Fanda Fan; Chaoxu Guo; Litong Gong; Biao Wang; Tiezheng Ge; Yuning Jiang; Chunjie Luo; Jianfeng Zhan", "journal": "ACM MM", "ref_id": "b7", "title": "Hierarchical masked 3d diffusion model for video outpainting", "year": "2023" }, { "authors": "Chen Gao; Ayush Saraf; Jia-Bin Huang; Johannes Kopf", "journal": "", "ref_id": "b8", "title": "Flow-edge guided video completion", "year": "2020" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b9", "title": "Classifier-free diffusion guidance", "year": "2022" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "NeurIPS", "ref_id": "b10", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Jonathan Ho; Tim Salimans; Alexey A Gritsenko; William Chan; Mohammad Norouzi; David J ", "journal": "NeurIPS", "ref_id": "b11", "title": "Fleet. 
Video diffusion models", "year": "2022" }, { "authors": "Zhihao Hu; Dong Xu", "journal": "", "ref_id": "b12", "title": "Videocontrolnet: A motionguided video-to-video translation framework by using diffusion model with controlnet", "year": "2023" }, { "authors": "Jaeyeon Kang; Seoung Wug Oh; Seon Joo Kim", "journal": "", "ref_id": "b13", "title": "Error compensation framework for flow-guided video inpainting", "year": "2022" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "ICLR", "ref_id": "b14", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Mingi Kwon; Jaeseok Jeong; Youngjung Uh", "journal": "ICLR", "ref_id": "b15", "title": "Diffusion models already have A semantic latent space", "year": "2023" }, { "authors": "Wei-Sheng Lai; Jia-Bin Huang; Oliver Wang; Eli Shechtman; Ersin Yumer; Ming-Hsuan Yang", "journal": "", "ref_id": "b16", "title": "Learning blind video temporal consistency", "year": "2018" }, { "authors": "Zhen Li; Chengze Lu; Jianhua Qin; Chun-Le Guo; Ming-Ming Cheng", "journal": "", "ref_id": "b17", "title": "Towards an end-to-end framework for flowguided video inpainting", "year": "2022" }, { "authors": "Jun Hao Liew; Hanshu Yan; Jianfeng Zhang; Zhongcong Xu; Jiashi Feng", "journal": "", "ref_id": "b18", "title": "Magicedit: High-fidelity and temporally coherent video editing", "year": "2023" }, { "authors": "Rui Liu; Hanming Deng; Yangyi Huang; Xiaoyu Shi; Lewei Lu; Wenxiu Sun; Xiaogang Wang; Jifeng Dai; Hongsheng Li", "journal": "", "ref_id": "b19", "title": "Fuseformer: Fusing fine-grained information in transformers for video inpainting", "year": "2021" }, { "authors": "Rui Liu; Hanming Deng; Yangyi Huang; Xiaoyu Shi; Lewei Lu; Wenxiu Sun; Xiaogang Wang; Jifeng Dai; Hongsheng Li", "journal": "", "ref_id": "b20", "title": "Decoupled spatial-temporal transformer for video inpainting", "year": "2021" }, { "authors": "Cheng Lu; Yuhao Zhou; Fan Bao; Jianfei Chen; Chongxuan Li; Jun Zhu", "journal": "NeurIPS", "ref_id": "b21", "title": "Dpm-solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps", "year": "2022" }, { "authors": "Alasdair Newson; Andrés Almansa; Matthieu Fradet; Yann Gousseau; Patrick Pérez", "journal": "CoRR", "ref_id": "b22", "title": "Video inpainting of complex scenes", "year": "2015" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b23", "title": "GLIDE: towards photorealistic image generation and editing with text-guided diffusion models", "year": "2022" }, { "authors": "Tengfei Hao Ouyang; Qifeng Wang; Chen", "journal": "", "ref_id": "b24", "title": "Internal video inpainting by implicit long-range propagation", "year": "2021" }, { "authors": "Federico Perazzi; Jordi Pont-Tuset; Brian Mcwilliams; Luc Van Gool; Markus H Gross; Alexander Sorkine-Hornung", "journal": "", "ref_id": "b25", "title": "A benchmark dataset and evaluation methodology for video object segmentation", "year": "2016" }, { "authors": "Dustin Podell; Zion English; Kyle Lacey; Andreas Blattmann; Tim Dockhorn; Jonas Müller; Joe Penna; Robin Rombach", "journal": "", "ref_id": "b26", "title": "SDXL: improving latent diffusion models for high-resolution image synthesis", "year": "2023" }, { "authors": "Anurag Ranjan; Michael J Black", "journal": "", "ref_id": "b27", "title": "Optical flow estimation using a spatial pyramid network", "year": "2017" }, { "authors": "Alex Rogozhnikov", 
"journal": "ICLR", "ref_id": "b28", "title": "Einops: Clear and reliable tensor manipulations with einstein-like notation", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b29", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Hao Shi; Qi Jiang; Kailun Yang; Xiaoting Yin; Kaiwei Wang", "journal": "", "ref_id": "b30", "title": "Flowlens: Seeing beyond the fov via flow-guided clip-recurrent transformer", "year": "2022" }, { "authors": "Uriel Singer; Adam Polyak; Thomas Hayes; Xi Yin; Jie An; Songyang Zhang; Qiyuan Hu; Harry Yang; Oron Ashual; Oran Gafni; Devi Parikh; Sonal Gupta; Yaniv Taigman", "journal": "", "ref_id": "b31", "title": "Make-a-video: Text-to-video generation without text-video data", "year": "2023" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "ICLR", "ref_id": "b32", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Nick C Tang; Chiou-Ting Hsu; Chih-Wen Su; Timothy K Shih; Hong-Yuan Mark Liao", "journal": "IEEE Trans. Multim", "ref_id": "b33", "title": "Video inpainting on digitized vintage films via maintaining spatiotemporal continuity", "year": "2011" }, { "authors": "Su Wang; Chitwan Saharia; Ceslee Montgomery; Jordi Pont-Tuset; Shai Noy; Stefano Pellegrini; Yasumasa Onoe; Sarah Laszlo; David J Fleet; Radu Soricut; Jason Baldridge; Mohammad Norouzi; Peter Anderson; William Chan", "journal": "", "ref_id": "b34", "title": "Imagen editor and editbench: Advancing and evaluating textguided image inpainting", "year": "2023" }, { "authors": " Ting-Chun; Ming-Yu Wang; Jun-Yan Liu; Nikolai Zhu; Andrew Yakovenko; Jan Tao; Bryan Kautz; Catanzaro", "journal": "NeurIPS", "ref_id": "b35", "title": "Video-to-video synthesis", "year": "2018" }, { "authors": "Wenjing Wang; Huan Yang; Zixi Tuo; Huiguo He; Junchen Zhu; Jianlong Fu; Jiaying Liu", "journal": "", "ref_id": "b36", "title": "Videofactory: Swap attention in spatiotemporal diffusions for text-to-video generation", "year": "2023" }, { "authors": "Xiang Wang; Hangjie Yuan; Shiwei Zhang; Dayou Chen; Jiuniu Wang; Yingya Zhang; Yujun Shen; Deli Zhao; Jingren Zhou", "journal": "", "ref_id": "b37", "title": "Videocomposer: Compositional video synthesis with motion controllability", "year": "2023" }, { "authors": "Zhou Wang; Alan C Bovik; Hamid R Sheikh; Eero P Simoncelli", "journal": "IEEE Trans. 
Image Process", "ref_id": "b38", "title": "Image quality assessment: from error visibility to structural similarity", "year": "2004" }, { "authors": "Less Wright", "journal": "", "ref_id": "b39", "title": "Ranger -a synergistic optimizer", "year": "2019" }, { "authors": "Ning Xu; Linjie Yang; Yuchen Fan; Jianchao Yang; Dingcheng Yue; Yuchen Liang; Brian L Price; Scott Cohen; Thomas S Huang", "journal": "", "ref_id": "b40", "title": "Youtube-vos: Sequence-to-sequence video object segmentation", "year": "2018" }, { "authors": "Rui Xu; Xiaoxiao Li; Bolei Zhou; Chen Change Loy", "journal": "", "ref_id": "b41", "title": "Deep flow-guided video inpainting", "year": "2019" }, { "authors": "Jiahui Yu; Zhe Lin; Jimei Yang; Xiaohui Shen; Xin Lu; Thomas S Huang", "journal": "", "ref_id": "b42", "title": "Free-form image inpainting with gated convolution", "year": "2019" }, { "authors": "Sihyun Yu; Kihyuk Sohn; Subin Kim; Jinwoo Shin", "journal": "", "ref_id": "b43", "title": "Video probabilistic diffusion models in projected latent space", "year": "2023" }, { "authors": "Yongsheng Yu; Dawei Du; Libo Zhang; Tiejian Luo", "journal": "", "ref_id": "b44", "title": "Unbiased multi-modality guidance for image inpainting", "year": "2022" }, { "authors": "Yongsheng Yu; Libo Zhang; Heng Fan; Tiejian Luo", "journal": "", "ref_id": "b45", "title": "High-fidelity image inpainting with GAN inversion", "year": "2022" }, { "authors": "Yongsheng Yu; Fan Heng; Libo Zhang", "journal": "", "ref_id": "b46", "title": "Deficiencyaware masked transformer for video inpainting", "year": "2023" }, { "authors": "Yongsheng Yu; Hao Wang; Tiejian Luo; Fan Heng; Libo Zhang", "journal": "", "ref_id": "b47", "title": "Magic: Multi-modality guided image completion", "year": "2023" }, { "authors": "Yanhong Zeng; Jianlong Fu; Hongyang Chao", "journal": "", "ref_id": "b48", "title": "Learning joint spatial-temporal transformations for video inpainting", "year": "2020" }, { "authors": "Kaidong Zhang; Jingjing Fu; Dong Liu", "journal": "", "ref_id": "b49", "title": "Flow-guided transformer for video inpainting", "year": "2008" }, { "authors": "Lvmin Zhang; Maneesh Agrawala", "journal": "", "ref_id": "b50", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b51", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "Daquan Zhou; Weimin Wang; Hanshu Yan; Weiwei Lv; Yizhe Zhu; Jiashi Feng", "journal": "", "ref_id": "b52", "title": "Magicvideo: Efficient video generation with latent diffusion models", "year": "2022" }, { "authors": "Shangchen Zhou; Chongyi Li; Kelvin C K Chan; Chen Change Loy", "journal": "", "ref_id": "b53", "title": "Propainter: Improving propagation and transformer for video inpainting", "year": "2008" }, { "authors": "Junchen Zhu; Huan Yang; Wenjing Wang; Huiguo He; Zixi Tuo; Yongsheng Yu; Wen-Huang Cheng; Lianli Gao; Jingkuan Song; Jianlong Fu", "journal": "", "ref_id": "b54", "title": "Mobilevidfactory: Automatic diffusion-based social media video generation for mobile devices from text", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 63.42, 426.58, 222.94, 24.2 ], "formula_id": "formula_0", "formula_text": "q(z t |z 0 ) = Q(z 0 , t) = N (z t ; √ α t z 0 , (1 -α t )I),(1)" }, { "formula_coordinates": [ 3, 78.13, 541.17, 208.24, 19.53 ], "formula_id": "formula_1", "formula_text": "arg min θ E z,ϵ∼N (0,1),t,c ∥ϵ -ϵ θ (z t , t, c)∥ 2 2 ,(2)" }, { "formula_coordinates": [ 3, 76.43, 613.09, 209.94, 69.68 ], "formula_id": "formula_2", "formula_text": "z t-1 = √ α t-1 ẑt→0 predicted 'z0' + 1 -α t-1 -σ 2 t ϵ θ (z t , t, c) direction pointing to zt + σ t ϵ t random noise ,(3)" }, { "formula_coordinates": [ 3, 318.9, 124.09, 226.21, 25.27 ], "formula_id": "formula_3", "formula_text": "ẑ0 = P(z t , ϵ θ ) = (z t - √ 1 -α t ϵ θ (z t , t, c))/ √ α t ,(4)" }, { "formula_coordinates": [ 3, 308.86, 220.88, 236.25, 22.69 ], "formula_id": "formula_4", "formula_text": "m = {m 0 , m 1 , . . . , m N } in R N ×1×H×W ." }, { "formula_coordinates": [ 3, 308.86, 330.56, 236.25, 29.24 ], "formula_id": "formula_5", "formula_text": "E(x) = z ϕ ∈ R N ×C×H ↓ ×W ↓" }, { "formula_coordinates": [ 3, 308.86, 364.99, 26.44, 14.33 ], "formula_id": "formula_6", "formula_text": "( H 4 , W4" }, { "formula_coordinates": [ 3, 327.65, 639.47, 198.67, 47.17 ], "formula_id": "formula_7", "formula_text": "z ← rearrange(z, (b t) n (h w) → b n (t h w)) z ← θ τ (z) z ← rearrange(z, b n (t h w) → (b t) n (h w))," }, { "formula_coordinates": [ 4, 124.18, 559.29, 88.12, 17.29 ], "formula_id": "formula_8", "formula_text": "L inpaint = L diff + L rec ." }, { "formula_coordinates": [ 4, 315.87, 526.46, 229.24, 19.67 ], "formula_id": "formula_9", "formula_text": "E i∈I,j∈J ∥ fi,i+1 -f i,i+1 ∥ 1 + ∥ fj,j-1 -f j,j-1 ∥ 1 ,(6)" }, { "formula_coordinates": [ 5, 76.08, 395.03, 210.29, 19.67 ], "formula_id": "formula_10", "formula_text": "ẑi = Conv(D(ẑ j |o i→j , w i→j + fi,j ), z i , m i ),(8)" }, { "formula_coordinates": [ 5, 310.63, 649.47, 176.85, 23.38 ], "formula_id": "formula_11", "formula_text": "a 0 = m i * W(â 0 , f ) + (1 -m i ) * z ϕ 12:" }, { "formula_coordinates": [ 5, 335.76, 673.38, 83.42, 18.75 ], "formula_id": "formula_12", "formula_text": "z t-1 ← a t-1 ∪ a t-1" } ]
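The rearrange pattern listed among the formulas above ((b t) n (h w) → b n (t h w), apply θτ, then invert) describes how a pretrained spatial attention block can be made to attend across frames. A minimal wrapper in that spirit is sketched below; the tensor layout and the attention module are assumptions chosen only to mirror the pattern string, not the paper's network definition.

```python
# Sketch of extending a pretrained *spatial* attention layer to spatio-temporal
# attention, following the rearrange pattern given in the formula list:
#   z <- rearrange(z, '(b t) n (h w) -> b n (t h w)');  z <- theta_tau(z);  rearrange back.
import torch
from einops import rearrange

class SpatioTemporalWrapper(torch.nn.Module):
    def __init__(self, spatial_attn, num_frames):
        super().__init__()
        self.attn = spatial_attn      # shape-preserving block from the pretrained image LDM
        self.t = num_frames

    def forward(self, z):
        # z: ((b*t), n, h*w) with the frames stacked into the batch axis;
        # `n` is whatever per-token axis the block keeps (heads or channels).
        z = rearrange(z, "(b t) n hw -> b n (t hw)", t=self.t)    # merge time into the token axis
        z = self.attn(z)                                          # attention now spans all frames
        z = rearrange(z, "b n (t hw) -> (b t) n hw", t=self.t)    # restore the per-frame layout
        return z
```

Wrapping each attention block of the frozen image U-Net this way is the kind of change ablated in Tab. 3 ("w/ spatial attention" versus the full spatiotemporal model).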
2024-03-23
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b38", "b9", "b17", "b57", "b0", "b36", "b61", "b66", "b32", "b55", "b60", "b21", "b15", "b4", "b42", "b34", "b35", "b33", "b12", "b5", "b44", "b32", "b24", "b52", "b1", "b8" ], "table_ref": [], "text": "3D Visual Grounding (3DVG) aims to localize specific objects within 3D scenes by using a series of textual descriptions. This has become a crucial component in a variety of burgeoning applications, such as autonomous robotics [12,52,57], virtual reality [39,54], and metaverse [10,32]. For illustration, given a 3D scan in Figure 1(a) along with its description -It is the keyboard closest to the door, the goal of 3DVG is to accurately pinpoint the keyboard in the green box, while eliminating potential distractions such as tables and desks. Despite the apparent simplicity of this task for humans, it poses a significant Comparative overview of two 3DVG approaches, where (a) Supervised 3DVG involves input from 3D scans combined with text queries, guided by object-text pair annotations, (b) Zero-shot 3DVG identifies the location of target objects using programmatic representation generated by LLMs, i.e., target category, anchor category, and relation grounding, thereby highlighting its superiority in decoding spatial relations and object identifiers within a given space, e.g., the location of the keyboard (outlined in green) can be retrieved based on the distance between the keyboard and the door (outlined in blue). challenge for machines due to their inherently limited perceptual capabilities.\nTraditional supervised 3DVG approaches [18,58,60] achieve this objective by leveraging the rich annotations in public datasets, such as ScanRefer [4] and Referit3D [1]. These approaches typically define 3DVG as a matching problem, generating possible objects via 3D detectors [21,37], and identifying the best match by fusing the visual and textual features. While these approaches can yield precise results, the acquisition of sufficient annotations is prohibitively resource-intensive for real-world applications.\nFurthermore, these approaches are often constrained by the pre-defined vocabulary during training, making them suboptimal in open-vocabulary scenarios.\nTo address these issues, we propose a novel visual programming approach for 3DVG that integrates zero-shot learning and large language models (LLMs). Zero-shot learning [62,64,67] can generalize across new categories by leveraging the pre-trained capabilities of CLIP [40] in the 3D domain. LLMs [33,46,48] can facilitate 3DVG due to their strong planning and reasoning capabilities. Regarding this, we first propose a vanilla version dialog with LLM. It describes the location and size of all objects in the scene and instructs the LLM to distinguish the object of interest through an interactive dialog. Despite the simplicity of the base approach, the inherent stochasticity and control limitations of LLMs make it hard to capture the view-dependent queries and decipher spatial relations in 3D space, which are the main challenges of 3DVG. To overcome this limitation, we further develop a new visual programming approach, as shown in Figure 1(b). It mainly consists of three steps: (1) generating a 3D visual program using LLMs, (2) interpreting the program into Python code, and (3) identifying the target bounding box by executing the code. 
To enhance the localization accuracy, we further introduce a novel language-object correlation (LOC) module capable of merging the geometric discernment of 3D point clouds with the fine-grained appearance acumen of 2D images.\nIn summary, contributions are summarized as follows: [56,61] attempt to explore the object attributes and relations between different proposals. Moreover, some works [7,22] have also investigated 3D language pretraining using advanced techniques, such as mask modeling and contrastive learning on paired object-caption data, followed by finetuning on downstream tasks. Additionally, NS3D [16] has employed CodeX [5] to generate hierarchical programs. However, it still needs many data annotations to train the neurosymbolic networks, thus lacking open-vocabulary and zeroshot capabilities.\nIndoor 3D Scene Understanding. 3D scene understanding of indoor environments has been widely studied. In specific, the emergence of RBG-D scans datasets [9, 43,51] greatly push the boundary of several tasks, including 3D object classification [35,36] OpenScene [34] extracts image features using 2D openvocabulary segmentation models [13,26], then trained a 3D network to produce point features aligned with multi-view fused pixel features. OpenMask3D [45] utilizes the closedvocabulary network to generate instance masks while discarded the classification head. Despite these advancements, these methods still lack spatial and commonsense reasoning abilities.\nLLMs for Vision-Language Tasks. Recent progress on LLMs has provided impressive zero-shot planning and reasoning abilities [33,46,48]. Advanced prompting technologies such as Least-to-Most [66], Think-Step-by-Step [25], and Chain-of-Thought [53] are proposed to elicit the capabilities of LLMs. These methods can understand human instructions, break complex goals into sub-goals, and control robot agents to execute tasks without additional training [2,19,29]. Moreover, when combined with specialized vision models, LLMs can significantly enhance the performance of vision-language tasks. For instance First, we describe the 3DVG task and provide the text descriptions of the room. Then, LLMs identify the objects relevant to the query sentence and perform human-like reasoning. (b) presents the 3D visual programming approach. We first input in-context examples into LLMs. Then, LLMs generate 3D visual programs through the grounding descriptions and perform human-like reasoning. Next, these programs are transformed into executable Python codes via the LOC module for predicting the location of the object. For example, the upper example uses the view-independent module, i.e., CLOSEST to determine the proximity in 3D space, while the lower example applies the view-dependent module, i.e., RIGHT to establish the relative positioning.\nthen generates executable Python code for image grounding. However, leveraging these capabilities for zero-shot 3D language grounding remains an unexplored area." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In Section 3.1, we introduce the vanilla approach, i.e., dialog with LLM to overcome the annotation issue in 3DVG. From Section 3.2 to Section 3.4, we present the visual programming approach, address the issue of view-dependent relations, and design the LOC module, respectively." 
}, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Dialog with LLM", "publication_ref": [ "b10" ], "table_ref": [], "text": "To accomplish the goal of 3DVG, we propose to initiate a dialogue with LLMs. The input for the dialogue consists of a real-world RGB-D scan and a free-form text description T . The text description provides specific information about the target object within a point cloud representation P ∈ R N ×6 , where P is a collection of color-enriched 3D points and N is the total number of such points. The LLM acts as an agent located in the scanned room, which aims to identify the specified object based on the given text description. To bridge the gap between the model's proficiency in understanding text and the spatial nature of the 3DVG task, we first transform the scene into a textual narrative. This narrative can provide a comprehensive account of the ob-jects O presented in the scene, including their positions and dimensions, which can be expressed as:\nObject <id> is a <category> located at (x, y, z) with sizes (width, length, height).\nGiven such textual layout, we dialog with the LLM by providing the scene's description and query. Our objective is to guide the LLM to identify the object mentioned in the query, while also understand and explain its reasoning process in the identification duration. Particularly, LLM is capable of mimicking the reasoning steps undertaken by humans. As illustrated in Figure 2(a), if the LLM get the object information, it can extract the objects relevant to the query sentence, i.e., targets keyboard and anchors door, and successfully identify the correct target keyboard by calculating its distance with door.\nWhile LLMs show powerful human-like reasoning capabilties, they still have some limitations. First, it cannot handle the view-dependent issue such as the right window. This is becasue the 3D scene can freely rotate to different views while it keeps static in 2D images. LLMs usually make decisions by comparing their x-y values of 3D coordinates despite hinting it in the conversation. Second, mathematical calculation is a common weakness of LLMs but is necessary for 3DVG [11]. For example, in Figure 2(a), distance computing is crucial to solve the closest relations, whereas the LLMs cannot always provide accurate results. These two issues stem from LLM's training limitations, which affect the reliability of the dialog with LLM approach." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "3D Visual Programming", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "To address the above two issues, we now introduce a new approach that generates visual programs through LLMs. As shown in Figure 2(b), we first construct a set of sample programs to encapsulate human-like problem-solving tactics in 3DVG. Each program includes a sequence of operations, where each operation contains a module name, several input parameters, and an assigned output variable. The output of each step can be reused in the subsequent step, thus creating an interlinked sequence that reflects logical reasoning within a visual context.\nWe transform the reasoning process of 3DVG into a scripted visual program. Specifically, we collect a set of in-context examples and the corresponding grounding descriptions, and then use LLMs to extrapolate new visual programs tailored to the task. For example, in Figure 2(b), we consider the task prompted by the following description:\nThe round cocktail table in the corner of the room with the blue and yellow poster. 
In this case, the objective is to identify the round cocktail table, which can be transformed into a operation: BOX0 = LOC('round cocktail table'), where the LOC operator processes the textual query and outputs the bounding boxes for the target objects. We will elaborate the design of LOC module in Section 3.4. Nevertheless, since there may exist multiple similar objects in 3D scenairos, the identified results may not be unique. To overcome this issue, we further pinpoint the blue and yellow poster as an auxiliary reference point by a operation: BOX1 = LOC('blue and yellow poster'). Then, the CLOSEST module computes the proximity between BOX0 (potential tables) and BOX1 (poster), and selects the table closest to the poster as the result.\nTable 1 summarizes the common relations in 3DVG. Based on this, we present the detailed visual program by developing three types of modules tailored for 3D contexts: • View-independent modules: They operate on the 3D spatial relations between objects. For example, the CLOS-EST module can discern proximity independent of the viewer's position. • View-dependent modules: They depend on the observer's vantage point. For instance, the RIGHT module determines the right window (TARGET) when looking at cabinets (BOX1) from all windows (BOX0). • Functional modules: They include multiple operations such as MIN and MAX, which select objects based on the extremal criteria. These three types of modules allow the output of one operation to be fed into another operation, thus providing flexible composability. They not only facilitate structured and accurate inference sequences, but also integrate 3D and 2D data to yield a robust and interpretable result for 3DVG." }, { "figure_ref": [ "fig_3" ], "heading": "Addressing View-Dependent Relations", "publication_ref": [ "b48" ], "table_ref": [ "tab_4" ], "text": "In this section, we discuss the intricacies of the viewdependent relations, which are essential for interpreting spatial relations within 3D space. Particularly, the main challenge is the dynamic nature of these relations that will change with the observer's viewpoint. Although traditional supervised approaches can learn these relations implicitly, they cannot provide a definitive resolution.\nOn 2D planes, the relations, especially the left and right are well defined. More specifically, right often corresponds to the positive direction of the x-axis while the left implies the negative direction. Motivated by this, we adopt a 2D egocentric view approach to ensure a consistent frame of reference for the spatial relations in Table 1.\nOur view-dependent modules accept a target argument and an optional anchors parameter. They output the target objects that fulfill the spatial relation to the anchors. When grounding queries do not specify targets, we treat targets as anchors as well. This approach aligns with our intuition, such as identifying the left window by treating all windows themselves as the anchors.\nAs shown in Figure 3, we assume there is a virtual camera in the center of the room, i.e., P center , which can rotate to align with the location of the anchor objects, i.e., P oa . The 3D objects are projected onto a 2D plane from this vantage point. 
Assume that the orthogonal camera has a intrinsic matrix I, then the 2D projections can be obtained by R, T = Lookat(P center , P oa , up),\n(1)\n(u, v, w) T = I • (R|t) • P,(2)\nwhere Lookat(•) is a view transformation function that computes the rotation matrix R and translation matrix T [49], P = (x, y, z, 1) T denotes the 3D coordinate vector, u and v respectively signify the x-axis and y-axis on the 2D plane, and w is the depth value. According to the value of u of an object's center, we can determine its left or right position -a lower u value indicates left. Similarly, w allows us to distingulish the front from behind. By synthesizing these concepts, we can define the between relation.\nThe transition from 3D to 2D egocentric perspectiveprovides a clear and consistent solution to interprete view-dependent relations in 3D space, thus enhancing our model's spatial reasoning ability." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Language-Object Correlation Module", "publication_ref": [ "b36", "b41", "b23" ], "table_ref": [], "text": "Although our zero-shot 3DVG approach does not need extensive grounding annotations, it still requires a basic vision model for object localization. To overcome this issue, previous works [4, 60] usually use pre-trained 3D detectors [21,37] to generate object proposals and the corresponding labels within a fixed vocabulary. However, this approach is restricted to a predefined object class set, thus limiting the scope of class prediction. To enable open-vocabulary segmentation, we develop an LOC module, combining the advantages of 3D and 2D networks to extend the labeling capability beyond the closed set. For example, in Figure 4, considering the operation: BOX0 = LOC('round cocktail table'), we first filter a subset of objects whose predicted label is table using a 3D instance segmentation network [42]. Then we only need to identify a round cocktail table from this subset using the corresponding 2D imagery. By mapping each 3D proposal to its 2D image, we can extract the color and texture details pertinent to our query. To further pinpoint the round cocktail table shown in the Figure 4, we consider three types of 2D multi-modal models: • Image classification models: We construct a dynamic vocabulary, including both the query term \"round cocktail table\" and the class \"table\" using popular tools such as CLIP [40]. Then we evaluate the cosine similarity between these terms and the imagery to find the best correlation to our query. • Visual question answering models: We raise the question:\nIs there a [query]? to the model such as ViLT [24]. Then the model sifts through its dictionary to suggest the most likely answer, i.e., yes or no. • General large models: We submit the same inquiry and anticipate a response based on the generated text. This process is crucial for verifying the alignment between the detected table and the query. We shall note that our approach is not limited to specific 3D or 2D models, allowing versatile incorporation of various models. In the experiments, we will demonstrate that the benefit of the LOC modules by comparing with the 3Donly and 2D-only couterparts. Our design indicates a leap forward in 3D open-vocabulary instance segmentation and can improve the object recognition accuracy in 3DVG." 
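Eqs. (1)-(2) reduce view-dependent relations to comparisons on an egocentric 2D projection. The sketch below implements that idea for a RIGHT-style module: build a look-at rotation from the room centre towards the anchors' centroid, express object centres in camera coordinates, and compare the horizontal coordinate u (the depth coordinate would likewise resolve front/behind). Axis conventions, the box representation, and the function names are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the view-dependent modules (LEFT/RIGHT/FRONT/BEHIND) built on
# the egocentric projection of Eqs. (1)-(2). Assumed conventions: z is the world
# up axis, the camera looks along its +z (depth) axis, and camera +x points right.
import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Rotation/translation mapping world points into a camera at `eye` looking at `target`."""
    fwd = target - eye
    fwd = fwd / (np.linalg.norm(fwd) + 1e-8)
    right = np.cross(fwd, up)
    right = right / (np.linalg.norm(right) + 1e-8)
    down = np.cross(fwd, right)                      # completes a right-handed camera frame
    R = np.stack([right, down, fwd])                 # rows: camera x, y, z axes
    t = -R @ eye
    return R, t

def to_camera(points, R, t):
    """points: (N, 3) world coordinates -> (N, 3) camera coordinates (u, v, depth)."""
    return points @ R.T + t

def right_of(targets, anchors, room_center):
    """Return the target whose centre has the largest horizontal coordinate (u)
    when looking from the room centre towards the anchors' centroid."""
    anchor_centroid = np.mean([a["center"] for a in anchors], axis=0)
    R, t = look_at(np.asarray(room_center, dtype=float), anchor_centroid)
    centers = np.array([obj["center"] for obj in targets], dtype=float)
    u = to_camera(centers, R, t)[:, 0]
    return targets[int(np.argmax(u))]                # LEFT uses argmin; FRONT/BEHIND compare depth

# Usage sketch: windows = LOC("window"); cabinets = LOC("cabinet")
# target = right_of(windows, cabinets, room_center=scene_bbox_center)
```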
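The LOC module described above pairs a closed-vocabulary 3D instance segmentation with open-vocabulary scoring of each object's 2D crop. Below is a simplified sketch of that recipe using CLIP: instances predicted as the coarse class are kept, and their image crops are ranked by similarity to the full query. The `instances` structure (predicted label, 3D box, an RGB crop from the paired view) and the single-query ranking are assumptions; the paper also considers VQA-style and general multi-modal models as drop-in alternatives.

```python
# Sketch of the language-object correlation (LOC) idea: filter 3D instances by a
# coarse class from the 3D segmentation network, then rank the surviving objects'
# 2D crops against the full open-vocabulary query with CLIP.
# `instances` is an assumed list of dicts: {"label": str, "box": ..., "crop": <PIL image>}.
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def loc(query, instances, coarse_class=None, keep=1):
    """Return the instances whose 2D appearance best matches `query`."""
    if coarse_class is not None:                       # e.g. keep only objects labelled "table"
        instances = [obj for obj in instances if obj["label"] == coarse_class]
    if not instances:
        return []
    images = torch.stack([preprocess(obj["crop"]) for obj in instances]).to(device)
    text = clip.tokenize([query]).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(images)
        txt_feat = model.encode_text(text)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        scores = (img_feat @ txt_feat.T).squeeze(-1)   # cosine similarity per instance
    order = scores.argsort(descending=True)
    return [instances[i] for i in order[:keep].tolist()]

# Usage sketch: candidates = loc("round cocktail table", instances, coarse_class="table", keep=3)
```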
}, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b0", "b0", "b16", "b19", "b33", "b2" ], "table_ref": [], "text": "Datasets. We use two popular datasets, i.e., ScanRefer [4] and Nr3D [1] for experiments. ScanRefer is tailored for 3DVG that contains 51,500 sentence descriptions for 800 ScanNet scenes [9]. Nr3D is a human-written and free-form dataset for 3DVG, collected by 2-player reference game in 3D scenes. The sentences are divided into \"easy\" and \"hard\" subsets, where the target object only contains one same-class distractor in the \"easy\" subset but contains multiple ones in the \"hard\" subset. Depending on whether the sentence requires a specific viewpoint to ground the referred object, the dataset can also be partitioned into \"view depedent\" and \"view independent\" subsets. For both datasets, we evaluate the zero-shot approaches on the validation split. Evaluation metrics. We consider two settings for performance evaluation. The first one mandates the generation of object proposals, aligning closely with real-world applications. The evaluation metrics are Acc@0.25 and Acc@0.5, representing the percentage of correctly predicted bounding boxes whose IoU exceeds 0.25 or 0.5 with the groundtruth, respectively. This is the default setting for ScanRefer dataset. The second one furnishes ground-truth object masks, necessitating only classification, with an objective to eradicate localization error and achieve high grounding accuracy. This is the default setting for Nr3D dataset. Baselines. We use six supervised and two open-vocabulary 3D scene understanding approaches for performance com-Unique Multiple Overall Methods Supervision Acc@0.25 Acc@0.5 Acc@0.25 Acc@0.5 Acc@0.25 Acc@0. We evaluate the top-1 accuracy using ground-truth boxes.\nparison. For supervised approaches, ScanRefer [4] and ReferIt3DNet [1] encode the 3D point clouds and language separately, and then fuse them to rank the objects by predicted scores. TGNN [17] and InstanceRefer [60] make one further step by learning instance-wise features. 3DVG-Transformer [65] and BUTD-DETR [20] respectively utilize the Transformer [47] and DETR [3] architectures, representing the SoTA approaches. For open-vocabulary approaches, OpenScene [34] and LERF [23] aims to learn a 3D representation aligned with the 2D CLIP feature, thus enabling free-form language grounding. The query T is processed by the CLIP text encoder, and its similarity is computed against the extracted point features. Finally, they cluster the points with the highest score to determine the target object." }, { "figure_ref": [], "heading": "Quantitative Results", "publication_ref": [], "table_ref": [ "tab_6", "tab_7" ], "text": "ScanRefer. Table 2 provides a quantitative assessment of the proposed approach on the ScanRefer dataset. We can see that our zero-shot approach outperforms all baseline approaches. Specifically, our approach can achieve a 32.7 Acc@0.5 score, which surpasses the supervised approaches, including the ScanRefer and TGNN. On the other hand, the open-vocabulary approaches LERF and Open-Scene can respectively achieve the overall accuracy of 4.8 and 13.2, even with the 0.25 IoU threshold. This is due to their limitations in reasoning and localization preci-sion. Moreover, our zero-shot approach outperforms the approaches that only utilize the 3D or 2D information in the LOC module. 
This result demonstrates the effectiveness of incorporating visual programming and perception modules, highlighting our zero-shot approach in navigating the realm of 3DVG.\nNr3D. Table 3 shows the performance of different approaches on the Nr3D dataset, in which the ground-truth instance mask is also provided. We can see that our zeroshot approach further excels the supervised approach In-stanceRefer. Specifically, our zero-shot approach on the \"view-dependent\" split can achieve a 2% accuracy gain than the 3DVG-Transformer approach. This performance gain comes from the relation modules, strengthing the potential of our zero-shot approach for 3DVG tasks." }, { "figure_ref": [ "fig_6", "fig_6", "fig_6", "fig_6", "fig_6", "fig_6" ], "heading": "Qualitative Results", "publication_ref": [], "table_ref": [], "text": "Figure 5 shows the visualizations of the selected samples from the ScanRefer validation set. The four columns present the ground-truth result, the supervised approach BUTD-DETR, the dialog with LLM, and the visual programming approaches, respectively. From Figure 5(a) and Figure 5(b), we can observe that the dialog with LLM and the visual programming approaches can achieve accurate prediction results for view-independent relations, i.e., (above, under) without much training. On the contrary, both the BUTD-DETR and the dialog with LLM approaches cannot address the view-dependent relations, i.e., (left, front), as shown in Figure 5(c) and Figure 5(d). The inherent uncertainty of these relations reflects the limitations of existing methods. However, our visual programming approach can leverage the 2D egocentric views, thus achieving accurate predictions in 3D scenarios. Figure 5(e) presents a failure case, where the dialog with LLM approach cannot recognize chair has wheels since it lacks open-vocabulary detection ability. Besides, the visual programming approach makes wrong predictions because " }, { "figure_ref": [], "heading": "(b) (a)", "publication_ref": [], "table_ref": [], "text": "There is a square beige armchair. It is left of a square table." }, { "figure_ref": [], "heading": "(c)", "publication_ref": [], "table_ref": [], "text": "This is a brown piano bench. It is in front of the piano." }, { "figure_ref": [], "heading": "(d)", "publication_ref": [], "table_ref": [], "text": "A desk chair is pushed into a small computer desk. The chair has wheels .\n(e) the LLM cannot correctly recognize the relation pushed.\nFortunately, when we correct the program using the CLOS-EST module, the visual programming approach can make correct predictions." }, { "figure_ref": [ "fig_8" ], "heading": "Ablation Studies", "publication_ref": [ "b41", "b35", "b37", "b58", "b23" ], "table_ref": [ "tab_8", "tab_9", "tab_6", "tab_7", "tab_11" ], "text": "Dialog with LLM vs. visual programming. We compare the performance of the two proposed zero-shot 3DVG approaches on the ScanRefer validation set with 700 examples. For both approaches, we use two GPT versions, i.e., GPT-3.5-turbo-0613 and GPT-4-0613. The cost of each GPT version depends on the number of input and output tokens. The experimental results are shown in Table 4. We can observe that for both zero-shot approaches, GPT4based approach can achieve higher accuracy than GPT3.5based approach, even it induces a larger economic cost. On the other hand, the visual programming approach always outperforms the dialog with LLM approach in terms of accuracy and cost, which demonstrates the effectiveness of the proposed visual programming approach. 
For other experiments, we use GPT3.5 to save cost. Relation modules. We now ablate different relation modules in Section 3.2 to analyze their impact on the system performance. The most important view-dependent and view-independent modules are presented in Table 5 and 6, respectively. We can see that LEFT and RIGHT are the most important view-dependent relations, while CLOSEST is the most important view-independent relation. This result is coherent with our motivation and design. LOC module. We juxtapose our approach by separately omitting the 3D component and 2D component. Both models utilize the instance mask prediction of Mask3D [42].\nParticularly, the 2D-only model solely employs the paired 2D images for classification, while the 3D-only model just uses the 3D result. As can be seen from Tables 2 and3 Generalization. As discussed in Section 3.4, our framework has strong adaptability for a spectrum of 3D and 2D perception models. To validate this claim, we conduct experiments using several representative models. For 3D perception, we utilize three backbones, i.e., PointNet++ [36],\nMethod LLM Acc@0. PointNeXt [38], and PointBERT [59]. For 2D perception, we use an image classification model proposed in [40], a visual question answering model in [24], and a general large model BLIP-2 [27] for testing. The results are shown in Tables 7 and8. We can observe that our framework is compatible with other models. Also, it can leverage the advancements within both 2D and 3D foundational models to improve the performance. This cross-model effectiveness demonstrates the robustness and future-proof nature of our approach in the ever-evolving landscape of visual perception models. LLMs to handle more cases in the visual program generation process. Meanwhile, it still follows the law of diminishing marginal utility. Moreover, we test the voting technique [15] to aggregate the results from multiple runs, which brings some performance gains. Error analysis. To better understand the limitations of our framework, we conduct error analysis in the following. For each dataset, we select a representative subset with around 100 samples and manually check the rationales offered by the visual program. This introspective method helps identify the dominant error sources and provide guidance to improve our framework. The result is illustrated in Figure 7, which reveals that the generation of accurate visual programs is the primary error source. Therefore, the performance can be improved by using more in-context examples and more powerful LLMs. The second error source is the object localization and classification, indicating that object detection and classification in 3D space remains a critical component. Additionally, the results point out a need for developing additional modules to handle a wider array of spatial relations like \"opposite\". These issues have not been well addressed in current framework." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we present a novel zero-shot approach for 3DVG to eliminate the need for extensive annotations and predefined vocabularies. A vanilla dialog with LLM ap-proach is first proposed by taking interactive dialog with LLMs. A visual programming approach is further developed, which leverages three types of modules to navigate the intricate 3D relations. To adapt to open-vocabulary scenarios, we also develop a LOC module to seamlessly integrate both 3D and 2D features. 
Experimental results demonstrate the superiority of the proposed approach and highlight its potential to advance the field of 3DVG." }, { "figure_ref": [], "heading": "Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Contents", "publication_ref": [], "table_ref": [], "text": "The following two items are included in the supplementary material:\n• Visualization examples for zero-shot 3DVG in Section 6.\n• Prompts for generating 3D visual program in Section 7.\nAnswer: Based on the description, we are looking for a storage shelf that is white in color and is above a desk with a chair in front of it. Looking at the room information, we can see that there are several objects identified as shelves, chairs, and desks. However, the description also mentions that the shelf is above a desk, which means we need to look at the coordinates to determine the vertical positioning of these objects.\nThe objects identified as shelves are: <obj 24>, <obj 75>, and <obj 76>. The objects identified as chairs are: <obj 1>, <obj 36>, <obj 77>. The objects identified as desks are not explicitly mentioned, but we can consider objects like tables or nightstands as potential desks: <obj 18>, <obj 58>. Now, we need to find a combination of these objects where the shelf is above a desk and there is a chair in front of it. This requires the shelf to have a higher z-coordinate (vertical position) than the desk, and the chair to have similar x and y coordinates (horizontal position) as the desk. After analyzing the coordinates, none of the objects seem to fit the description perfectly. However, the closest match could be <obj 24> (shelf), <obj 36> (chair), and <obj 18> (table as desk). Please note that this is an approximation as the description does not perfectly match any set of objects in the room. Answer: <obj 24> Query: This is a storage shelf. It is white in color and is above a desk with a chair in front of it. " }, { "figure_ref": [], "heading": "Visualization Examples for Zero-shot 3DVG", "publication_ref": [], "table_ref": [], "text": "We provide three examples to visualize the effectiveness of the proposed two zero-shot 3DVG approaches, i.e., dialog with LLM and visual programming. Concretely, the first Answer: First, let's identify the desk with monitors. There are several desks and monitors in the room, but the desks with monitors on them are <obj 18>, <obj 24>, and <obj 40>. Next, we need to find a window that is behind one of these desks. The only window in the room is <obj 11>. Looking at the coordinates, the window <obj 11> has a coordinate of (0.1, -2.8, 1.3). The desk that is in front of this window (meaning it has a smaller y-coordinate) is <obj 24> with a coordinate of (0.1, -1.5, 0.4). So, the desk with monitors in front of the window is <obj 24>." }, { "figure_ref": [ "fig_9", "fig_0" ], "heading": "Answer: <obj 24>", "publication_ref": [ "b0", "b1", "b0", "b1" ], "table_ref": [], "text": "Query: There is a rectangular window. It is behind a desk with monitors .\nProgram: BOX0=LOC(object='rectangular window') BOX1=LOC(object='desk with monitors') TARGET=BEHIND(targets=BOX0, anchors=BOX1) example, i.e., Figure 8, confirms that LLMs can effectively perform zero-shot 3DVG while also delivering commendable results. 
The second example, as illustrated in Figure 9, shows that LLMs may encounter limitations in tasks requiring spatial reasoning. However, this issue can be effectively addressed by the visual programming approach. The third example, i.e., Figure 10, further exemplifies that the visual programming approach is capable of executing multi-step reasoning, which involves first identifying the blinds positioned above the monitors and then selecting the desired one among them." }, { "figure_ref": [ "fig_12" ], "heading": "Prompts for Generating 3D Visual Program", "publication_ref": [], "table_ref": [], "text": "As illustrated in Figure 11, the prompts for generating 3D visual programs include four components:\n• Task explanation: We first describe the 3DVG task in natural language and provide it to the LLMs.\n• Function and variable definition: We define a set of functions and variables corresponding to the modules in the visual programming approach, such as LOC and LEFT.\n• In-context examples: We provide paired descriptions and programs (e.g., a description together with its LOC/CLOSEST program) illustrating how visual programs are structured, to guide the LLMs.\n• Best practices and tips: We conclude with essential tips that keep the generated programs within the provided module set, such as reusing only the given functions and not treating walls as objects. These four components collaboratively help the LLM understand the task requirements, thereby allowing it to construct effective visual programs for the 3DVG task." } ]
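For the dialog-with-LLM baseline of Section 3.1, the scene is rendered as the textual template "Object <id> is a <category> located at (x, y, z) with sizes (width, length, height)." and sent to the LLM together with the query. A minimal builder for that prompt is sketched below; only the template comes from the paper, while the message structure and the LLM client call are placeholders.

```python
# Sketch of the scene-to-text narrative used by the dialog-with-LLM baseline.
# Every detected object is rendered with the template from Section 3.1 and the
# query is appended. The chat-completion call is a placeholder for whichever
# LLM API is used; only the prompt format comes from the paper.
def scene_narrative(objects):
    lines = []
    for obj in objects:
        x, y, z = obj["center"]
        w, l, h = obj["size"]
        lines.append(
            f"Object {obj['id']} is a {obj['category']} located at "
            f"({x:.2f}, {y:.2f}, {z:.2f}) with sizes ({w:.2f}, {l:.2f}, {h:.2f})."
        )
    return "\n".join(lines)

def build_dialog(objects, query):
    return [
        {"role": "system",
         "content": "You are standing in a scanned room. Identify the single object "
                    "that matches the description and answer with its object id."},
        {"role": "user", "content": scene_narrative(objects) + f"\nQuery: {query}"},
    ]

# messages = build_dialog(scene_objects, "It is the keyboard closest to the door.")
# reply = llm_client.chat(messages)   # placeholder for the actual LLM call
```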
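The generated visual programs of Section 3.2 (e.g., BOX0=LOC(object='door'); TARGET=CLOSEST(targets=BOX0, anchors=BOX1)) still have to be interpreted against the grounding modules. One lightweight way to do this, sketched below with toy modules, is to expose the modules in a restricted namespace and execute the generated lines with Python's exec. The module implementations and the toy scene are placeholders; a production system would validate or parse the generated lines instead of executing them blindly.

```python
# Sketch of interpreting an LLM-generated 3D visual program. The grounding
# modules (LOC, CLOSEST, RIGHT, ...) are assumed to be implemented elsewhere;
# they are exposed in a restricted namespace, each generated line is executed,
# and TARGET holds the final prediction.
from typing import Callable, Dict, List

def run_visual_program(program: str, modules: Dict[str, Callable]) -> object:
    namespace: Dict[str, object] = dict(modules)        # LOC, CLOSEST, RIGHT, MAX, CENTER, ...
    for line in program.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # NOTE: validate/whitelist lines before running untrusted LLM output.
        exec(line, {"__builtins__": {}}, namespace)     # e.g. BOX0=LOC(object='door')
    return namespace.get("TARGET")

# Example wiring with toy modules (real ones would query the reconstructed scene):
def toy_loc(object: str) -> List[dict]:
    return [obj for obj in SCENE_OBJECTS if object in obj["label"]]

def toy_closest(targets: List[dict], anchors: List[dict]) -> dict:
    import numpy as np
    anchor = np.mean([a["center"] for a in anchors], axis=0)
    return min(targets, key=lambda o: np.linalg.norm(np.asarray(o["center"]) - anchor))

SCENE_OBJECTS = [                                        # toy scene for illustration only
    {"label": "keyboard", "center": [-0.65, -1.06, 0.65]},
    {"label": "keyboard", "center": [1.20, 0.40, 0.70]},
    {"label": "door", "center": [-0.65, 2.35, 1.05]},
]
program = "BOX0=LOC(object='keyboard')\nBOX1=LOC(object='door')\nTARGET=CLOSEST(targets=BOX0, anchors=BOX1)"
target = run_visual_program(program, {"LOC": toy_loc, "CLOSEST": toy_closest})
```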
3D Visual Grounding (3DVG) aims at localizing 3D objects based on textual descriptions. Conventional supervised methods for 3DVG often necessitate extensive annotations and a predefined vocabulary, which can be restrictive. To address this issue, we propose a novel visual programming approach for zero-shot open-vocabulary 3DVG, leveraging the capabilities of large language models (LLMs). Our approach begins with a unique dialog-based method, engaging with LLMs to establish a foundational understanding of zero-shot 3DVG. Building on this, we design a visual program that consists of three types of modules, i.e., view-independent, view-dependent, and functional modules. These modules, specifically tailored for 3D scenarios, work collaboratively to perform complex reasoning and inference. Furthermore, we develop an innovative language-object correlation module to extend the scope of existing 3D object detectors into open-vocabulary scenarios. Extensive experiments demonstrate that our zero-shot approach can outperform some supervised baselines, marking a significant stride towards effective 3DVG.
Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding
[ { "figure_caption": "Figure 1 .1Figure1. Comparative overview of two 3DVG approaches, where (a) Supervised 3DVG involves input from 3D scans combined with text queries, guided by object-text pair annotations, (b) Zero-shot 3DVG identifies the location of target objects using programmatic representation generated by LLMs, i.e., target category, anchor category, and relation grounding, thereby highlighting its superiority in decoding spatial relations and object identifiers within a given space, e.g., the location of the keyboard (outlined in green) can be retrieved based on the distance between the keyboard and the door (outlined in blue).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "a) Dialog with LLM b) 3D Visual Programming", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Overview of two zero-shot approaches for 3DVG. (a) shows the working mechanism of the vanilla dialog with LLM approach.First, we describe the 3DVG task and provide the text descriptions of the room. Then, LLMs identify the objects relevant to the query sentence and perform human-like reasoning. (b) presents the 3D visual programming approach. We first input in-context examples into LLMs. Then, LLMs generate 3D visual programs through the grounding descriptions and perform human-like reasoning. Next, these programs are transformed into executable Python codes via the LOC module for predicting the location of the object. For example, the upper example uses the view-independent module, i.e., CLOSEST to determine the proximity in 3D space, while the lower example applies the view-dependent module, i.e., RIGHT to establish the relative positioning.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Addressing view-dependent relations: A shift to 2D egocentric view.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Illustration of the language-object correlation module.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "It is a window. It is located above a recycle bin that has a blue top. chair.The chair is under the desk.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Visualization results of 3D visual grounding. Rendered images of 3D scans are presented, including the ground-truth (blue), incorrectly identified objects (red), and correctly identified objects (green).", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": ", the 2D-only model performs worst when the images of indoor scenes are complicated and have domain gaps with the training samples. The 3D-only model performs better since it can utilize the geometric information and is trained on closed-set labels. Our full model can always achieve the best performance because it integrates the geometric distinctiveness of point clouds and the open-vocabulary ability of the image models.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. 
Breakdown of error sources.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Visualization example (a) for zero-shot 3DVG.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .Figure 10 .910Figure 9. Visualization example (b) for zero-shot 3DVG.", "figure_data": "", "figure_id": "fig_10", "figure_label": "910", "figure_type": "figure" }, { "figure_caption": "Your task is to parse the description to program, thus the program can be executed to find the target object. Here are the functions and variables you can use to parse the description to program: # Function Definitions def LOC(object): # Implement the LOC function to locate the object in the room # Return the objects of query text def LEFT(targets, anchors): # Implement the LEFT function to find the target object to the left of the anchor object # Return the target object to the left of the anchor object ... variable CENTER # Implement the CENTER variable to represent the center of the room Here are some examples of the description and the corresponding program:", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. Prompt for generating visual programs.", "figure_data": "", "figure_id": "fig_12", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "The round cocktail table in the corner of the room with the blue and yellow poster", "figure_data": "Suppose you are a person standing in a room. You need to find a keyboard it is closest to the door.In-context examples Grounding descriptionLLMVisual ProgramReasoning ProcessTarget PredictionOf course, I can help you find an object in a room based on its description. Please provide me with the details of the objectInput 3D ScanLOC ('round cocktail table')CLOSEST targets=BOX0 anchors=BOX1you're looking for, and I'll do my best to assist you in locating it. Room Information: Object 1 is a door located at ( -Program: BOX0=LOC('round cocktail table') BOX1=LOC('blue and yellow poster')LOC ('blue and yellow poster')0.65, 2.35, 1.05).TARGET=CLOSEST(targets=BOX0,Object 2 is a desk located atanchors=BOX1)(0.68, 1.30, 0.39).…Object 26 is a keyboard located at (-0.65, -1.06, 0.65). The keyboard closest to the doorDescription: Staring at the cabinets you want the window on the right sideLOC('window')RIGHT targets=BOX0 anchors=BOX1is Object 9, as it has a shorter distance of approximately 2.01 units, compared to Object 26, which has a distance of approximately 3.44 units. So, the correct object ID is Object 9.Program: BOX0=LOC('window') BOX1=LOC('cabinet') TARGET=RIGHT(targets=BOX0, anchors=BOX1)LOC('cabinet'), Visual Chat-GPT [55] uses ChatGPT as a central orchestrator, interfac-ing with a variety of visual foundation models to solve morechallenging problems. VISPROG [15] leverages the in-context learning ability to generate high-level modular pro-grams for solving complex and compositional natural lan-guage reasoning and image editing tasks. ViperGPT [44]directly feeds the API of available modules to LLM and", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Common relations in 3DVG.", "figure_data": "UpBehindLeft WindowRight WindowLeftRightFront", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "3DVG results on ScanRefer validation set. 
The accuracies on the \"unique\" subset, the \"multiple\" subset, and the whole validation set are all provided. Following [4], we label the scene as \"unique\" if it only contains a single object of its class. Otherwise, we label it as \"multiple\".", "figure_data": "5", "figure_id": "tab_6", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance analysis of language grounding on Nr3D.", "figure_data": "", "figure_id": "tab_7", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Performance comparison of the dialog with LLM and the visual programming approaches.", "figure_data": "Method | LLM | Acc@0.25 | Tokens | Cost\nDialog | GPT3.5 | 25.4 | 1959k | $3.05\nDialog | GPT4 | 27.5 | 1916k | $62.6\nProgram | GPT3.5 | 32.1 | 121k | $0.19\nProgram | GPT4 | 35.4 | 115k | $4.24\nLEFT RIGHT FRONT BEHIND BETWEEN | Accuracy\n(none) | 26.5\n✓ | 32.4\n✓✓ | 35.9\n✓✓✓ | 36.8\n✓✓✓✓ | 38.4\n✓✓✓✓✓ | 39.0", "figure_id": "tab_8", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study of different view-dependent modules.", "figure_data": "CLOSEST FARTHEST LOWER HIGHER | Accuracy\n(none) | 18.8\n✓ | 30.7\n✓✓ | 34.0\n✓✓✓ | 36.8\n✓✓✓✓ | 39.0", "figure_id": "tab_9", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation study of different view-independent modules.", "figure_data": "", "figure_id": "tab_10", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Ablation study on different 3D backbones.", "figure_data": "2D Assistance | Unique | Multiple | Acc@0.25\nCLIP | 62.5 | 27.1 | 35.7\nViLT | 60.3 | 27.1 | 35.1\nBLIP-2 | 63.8 | 27.7 | 36.4\nTable 7. Ablation study on different 2D models.\n3D Backbone | View-dep. | View-indep. | Overall\nPointNet++ | 35.8 | 39.4 | 38.2\nPointBert | 36.0 | 39.8 | 38.6\nPointNeXt | 36.8 | 40.0 | 39.0\nEffect of prompt size. We use different numbers of in-context examples in the prompt for program generation. The result is shown in Figure 6. It can be seen that the performance on ScanRefer and Nr3D improves with the number of examples. This is because more examples can guide", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" } ]
Zhihao Yuan; Jinke Ren; Chun-Mei Feng; Hengshuang Zhao; Shuguang Cui; Zhen Li
[ { "authors": "Panos Achlioptas; Ahmed Abdelreheem; Fei Xia; Mohamed Elhoseiny; Leonidas Guibas", "journal": "Springer", "ref_id": "b0", "title": "Referit3d: Neural listeners for fine-grained 3d object identification in real-world scenes", "year": "2020" }, { "authors": "Anthony Brohan; Yevgen Chebotar; Chelsea Finn; Karol Hausman; Alexander Herzog; Daniel Ho; Julian Ibarz; Alex Irpan; Eric Jang; Ryan Julian", "journal": "PMLR", "ref_id": "b1", "title": "Do as i can, not as i say: Grounding language in robotic affordances", "year": "2023" }, { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "Springer", "ref_id": "b2", "title": "End-toend object detection with transformers", "year": "2020" }, { "authors": "Dave Zhenyu; Chen ; Angel X Chang; Matthias Nießner", "journal": "Springer", "ref_id": "b3", "title": "Scanrefer: 3d object localization in rgb-d scans using natural language", "year": "2020" }, { "authors": "Mark Chen; Jerry Tworek; Heewoo Jun; Qiming Yuan; Henrique Ponde De Oliveira Pinto; Jared Kaplan; Harri Edwards; Yuri Burda; Nicholas Joseph; Greg Brockman", "journal": "", "ref_id": "b4", "title": "Evaluating large language models trained on code", "year": "2021" }, { "authors": "Shizhe Chen; Pierre-Louis Guhur; Makarand Tapaswi; Cordelia Schmid; Ivan Laptev", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b5", "title": "Language conditioned spatial relation reasoning for 3d object grounding", "year": "2022" }, { "authors": "Zhenyu Chen; Ronghang Hu; Xinlei Chen; Matthias Nießner; Angel X Chang", "journal": "", "ref_id": "b6", "title": "Unit3d: A unified transformer for 3d dense captioning and visual grounding", "year": "2023" }, { "authors": "Christopher Choy; Junyoung Gwak; Silvio Savarese", "journal": "", "ref_id": "b7", "title": "4d spatio-temporal convnets: Minkowski convolutional neural networks", "year": "2019" }, { "authors": "Angela Dai; X Angel; Manolis Chang; Maciej Savva; Thomas Halber; Matthias Funkhouser; Nießner", "journal": "", "ref_id": "b8", "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes", "year": "2017" }, { "authors": "John David; N Dionisio; William G Burns Iii; Richard Gilbert", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b9", "title": "3d virtual worlds and the metaverse: Current status and future possibilities", "year": "2013" }, { "authors": "Nouha Dziri; Ximing Lu; Melanie Sclar; Lorraine Xiang; Liwei Li; Bill Jian; Peter Yuchen Lin; Chandra West; Bhagavatula; Le Ronan; Jena D Bras; Hwang", "journal": "", "ref_id": "b10", "title": "Faith and fate: Limits of transformers on compositionality", "year": "2023" }, { "authors": "Qi Feng; Vitaly Ablavsky; Stan Sclaroff", "journal": "", "ref_id": "b11", "title": "Cityflow-nl: Tracking and retrieval of vehicles at city scale by natural language descriptions", "year": "2021" }, { "authors": "Golnaz Ghiasi; Xiuye Gu; Yin Cui; Tsung-Yi Lin", "journal": "Springer", "ref_id": "b12", "title": "Scaling open-vocabulary image segmentation with image-level labels", "year": "2022" }, { "authors": "Zoey Guo; Yiwen Tang; Ray Zhang; Dong Wang; Zhigang Wang; Bin Zhao; Xuelong Li", "journal": "", "ref_id": "b13", "title": "Viewrefer: Grasp the multi-view knowledge for 3d visual grounding", "year": "2023" }, { "authors": "Tanmay Gupta; Aniruddha Kembhavi", "journal": "", "ref_id": "b14", "title": "Visual programming: Compositional visual reasoning without training", "year": "2023" }, { 
"authors": "Joy Hsu; Jiayuan Mao; Jiajun Wu", "journal": "", "ref_id": "b15", "title": "Ns3d: Neurosymbolic grounding of 3d objects and relations", "year": "2023" }, { "authors": "Pin-Hao Huang; Han-Hung Lee; Hwann-Tzong Chen; Tyng-Luh Liu", "journal": "", "ref_id": "b16", "title": "Text-guided graph neural networks for referring 3d instance segmentation", "year": "2021" }, { "authors": "Shijia Huang; Yilun Chen; Jiaya Jia; Liwei Wang", "journal": "", "ref_id": "b17", "title": "Multiview transformer for 3d visual grounding", "year": "2022" }, { "authors": "Wenlong Huang; Pieter Abbeel; Deepak Pathak; Igor Mordatch", "journal": "PMLR", "ref_id": "b18", "title": "Language models as zero-shot planners: Extracting actionable knowledge for embodied agents", "year": "2022" }, { "authors": "Ayush Jain; Nikolaos Gkanatsios; Ishita Mediratta; Katerina Fragkiadaki", "journal": "Springer", "ref_id": "b19", "title": "Bottom up top down detection transformers for language grounding in images and point clouds", "year": "2022" }, { "authors": "Li Jiang; Hengshuang Zhao; Shaoshuai Shi; Shu Liu; Chi-Wing Fu; Jiaya Jia", "journal": "", "ref_id": "b20", "title": "Pointgroup: Dual-set point grouping for 3d instance segmentation", "year": "2020" }, { "authors": "Jin Zhao; Munawar Hayat; Yuwei Yang; Yulan Guo; Yinjie Lei", "journal": "", "ref_id": "b21", "title": "Context-aware alignment and mutual masking for 3dlanguage pre-training", "year": "2023" }, { "authors": "Justin Kerr; Chung ; Min Kim; Ken Goldberg; Angjoo Kanazawa; Matthew Tancik", "journal": "", "ref_id": "b22", "title": "Lerf: Language embedded radiance fields", "year": "2023" }, { "authors": "Wonjae Kim; Bokyung Son; Ildoo Kim", "journal": "PMLR", "ref_id": "b23", "title": "Vilt: Visionand-language transformer without convolution or region supervision", "year": "2021" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "Advances in neural information processing systems", "ref_id": "b24", "title": "Large language models are zero-shot reasoners", "year": "2022" }, { "authors": "Boyi Li; Q Kilian; Serge Weinberger; Vladlen Belongie; Rene Koltun; Ranftl", "journal": "", "ref_id": "b25", "title": "Language-driven semantic segmentation", "year": "2022" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b26", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Feng Liang; Bichen Wu; Xiaoliang Dai; Kunpeng Li; Yinan Zhao; Hang Zhang; Peizhao Zhang; Peter Vajda; Diana Marculescu", "journal": "", "ref_id": "b27", "title": "Open-vocabulary semantic segmentation with mask-adapted clip", "year": "2023" }, { "authors": "Jacky Liang; Wenlong Huang; Fei Xia; Peng Xu; Karol Hausman; Brian Ichter; Pete Florence; Andy Zeng", "journal": "IEEE", "ref_id": "b28", "title": "Code as policies: Language model programs for embodied control", "year": "2023" }, { "authors": "Ze Liu; Zheng Zhang; Yue Cao; Han Hu; Xin Tong", "journal": "", "ref_id": "b29", "title": "Group-free 3d object detection via transformers", "year": "2021" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Communications of the ACM", "ref_id": "b30", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2021" }, { "authors": "Stylianos Mystakidis", "journal": "Metaverse. 
Encyclopedia", "ref_id": "b31", "title": "", "year": "2022" }, { "authors": "Openai Openai", "journal": "", "ref_id": "b32", "title": "", "year": "2023" }, { "authors": "Songyou Peng; Kyle Genova; Chiyu Jiang; Andrea Tagliasacchi; Marc Pollefeys; Thomas Funkhouser", "journal": "", "ref_id": "b33", "title": "Openscene: 3d scene understanding with open vocabularies", "year": "2023" }, { "authors": "Hao Charles R Qi; Kaichun Su; Leonidas J Mo; Guibas", "journal": "", "ref_id": "b34", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2017" }, { "authors": "Charles Ruizhongtai; Qi ; Li Yi; Hao Su; Leonidas J Guibas", "journal": "Advances in neural information processing systems", "ref_id": "b35", "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Or Charles R Qi; Kaiming Litany; Leonidas J He; Guibas", "journal": "", "ref_id": "b36", "title": "Deep hough voting for 3d object detection in point clouds", "year": "2019" }, { "authors": "Guocheng Qian; Yuchen Li; Houwen Peng; Jinjie Mai; Hasan Hammoud; Mohamed Elhoseiny; Bernard Ghanem", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b37", "title": "Pointnext: Revisiting pointnet++ with improved training and scaling strategies", "year": "2022" }, { "authors": "Xing-Yue Qiu; Chuang-Kai Chiu; Lu-Lu Zhao; Cai-Feng Sun; Shu-Jie Chen", "journal": "Interactive Learning Environments", "ref_id": "b38", "title": "Trends in vr/ar technologysupporting language learning from 2008 to 2019: A research perspective", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b39", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Junha Roh; Karthik Desingh; Ali Farhadi; Dieter Fox", "journal": "", "ref_id": "b40", "title": "Languagerefer: Spatial-language model for 3d visual grounding", "year": "2021" }, { "authors": "Jonas Schult; Francis Engelmann; Alexander Hermans; Or Litany; Siyu Tang; Bastian Leibe", "journal": "IEEE", "ref_id": "b41", "title": "Mask3d: Mask transformer for 3d semantic instance segmentation", "year": "2023" }, { "authors": "Shuran Song; Jianxiong Samuel P Lichtenberg; Xiao", "journal": "", "ref_id": "b42", "title": "Sun rgb-d: A rgb-d scene understanding benchmark suite", "year": "2015" }, { "authors": "Dídac Surís; Sachit Menon; Carl Vondrick", "journal": "", "ref_id": "b43", "title": "Vipergpt: Visual inference via python execution for reasoning", "year": "2023" }, { "authors": "Elisabetta Ayc ¸a Takmaz; Robert W Fedele; Marc Sumner; Federico Pollefeys; Francis Tombari; Engelmann", "journal": "", "ref_id": "b44", "title": "Openmask3d: Open-vocabulary 3d instance segmentation", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b45", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b46", "title": "Attention is all you need", "year": "2017" }, { "authors": "Sai Vemprala; Rogerio Bonatti; Arthur Bucker; Ashish Kapoor", 
"journal": "Microsoft Auton. Syst. Robot. Res", "ref_id": "b47", "title": "Chatgpt for robotics: Design principles and model abilities", "year": "2023" }, { "authors": "John Vince; John A Vince", "journal": "Springer", "ref_id": "b48", "title": "Mathematics for computer graphics", "year": "2006" }, { "authors": "Thang Vu; Kookhoi Kim; M Tung; Thanh Luu; Chang D Nguyen; Yoo", "journal": "", "ref_id": "b49", "title": "Softgroup for 3d instance segmentation on point clouds", "year": "2022" }, { "authors": "Johanna Wald; Armen Avetisyan; Nassir Navab; Federico Tombari; Matthias Nießner", "journal": "", "ref_id": "b50", "title": "Rio: 3d object instance relocalization in changing indoor environments", "year": "2019" }, { "authors": "Xin Wang; Qiuyuan Huang; Asli Celikyilmaz; Jianfeng Gao; Dinghan Shen; Yuan-Fang Wang; William Yang; Wang ; Lei Zhang", "journal": "", "ref_id": "b51", "title": "Reinforced cross-modal matching and selfsupervised imitation learning for vision-language navigation", "year": "2019" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Fei Xia; Ed Chi; V Quoc; Denny Le; Zhou", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b52", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Wei Wei", "journal": "Journal of Hospitality and Tourism Technology", "ref_id": "b53", "title": "Research progress on virtual reality (vr) and augmented reality (ar) in tourism and hospitality: A critical review of publications from 2000 to 2018", "year": "2019" }, { "authors": "Chenfei Wu; Shengming Yin; Weizhen Qi; Xiaodong Wang; Zecheng Tang; Nan Duan", "journal": "", "ref_id": "b54", "title": "Visual chatgpt: Talking, drawing and editing with visual foundation models", "year": "2023" }, { "authors": "Yanmin Wu; Xinhua Cheng; Renrui Zhang; Zesen Cheng; Jian Zhang", "journal": "", "ref_id": "b55", "title": "Eda: Explicit text-decoupling and dense alignment for 3d visual grounding", "year": "2023" }, { "authors": "Fei Xia; Zhiyang Amir R Zamir; Alexander He; Jitendra Sax; Silvio Malik; Savarese", "journal": "", "ref_id": "b56", "title": "Gibson env: Real-world perception for embodied agents", "year": "2018" }, { "authors": "Zhengyuan Yang; Songyang Zhang; Liwei Wang; Jiebo Luo", "journal": "", "ref_id": "b57", "title": "Sat: 2d semantics assisted training for 3d visual grounding", "year": "2021" }, { "authors": "Xumin Yu; Lulu Tang; Yongming Rao; Tiejun Huang; Jie Zhou; Jiwen Lu", "journal": "", "ref_id": "b58", "title": "Point-bert: Pre-training 3d point cloud transformers with masked point modeling", "year": "2022" }, { "authors": "Zhihao Yuan; Xu Yan; Yinghong Liao; Ruimao Zhang; Zhen Li; Shuguang Cui", "journal": "", "ref_id": "b59", "title": "Instancerefer: Cooperative holistic understanding for visual grounding on point clouds through instance multi-level contextual referring", "year": "2021" }, { "authors": "Zhihao Yuan; Xu Yan; Zhuo Li; Xuhao Li; Yao Guo; Shuguang Cui; Zhen Li", "journal": "", "ref_id": "b60", "title": "Toward explainable and fine-grained 3d grounding through referring textual phrases", "year": "2022" }, { "authors": "Yihan Zeng; Chenhan Jiang; Jiageng Mao; Jianhua Han; Chaoqiang Ye; Qingqiu Huang; Dit-Yan Yeung; Zhen Yang; Xiaodan Liang; Hang Xu", "journal": "", "ref_id": "b61", "title": "Clip2: Contrastive languageimage-point pretraining from real-world point cloud data", "year": "2023" }, { "authors": "Jiazhao Zhang; Chenyang Zhu; Lintao Zheng; Kai Xu", 
"journal": "", "ref_id": "b62", "title": "Fusion-aware point convolution for online semantic 3d scene segmentation", "year": "2020" }, { "authors": "Renrui Zhang; Ziyu Guo; Wei Zhang; Kunchang Li; Xupeng Miao; Bin Cui; Yu Qiao; Peng Gao; Hongsheng Li", "journal": "", "ref_id": "b63", "title": "Pointclip: Point cloud understanding by clip", "year": "2022" }, { "authors": "Lichen Zhao; Daigang Cai; Lu Sheng; Dong Xu", "journal": "", "ref_id": "b64", "title": "3dvgtransformer: Relation modeling for visual grounding on point clouds", "year": "2021" }, { "authors": "Denny Zhou; Nathanael Schärli; Le Hou; Jason Wei; Nathan Scales; Xuezhi Wang; Dale Schuurmans; Claire Cui; Olivier Bousquet; Quoc Le", "journal": "", "ref_id": "b65", "title": "Least-to-most prompting enables complex reasoning in large language models", "year": "2022" }, { "authors": "Xiangyang Zhu; Renrui Zhang; Bowei He; Ziyu Guo; Ziyao Zeng; Zipeng Qin; Shanghang Zhang; Peng Gao", "journal": "", "ref_id": "b66", "title": "Pointclip v2: Prompting clip and gpt for powerful 3d open-world learning", "year": "2023" } ]
[ { "formula_coordinates": [ 5, 102.05, 124.38, 184.31, 11.03 ], "formula_id": "formula_0", "formula_text": "(u, v, w) T = I • (R|t) • P,(2)" } ]
2023-11-26
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b26", "b50", "b17", "b36", "b68", "b34", "b60", "b30", "b23", "b23", "b26", "b26" ], "table_ref": [], "text": "The emergence of Large Language Models (LLMs), exemplified by various works such as [Ope23, CND + 22, Cha22, VSP + 17, BMR + 20, RNS + 18, ZRG + 22, DCLT18, RWC + 19], marks a significant breakthrough in artificial intelligence and natural language processing. As shown in [DSX23], these models, compared to other deep learning approaches, have vastly improved language comprehension and generation capabilities, a feat attributed to several pivotal factors:\n• Innovations in deep learning algorithms,\n• Advances in computing power, and\n• The abundance of textual data.\nThe evolution of LLMs can be traced back to initial attempts at language modeling using neural networks. Over time, continuous exploration of diverse architectures and methodologies has propelled LLMs into sophisticated models adept at handling intricate linguistic structures and generating coherent text. These models have demonstrated remarkable achievements across various applications, including question-answering systems [Ope23,Cha22], sentiment analysis [UAS + 20], text generation [Ope23, Cha22, ZRG + 22], and machine translation [HWL21]. Their impact has revolutionized interactions with language-based technologies, introducing new possibilities for AIdriven communication. As the scale of these language models continues to expand, recent research efforts have focused on optimizing the training efficiency of LLMs [MGN + 23, MWY + 23, PMXA23].\nAttention computation plays a crucial role in constructing LLMs [Cha22, BMR + 20, RWC + 19, DCLT18, RNS + 18, VSP + 17], allowing these models to assign weights to different elements in input sequences, aiding in capturing pertinent details for precise inference. Self-attention, a widely adopted technique in the transformer model, enables models to manage lengthy sequences efficiently, understand contextual nuances, and generate more cohesive outputs. Numerous studies have highlighted the benefits of self-attention in facilitating In-Context learning [WZW23, ZFB23,GTLV22,SZKS21]. The definition of softmax self-attention is presented as follows:\nDefinition 1.1 (Attention optimization [GSWY23]). Suppose that the matrices A 1 , A 2 , A 3 , B are in R n×d and the matrices X, Y are in R d×d .\nLet D(X) ∈ R n×n be defined as\nD(X) := diag(exp(A 1 XA ⊤ 2 )1 n )\nThen, the attention optimization problem is defined as follows:\nmin X,Y ∈R d×d D(X) -1 exp(A 1 XA ⊤ 2 )A 3 Y -B 2 F .\nBased on the Definition 1.1, [DLS23] has also proposed and studied the softmax regression problem, namely Definition 1.2 (Softmax regression problem [DLS23]). Suppose that the matrix A is in R n×d and a vector c is in R n .\nThen, the softmax regression problem is defined as\nmin x∈R d exp(Ax), 1 n -1 exp(Ax) -c 2 2 .\n[DSX23] first study the two-layer regression problem, which incorporates the softmax regression (see Definition 1.2) and ReLU function: Definition 1.3 (Two-Layer Regression studied in [DSX23]). Let A 1 be a matrix in R n×d and A 2 be a matrix in R m×n . Let b be a vector in R m and x, φ(x) be two vectors in R n . Let R be a real number. 
Suppose φ(x) i is defined as φ(x) i := ReLU(x i ) = max{0, x i }.\nThis regression problem is to minimize:\nexp(A 2 φ(A 1 x)), 1 m -1 exp(A 2 φ(A 1 x)) -b 2 2 under conditions min x∈{ x 2 ≤R,x∈R d } .\nNote that the inner layer of the regression problem studied by [DSX23] is the ReLU function, and the outer layer is the softmax regression (see Definition 1.2).\nIn this paper, we give a formal analysis of a two-layer regression problem, which contains the softmax regression (defined as in Definition 1.2) and a general Lipschitz continuous function h(x) ∈ R m (which we formally defined in Definition 3.5). Our regression problem is defined as follows:\nDefinition 1.4 (Two-Layer regression problem). Let h : R m → R m (Definition 3.5). Let\nA 1 ∈ R n×d , A 2 ∈ R m×n , b ∈ R m , and x ∈ R d . Let L : R d → R be L(x) := 1 2 • h(A 2 exp(A 1 x), 1 n -1 • exp(A 1 x)) -b 2 2 .\nOur purpose in the paper is to analyze\nmin x∈R d L(x).\nNote that the inner layer of our regression problem is the softmax regression, while the outer layer is represented by the Lipschitz continuous function h(x) ∈ R m . Hence, future researchers can integrate the two regression problems (as defined in Definition 1.3 and Definition 1.4) to construct a three-layer regression model: the first layer being ReLU, the second layer being softmax, and the third layer represented by h. Furthermore, our function h can be any arbitrary Lipschitz continuous function, ensuring generality for the third layer." }, { "figure_ref": [], "heading": "Our Motivations and Results", "publication_ref": [ "b30" ], "table_ref": [], "text": "The one-layer attention optimization problem (see Definition 1.1) has been well-studied: many different variations of it have been studied in [ZHDK23, AS23, BSZ23, DLS23, GSX23, GSY23, GSYZ23, DMS23, SYZ23], and finally, a complete one layer attention regression is studied in [GSWY23]. Although this one-layer attention can provide a solid theoretical foundation for supporting the ability of LLMs in simple tasks, like text generation, translation, and question-answering, effectively adapting LLMs for more complex structured prediction problems like computer vision remains an open challenge.\nMultilayer regression, on the other hand, can model more complex nonlinear relationships that better fit real-world data. Each layer transforms the data differently, allowing for the learning of complex patterns. With multiple layers, the model can detect intricate patterns in the data that would remain undetectable with a single linear regression, aiding the model's ability to generalize. Moreover, the multiple layers serve as a regularization mechanism, preventing overfitting-a concern in single linear models due to their fewer parameters. This attribute lends multilayer regression powerful predictive capabilities, particularly in tackling difficult regression problems involving high-dimensional, complex data. The layered structure extracts informative features, resulting in accurate predictions. Therefore, multilayer regression finds extensive application across diverse domains such as computer vision [KSH12, DAHG + 15, SZ14] and speech recognition [CJLV16, AAA + 16, SSB14, GMH13], showcasing its versatility and effectiveness in solving modern machine learning problems.\nThe informal version of our analysis is presented as follows:\nTheorem 1.5 (Informal version of Theorem 7.1). Let L(x) be defined in Definition 1.4 and x * be the solution to the problem min x∈R d L(x). 
Suppose A ≤ R and x 2 ≤ R.\nFor any accuracy parameter ǫ ∈ (0, 0.1) and failure probability δ ∈ (0, 0.1) and a initial point x 0 where x 0 2 ≤ R, there exist an randomized algorithm (Algorithm 1) that runs T = O(log( x 0x * 2 /ǫ)) iterations, spend O((nnz(C) + d ω ) poly(log(m/δ))) per iteration, and output x such that\nPr[ x -x * 2 ≤ ǫ] ≥ 1 -δ.\nTo establish these results, we derive several key properties of the loss landscape, including the positive definiteness of the Hessian and its Lipschitz continuity. Our proofs draw tools from matrix analysis and exploit the structural properties of the softmax activation. Then, we provide theoretical justification for using Newton's method to minimize the regularized training loss. This allows us to ensure that the suggested training method has assurances of converging locally. The analysis techniques developed here may extend to studying other activation functions and deeper network architectures.\nRoadmap. In Section 2, we delve into related works. In Section 3, we introduce the notations and basic mathematical principles that underpin the subsequent developments in this paper. In Section 4, we present crucial gradient and Hessian matrices pertinent to our study. In Section 5, we focus on analyzing the properties of the Hessian matrix, including its positive semi-definiteness (PSD) and Lipschitz continuity. In Section 6, we introduce Newton's method. In Section 7, we combine the Hessian properties and Newton's methods to formulate the central Theorem (refer to Theorem 7.1) and Algorithm (refer to Algorithm 1) of this paper. Finally, in Section 8, we wrap up by summarizing our findings and highlighting the significance of our work." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b60", "b34", "b66", "b52", "b55", "b3", "b10", "b43", "b71", "b14", "b5", "b45", "b24", "b42", "b29", "b60" ], "table_ref": [], "text": "Transformer theory. Since the emergence of LLMs, extensive research has been dedicated to analyzing their abilities, which build upon transformer theory. One crucial area of exploration revolves around understanding how transformers achieve in-context learning. [SZKS21] proposed the comprehension of how a single-head attention module learns in machine translation. They defined \"Knowing to Translate Individual Words\" (KTIW), signifying the model's capacity to understand a word's translation. They argued that KTIW predominantly influences attention learning, as it can be acquired from word co-occurrence prior to attention learning. [GTLV22] demonstrated that transformers can effectively learn linear function classes in close proximity to optimal levels. When provided with specific functions, transformers can extrapolate information about related functions. Additionally, [ASA + 22] linked in-context learning to conventional algorithms by encoding smaller models in activations and updating them with new examples, showcasing this through linear regression.\n[ZFB23] delved into examining in-context learning in regression, discovering that transformers can identify global optima but encounter difficulties under covariate shifts. Furthermore, [WZW23] interpreted in-context learning as Bayesian selection, enabling the deduction of task-relevant information.\nMoreover, research aims to comprehend the intrinsic structure of transformers. [ZPGA23] discovered evidence of parsing in linguistic tasks, drawing parallels to algorithms like Inside-Outside. 
Additionally, [PSZA23] pinpointed the location of skills post fine-tuning, demonstrating that only a few parameters are pivotal.\nOther investigations have delved into the capabilities of transformers. [SHT23] found logarithmic scaling concerning input size in sparse problems, surpassing other networks, but identified linear scaling in detection. Additionally, [AG23] linked scaling laws to inductive biases that facilitate efficient pre-training. Lastly, [BCE + 23] evaluated GPT-4 across tasks, shedding light on future work extending beyond next-word prediction.\nAttention. The work by [BCB14] stands as one of the earliest instances of utilizing attention in NLP. They posited that incorporating an attention mechanism with a fixed-length vector could enhance encoder-decoder performance, enabling the decoder to focus on pertinent words during translation. This innovation notably elevated translation quality compared to models devoid of attention. Subsequently, [LPM15] elucidated two attention types: local attention, which focuses on a subset of words, and global attention, which considers all words.\nAttention has found broad applications, such as in image captioning [XBK + 15], aligning image components with caption words. Within Transformers [VSP + 17], attention captures disparities between words within sentences. In graph neural networks [VCC + 17], attention computes relationships between nodes and their neighbors.\nFollowing the emergence of LLMs, numerous studies have delved into attention computation [DMS23, AS23, ZHDK23, CLP + 21, LSZ23, BSZ23, KKL20]. Noteworthy among these are [ZHDK23, CLP + 21, KKL20], which employ locality-sensitive hashing to approximate attention, exemplified by KDEformer [ZHDK23], providing spectral norm bounds. Research explores both static and dynamic attention [BSZ23,AS23], and investigates hyperbolic regression problems [LSZ23]. [DMS23] proposes algorithms to reduce attention matrix dimensionality in LLMs.\nOther works have analyzed attention optimization and convergence [LLR23, GMS23, SZKS21, ZKV + 20]. [LLR23] delved into acquiring word co-occurrence knowledge. [GMS23] focused on regression with exponential activations. [SZKS21] analyzed the prioritization of significant words and attention evolution. [ZKV + 20] demonstrated that heavy-tailed noise leads to issues in Stochastic Gradient Descent (SGD) compared to adaptive methods." }, { "figure_ref": [], "heading": "Convergence and Deep Neural Network Optimization", "publication_ref": [ "b27", "b9", "b69", "b29", "b45" ], "table_ref": [], "text": "Numerous works have concentrated on analyzing optimization, convergence guarantees, and training enhancements for neural networks.\n[LL18] showcased how stochastic gradient descent optimizes over-parameterized networks on structured data, while [DZPS18] demonstrated gradient descent's effectiveness on such networks.\n[AZLS19a] developed a convergence theory for over-parameterized deep networks using gradient descent. [AZLS19b] scrutinized convergence rates for recurrent neural networks. [ADH + 19b] provided an analysis of optimization and generalization for over-parameterized two-layer networks.\n[ADH + 19a] delved into computation with infinitely wide networks. [CGH + 19] introduced the Gram-Gauss-Newton method for optimizing over-parameterized networks. 
[ZG19] enhanced the analysis of stochastic gradient descent convergence for deep networks, necessitating milder overparameterization.\nOther studies, such as those by [OM20, JT19, ZPD + 20], center on optimization and generalization, while [GMS23,LSZ23] emphasize convergence rates and stability. Works by [BPSW20, SZZ21, ALS + 22, MOSW22, Zha22] focus on developing specialized optimization algorithms and techniques, and [LSS + 20, HLSY20] leverage network structure." }, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [], "table_ref": [], "text": "In Section 3.1, we present the definitions of the important functions in this paper, where these functions are used to decompose the loss function (defined in Definition 1.4) into simpler forms. In Section 3.2, we express our loss function by using L reg and L tot .\nNotations. Now, we introduce the key notations.\nLet x ∈ R d >0 denote a length-n where all the entries are positive. For a matrix A ∈ R n×d , exp(A) denotes the n × d matrix given by exp(A) = ∞ i=0 1 i! A i . We use 1 n to denote a length-n vector where all its entries are 1. For an arbitrary vector x, we denote its ℓ p norm by x p , where\nx 1 := n i=1 |x i |, x 2 := ( n i=1 x 2 i ) 1/2 , and x ∞ := max i∈[n] |x i |.\nFor a matrix A ∈ R n×k where n > k, we denote A to be the spectral norm of A and A := sup x∈R k Ax 2 / x 2 For two vectors a, b ∈ R n , we define a, b := n i=1 a i b i . For two vectors a, b ∈ R n , we use a • b to denote the vector where its i-th entry is a i b i for i ∈ [n]. For x ∈ R n , we define the diagonal matrix diag(x) ∈ R n×n , whose diagonal entries are given by diag(x) i,i = x i for i ∈ [n], and the entries elsewhere are all zero. For a symmetric matrix A ∈ R n×n with real entries, we denote A ≻ 0 to indicate the matrix is positive-definite (PD), if x ⊤ Ax > 0 for all x ∈ R n . For a symmetric matrix A ∈ R n×n with real entries, we denote A 0 to indicate the matrix is positive-semidefinite (PSD), if x ⊤ Ax ≥ 0 for all x ∈ R n ." }, { "figure_ref": [], "heading": "Definition", "publication_ref": [], "table_ref": [], "text": "In this section, we present the definition of the functions.\nDefinition 3.1. Given A 1 ∈ R n×d , let u(x) : R d → R n >0 be u(x) := exp(A 1 • x) Definition 3.2. Let α(x) : R d → R >0 be α(x) := u(x), 1 n . Definition 3.3. Let f (x) : R d → R n >0 be f (x) := α(x) -1 • u(x).\nFact 3.4. By definition of f (x), we can see that f (x) 1 = 1.\nProof. By the definition of f (x), we have\nf (x) 1 = α(x) -1 • u(x) 1 = exp(A 1 • x), 1 n -1 • exp(A 1 • x) 1 = (exp(A 1 • x) • 1 ⊤ n ) -1 • exp(A 1 • x) 1 = exp(A 1 • x) -1 1 • exp(A 1 • x) 1 = exp(A 1 • x) -1 1 • exp(A 1 • x) 1 = 1,\nwhere the second step follows from the definition of α(x) and u(x) (see Definition 3.2 and Definition 3.1), the third step follows from Fact A.1, the fourth step follows from the definition of • 1 , the fifth step follows from simple algebra, and the last step follows from simple algebra. Definition 3.5. Let h : R m → R m be a Lipschitz continuous function, namely, there exists a real number L h ≥ 0 such that for all x, y ∈ R m ,\nh(x) -h(y) 2 ≤ L h • x -y 2 .\nMoreover, we assume h ′ is also a Lipschitz continuous function, namely, there exists a real number L h ≥ 0 such that for all x, y ∈ R m ,\nh ′ (x) -h ′ (y) 2 ≤ L h • x -y 2 . Definition 3.6. Let h : R m → R m (Definition 3.5), A 2 ∈ R m×n , and b ∈ R m . Let c(x) : R d → R m be c(x) := h(A 2 f (x)) -b Definition 3.7. Let L : R d → R >0 be L(x) := 1 2 • c(x) 2 2 ." 
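As a small numerical companion to Definitions 3.1-3.7, the sketch below evaluates u(x), alpha(x), f(x), c(x), and L(x) with NumPy. The choice h = tanh (applied entrywise) is only an example of a Lipschitz-continuous outer function with a Lipschitz derivative; the analysis allows any such h, and all dimensions and data here are arbitrary illustrations.

```python
# Numerical sketch of Definitions 3.1-3.7 (softmax inner layer, Lipschitz outer layer h).
import numpy as np

def two_layer_loss(A1, A2, b, x, h=np.tanh):
    """L(x) = 0.5 * ||h(A2 f(x)) - b||_2^2, where f(x) = exp(A1 x) / <exp(A1 x), 1_n>."""
    u = np.exp(A1 @ x)                        # u(x) = exp(A1 x)             (Definition 3.1)
    alpha = u.sum()                           # alpha(x) = <u(x), 1_n>       (Definition 3.2)
    f = u / alpha                             # f(x) = alpha(x)^{-1} u(x)    (Definition 3.3)
    assert np.isclose(np.abs(f).sum(), 1.0)   # ||f(x)||_1 = 1               (Fact 3.4)
    c = h(A2 @ f) - b                         # c(x) = h(A2 f(x)) - b        (Definition 3.6)
    return 0.5 * float(c @ c)                 # L(x) = 0.5 ||c(x)||_2^2      (Definition 3.7)

# Example usage on random data with n = 5, d = 3, m = 4.
rng = np.random.default_rng(0)
n, d, m = 5, 3, 4
A1, A2 = rng.standard_normal((n, d)), rng.standard_normal((m, n))
b, x = rng.standard_normal(m), rng.standard_normal(d)
print(two_layer_loss(A1, A2, b, x))
```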
}, { "figure_ref": [], "heading": "Regularization", "publication_ref": [ "b45" ], "table_ref": [], "text": "In this section, we introduce L reg and L tot .\nDefinition 3.8. Let A 1 ∈ R n×d . For a given vector w ∈ R n , we let W = diag(w) ∈ R n×n . We define L reg : R d → R as follows\nL reg (x) := 0.5 W A 1 x 2 2\nLemma 3.9 (Folklore, see [LSZ23] as an example). For a given vector w ∈ R n , we let W = diag(w) ∈ R n×n . Let L reg : R d → R be defined as Definition 3.8. Then, we have\n• The gradient is dL reg dx = A ⊤ 1 W 2 A 1 x • The Hessian is d 2 L reg dx 2 = A ⊤ 1 W 2 A 1\nDefinition 3.10. If the following conditions hold\n• Let L(x) be defined in Definition 3.7\n• Let L reg (x) be defined in Definition 3.8 Then we define our loss function as follows:\nL tot (x) := L(x) + L reg (x)" }, { "figure_ref": [], "heading": "Gradient and Hessian", "publication_ref": [], "table_ref": [], "text": "For Section 4.1, we present the important gradient of the basic functions. For Section 4.2, we present the important Hessian of the basic functions. For Section 4.3, we re-organize the important parts of the Hessian to simplify our expression and make further analysis more convenient." }, { "figure_ref": [], "heading": "Gradient", "publication_ref": [], "table_ref": [], "text": "Now, we present the gradient, where the full details can be seen in Section B.\nLemma 4.1 (Informal version of Lemma B.1). We consider that u(x), α(x), f (x), h, c(x), L(x) are established in Definition 3.1, 3.2, 3.3, 3.5, 3.6, and 3.7.\nIf the following holds\n• A 1, * ,i ∈ R n represents the i-th column vector of A 1 ∈ R n×d for all i ∈ [d] • Let A 1,l, * ∈ R d denote the l-th row vector of A 1 ∈ R n×d for all l ∈ [n] • Let A 2,k, * ∈ R n denote the k-th row vector A 2 ∈ R m×n for each k ∈ [m]\nthen for each i ∈ [d], we have\n• Part 1. Let p(x) i ∈ R n be defined as p(x) i := f (x) • A 1, * ,i -f (x), A 1, * ,i • f (x) df (x) dx i = p(x) i • Part 2. Let h ′ (A 2 f (x)) ∈ R m denote a length-m vector i-th coordinate is the dh(y i ) dy i y i =(A 2 f (x)) i\n(here we can should think of h : R → R), we have\ndh(A 2 f (x)) dx i m×1 = diag(h ′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1 • Part 3. dL(x) dx i = c(x) m×1 , diag(h ′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1 • Part 4. Let h ′ (A 2 f (x)) ∈ R m dh ′ (A 2 f (x)) dx i m×1 = diag(h ′′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1" }, { "figure_ref": [], "heading": "Hessian", "publication_ref": [ "b23" ], "table_ref": [], "text": "Now, we display the Hessian, where the full details can be seen in Section C.\nDefinition 4.2. We consider that h and c(x) are established in Definition 3.5 and 3.6. To simplify our calculations, we define Q 2 (x) ∈ R m×n and q 2 (x) ∈ R n as following\n• Q 2 (x) = diag(h ′ (A 2 f (x))) m×m • A 2 m×n • • q 2 (x) = Q 2 (x) ⊤ n×m c(x) m×1\nDefinition 4.3. We consider that f (x), c(x), L(x), q 2 (x) are established in Definition 3.3, 3.6, 3.7, and 4.2. Let g(x) ∈ R d be\ng(x) := -A ⊤ 1 d×n (f (x) n×1 q 2 (x), f (x) scalar + diag(f (x)) n×n q 2 (x) n×1 )\nFor each entry g(x) i of g(x), where i ∈ [d], we have\ng(x) i := -A 1, * ,i , f (x) scalar q 2 (x), f (x) scalar + A 1, * ,i , f (x) • q 2 (x) scalar 4.3 Re-oragnizing B(x)\nIn this section, since the expression of Hessian is too complicated, we reorganize these terms.\nDefinition 4.4 (A variation of Definition 6.1 in [DLS23]). 
We define B(x) as follows\nB(x) = 12 i=1 B i\nwhere\nB 1 = diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) B 2 = diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ B 3 = f (x)f (x) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) B 4 = f (x) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ B 5 = 2f (x)c(x) ⊤ Q 2 (x)f (x)f (x) ⊤ B 6 = 2f (x)c(x) ⊤ Q 2 (x) diag(f (x)) B 7 = diag(Q 2 (x) ⊤ c(x)) diag(f (x)) B 8 = diag(f (x))A 2 diag(h ′′ (A 2 f (x))) • diag(c(x)) diag(f (x)) B 9 = diag(f (x))A 2 diag(h ′′ (A 2 f (x))) • diag(c(x))f (x)f (x) ⊤ B 10 = f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) • diag(c(x)) diag(f (x)) B 11 = f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) • diag(c(x))f (x)f (x) ⊤ B 12 = diag(f (x)f (x) ⊤ Q 2 (x) ⊤ c(x)).\nLemma 4.5 (Informal version of Lemma C.6). Let d 2 L(x) dx i dx j be computed in Lemma C.5, then, we have\nd 2 L dx 2 = A ⊤ B(x)A • Let max{ h(A 2 f (x)) 2 , h ′ (A 2 f (x)) 2 } ≤ R h . • Assume h(x) -h(y) 2 ≤ L h • x -y 2 • Assume h ′ (x) -h ′ (y) 2 ≤ L h • x -y 2\nThen we have\n∇ 2 L(x) -∇ 2 L(y) ≤ 59(R + R h )n 2 exp(4R 2 )β -4 R 5 R 2 h R f L h x -y 2\nProof. We give an overview of this proof. For specific details, please see Appendix E. We have\n∇ 2 L(x) -∇ 2 L(y) = 6 i=1 (G i (x) -G i (y)) ≤ 59(R + R h )n 2 exp(4R 2 )β -4 R 5 R 2 h R f L h x -y 2 ,\nwhich follows from Lemma E.5." }, { "figure_ref": [], "heading": "Approximate Newton Method", "publication_ref": [], "table_ref": [], "text": "In Section 6.1, we present basic definitions along with the update rule. In Section 6.2 covers the approximation of the Hessian matrix and the associated update rule." }, { "figure_ref": [], "heading": "Update Rule in Newton Method", "publication_ref": [], "table_ref": [], "text": "In this section, we study the Newton method's local convergence. Our focus is on the optimization of the target function given by min\nx∈R d L(x)\nunder the following set of assumptions: Definition 6.1 ((l, M )-good Loss function). Let L : R d → R.\nWe define L as (l, M )-good if it meets the following criteria:\n• For a positive scalar l > 0, if there is a vector x * ∈ R d satisfying:\n-∇L(x * ) = 0 d . -∇ 2 L(x * ) l • I d .\n• If there is M > 0 satisfying:\n∇ 2 L(y) -∇ 2 L(x) ≤ M • y -x 2\n• Denote the initialization point as x 0 . If r 0 := x 0 -x * 2 satisfying:\nr 0 M ≤ 0.1l\nWe define gradient and Hessian as follows Definition 6.2 (Hessian and Gradient). Let g : R d → R d and H : R d → R d×d . We define\ng(x) := ∇L(x)\nto be the gradient of L(x).\nWe define\nH(x) := ∇ 2 L(x)\nto be the Hessian of L(x).\nConsidering the gradient function g : R d → R d and the Hessian matrix H : R d → R d×d , the exact steps of the Newton method are described as follows: Definition 6.3. We define\nx t+1 = x t -H(x t ) -1 • g(x t )" }, { "figure_ref": [], "heading": "Update Rule and Hessian Approximation", "publication_ref": [ "b25", "b65", "b40", "b7", "b45", "b45", "b45" ], "table_ref": [], "text": "In practice, computing the exact ∇ 2 L(x t ) or (∇ 2 L(x t )) -1 is extremely challenging and resourceintensive. Therefore, it's practical to explore approximated calculations for the gradient and Hessian. This computation is outlined as: Definition 6.4 (Approximate Hessian). We denote an approximate Hessian H(x t ) ∈ R d×d for any given Hessian H(x t ) ∈ R d×d as a matrix satisfying the following condition:\n(1 -ǫ 0 ) • H(x t ) H(x t ) (1 + ǫ 0 ) • H(x t ).\nTo obtain the approximate Hessian H(x t ), we present a tool from Lemma 4.5 in [DSW22]. Lemma 6.5 ([DSW22, SYYZ22]). We use ǫ 0 = 0.01 to represent the constant precision parameter. 
Consider A ∈ R n×d , and for all D ∈ R n×n being a positive diagonal matrix, there is an algorithm with\nO((nnz(A) + d ω ) poly(log(n/δ))) running time. This algorithm generates a matrix D ∈ R n×n , which is O(d log(n/δ)) sparse diagonal, such that (1 -ǫ 0 )A ⊤ DA A ⊤ DA (1 + ǫ 0 )A ⊤ DA.\nHere, ω ≈ 2.373 is the exponent of matrix multiplication [Wil12,LG14,AW21]. Lemma 6.6 (Iterative shrinking Lemma, Lemma 6.9 on page 32 of [LSZ23]). Suppose ǫ 0 ∈ (0, 0.1), r t := x t -x * 2 , and r t := M • r t .\nThen,\nr t+1 ≤ 2 • (ǫ 0 + r t /(l -r t )) • r t ,\nwhere L is (l, M )-good.\nThe total number of iterations in the algorithm is represented by T . To utilize Lemma 6.6, we require the following lemma based on the induction hypothesis. This lemma is found in [LSZ23]. Lemma 6.7 (Induction hypothesis of [LSZ23] (see Lemma 6.10 on page 34)). Let i ∈ [t].\nLet r i := x i -x * 2 . By Definition 6.1 and Definition 6.4, we suppose ǫ 0 = 0.01, for all i, r i ≤ 0.4 • r i-1 and M • r i ≤ 0.1l.\nThen we have\n• r t+1 ≤ 0.4r t • M • r t+1 ≤ 0.1l7" }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [], "text": "In this section, we present our main result.\nAlgorithm 1 Main algorithm.\n1: procedure OurAlgorithm(b ∈ R n , A ∈ R n×d , w ∈ R n , ǫ, δ) ⊲ Theorem 7.1 2:\nChoose x 0 and suppose x 0 satisfies Definition 6.1)\n3: T ← log( x 0 -x * 2 /ǫ\n) is the number of iterations.\n4:\nfor t = 0 → T do 5: D ← B(x t ) + diag(w • w) 6: D ← SubSample(D, A, ǫ 1 = Θ(1), δ 1 = δ/T ) ⊲ Lemma 6.5 7: g ← -A ⊤ 1 (f (x) q 2 (x), f (x) + diag(f (x))q 2 (x))\n8:\nH ← A ⊤ DA 9:\nx t+1 ← x t + H -1 g 10:\nend for 11:\nx ← x T +1\n12:\nreturn x 13: end procedure Theorem 7.1 (Formal version of Theorem 1.5). If the following conditions hold\n• We have L(x) be established in Definition 3.7 • Suppose A ≤ R • Suppose x 2 ≤ R • x * represents the solution of min x∈R d L(x) • Let l be a scalar such that w 2 i ≥ 12R h L h R(R + R h ) + l/σ min (A) 2 for ∀i ∈ [n]\n• Suppose x 0 be the initial point that satisfies M x 0 -x * 2 ≤ 0.1l.\n• Let M = 59(R + R h )n 2 exp(4R 2 )β -4 R 5 R 2 h R f L h\nFor any accuracy parameter ǫ ∈ (0, 0.1) and a failure probability δ ∈ (0, 0.1), there exists a randomized algorithm (Algorithm 1) that performs T = O(log( x 0 -x * 2 /ǫ)) iterations. Each iteration requires O((nnz(C) + d ω ) poly(log(m/δ))) computational steps. The output x from this algorithm satisfies Pr[ x -x * 2 ≤ ǫ] ≥ 1 -δ, where ω represents the exponent for matrix multiplication.\nProof. This can be proved by combining Lemma 5.1, Lemma 6.7, Lemma 6.5, Lemma 5.2 and Lemma 6.6.\n• The upper bound on M : Lemma E.5 and M -lipschitz definition.\n• Hessian is PD: Lemma 5.1\n• Hessian is Lipschitz: Lemma 5.2\n• Cost per iteration: Lemma 6.5\n• Convergence per Iteration: Lemma 6.6. We can get\nx k -x * 2 ≤ 0.4 • x k-1 -x * 2 .\n• Number of iterations: we can get\nx T -x * 2 ≤ 0.4 T • x 0 -x * 2\nafter T iterations. Choosing the value of T allows us to achieve the intended limit. The failure probability is determined by employing a union bound over the T iterations." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we formulated a two-layer regression model with softmax and Lipschitz continuous activations. We derived key properties of the loss landscape, such as the positive definiteness and Lipschitz continuity of the Hessian matrix. These findings ensure the convexity and smoothness necessary for optimization convergence guarantees. 
We utilized an approximate Newton method to minimize the regularized training loss and established its local convergence rate. Under reasonable assumptions, our algorithm discovers an ǫ-approximate minimizer within O(log(1/ǫ)) iterations, with nearly linear computational cost per iteration. Our general framework accommodates an outer activation that can be any arbitrary Lipschitz function, facilitating extensions to other nonlinear units. This adaptability proves crucial for applying our approach across diverse applications. By combining matrix analysis and optimization, our techniques can serve as a blueprint for analyzing deeper neural network architectures. In summary, our paper takes significant strides in comprehending optimization and generalization in multilayer nonlinear networks. We believe that our analyses and algorithm provide a principled approach, forming a solid foundation for addressing more intricate models and tasks in the future.\nRoadmap. In Section A, we introduce the basic notations and the important algebraic facts about matrices, vectors, and their derivatives. In Section B, we compute the gradient of the important functions. In Section C, we compute the Hessian of the important functions. In Section D, we show that the Hessian matrix is positive definite. In Section E, we show that the Hessian matrix is Lipschitz." }, { "figure_ref": [], "heading": "A Preliminary", "publication_ref": [], "table_ref": [], "text": "In this section, we first introduce the basic notations. Then, in Section A.1, we present the important mathematical facts about matrices and vectors.\nNotations. Let x ∈ R d >0 denote a length-n where all the entries are positive. For a matrix\nA ∈ R n×d , exp(A) denotes the n × d matrix given by exp(A) = ∞ i=0 1 i! A i .\nWe use 1 n to denote a length-n vector where all its entries are 1. For an arbitrary vector x, we denote its ℓ p norm by x p , where\nx 1 := n i=1 |x i |, x 2 := ( n i=1 x 2 i ) 1/2 , and x ∞ := max i∈[n] |x i |.\nFor a matrix A ∈ R n×k where n > k, we denote A to be the spectral norm of A and\nA := sup x∈R k Ax 2 / x 2 For two vectors a, b ∈ R n , we define a, b := n i=1 a i b i . For two vectors a, b ∈ R n , we use a • b to denote the vector where its i-th entry is a i b i for i ∈ [n]. For x ∈ R n ,\nwe define the diagonal matrix diag(x) ∈ R n×n , whose diagonal entries are given by diag(x) i,i = x i for i ∈ [n], and the entries elsewhere are all zero. For a symmetric matrix A ∈ R n×n with real entries, we denote A ≻ 0 to indicate the matrix is positive-definite (PD), if x ⊤ Ax > 0 for all x ∈ R n . For a symmetric matrix A ∈ R n×n with real entries, we denote A 0 to indicate the matrix is positive-semidefinite (PSD), if x ⊤ Ax ≥ 0 for all x ∈ R n ." }, { "figure_ref": [], "heading": "A.1 Basic Fact", "publication_ref": [], "table_ref": [], "text": "In this section, we present the important mathematical properties.\nFact A.1. Let a, b, c ∈ R n denote three column vectors. We have\n• a, b scalar c n×1 = a ⊤ b scalar c n×1 = c n×1 a ⊤ 1×n b n×1 = c n×1 b ⊤ 1×n a n×1 • a • b n×1 = b • a n×1 = diag(a) n×n b n×1 = diag(b) n×n a n×1 • a ⊤ 1×n (b • c) n×1 = b ⊤ 1×n (a • c) n×1 = c ⊤ 1×n (a • b) n×1 • diag(a • b) n×n = diag(a) n×n diag(b) n×n • diag(a + b) n×1 = diag(a) n×1 + diag(b) n×1 • a, b + c, b = a + c, b = b, a + c = b, a + b, c ." }, { "figure_ref": [], "heading": "Now, we present the derivative rules for matrices.", "publication_ref": [], "table_ref": [], "text": "Fact A.2. 
We have\n• d f (x),g(x) dt = df (x) dt , g(x) + f (x), dg(x) dt • d dt (f (x) + g(x)) = df (x) dt + dg(x) dt • d dt (f (x) • g(x)) = f (x) • dg(x) dt + g(x) • df (x) dt Fact A.3. For two length-n column vectors u, v ∈ R n , we have • u, v ≤ u 2 • v 2 (Cauchy-Schwarz Inequality) • u, v = u • v, 1 n • for all real number a, au 2 = |a| • u 2 • u ⊤ 2 = u 2 • u + v 2 ≤ u 2 + v 2 • u • v 2 ≤ u ∞ • v 2 • diag(u) ≤ u ∞ • u ∞ ≤ u 2 ≤ √ n • u ∞ • u 2 ≤ u 1 ≤ √ n • u 2 • exp(u) ∞ ≤ exp( u ∞ ) ≤ exp( u 2 ) • if u 2 , v 2 ≤ R, then exp(u) -exp(v) 2 ≤ exp(R) • u -v 2 , for all R ≥ 4.\nFact A.4. For arbitrary matrices A and B, we have\n• For a scalar c ∈ R, we have c • A ≤ |c| • A • A ⊤ = A • A + B ≤ A + B • A • B ≤ A • B • For any vector x, we have Ax 2 ≤ A • x 2 • For two vectors a, b ∈ R n , we have ab ⊤ ≤ a 2 b 2 Fact A.5. For two length-n column vectors u, v ∈ R n , we have • uu ⊤ u 2 2 • I n .\nHere I n ∈ R n×n denotes an identity matrix." }, { "figure_ref": [], "heading": "B Gradient", "publication_ref": [], "table_ref": [], "text": "In this section, we present the gradient of our loss function L: to do this, we break up the loss function into simpler forms, compute the gradient of each of these forms, and combine the together to form the gradient for L.\nLemma B.1. If the following holds\n• Let u(x) be defined as Definition 3.1.\n• Let α(x) be defined as Definition 3.2\n• Let f (x) be defined as Definition 3.3\n• Let h : R m → R m be defined as Definition 3.5\n• Let c(x) be defined as Definition 3.6\n• Let L(x) be defined as Definition 3.7\n• Let A 1, * ,i ∈ R n denote the i-th column vector of A 1 ∈ R n×d for all i ∈ [d] • Let A 1,l, * ∈ R d denote the l-th row vector of A 1 ∈ R n×d for all l ∈ [n] • Let A 2,k, * ∈ R n denote the k-th row vector A 2 ∈ R m×n for each k ∈ [m]\nthen for each i ∈ [d], we have\n• Part 1. du(x) dx i = u(x) • A 1, * ,i • Part 2. dα(x) dx i = u(x), A 1, * ,i • Part 3. dα(x) -1 dx i = -α(x) -1 • f (x), A 1, * ,i • Part 4. Let p(x) i ∈ R n be defined as p(x) i := f (x) • A 1, * ,i -f (x), A 1, * ,i • f (x) df (x) dx i = p(x) i • Part 5. d f (x), A 1, * ,i dx i = -f (x), A 1, * ,i 2 + f (x), A 1, * ,i • A 1, * ,i\n• Part 6. For each j ∈ [d] and j = i,\nd f (x), A 1, * ,i dx j = -f (x), A 1, * ,i • f (x), A 1, * ,j + f (x), A 1, * ,i • A 1, * ,j • Part 7. Let h ′ (A 2 f (x)) ∈ R m denote a length-m vector i-th coordinate is the dh(y i ) dy i y i =(A 2 f (x)) i\n(here we can should think of h : R → R), we have\ndh(A 2 f (x)) dx i m×1 = diag(h ′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1 • Part 8. dc(x) dx i = dh(A 2 f (x)) dx i • Part 9. dL(x) dx i = c(x) m×1 , diag(h ′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1 • Part 10. Let h ′ (A 2 f (x)) ∈ R m dh ′ (A 2 f (x)) dx i m×1 = diag(h ′′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1 Proof. Proof of Part 1. We have du(x) dx i = d(exp(A 1 • x)) dx i = exp(A 1 x) • d(A 1 x) dx i = exp(A 1 x) • (A 1 • dx dx i ),(1)\nwhere the first step follows from the definition of u(x) (see Definition 3.1), the second step follows from the chain rule, and the third step follows from simple algebra. Furthermore, we have\n( dx dx i ) i = 1.\nand for j = i\n( dx dx i ) j = 0.\nTherefore, we have\ndx dx i = e i ,\nwhich implies that\nA 1 • dx dx i = A 1, * ,i .(2)\nTherefore, by combining Eq. (1) and Eq. (2), we have\ndu(x) dx i = u(x) • A 1, * ,i .\nProof of Part 2. 
We have\ndα(x) dx i = d u(x), 1 n dx i = du(x) dx i , 1 n + u(x), d1 n dx i = u(x) • A 1, * ,i , 1 n = u(x), A 1, * ,i ,\nwhere the first step follows from the definition of α(x) (see Definition 3.2), the second step follows from Fact A.2, the third step follows from Part 1 and d1n dx i = 0 n , and the last step follows from Fact A.3.\nProof of Part 3.\ndα(x) -1 dx i = -α(x) -2 • d dx i α(x) = -α(x) -2 u(x), A 1, * ,i = -α(x) -1 f (x), A 1, * ,i\nwhere the 1st step follows from the differential rules, the 2nd step follows from the result of Part 2, and the last step follows from Definition 3.3. Proof of Part 4.\ndf (x) dx i = d(α(x) -1 u(x)) dx i = u(x) • dα(x) -1 dx i + α(x) -1 • du(x) dx i = -α(x) -2 u(x), A 1, * ,i • u(x) + α(x) -1 • u(x) • A 1, * ,i = -f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i\nwhere the 1st step follows from Definition 3.3, the 2nd step follows from the differential chain rule, the 3rd step follows from the results of Part 1 and Part 3, and the last step follows from Definition 3.3.\nProof of Part 5.\nd f (x), A 1, * ,i dx i = A ⊤ 1, * ,i df (x) dx i = A ⊤ 1, * ,i (-f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i ) = -f (x), A 1, * ,i • A ⊤ 1, * ,i f (x) + A ⊤ 1, * ,i f (x) • A 1, * ,i = -f (x), A 1, * ,i 2 + f (x), A 1, * ,i • A 1, * ,i\nwhere the 1st step follows from u, v = v ⊤ u, the 2nd step follows from the result of Part 4, the 3rd step follows from the distributive property of algebra, and the last step follows from\nu, v = u ⊤ v = v ⊤ u. Proof of Part 6. For j ∈ [d], i ∈ [d] and j = i d f (x), A 1, * ,i dx j = A ⊤ 1, * ,i df (x) dx j = A ⊤ 1, * ,i (-f (x), A 1, * ,j • f (x) + f (x) • A 1, * ,j ) = -f (x), A 1, * ,j • A ⊤ 1, * ,i f (x) + A ⊤ 1, * ,i f (x) • A 1, * ,j = -f (x), A 1, * ,j • f (x), A 1, * ,i + A 1, * ,i , f (x) • A 1, * ,j = -f (x), A 1, * ,i • f (x), A 1, * ,j + f (x), A 1, * ,i • A 1, * ,j\nwhere the 1st step follows from u, v = v ⊤ u, the 2nd step follows from the result of Part 4, the 3rd step follows from the distributive property of algebra, and the 4th step follows from u, v = u ⊤ v = v ⊤ u, and the last step follows from u, v\n• w = v, u • w . Proof of Part 7. For k ∈ [m], dh(A 2 f (x)) k dx i = h ′ (A 2 f (x)) k • d(A 2 f (x)) k dx i = h ′ (A 2 f (x)) k • d A ⊤ 2,k, * , f (x) dx i = h ′ (A 2 f (x)) k • A ⊤ 2,k, * n×1 , df (x) dx i n×1 = h ′ (A 2 f (x)) k • A ⊤ 2,k, * , -f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i\nwhere the 1st step follows from Definition 3.5, the 2nd step follows from uv = u ⊤ , v , the 3rd step follows from Fact A.2, and the last step follows from the result of Part 4. Thus,\ndh(A 2 f (x)) dx i m×1 = diag(h ′ (A 2 f (x))) m×m • A 2 m×n • (-f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i ) n×1 Proof of Part 8. dc(x) dx i = dh(A 2 f (x)) -b dx i = dh(A 2 f (x)) dx i - db dx i = dh(A 2 f (x)) dx i\nwhere the 1st step follows from Definition 3.6, the 2nd step follows from the differential rules, and the last step follows from simple algebra. Proof of Part 9.\ndL(x) dx i = d dx i ( 1 2 c(x) 2 2 ) = (c(x)) ⊤ dc(x) dx i = (c(x)) ⊤ dh(A 2 f (x)) dx i = (c(x)) ⊤ 1×m • diag(h ′ (A 2 f (x))) m×m • A 2 m×n • (-f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i ) n×1 ,\nwhere the 1st step follows from Definition 3.7, the 2nd step follows from the differential chain rule, the 3rd step follows from the result of Part 8, and the last step follows from the results of Part 7.\nProof of Part 10. 
For k ∈ [m], by using the chain rule, we have\ndh ′ (A 2 f (x)) k dx i = h ′′ (A 2 f (x)) k • d(A 2 f (x)) k dx i .(3)\nAdditionally, by Proof of Part 7, we have\nh ′ (A 2 f (x)) k • d(A 2 f (x)) k dx i = h ′ (A 2 f (x)) k • A ⊤ 2,k, * , -f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i , which implies that d(A 2 f (x)) k dx i = A ⊤ 2,k, * , -f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i .(4)\nTaking Eq. (4) into Eq. (3), we have\ndh ′ (A 2 f (x)) k dx i = h ′′ (A 2 f (x)) k • A ⊤ 2,k, * , -f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i .\nTherefore, we have\ndh ′ (A 2 f (x)) dx i m×1 = diag(h ′′ (A 2 f (x))) m×m • A 2 m×n • (-f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i ) n×1 ,\nwhich completes the proof." }, { "figure_ref": [], "heading": "C Hessian", "publication_ref": [], "table_ref": [], "text": "In Section C.1, we compute the Hessian of u(x). In Section C.2, we compute the Hessian of α(x).\nIn Section C.3, we compute the Hessian of α(x) -1 . In Section C.4, we compute the Hessian of f (x). In Section C.5, we compute the Hessian of L(x). In Section C.6, we reorganize the expression of the Hessian of L(x) by analyzing B(x)." }, { "figure_ref": [], "heading": "C.1 Hessian of u(x)", "publication_ref": [], "table_ref": [], "text": "In this section, we study the Hessian of u(x)." }, { "figure_ref": [], "heading": "Lemma C.1. If the following condition holds", "publication_ref": [], "table_ref": [], "text": "• Let u(x) be defined as Definition 3.1.\nThen for each i ∈ [d] and j ∈ [d], we have\n• Part 1. d 2 u(x) dx 2 i = A 1, * ,i • u(x) • A 1, * ,i • Part 2. d 2 u(x) dx i dx j = A 1, * ,j • u(x) • A 1, * ,i\nProof. Proof of Part 1.\nd 2 u(x) dx 2 i = d dx i ( du(x) dx i ) = d dx i (u(x) • A 1, * ,i ) = A 1, * ,i • du(x) dx i = A 1, * ,i • u(x) • A 1, * ,i\nwhere the 1st step follows from the differential rules, the 2nd step follows from the result of Part 1 in Lemma B.1, the 3rd step follows from Fact A.2, and the last step follows from the result of Part 1 in Lemma B.1.\nProof of Part 2.\nd 2 u(x) dx i dx j = d dx i ( du(x) dx j ) = d dx i (u(x) • A 1, * ,j ) = A 1, * ,j • du(x) dx i = A 1, * ,j • u(x) • A 1, * ,i\nwhere the 1st step follows from the differential rules, the 2nd step follows from the result of Part 1 in Lemma B.1, the 3rd step follows from Fact A.2, and the last step follows from the result of Part 1 in Lemma B.1." }, { "figure_ref": [], "heading": "C.2 Hessian of α(x)", "publication_ref": [], "table_ref": [], "text": "In this section, we study the Hessian of α(x)." }, { "figure_ref": [], "heading": "Lemma C.2. If the following condition holds", "publication_ref": [], "table_ref": [], "text": "• Let α(x) be defined as Definition 3.2.\nThen for each i ∈ [d] and j ∈ [d], we have\n• Part 1. d 2 α(x) dx 2 i = u(x), A 1, * i • A 1, * ,i • Part 2. d 2 α(x) dx i dx j = u(x), A 1, * ,i • A 1, * ,j\nProof. 
Proof of Part 1.\nd 2 α(x) dx 2 i = d dx i ( dα(x) dx i ) = d u(x), A 1, * ,i dx i = A ⊤ 1, * ,i du(x) dx i = A ⊤ 1, * ,i • u(x) • A 1, * ,i = u(x), A 1, * i • A 1, * ,i\nwhere the 1st step follows from the differential rules, the 2nd step follows from the result of Part 2 in Lemma B.1, the 3rd step follows from u, v = v ⊤ u, the fourth step follows from the result of Part 1 in Lemma B.1, the last step follows from simepl algebra.\nProof of Part 2.\nd 2 α(x) dx i dx j = d dx i ( dα(x) dx j ) = d u(x), A 1, * ,j dx i = A ⊤ 1, * ,j du(x) dx i = A ⊤ 1, * ,j • u(x) • A 1, * ,i = u(x), A 1, * ,i • A 1, * ,j\nwhere the 1st step follows from the differential rules, the 2nd step follows from the result of Part 2 in Lemma B.1, the 3rd step follows from u, v = v ⊤ u, the fourth step follows from the result of Part 1 in Lemma B.1, the last step follows from simple algebra." }, { "figure_ref": [], "heading": "C.3 Hessian of α(x) -1", "publication_ref": [], "table_ref": [], "text": "In this section, we study the Hessian of α(x) -1 ." }, { "figure_ref": [], "heading": "Lemma C.3. If the following condition holds", "publication_ref": [], "table_ref": [], "text": "• Let α(x) be defined as Definition 3.2\n• Let f (x) be defined as Definition 3.3 Then for each i ∈ [d] and j ∈ [d], we have\n• Part 1. d 2 α(x) -1 dx 2 i = 2α(x) -1 • f (x), A 1, * ,i 2 -α(x) -1 f (x), A 1, * ,i • A 1, * ,i • Part 2. d 2 α(x) -1 dx i dx j = 2α(x) -1 • f (x), A 1, * ,i f (x), A 1, * ,j -α(x) -1 f (x), A 1, * ,i • A 1, * ,j\nProof. Proof of Part 1.\nd 2 α(x) -1 dx 2 i = d dx i ( dα(x) -1 dx i ) = d dx i (-α(x) -1 • f (x), A 1, * ,i ) = - dα(x) -1 dx i • f (x), A 1, * ,i -α(x) -1 d f (x), A 1, * ,i dx i = α(x) -1 • f (x), A 1, * ,i 2 -α(x) -1 (-f (x), A 1, * ,i 2 + f (x), A 1, * ,i • A 1, * ,i ) = 2α(x) -1 • f (x), A 1, * ,i 2 -α(x) -1 f (x), A 1, * ,i • A 1, * ,i\nwhere the 1st step follows from the differential rules, the 2nd step follows from the result of Part 3 in Lemma B.1, the 3rd step follows from the differential product rule, the 4th step follows from the results of Part 3 and Part 5 in Lemma B.1, and the last step follows from simple algebra. Proof of Part 2.\nd 2 α(x) -1 dx i dx j = d dx i ( dα(x) -1 dx j ) = d dx i (-α(x) -1 • f (x), A 1, * ,j ) = - dα(x) -1 dx i • f (x), A 1, * ,j -α(x) -1 d f (x), A 1, * ,j dx i = α(x) -1 • f (x), A 1, * ,i f (x), A 1, * ,j -α(x) -1 (-f (x), A 1, * ,j • f (x), A 1, * ,i + f (x), A 1, * ,i • A 1, * ,j ) = 2α(x) -1 • f (x), A 1, * ,i f (x), A 1, * ,j -α(x) -1 f (x), A 1, * ,i • A 1, * ,j\nwhere the 1st step follows from the differential rules, the 2nd step follows from the result of Part 3 in Lemma B.1, the 3rd step follows from the differential product rule, the 4th step follows from the results of Part 3 and Part 6 in Lemma B.1, and the last step follows from simple algebra." }, { "figure_ref": [], "heading": "C.4 Hessian of f (x)", "publication_ref": [], "table_ref": [], "text": "In this section, we study the Hessian of f (x)." }, { "figure_ref": [], "heading": "Lemma C.4. If the following conditions hold", "publication_ref": [], "table_ref": [], "text": "• Let f (x) be defined as Definition 3.3.\nThen for each i ∈ [d] and j ∈ [d], we have\n• Part 1. d 2 f (x) dx 2 i = 2 f (x), A 1, * ,i 2 f (x) -f (x), A 1, * ,i • A 1, * ,i f (x) -2 f (x), A 1, * ,i f (x) • A 1, * ,i + A 1, * ,i • f (x) • A 1, * ,i • Part 2. 
d 2 f (x) dx i dx j = 2 f (x), A 1, * ,i f (x), A 1, * ,j f (x) -f (x), A 1, * ,i • A 1, * ,j f (x) -f (x), A 1, * ,j f (x) • A 1, * ,i -f (x), A 1, * ,i f (x) • A 1, * ,j + A 1, * ,i • f (x) • A 1, * ,j • Part 3. dp i (x) dx i = d 2 f (x) dx 2 i • Part 4. dp i (x) dx j = d 2 f (x) dx i dx j\nProof. Proof of Part 1.\nd 2 f (x) dx 2 i = d dx i ( df (x) dx i ) = d dx i (-f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i ) = - d f (x), A 1, * ,i dx i f (x) -f (x), A 1, * ,i df (x) dx i + A 1, * ,i • df (x) dx i = f (x), A 1, * ,i 2 f (x) -f (x), A 1, * ,i • A 1, * ,i f (x) + f (x), A 1, * ,i 2 f (x) -f (x), A 1, * ,i f (x) • A 1, * ,i -f (x), A 1, * ,i f (x) • A 1, * ,i + A 1, * ,i • f (x) • A 1, * ,i = 2 f (x), A 1, * ,i 2 f (x) -f (x), A 1, * ,i • A 1, * ,i f (x) -2 f (x), A 1, * ,i f (x) • A 1, * ,i + A 1, * ,i • f (x) • A 1, * ,i\nwhere the 1st step follows from the differential rules, the 2nd step follows from the result of Part 4 in Lemma B.1, the 3rd step follows from the differential product rule, the 4th step follows from the results of Part 4 and Part 5 in Lemma B.1, and the last step follows from simple algebra. Proof of Part 2.\nd 2 f (x) dx i dx j = d dx i ( df (x) dx j ) = d dx i (-f (x), A 1, * ,j • f (x) + f (x) • A 1, * ,j ) = - d f (x), A 1, * ,j dx i f (x) -f (x), A 1, * ,j df (x) dx i + A 1, * ,j • df (x) dx i = f (x), A 1, * ,i f (x), A 1, * ,j f (x) -f (x), A 1, * ,i • A 1, * ,j f (x) + f (x), A 1, * ,j f (x), A 1, * ,i f (x) -f (x), A 1, * ,j f (x) • A 1, * ,i -f (x), A 1, * ,i f (x) • A 1, * ,j + A 1, * ,i • f (x) • A 1, * ,j = 2 f (x), A 1, * ,i f (x), A 1, * ,j f (x) -f (x), A 1, * ,i • A 1, * ,j f (x) -f (x), A 1, * ,j f (x) • A 1, * ,i -f (x), A 1, * ,i f (x) • A 1, * ,j + A 1, * ,i • f (x) • A 1, * ,j\nwhere the 1st step follows from the differential rules, the 2nd step follows from the result of Part 4 in Lemma B.1, the 3rd step follows from the differential product rule, the 4th step follows from the results of Part 4 and Part 6 in Lemma B.1, and the last step follows from simple algebra.\nProof of Part 3 dp(x) i dx i = d dx i df (x) dx i = d 2 f (x) dx 2 i\nwhere the first step follows from the expansion of hessian, the second step follows from Part 4 of Lemma B.1. Proof of Part 4\ndp(x) i dx j = d dx j df (x) dx i = d 2 f (x) dx i dx j\nwhere the first step follows from the expansion of hessian, the second step follows from Part 4 of Lemma B.1." }, { "figure_ref": [], "heading": "C.5 Hessian of L(x)", "publication_ref": [], "table_ref": [], "text": "In this section, we study the Hessian of L(x).\nLemma C.5. Let Q 2 (x) and q 2 (x) be defined as in Definition 4.2. We further define\n• B 1 (x) ∈ R n×n such that A ⊤ 1, * ,i B 1 (x)A 1, * ,j = (Q 2 (x) • (-f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i )) ⊤ • (Q 2 (x) • (-f (x), A 1, * ,j • f (x) + f (x) • A 1, * ,j )) • B 2 (x) ∈ R n×n such that A ⊤ 1, * ,i B 2 (x)A 1, * ,j = q 2 (x) ⊤ • (2 f (x), A 1, * ,i f (x), A 1, * ,j f (x) -f (x), A 1, * ,i • A 1, * ,j f (x) -f (x), A 1, * ,j f (x) • A 1, * ,i -f (x), A 1, * ,i f (x) • A 1, * ,j + A 1, * ,i • f (x) • A 1, * ,j )\nThen we have\n• Part 1. 
d 2 L dx 2 i = Q 2 (x) m×n • p(x) i n×1 2 2 + c(x) m×1 , diag(diag(h ′′ (A 2 f (x))) • A 2 • p(x) i ) • A 2 m×n • p(x) i n×1 + c(x) m×1 , 2 Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 2 -c(x) m×1 , Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 • A 1, * ,i n×1 -c(x) m×1 , 2 Q 2 (x) m×n •(f (x) n×1 • A 1, * ,i n×1 ) • f (x) n×1 , A 1, * ,i n×1 + c(x) m×1 , Q 2 (x) m×n •(A 1, * ,i n×1 • f (x) n×1 • A 1, * ,i n×1 ) • Part 2. d 2 L dx i dx j = Q 2 (x) m×n • p(x) j n×1 , Q 2 (x) m×n • p(x) i n×1 + c(x) m×1 , diag(diag(h ′′ (A 2 f (x))) • A 2 • p(x) j ) • A 2 m×n • p(x) i n×1 + c(x) m×1 , 2 Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 f (x) n×1 , A 1, * ,j n×1 -c(x) m×1 , Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 • A 1, * ,j n×1 -c(x) m×1 , 2 Q 2 (x) m×n •(f (x) n×1 • A 1, * ,j n×1 ) • f (x) n×1 , A 1, * ,i n×1 + c(x) m×1 , Q 2 (x) m×n •(A 1, * ,i n×1 • f (x) n×1 • A 1, * ,j n×1 ) Proof. Proof of Part 1. We can show d 2 L dx 2 i = d dx i ( dL dx i ) = d dx i c(x), Q 2 (x) • p(x) i = d dx i c(x), Q 2 (x) • p(x) i + c(x), d dx i (Q 2 (x)) • p(x) i + c(x), Q 2 (x) • d dx i (p(x) i ) ,(5)\nwhere the 1st step follows from the differential rules, the 2nd step follows from the result of Part 9 in Lemma B.1. First, we compute the first term of Eq. ( 5):\nd dx i c(x), Q 2 (x) • p(x) i = diag(h ′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1 , Q 2 (x) m×n • p(x) i n×1 = Q 2 (x) m×n • p(x) i n×1 , Q 2 (x) m×n • p(x) i n×1 = Q 2 (x) • p(x) i 2 2 ,(6)\nwhere the first step follows from Part 7 and Part 8 of Lemma B.1, the second step follows from the definition of Q 2 (x) (see Definition 4.2), and the last step follows from the definition of • 2 . Second, we compute the second term of Eq. (5).\nConsider d dx i (Q 2 (x)): we have d dx i (Q 2 (x)) = d dx i (diag(h ′ (A 2 f (x)))A 2 ) = d dx i (diag(h ′ (A 2 f (x)))) • A 2 = diag( d dx i (h ′ (A 2 f (x)))) • A 2 = diag(diag(h ′′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1 ) • A 2 m×n ,\nwhere the first step follows from the definition of Q 2 (x) (see Definition 4.2), the second step follows from the fact that A 2 is a constant matrix with respect to x i , the third step follows from the definition of diag, and the last step follows from Part 10 of Lemma B.1.\nTherefore, we have\nc(x), d dx i (Q 2 (x)) • p(x) i = c(x) m×1 , diag(diag(h ′′ (A 2 f (x))) • A 2 • p(x) i ) • A 2 m×n • p(x) i n×1(7)\nThird, we compute the third term of Eq. (5):\nc(x), Q 2 (x) • d dx i (p(x) i ) = c(x), Q 2 (x)• (2 f (x), A 1, * ,i 2 f (x) -f (x), A 1, * ,i • A 1, * ,i f (x) -2 f (x), A 1, * ,i f (x) • A 1, * ,i + A 1, * ,i • f (x) • A 1, * ,i ) = c(x), 2Q 2 (x) f (x), A 1, * ,i 2 f (x) -c(x), Q 2 (x) f (x), A 1, * ,i • A 1, * ,i f (x) -c(x), 2Q 2 (x) f (x), A 1, * ,i f (x) • A 1, * ,i + c(x), Q 2 (x)A 1, * ,i • f (x) A 1, * ,i = c(x), 2Q 2 (x) • f (x) • f (x), A 1, * ,i 2 -c(x), Q 2 (x) • f (x) • f (x), A 1, * ,i • A 1, * ,i -c(x), 2Q 2 (x) • (f (x) • A 1, * ,i ) • f (x), A 1, * ,i + c(x), Q 2 (x) • (A 1, * ,i • f (x) • A 1, * ,i ) ,(8)\nwhere the first step follows from Part 1 and Part 3 of Lemma C.4, the second step follows from Fact A.1, and the third step follows from Fact A.1. Combining Eq. (5), Eq. ( 6), Eq. ( 7), Eq. 
( 8), we have\nd 2 L dx 2 i = Q 2 (x) m×n • p(x) i n×1 2 2 + c(x) m×1 , diag(diag(h ′′ (A 2 f (x))) • A 2 • p(x) i ) • A 2 m×n • p(x) i n×1 + c(x) m×1 , 2 Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 2 -c(x) m×1 , Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 • A 1, * ,i n×1 -c(x) m×1 , 2 Q 2 (x) m×n •(f (x) n×1 • A 1, * ,i n×1 ) • f (x) n×1 , A 1, * ,i n×1 + c(x) m×1 , Q 2 (x) m×n •(A 1, * ,i n×1 • f (x) n×1 • A 1, * ,i n×1 ) Proof of Part 2. We can show d 2 L dx i dx j = d dx j ( dL dx i ) = d dx j c(x), Q 2 (x) • p(x) i = d dx j c(x), Q 2 (x) • p(x) i + c(x), d dx j (Q 2 (x)) • p(x) i + c(x), Q 2 (x) • d dx j (p(x) i ) ,(9)\nwhere the 1st step follows from the differential rules, the 2nd step follows from the result of Part 9 in Lemma B.1.\nFirst, we compute the first term of Eq. ( 9):\nd dx j c(x), Q 2 (x) • p(x) i = diag(h ′ (A 2 f (x))) m×m • A 2 m×n • p(x) j n×1 , Q 2 (x) m×n • p(x) i n×1 = Q 2 (x) m×n • p(x) j n×1 , Q 2 (x) m×n • p(x) i n×1(10)\nwhere the first step follows from Part 7 and Part 8 of Lemma B.1, the second step follows from the definition of Q 2 (x) (see Definition 4.2). Second, we compute the second term of Eq. ( 9). Consider d dx j (Q 2 (x)): we have\nd dx j (Q 2 (x)) = d dx j (diag(h ′ (A 2 f (x)))A 2 ) = d dx j (diag(h ′ (A 2 f (x)))) • A 2 = diag( d dx j (h ′ (A 2 f (x)))) • A 2 = diag(diag(h ′′ (A 2 f (x))) m×m • A 2 m×n • p(x) j n×1 ) • A 2 m×n ,\nwhere the first step follows from the definition of Q 2 (x) (see Definition 4.2), the second step follows from the fact that A 2 is a constant matrix with respect to x i , the third step follows from the definition of diag, and the last step follows from Part 10 of Lemma B.1. Therefore, we have\nc(x), d dx j (Q 2 (x)) • p(x) i = c(x) m×1 , diag(diag(h ′′ (A 2 f (x))) • A 2 • p(x) j ) • A 2 m×n • p(x) i n×1(11)\nThird, we compute the third term of Eq. (9):\nc(x), Q 2 (x) • d dx j (p(x) i ) = c(x), Q 2 (x)• (2 f (x), A 1, * ,i f (x), A 1, * ,j f (x) -f (x), A 1, * ,i • A 1, * ,j f (x) -f (x), A 1, * ,j f (x) • A 1, * ,i -f (x), A 1, * ,i f (x) • A 1, * ,j + A 1, * ,i • f (x) • A 1, * ,j ) = c(x), 2Q 2 (x) f (x), A 1, * ,i f (x), A 1, * ,j f (x) -c(x), Q 2 (x) f (x), A 1, * ,i • A 1, * ,j f (x) -c(x), 2Q 2 (x) f (x), A 1, * ,i f (x) • A 1, * ,j + c(x), Q 2 (x)A 1, * ,i • f (x) • A 1, * ,j = c(x), 2Q 2 (x) • f (x) • f (x), A 1, * ,i f (x), A 1, * ,j -c(x), Q 2 (x) • f (x) • f (x), A 1, * ,i • A 1, * ,i -c(x), 2Q 2 (x) • (f (x) • A 1, * ,j ) • f (x), A 1, * ,i + c(x), Q 2 (x) • (A 1, * ,i • f (x) • A 1, * ,j ) ,(12)\nwhere the first step follows from Part 1 and Part 3 of Lemma C.4, the second step follows from Fact A.1, and the third step follows from Fact A.1. Combining Eq. ( 9), Eq. (10), Eq. ( 11), Eq. ( 12), we have\nd 2 L dx i dx j = Q 2 (x) m×n • p(x) j n×1 , Q 2 (x) m×n • p(x) i n×1 + c(x) m×1 , diag(diag(h ′′ (A 2 f (x))) • A 2 • p(x) j ) • A 2 m×n • p(x) i n×1 + c(x) m×1 , 2 Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 f (x) n×1 , A 1, * ,j n×1 -c(x) m×1 , Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 • A 1, * ,j n×1 -c(x) m×1 , 2 Q 2 (x) m×n •(f (x) n×1 • A 1, * ,j n×1 ) • f (x) n×1 , A 1, * ,i n×1 + c(x) m×1 , Q 2 (x) m×n •(A 1, * ,i n×1 • f (x) n×1 • A 1, * ,j n×1 ) C.6 Re-oragnizing B(x)\nIn this section, we reorganize B(x).\nLemma C.6. 
Let d 2 L(x) dx i dx j be computed in Lemma C.5, then, we have\nd 2 L dx 2 = A ⊤ B(x)A\nwhere\nB(x) = diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) + diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ + f (x)f (x) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) + f (x) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ + 2f (x)c(x) ⊤ Q 2 (x)f (x)f (x) ⊤ + 2f (x)c(x) ⊤ Q 2 (x) diag(f (x)) + diag(Q 2 (x) ⊤ c(x)) diag(f (x)) + diag(f (x))A 2 diag(h ′′ (A 2 f (x))) diag(c(x)) diag(f (x)) + diag(f (x))A 2 diag(h ′′ (A 2 f (x))) diag(c(x))f (x)f (x) ⊤ + f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) diag(c(x)) diag(f (x)) + f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) diag(c(x))f (x)f (x) ⊤ + diag(f (x)f (x) ⊤ Q 2 (x) ⊤ c(x))\nProof. For the first term, we have\nQ 2 (x)p(x) j , Q 2 (x)p(x) i = Q 2 (x)(f (x) • A 1, * ,i + f (x), A 1, * ,i • f (x)), Q 2 (x)(f (x) • A 1, * ,j + f (x), A 1, * ,j • f (x)) = Q 2 (x)f (x) • A 1, * ,i + Q 2 (x) f (x), A 1, * ,i • f (x), Q 2 (x)f (x) • A 1, * ,j + Q 2 (x) f (x), A 1, * ,j • f (x) = (Q 2 (x)f (x) • A 1, * ,i ) ⊤ Q 2 (x)f (x) • A 1, * ,j + (Q 2 (x)f (x) • A 1, * ,i ) ⊤ Q 2 (x) f (x), A 1, * ,j • f (x) + (Q 2 (x) f (x), A 1, * ,i • f (x)) ⊤ Q 2 (x)f (x) • A 1, * ,j + (Q 2 (x) f (x), A 1, * ,i • f (x)) ⊤ Q 2 (x) f (x), A 1, * ,j • f (x) = A ⊤ 1, * ,i diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x))A 1, * ,j + A ⊤ 1, * ,i diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ A 1, * ,j + A ⊤ 1, * ,i f (x)f (x) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x))A 1, * ,j + A ⊤ 1, * ,i f (x)f (x) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ A 1, * ,j\nFor the second term, we first define\nF : = diag(h ′′ (A 2 f (x))) M i : = A 2 • p(x) i\nthen the second term can be reformed as\nc(x), diag(F M j )M i = c(x), (F M j ) • M i = c(x) ⊤ (F M j ) • M i = (F M j ) ⊤ c(x) • M i = M ⊤ j F diag(c(x))M i By substitute M i with A 2 • p(x) i , we have M ⊤ j F diag(c(x))M i = (A 2 p(x) j ) ⊤ F diag(c(x))A 2 p(x) i = p(x) ⊤ j A 2 F diag(c(x))A 2 p(x) i Let D := A 2 F diag(c(x)) = A 2 diag(h ′′ (A 2 f (x))) diag(c(x))\nthen by the definition of p(x) i , we have\np(x) ⊤ j A 2 F diag(c(x))A 2 p(x) i = (f (x) • A 1, * ,j -f (x), A 1, * ,j • f (x)) ⊤ D(f (x) • A 1, * ,i -f (x), A 1, * ,i • f (x)) = (f (x) ⊤ • A ⊤ 1, * ,j -f (x), A 1, * ,j • f (x) ⊤ )D(f (x) • A 1, * ,i -f (x), A 1, * ,i • f (x)) = f (x) ⊤ • A ⊤ 1, * ,j Df (x) • A 1, * ,i -f (x) ⊤ • A ⊤ 1, * ,j D f (x), A 1, * ,i • f (x) -f (x), A 1, * ,j • f (x)Df (x) • A 1, * ,i + f (x), A 1, * ,j • f (x)D f (x), A 1, * ,i • f (x) = A ⊤ 1, * ,j diag(f (x))D diag(f (x))A 1, * ,i -A ⊤ 1, * ,j diag(f (x))Df (x)f (x) ⊤ A 1, * ,i -A ⊤ 1, * ,j f (x)f (x)D diag(f (x))A 1, * ,i + A ⊤ 1, * ,j f (x)f (x)Df (x)f (x) ⊤ A 1, * ,i\nFor the third term, we have\nc(x), 2Q 2 (x) • f (x) • f (x), A 1, * ,i f (x), A 1, * ,j = 2A ⊤ 1, * ,i f (x)c(x) ⊤ Q 2 (x)f (x)f (x) ⊤ A 1, * ,j\nFor the fourth term, we have\nc(x), Q 2 (x) • f (x) • f (x), A 1, * ,i • A 1, * ,j = c(x) ⊤ Q 2 (x)f (x)f (x) ⊤ A 1, * ,i • A 1, * ,j = A ⊤ 1, * ,i diag(f (x)f (x) ⊤ Q 2 (x) ⊤ c(x))A 1, * ,j\nFor the fifth term, we have\nc(x), 2Q 2 (x) • (f (x) • A 1, * ,j ) • f (x), A 1, * ,i = 2A ⊤ 1, * ,i f (x)c(x) ⊤ Q 2 (x) diag(f (x))A 1, * ,j\nFor the sixth term, we have\nc(x), Q 2 (x) • (A 1, * ,i • f (x) • A 1, * ,j ) = A ⊤ 1, * ,i diag(Q 2 (x) ⊤ c(x)) diag(f (x))A 1, * ,j\nThus, we have\nd 2 L dx 2 = A ⊤ B(x)A\nwhere\nB(x) = diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) + diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ + f (x)f (x) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) + f (x) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ + 2f (x)c(x) ⊤ Q 2 (x)f (x)f (x) ⊤ + 2f (x)c(x) ⊤ Q 2 (x) diag(f (x)) + diag(Q 2 (x) ⊤ c(x)) 
diag(f (x)) + diag(f (x))A 2 diag(h ′′ (A 2 f (x))) diag(c(x)) diag(f (x)) + diag(f (x))A 2 diag(h ′′ (A 2 f (x))) diag(c(x))f (x)f (x) ⊤ + f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) diag(c(x)) diag(f (x)) + f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) diag(c(x))f (x)f (x) ⊤ + diag(f (x)f (x) ⊤ Q 2 (x) ⊤ c(x))" }, { "figure_ref": [], "heading": "D Hessian is Positive definite", "publication_ref": [], "table_ref": [], "text": "In Section D.1, we show the final PSD lower bounds we have for B(x). In Section D.2, we present a list of tools for bounding different parts of the matrix B(x). In Section D.3, we analyze the lower bound on the Hessian of L(x)." }, { "figure_ref": [], "heading": "D.1 PSD Lower Bound: final bound", "publication_ref": [], "table_ref": [], "text": "In this section, we analyze the PSD lower bound for B(x)." }, { "figure_ref": [], "heading": "Lemma D.1. If the following conditions hold", "publication_ref": [], "table_ref": [], "text": "• Let B(x) ∈ R n×n be defined as Definition 4.4." }, { "figure_ref": [], "heading": "Then we have", "publication_ref": [], "table_ref": [], "text": "-12R h L h R(R + R h )I n B(x) 12R h L h R(R + R h )I n Proof. It follows from Lemma D.2 that we have max{ Q 2 (x) 2 , 2(L h + 1) Q 2 (x) , (R + R h )L h R, R h R(R + R h )} ≤ 12R h L h R(R + R h ) min{ Q 2 (x) 2 , 2(L h + 1) Q 2 (x) , (R + R h )L h R, R h R(R + R h )} ≥ -12R h L h R(R + R h )\nThus, we have\n-12R h L h R(R + R h )I n B(x) 12R h L h R(R + R h )I n" }, { "figure_ref": [], "heading": "D.2 PSD Lower Bound: A list of Tools", "publication_ref": [], "table_ref": [], "text": "In this section, we analyze the PSD lower bound for different parts of B(x).\nLemma D.2. If the following conditions hold\n• f (x) 1 = 1 (see Definition 3.3).\n• Let B(x) ∈ R n×n be defined as Definition 4.4.\n• Let f (x) ≥ 0 n . • Let b ≥ 0 n . • B 1 = diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) • B 2 = diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ • B 3 = f (x)f (x) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) • B 4 = f (x) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ • B 5 = 2f (x)c(x) ⊤ Q 2 (x)f (x)f (x) ⊤ • B 6 = 2f (x)c(x) ⊤ Q 2 (x) diag(f (x)) • B 7 = diag(Q 2 (x) ⊤ c(x)) diag(f (x)) • B 8 = diag(f (x))A 2 diag(h ′′ (A 2 f (x))) diag(c(x)) diag(f (x)) • B 9 = diag(f (x))A 2 diag(h ′′ (A 2 f (x))) diag(c(x))f (x)f (x) ⊤ • B 10 = f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) diag(c(x)) diag(f (x)) • B 11 = f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) diag(c(x))f (x)f (x) ⊤ • B 12 = diag(f (x)f (x) ⊤ Q 2 (x) ⊤ c(x))\nThen we have\n• Part 1. -Q 2 (x) 2 • I n B 1 Q 2 (x) 2 • I n • Part 2. -Q 2 (x) 2 • I n B 2 Q 2 (x) 2 • I n • Part 3. -Q 2 (x) 2 • I n B 3 Q 2 (x) 2 • I n • I n • Part 4. -Q 2 (x) 2 • I n B 4 Q 2 (x) 2 • I n • Part 5. -2(L h + 1) Q 2 (x) • I n B 5 2(L h + 1) Q 2 (x) • I n • Part 6. -2(L h + 1) Q 2 (x) • I n B 5 2(L h + 1) Q 2 (x) • I n • Part 7. -2(L h + 1) Q 2 (x) • I n B 5 2(L h + 1) Q 2 (x) • I n • Part 8. -(R + R h )L h R • I n B 8 (R + R h )L h R • I n • Part 9. -(R + R h )L h R • I n B 9 (R + R h )L h R • I n • Part 10. -(R + R h )L h R • I n B 10 (R + R h )L h R • I n • Part 11. -(R + R h )L h R • I n B 11 (R + R h )L h R • I n • Part 12. -R h R(R + R h ) • I n B 12 R h R(R + R h ) • I n Proof. Proof of Part 1. We know that diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) ≤ Q 2 (x) 2 diag(f (x)) 2 2 ≤ Q 2 (x) 2\nwhere the first step follows from Fact A.3 and Fact A.4, and the second step follows from Fact A.3 and f (x) 1 ≤ 1. Thus, we have\n-Q 2 (x) • I n B 2 Q 2 (x) • I n Proof of Part 2. 
We have diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ ≤ Q 2 (x) 2 f (x) 2 2 diag(f (x)) ≤ Q 2 (x) 2\nwhere the first step follows from Fact A.3 and Fact A.4, and the second step follows from Fact A.3 and f (x) 1 ≤ 1.\nThus, we have\n-Q 2 (x) • I n B 2 Q 2 (x) • I n Proof of Part 3 We have f (x)f (x) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) ≤ Q 2 (x) 2 f (x) 2 2 diag(f (x)) ≤ Q 2 (x) 2\nwhere the first step follows from Fact A.3 and Fact A.4, and the second step follows from Fact A.3 and f (x) 1 ≤ 1.\nThus, we have\n-Q 2 (x) 2 • I n B 3 Q 2 (x) 2 • I n Proof of Part 4 We have f (x) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ ≤ Q 2 (x) 2 f (x) 3 2 ≤ Q 2 (x) 2\nwhere the first step follows from Fact A.3 and Fact A.4, and the second step follows from Fact A.3 and f (x) 1 ≤ 1. Thus, we have\n-Q 2 (x) 2 • I n B 4 Q 2 (x) 2 • I n Proof of Part 5 We have 2f (x)c(x) ⊤ Q 2 (x)f (x)f (x) ⊤ ≤ 2 f (x) 3 2 c(x) 2 Q 2 (x) ≤ 2(L h + 1) Q 2 (x)\nwhere the first step follows from Fact A.3 and Fact A.4, and the second step follows from Fact A.3 , f (x) 1 ≤ 1, the definition of c(x) and h(x) 2 ≤ L h .\nThus, we have\n-2(L h + 1) Q 2 (x) • I n B 5 2(L h + 1) Q 2 (x) • I n Proof of Part 6 We have 2f (x)c(x) ⊤ Q 2 (x) diag(f (x)) ≤ 2 f (x) 2 2 diag(f (x)) Q 2 (x) ≤ 2(L h + 1) Q 2 (x)\nwhere the first step follows from Fact A.3 and Fact A.4, and the second step follows from Fact A.3 , f (x) 1 ≤ 1, the definition of c(x) and h(x) 2 ≤ L h . Thus, we have\n-2(L h + 1) Q 2 (x) • I n B 6 2(L h + 1) Q 2 (x) • I n Proof of Part 7 We have diag(Q 2 (x) ⊤ c(x)) diag(f (x)) ≤ diag(Q 2 (x) ⊤ c(x)) diag(f (x)) ≤ Q 2 (x) ⊤ c(x) f (x) 2 ≤ Q 2 (x) c(x) 2 f (x) 2 ≤ 2(L h + 1) Q 2 (x)\nwhere the first step follows from Fact A.4, the second step follows from Fact A.4, the third step follows from Fact A.4 and Fact A.3, the last step follows from Fact A.3 , f (x) 1 ≤ 1, the definition of c(x) and h(x) 2 ≤ L h . Thus, we have\n-2(L h + 1) Q 2 (x) • I n B 7 2(L h + 1) Q 2 (x) • I n Proof of Part 8 We have diag(f (x))A 2 diag(h ′′ (A 2 f (x))) diag(c(x)) diag(f (x)) ≤ A 2 diag(f (x)) 2 diag(c(x)) diag(h ′′ (A 2 f (x))) ≤ R(R + R h ) h ′′ (A 2 f (x)) 2 ≤ R(R + R h )L h\nwhere the first step follows from Fact A.4, the second step follows from Fact A.3 , f (x) 1 ≤ 1 and Part 2 of Lemma E.2, the last step follows from h ′′ (x) 2 ≤ L h .\nThus, we have \n-(R + R h )L h R • I n B 8 (R + R h )L h R • I n Proof of\n(f (x)f (x) ⊤ Q 2 (x) ⊤ c(x)) ≤ f (x)f (x) ⊤ Q 2 (x) ⊤ c(x) 2 ≤ f (x) 2 2 Q 2 (x) c(x) 2 ≤ R h R(R + R h )\nwhere the first step follows from Fact A.4, the second step follows from Fact A.3, the last step follows from Fact A.3, f (x) 1 , Part 2 and Part 3 of Lemma E.2.\nThus, we have\n-R h R(R + R h ) • I n B 12 R h R(R + R h ) • I n D." }, { "figure_ref": [], "heading": "Lower Bound on Hessian", "publication_ref": [], "table_ref": [], "text": "The goal of this section is to prove Lemma D.3." }, { "figure_ref": [], "heading": "Lemma D.3. If the following conditions hold", "publication_ref": [], "table_ref": [], "text": "• Let A 1 ∈ R n×d .\n• Let L tot be defined in Definition 3.10\n• Let W = diag(w) ∈ R n×n .\n• Let W 2 ∈ R n×n denote the matrix that i-th diagonal entry is w 2 i,i .\n• Let σ min (A 1 ) denote the minimum singular value of A 1 .\n• Let l > 0 denote a scalar.\nThen, we have\n• Part 1. If all i ∈ [n], w 2 i ≥ 12R h L h R(R + R h ) + l/σ min (A 1 ) 2 , then d 2 L dx 2 l • I d • Part 2. If all i ∈ [n], w 2 i ≥ 100 + 12R h L h R(R + R h ) + l/σ min (A 1 ) 2 , then (1 -1/10) • (B(x) + W 2 ) W 2 (1 + 1/10) • (B(x) + W 2 )\nProof. 
Proof of Part 1 By applying Lemma C.6, we have\nd 2 L dx 2 = A ⊤ 1 B(x)A 1 where B(x) -12R h L h R(R + R h )I n(13)\nAlso, we have\nd 2 L tot dx 2 = d 2 L reg dx 2 + d 2 L dx 2(14)\nThus, by applying Lemma 3.9, Eq. ( 14) can be written as\nd 2 L tot dx 2 = A ⊤ 1 B(x)A + A ⊤ W 2 A 1 = A ⊤ 1 (B(x) + W 2 )A 1\n, where the second step follows from simple algebra.\nLet\nD = B(x) + W 2 Then, d 2 L dx 2 can be rewritten as d 2 L dx 2 = A ⊤ 1 DA 1\nNow, we can bound D as follows\nD -12R h L h R(R + R h )I n + w 2 min I n = (-12R h L h R(R + R h ) + w 2 min )I n l σ min (A 1 ) 2 I n\nwhere the first step follows from Lemma D.1 and the fact that W is a diagonal matrix (see from the Lemma statement), the second step follows from simple algebra, and the last step follows from\nw 2 min ≥ -12R h L h R(R + R h ) + l/σ min (A 1 ) 2 .\nSince D is positive definite, then we have \nA ⊤ 1 DA 1 σ min (D) • σ min (A 1 ) 2 I d l • I d Thus," }, { "figure_ref": [], "heading": "E Hessian is Lipschitz", "publication_ref": [], "table_ref": [], "text": "In Section E.1, we present the Lipschitz properties for some basic functions. Section E.2, we present the summary of this section. In Section E.3, we analyze the first part which shows that G 1 is Lipschitz continuous. In Section E.4, we analyze the second part which shows that G 2 is Lipschitz continuous. In Section E.5, we analyze the third part which shows that G 3 is Lipschitz continuous. In Section E.6, we analyze the fourth part which shows that G 4 is Lipschitz continuous.\nIn Section E.7, we analyze the fifth part which shows that G 5 is Lipschitz continuous. In Section E.8, we analyze the sixth part which shows that G 6 is Lipschitz continuous." }, { "figure_ref": [], "heading": "E.1 Lipschitz Property for Some Basic Functions", "publication_ref": [], "table_ref": [], "text": "In this section, we present the Lipschitz property for some basic functions.\nLemma E.1. If the following conditions hold\n• Let A 1 ∈ R n×d • Let x ∈ R d where x 2 ≤ R • Let R ≥ 4 • A 1 ≤ R We have exp(A 1 x) 2 ≤ √ n exp(R 2 )\nProof. We have\nexp(A 1 x) 2 ≤ √ n • exp(A 1 x) ∞ ≤ √ n • exp( A 1 x ∞ ) ≤ √ n • exp( A 1 x 2 ) ≤ √ n • exp(R 2 )\nwhere the 1st step follows from Fact A.3, the 2nd step follows from Fact A.3, the 3rd step follows from Fact A.3, and the step follows from A 1 ≤ R and x ≤ R.\nLemma E.2. If the following conditions hold:\n• Let A 1 ∈ R n×d , A 2 ∈ R m×n .\n• Let q 2 (x) and Q 2 (x) be defined in Definition 4.2.\n• Let f (x) be defined in Definition 3.3.\n• Let c(x) be defined in Definition 3.6.\n• Let α(x) be defined in Definition 3.2.\n• Let u(x) be defined in Definition 3.1.\n• Let R h > 0. • Let max{ h(A 2 f (x)) 2 , h ′ (A 2 f (x)) 2 } ≤ R h .\n• Let R > 0.\n• Suppose that b 2 ≤ R.\n• Suppose that A 1 , A 2 ≤ R.\nThen, we have\n• Part 1. f (x) 2 ≤ β -1 • √ n • exp(R 2 ) • Part 2. c(x) 2 ≤ R + R h • Part 3. Q 2 (x) ≤ R • R h • Part 4. q 2 (x) 2 ≤ R • R h • (R + R h ) • Part 5. For i ∈ [d], p(x) i 2 ≤ 2Rβ -2 • n • exp(2R 2 ) Proof. Proof of Part 1 f (x) 2 = α(x) -1 • u(x) 2 = |α(x) -1 | • u(x) 2 ≤ β -1 • u(x) 2 = β -1 • exp(A 1 x) 2 ≤ β -1 • √ n • exp(R 2 ),\nwhere the first step follows from the definition of f (x) (see Definition 3.3), the second step follows from Fact A.3, the third step follows from Eq. 
( 15), the fourth step follows from the definition of u(x) (see Definition 3.1), and the last step follows from Lemma E.1.\nProof of Part 2 c(x) 2 = h(A 2 f (x)) -b 2 ≤ h(A 2 f (x)) 2 + b 2 ≤ R + R h ,\nwhere the first step follows from the definition of c(x) (see Definition 3.6), the second step follows from the triangle inequality, and the third step follows from the assumptions from the Lemma statement.\nProof of Part 3\nQ 2 (x) = A 2 diag(h ′ (A 2 f (x))) ≤ A 2 • diag(h ′ (A 2 f (x))) ≤ A 2 • h ′ (A 2 f (x)) ∞ ≤ A 2 • h ′ (A 2 f (x)) 2 ≤ R • R h ,\nwhere the first step follows from the definition of Q 2 (x) (see Definition 4.2), the second step follows from Fact A.4, the third step follows from Fact A.3, the fourth step follows from Fact A.3, and the last step follows from\nA 2 ≤ R and h ′ (A 2 f (x)) 2 ≤ R h . Proof of Part 4 q 2 (x) 2 = Q 2 (x) ⊤ c(x) 2 ≤ Q 2 (x) ⊤ c(x) 2 ≤ Q 2 (x) c(x) 2 ≤ R • R h • (R + R h ),\nwhere the first step follows from the definition of q 2 (x) (see Definition 4.2), the second step follows from Fact A.4, the third step follows from Fact A.4, and the last step follows from Part 2 and Part 3.\nProof of Part 5\np(x) i = f (x) • A 1, * ,i -f (x), A 1, * ,i • f (x) ≤ f (x) • A 1, * ,i + f (x), A 1, * ,i • f (x) 2 ≤ Rβ -1 • √ n • exp(R 2 ) + f (x) 2 2 A 1, * ,i 2 ≤ Rβ -1 • √ n • exp(R 2 ) + Rβ -2 • n • exp(2R 2 ) ≤ 2Rβ -2 • n • exp(2R 2 )\nwhere the first step follows from the definition of p(x) i , the second step follows from triangular inequality, the third step follows from Fact A.3 and Part 1 and A 1, * ,i 2 ≤ R, the fourth step follows from Part 1 and A 1, * ,i 2 ≤ R, the last step follows from simple algebra.\nLemma E.3. If the following conditions hold\n• Let A 1 ∈ R n×d , A 2 ∈ R m×n • Let β ∈ (0, 0.1) • Let R ≥ 4 • A 1 ≤ R • exp(A 1 x), 1 n ≥ β • exp(A 1 y), 1 n ≥ β • Let R f := 2β -2 • nR exp(2R 2 )\n• Let u(x) be defined as Definition 3.1.\n• Let α(x) be defined as Definition 3.2.\n• Let f (x) be defined as Definition 3.3.\n• Let c(x) be defined as Definition 3.6.\n• Let Q 2 (x) be defined as Definition 4.2.\n• Let q 2 (x) be defined as Definition 4.2.\n• Let p i (x) be defined as Lemma C.4.\n• Assume h(x) -h(y) 2 ≤ L h • x -y 2 • Assume h ′ (x) -h ′ (y) 2 ≤ L h • x -y 2\nWe have\n• Part 1. u(x) -u(y) 2 ≤ R exp(R 2 ) • x -y 2 • Part 2. |α(x) -α(y)| ≤ √ n • exp(A 1 x) -exp(A 1 y) 2 • Part 3. |α(x) -1 -α(y) -1 | ≤ β -2 • |α(x) -α(y)| • Part 4. f (x) -f (y) 2 ≤ R f • x -y 2 • Part 5. c(x) -c(y) 2 ≤ L h • R • R f • x -y 2 • Part 6. Q 2 (x) -Q 2 (y) ≤ R 2 R f L h x -y 2 • Part 7. q 2 (x) -q 2 (y) 2 ≤ 2R 2 R f R h L h (R + R h ) x -y 2 • Part 8. g(x) -g(y) 2 ≤ 7β -2 nL h R h R f R 2 (R + R h ) exp(5R 2 ) x -y 2 • Part 9. For each i ∈ [d], p i (x) -p i (y) 2 ≤ 3RR f β -1 • √ n • exp(R 2 ) x -y 2\nProof. Proof of Part 1. We have\nu(x) -u(y) 2 ≤ exp(A 1 x) -exp(A 1 y) 2 ≤ exp(R 2 ) A 1 x -A 1 y 2 ≤ exp(R 2 ) A 1 x -y 2 ≤ R exp(R 2 ) • x -y 2\nwhere the 1st step follows from the definition of u(x) (see Definition 3.1), the 2nd step follows from Fact A.3, and the 3rd step follows from Fact A.4, and the last step follows from the assumption in the lemma statement.\nProof of Part 2. 
We have\n|α(x) -α(y)| = | exp(A 1 x), 1 n -exp(A 1 y), 1 n | = | exp(A 1 x) -exp(A 1 y), 1 n | ≤ exp(A 1 x) -exp(A 1 y) 2 • 1 n 2 = √ n • exp(A 1 x) -exp(A 1 y) 2\nwhere the 1st step follows from the definition of α(x) (see Definition 3.2), and the 2nd step follows from Fact A.1, the 3rd step follows from the Cauchy-Schwartz inequality (see Fact A.3), and the last step follows from the definition of 1 n . Proof of Part 3. Note that we have\nα(x) -1 = exp(A 1 x), 1 n -1 ≤ 1 β ,(15)\nwhere the first step follows from the definition of α(x) (see Definition 3.2) and the second step follows from the assumption from the lemma statement.\nWith the same strategy, we can also have\nα(y) -1 ≤ 1 β . (16\n)\nWe show that\n|α(x) -1 -α(y) -1 | = α(x) -1 α(y) -1 • |α(x) -α(y)| ≤ β -2 • |α(x) -α(y)|\nwhere the 1st step follows from simple algebra and the 2nd step follows from combining Eq. ( 15) and Eq. ( 16).\nProof of Part 4. We show that\nf (x) -f (y) 2 = α(x) -1 exp(A 1 x) -α(y) -1 exp(A 1 y) 2 ≤ α(x) -1 exp(A 1 x) -α(x) -1 exp(A 1 y) 2 + α(x) -1 exp(A 1 y) -α(y) -1 exp(A 1 y) 2 ≤ α(x) -1 exp(A 1 x) -exp(A 1 y) 2 + |α(x) -1 -α(y) -1 | • exp(A 1 y) 2\nwhere the 1st step follows from the definition of f (x) (see Definition 3.3), the 2nd step follows from the triangle inequality, and the last step follows from simple algebra.\nFor the first term α(x) -1 exp(A 1 x) -exp(A 1 y) 2 , we have\nα(x) -1 exp(A 1 x) -exp(A 1 y) 2 ≤ β -1 exp(A 1 x) -exp(A 1 y) 2 ≤ β -1 • R exp(R 2 ) • x -y 2(17)\nwhere the 1st step follows from Part 3, and the 2nd step follows from Part 1.\nFor the second term |α(x) -1 -α(y) -1 | • exp(A 1 y) 2 , we have\n|α(x) -1 -α(y) -1 | • exp(A 1 y) 2 ≤ β -2 • |α(x) -α(y)| • exp(A 1 y) 2 ≤ β -2 • |α(x) -α(y)| • √ n exp(R 2 ) ≤ β -2 • √ n • exp(A 1 x) -exp(A 1 y) 2 • √ n exp(R 2 ) ≤ β -2 • √ n • R exp(R 2 ) x -y 2 • √ n exp(R 2 ) = β -2 • nR exp(2R 2 ) x -y 2 ,(18)\nwhere the first step follows from Part 3, the second step follows from Lemma E.1, the third step follows from Part 2, the fourth step follows from Part 1, and the last step follows from simple algebra. Therefore, we sum up Eq. ( 17) and Eq. (18) to get:\nf (x) -f (y) 2 ≤ β -1 • R exp(R 2 ) • x -y 2 + β -2 • nR exp(2R 2 ) x -y 2 ≤ 2β -2 • nR exp(2R 2 ) x -y 2 = R f x -y 2 ,\nwhere the second step follows from simple algebra and the last step follows from the definition of R f (see the assumption in the lemma statement). Proof of Part 5. We have\nc(x) -c(y) 2 = h(A 2 f (x)) -b -(h(A 2 f (y)) -b) 2 = h(A 2 f (x)) -h(A 2 f (y)) 2 ≤ L h • A 2 f (x) -A 2 f (y) 2 ≤ L h • A 2 • f (x) -f (y) 2 ≤ L h • R • R f • x -y 2 ,\nwhere the first step follows from the definition of c(x) (see Definition 3.6), the second step follows from simple algebra, the third step follows from the assumption from Lemma statement, the fourth step follows from Fact A.4, and the last step follows from A 2 ≤ R and Part 4. Proof of Part 6\nQ 2 (x) -Q 2 (y) = A 2 diag(h ′ (A 2 f (x))) -A 2 diag(h ′ (A 2 f (y))) ≤ A 2 h ′ (A 2 f (x)) -h ′ (A 2 f (y)) 2 ≤ RL h A 2 f (x) -A 2 f (y) 2 ≤ R 2 L h f (x) -f (y) 2 ≤ R 2 R f L h x -y 2 ,\nwhere the first step follows from the definition of Q 2 (x), the second step follows from Fact A.4, the last step follows from A 2 ≤ R and h ′ (x) -h ′ (y) 2 ≤ L h x -y 2 , the fourth step follows from A 2 ≤ R, and the last step follows from Part 4. 
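As an informal numerical sanity check of the bound in Part 4 (illustrative only, and not part of the formal argument), one can compare ‖f(x) − f(y)‖_2 against R_f · ‖x − y‖_2 on small random instances. A minimal sketch in Python/NumPy, assuming the entrywise exponential and the normalization f(z) = exp(A_1 z) / ⟨exp(A_1 z), 1_n⟩ from Definition 3.3; the sizes n, d and the random draws below are arbitrary illustrations, not values used in the paper:

import numpy as np

rng = np.random.default_rng(0)
n, d, R = 6, 4, 4.0  # small illustrative sizes; the lemma assumes R >= 4

# draw A_1 with ||A_1|| <= R and x, y with ||x||_2, ||y||_2 <= R
A1 = rng.standard_normal((n, d))
A1 *= R / np.linalg.norm(A1, 2)          # scale so the spectral norm is exactly R
x = rng.standard_normal(d)
x *= R / np.linalg.norm(x)
y = x + 1e-3 * rng.standard_normal(d)
y *= min(1.0, R / np.linalg.norm(y))

def f(z):
    # f(z) = exp(A_1 z) / <exp(A_1 z), 1_n>, with exp applied entrywise
    u = np.exp(A1 @ z)
    return u / u.sum()

# any beta with 0 < beta <= min(<exp(A_1 x), 1_n>, <exp(A_1 y), 1_n>) is admissible
beta = min(0.1, np.exp(A1 @ x).sum(), np.exp(A1 @ y).sum())
R_f = 2 * beta ** (-2) * n * R * np.exp(2 * R ** 2)   # Lipschitz constant from Part 4
lhs = np.linalg.norm(f(x) - f(y))
rhs = R_f * np.linalg.norm(x - y)
assert lhs <= rhs
print(f"||f(x)-f(y)||_2 = {lhs:.3e}  <=  R_f * ||x-y||_2 = {rhs:.3e}")

On such instances the right-hand side exceeds the left-hand side by many orders of magnitude, which is expected: the constant R_f is a worst-case bound over the whole ball of radius R.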
Proof of Part 7\nq 2 (x) -q 2 (y) 2 = Q 2 (x) ⊤ c(x) -Q 2 (y) ⊤ c(y) = Q 2 (x) ⊤ c(x) -Q 2 (x) ⊤ c(y) + Q 2 (x) ⊤ c(y) -Q 2 (y) ⊤ c(y) ≤ Q 2 (x)c(x) -Q 2 (x)c(y) + Q 2 (x)c(y) -Q 2 (y)c(y) ≤ Q 2 (x) c(x) -c(y) 2 + Q 2 (x) -Q 2 (y) c(y) 2 ≤ Q 2 (x) • L h RR f x -y + R 2 R f L h x -y 2 c(y) 2 ≤ R 2 R h R f L h • x -y 2 + R 2 R f L h • x -y 2 • (R + R h ) ≤ 2R 2 R f R h L h (R + R h ) x -y 2 ,\nwhere the first step follows from the definition of q 2 (x) (see Definition 4.2), the second step follows from simple algebra, the third step follows from the triangle inequality, the fourth step follows from Fact A.4, the fifth step follows from Part 5 and Part 6, the sixth step follows from Part 2 and Part 3 of Lemma E.2, and the last step follows from simple algebra. Proof of Part 8 First, we have\ng(x) -g(y) 2 = A ⊤ 1 (f (x) q 2 (x), f (x) + diag(f (x))q 2 (x)) -A ⊤ 1 (f (y) q 2 (y), f (y) + diag(f (y))q 2 (y)) 2 = A ⊤ 1 ((f (x) q 2 (x), f (x) + diag(f (x))q 2 (x)) -(f (y) q 2 (y), f (y) + diag(f (y))q 2 (y))) 2 ≤ A 1 (f (x) q 2 (x), f (x) -f (y) q 2 (y), f (y) ) + (diag(f (x))q 2 (x)) -diag(f (y))q 2 (y))) 2 ≤ R( f (x) q 2 (x), f (x) -f (y) q 2 (y), f (y) 2 + diag(f (x))q 2 (x) -diag(f (y))q 2 (y) 2 )\nwhere the first step follows from the definition of g(x) (see Definition 4.3), the second step follows from simple algebra, the third step follows from Fact A.4, and the last step follows from A 1 ≤ R and Fact A.3.\nFor convenience, we define\nC 1 : = f (x) q 2 (x), f (x) -f (y) q 2 (y), f(y)\nC 2 : = diag(f (x))q 2 (x) -diag(f (y))q 2 (y), so we have\ng(x) -g(y) 2 ≤ R( C 1 2 + C 2 2 ).(19)\nFor C 1 , we define\nC 1,1 : = f (x) q 2 (x), f (x) -f (x) q 2 (x), f (y) C 1,2 : = f (x) q 2 (x), f (y) -f (x) q 2 (y), f (y) C 1,3 : = f (x) q 2 (y), f (y) -f (y) q 2 (y), f(y)\nThen, we can rewrite C 1 2 as follows\nC 1 2 = C 1,1 + C 1,2 + C 1,3 2(20)\nFirst, we upper bound C 1,1 2 :\nC 1,1 2 ≤ f (x) 2 q 2 (x) 2 f (x) -f (y) 2 ≤ β -1 √ nR h R(R + R h ) exp(5R 2 ) f (x) -f (y) 2 ≤ β -1 √ nR h R f R(R + R h ) exp(5R 2 ) x -y 2 ,(21)\nwhere the first step follows from the definition of C 1,1 and Fact A.3, the second step follows from Part 1 and Part 4 of Lemma E.2, the third step follows from Part 4. Next, we upper bound C 1,2 2 :\nC 1,2 2 ≤ f (x) 2 q 2 (x) -q 2 (y) 2 f (y) 2 ≤ β -2 n exp(2R 2 ) q 2 (x) -q 2 (y) 2 ≤ 2β -2 n exp(2R 2 )R 2 R f R h L h (R + R h ) x -y 2 ,(22)\nwhere the first step follows from the definition of C 1,2 and Fact A.3, the second step follows from Part 1 of Lemma E.2, the third step follows from Part 7. Then, we upper bound C 1,3 2 :\nC 1,3 ≤ f (x) -f (y) 2 q 2 (y) 2 f (y) 2 ≤ f (x) -f (y) 2 RR h (R + R h )β -1 √ n exp(R 2 ) ≤ β -1 √ nR h R f R(R + R h ) exp(R 2 ) x -y 2 ,(23)\nwhere the first step follows from the definition of C 1,3 and Fact A.3, the second step follows from Part 1 and Part 4 of Lemma E.2, and the third step follows from Part 4. Now, it follows from combining the bound of C 1,1 , C 1,2 and C 1,3 , we obtained the bound for C 1 2 :\nC 1 2 = C 1,1 + C 1,2 + C 1,3 2 = C 1,1 2 + C 1,2 2 + C 1,3 2 ≤ 4β -2 nR h R f R(R + R h ) exp(5R 2 )L h x -y 2 ,(24)\nwhere the first step follows from the definition of C 1 (see Eq. ( 20)), the second step follows from the triangle inequality, and the third step follows from combining Eq. ( 21), Eq. ( 22), and Eq. ( 23). 
Then, we upper bound C 2 as follows:\nC 2 = diag(f (x))q 2 (x) -diag(f (x))q 2 (y) + diag(f (x))q 2 (y) -diag(f (y))q 2 (y) 2 ≤ diag(f (x))q 2 (x) -diag(f (x))q 2 (y) 2 + diag(f (x))q 2 (y) -diag(f (y))q 2 (y) 2 ≤ f (x) 2 q 2 (x) -q 2 (y) 2 + f (x) -f (y) 2 q 2 (y) 2 ≤ β -1 √ n exp(R 2 ) q 2 (x) -q 2 (y) 2 + f (x) -f (y) 2 RR h (R + R h ) ≤ 2β -1 √ n exp(R 2 )R 2 R f R h L h (R + R h ) x -y 2 + R f x -y 2 RR h (R + R h ) ≤ 3β -1 √ nR 2 R f R h L h (R + R h ) exp(R 2 ) x -y 2 ,(25)\nwhere the first step follows from the definition of C 2 , the second step follows from Fact A.4, the third step follows from Fact A.3, the fourth step follows from Part 1 and Part 4 of Lemma E.2, the fifth step follows from Part 4 and Part 7, and the last step follows from simple algebra. Finally, we obtained the bound for g(x) -g(y) 2 :\ng(x) -g(y) 2 ≤ R( C 1 2 + C 2 2 ) ≤ 4β -2 nR h R f R(R + R h ) exp(5R 2 )L h x -y 2 + 3β -1 √ nR 2 R f R h L h (R + R h ) exp(R 2 )) x -y 2 ≤ 7β -2 nL h R h R f R 2 (R + R h ) exp(5R 2 ) x -y 2 ,\nwhere the first step follows from Eq. ( 19), the second step follows from combining Eq. ( 24) and Eq. ( 25), and the last step follows from simple algebra. Proof of Part 9. Note that\np(x) i -p(y) i 2 = f (x) • A 1, * ,i -f (x), A 1, * ,i • f (x) -f (x) • A 1, * ,i + f (y), A 1, * ,i • f (y) 2 = (f (x) -f (y)) • A 1, * ,i + ( f (x), A 1, * ,i • f (x) -f (y), A 1, * ,i • f (y)) 2 ≤ (f (x) -f (y)) • A 1, * ,i 2 + f (x), A 1, * ,i • f (x) -f (y), A 1, * ,i • f (y) 2\nwhere the first step follows from the definition of p(x) i , the second step follows from simple algebra, the third step follows from triangular inequality.\nFor the first term above, we have\n(f (x) -f (y)) • A 1, * ,i 2 ≤ A 1, * ,i 2 f (x) -f (y) 2 ≤ RR f x -y 2\nwhere the first step follows from Fact A.4, the second step follows from A ≤ R and Part 4.\nFor the second term, we have\nf (x), A 1, * ,i • f (x) -f (y), A 1, * ,i • f (y) 2 = f (x), A 1, * ,i • f (x) -f (x), A 1, * ,i • f (y) + f (x), A 1, * ,i • f (y) -f (y), A 1, * ,i • f (y) 2 ≤ f (x), A 1, * ,i • (f (x) -f (y)) 2 + f (x) -f (y), A 1, * ,i • f (y) 2 ≤ 2 f (x) 2 A 1, * ,i 2 f (x) -f (y) 2 ≤ 2RR f β -1 • √ n • exp(R 2 ) x -y 2\nwhere the first step follows from simple algebra, the second step follows from simple algebra, the third step follows from triangular inequality, the fourth step follows from Fact A.3, the last step follows from Part 4 and Part 1 of Lemma E.2 Thus, we have\np(x) i -p(y) i 2 ≤ (2RR f β -1 • √ n • exp(R 2 ) + RR f ) x -y 2 ≤ 3RR f β -1 • √ n • exp(R 2 ) x -y 2 G 1,2 : = Q 2 (x) • p(x) j , Q 2 (x) • p(y) i -Q 2 (x) • p(x) j , Q 2 (y) • p(y) i G 1,3 : = Q 2 (x) • p(x) j , Q 2 (y) • p(y) i -Q 2 (x) • p(y) j , Q 2 (y) • p(y) i G 1,4 : = Q 2 (x) • p(y) j , Q 2 (y) • p(y) i -Q 2 (y) • p(y) j , Q 2 (y) • p(y) i Than it's apparent that | Q 2 (x) • p(x) j , Q 2 (x) • p(x) i -Q 2 (y) • p(y) j , Q 2 (y) • p(y) i | = |G 1,1 + G 1,2 + G 1,3 + G 1,4 |\nSince G 1,i , i ∈ [4] is similar, we only need to bound G 1,1 and G 1,2 , for G 1,1 , we have\n|G 1,1 | = | Q 2 (x) • p(x) j , Q 2 (x) • (p(x) i -p(y) i ) | ≤ Q 2 (x) • p(x) j 2 Q 2 (x) (p(x) i -p(y) i ) 2 ≤ Q 2 (x) 2 p(x) j 2 p(x) i -p(y) i 2 ≤ 6R 2 h R 4 R f β -3 • n 3 2 • exp(3R 2 ) • x -y 2\nwhere the first step follows from the definition of G 1,1 , the second step follows from Fact A.3, the third step follows from Fact A.4, the last step follows from Part 3 and Part 5 of Lemma E.2 and Part 8 of Lemma E.3. 
For G 1,2 , we have\n|G 1,2 | = | Q 2 (x) • p(x) j , (Q 2 (x) -Q 2 (y)) • p(y) i | ≤ Q 2 (x)p(x) j 2 Q 2 (x) -Q 2 (y) p(y) i 2 ≤ Q 2 (x) p(x) j 2 2 Q 2 (x) -Q 2 (y) ≤ 2R 2 h R f R 5 L h (R + R h )β -4 • n 2 • exp(4R 2 ) x -y 2\nwhere the first step follows from the definition of G 1,2 , the second step follows from Fact A.3, the third step follows from Fact A.3, the last step follows from Part 3 and Part 5 of Lemma E.2 and Part 6 of Lemma E.3. Thus, it follows from combining the upper bounds of |G 1,i |, we have\n|G 1 (x) -G 1 (y)| = |G 1,1 + G 1,2 + G 1,3 + G 1,4 | ≤ |G 1,1 | + |G 1,2 | + |G 1,3 | + |G 1,4 | ≤ 8R 2 h R f R 5 L h (R + R h )β -4 • n 2 • exp(4R 2 ) x -y 2 E.4\nStep 2: G 2 is lipschitz continuous\nIn this section, we show that G 2 Lipschitz continuous.\nLemma E.7. Let G 2 (x) be defined in Definition E.4, then, we have\n|G 2 (x) -G 2 (y)| ≤ 24R h R f R 4 (R + R h )β -4 n 2 exp(4R 2 ) x -y 2\nProof. Note that\n|G 2 (x) -G 2 (y)| = | c(x), diag(diag(h ′′ (A 2 f (x))) • A 2 • p(x) j ) • A 2 • p(x) i -c(y), diag(diag(h ′′ (A 2 f (y))) • A 2 • p(y) j ) • A 2 • p(y) i |\nFor simplicity, we define G 2,1 : = c(x), diag(diag(h ′′ (A 2 f (x)))A 2 p(x) j )A 2 p(x) i -c(x), diag(diag(h ′′ (A 2 f (x)))A 2 p(x) j )A 2 p(y) i G 2,2 : = c(x), diag(diag(h ′′ (A 2 f (x)))A 2 p(x) j )A 2 p(y) i -c(x), diag(diag(h ′′ (A 2 f (x)))A 2 p(y) j )A 2 p(y) i G 2,3 : = c(x), diag(diag(h ′′ (A 2 f (x)))A 2 p(y) j )A 2 p(y) i -c(x), diag(diag(h ′′ (A 2 f (y)))A 2 p(y) j )A 2 p(y) i G 2,4 : = c(x), diag(diag(h ′′ (A 2 f (y)))A 2 p(y) j )A 2 p(y) i -c(y), diag(diag(h ′′ (A 2 f (y)))A 2 p(y) j )A 2 p(y) i \n|G 3 (x) -G 3 (y)| = | 5 i=1 G i | ≤ 5 i=1 |G i | ≤ 10(R + R h )R 4 R f L h β -3 • n 3 2 • exp(3R 2 ) x -y 2 E.6\nStep 4: G 4 is lipschitz continuous\nIn this section, we show that G 4 is Lipschitz continuous.\nLemma E.9. Let G 4 (x) be defined in Definition E.4, then, we have\n|G 4 (x) -G 4 (y)| ≤ 4(R + R h )R 4 R f L h β -1 • √ n • exp(R 2 ) x -y 2\nProof. The proof is similar to Lemma E.11." }, { "figure_ref": [], "heading": "E.7", "publication_ref": [], "table_ref": [], "text": "Step 5: G 5 is lipschitz continuous\nIn this section, we show that G 5 is Lipschitz continuous.\nLemma E.10. Let G 5 (x) be defined in Definition E.4, then, we have\n|G 5 (x) -G 5 (y)| ≤ 10(R + R h )R 4 R f L h β -3 • n 3 2 • exp(3R 2 ) x -y 2\nProof. The proof is similar to Lemma E.8." }, { "figure_ref": [], "heading": "E.8", "publication_ref": [], "table_ref": [], "text": "Step 6: G 6 is lipschitz continuous\nIn this section, we show that G 6 is Lipschitz continuous.\nLemma E.11. Let G 6 (x) be defined in Definition E.4, then, we have \n|G 6 (x) -G 6 (y)| ≤ 3(R + R h )R 4 R f L h β -1 • √ n • exp(R 2 ) x -y 2\n|G 6,1 | = | c(x), Q 2 (x) • (A 1, * ,i • (f (x) -f (y)) • A 1, * ,j )| ≤ c(x) 2 Q 2 (x) A 1, * ,i 2 2 f (x) -f (y) 2 ≤ (R + R h )R h R 3 R f x -y 2\nwhere the first step follows from the definition of G 6,1 , the second step follows from Fact A.4 and Fact A.3, the last step follows from Part 2 and Part 3 of Lemma E.2, Part 4 of Lemma E.3 and A 1, * ,i 2 ≤ R.\nNext, we upper bound |G 6,2 |:\n|G 6,2 | = | c(x), (Q 2 (x) -Q 2 (y)) • (A 1, * ,i • f (y) • A 1, * ,j )| ≤ c(x) 2 Q 2 (x) -Q 2 (y) A 1, * ,i 2 2 f (x) 2 ≤ (R + R h )R 4 R f L h β -1 • √ n • exp(R 2 ) x -y 2\nwhere the first step follows from the definition of G 6,1 , the second step follows from Fact A. 
\n) 2 Q 2 (x) A 1, * ,i 2 2 f (x) 2 ≤ L h R 4 R f R h β -1 • √ n • exp(R 2 ) x -y 2\nThus, we have\n|G 6 (x) -G 6 (y)| = |G 6,1 + G 6,2 + G 6,3 | ≤ 3(R + R h )R 4 R f L h β -1 • √ n • exp(R 2 ) x -y 2" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "• Let β ∈ (0, 0.1)\nThen we have" }, { "figure_ref": [], "heading": "Hessian is Lipschitz", "publication_ref": [], "table_ref": [], "text": "Here, we present that the Hessian matrix is Lipschitz continuous, where the details can be seen in Appendix E.\nLemma 5.2. Suppose that" }, { "figure_ref": [], "heading": "E.2 Summary of Six Steps", "publication_ref": [], "table_ref": [], "text": "In this section, we summarize the key results from the following sections." }, { "figure_ref": [], "heading": "Definition E.4. If the following conditions hold", "publication_ref": [], "table_ref": [], "text": "• Let L(x) be defined in Definition 3.7\n• Let Q 2 (x) and q 2 (x) be defined as in Definition 4.2\ndx i dx j be computed in Lemma C.5\nThen we define\nIn this section, we show that G 1 is Lipschitz continuous.\nLemma E.6. Let G 1 be defined in Definition E.4, then we have\nFor simplicity, we define\nStep 3: G 3 is lipschitz continuous\nIn this section, we show that G 3 is Lipschitz continuous.\nLemma E.8. Let G 3 be defined in Definition E.4, then we have\nProof. Note that\nFor simplicity, we define\nFirst, we upper bound |G 3,1 |:\nwhere the first step follows from the definition of G 3,1 , the second step follows from Fact A. Since G 3,1 and G 3,2 are symmetry, we directly obtained the bound for |G 3,2 |:\nThen, we upper bound |G 3,3 |:\nwhere the first step follows from the definition of G 3,2 , the second step follows from Fact A. " } ]
Large language models (LLMs) have made significant advances in many aspects of our daily lives. They serve as a transformative force in natural language processing, with applications in text generation, translation, sentiment analysis, and question answering, and their success has led to a substantial increase in research effort in this domain. One specific two-layer regression problem has been well studied in prior works, where the first layer is activated by a ReLU unit and the second layer is activated by a softmax unit. While previous works provide a solid analysis of this two-layer regression, there is still a gap in the analysis of regression problems with more than two layers. In this paper, we take a crucial step toward addressing this problem: we analyze a two-layer regression problem in which, in contrast to previous works, the first layer is activated by a softmax unit. This sets the stage for future analyses of models that build further activation functions on top of the softmax function, and rearranging where the softmax unit appears leads to a significantly different analysis. Our main results concern the convergence properties of an approximate Newton method used to minimize the regularized training loss. We prove that, under certain assumptions, the Hessian of the regularized training loss is positive definite and Lipschitz continuous. This enables us to establish local convergence guarantees for the proposed training algorithm: with an appropriate initialization, after O(log(1/ε)) iterations our algorithm finds an ε-approximate minimizer of the training loss with high probability. Each iteration requires approximately O(nnz(C) + d^ω) time, where d is the model size, C is the input matrix, and ω < 2.374 is the matrix multiplication exponent.
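For intuition, the training procedure referred to above is a regularized Newton-type iteration. The following is a simplified sketch, not the paper's implementation: it uses an exact Hessian and user-supplied callables grad_L and hess_L (hypothetical names), whereas the stated per-iteration cost relies on approximating the Hessian and on fast matrix multiplication.

import numpy as np

def newton_train(x0, grad_L, hess_L, l=1e-3, eps=1e-8, max_iter=100):
    """Simplified regularized Newton iteration: x_{t+1} = x_t - (H + l*I)^{-1} g.

    grad_L(x) and hess_L(x) are assumed to return the gradient (shape (d,))
    and Hessian (shape (d, d)) of the training loss at x; l is the ridge
    (regularization) strength. This exact-Hessian sketch is for illustration
    only -- it does not reproduce the O(nnz(C) + d^omega) per-iteration cost.
    """
    x = np.asarray(x0, dtype=float)
    d = x.shape[0]
    for _ in range(max_iter):  # local convergence: O(log(1/eps)) iterations
        g = grad_L(x)
        H = hess_L(x) + l * np.eye(d)   # regularized Hessian
        step = np.linalg.solve(H, g)
        x = x - step
        if np.linalg.norm(step) <= eps:
            break
    return x

In the analysis, positive definiteness of the regularized Hessian is what makes the linear solve in each step well posed, and Lipschitz continuity of the Hessian is what yields the local convergence rate.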
Local Convergence of Approximate Newton Method for Two Layer Nonlinear Regression
[ { "figure_caption": "Part 9 It is similar to the proof of Part 8. Proof of Part 10 It is similar to the proof of Part 8. Proof of Part 11 It is similar to the proof of Part 8. Proof of Part 12 We have diag", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Hessian is positive definite forever and thus the function is convex. Proof of Part 2 This trivially follows from Lemma D.2.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "First, we upper≤ 2 ≤2bound |G 2,1 |: |G 2,1 | = | c(x), diag(diag(h ′′ (A 2 f (x)))A 2 p(x) j )A 2 (p(x) i -p(y) i ) | ≤ c(x) 2 h ′′ (A 2 f (x)) A 2 2 |p(x) j ||p(x) i -p(y) i | ≤ 6R h R f R 4 (R + R h )β -3 n 3 2 exp(3R 2 ) x -y 2where the first step follows from the definition of G 2,1 , the second step follows from Fact A.3 and Fact A.4, the last step follows from Part 2 and Part 5 of Lemma E.2, Part 9 of Lemma E.3 and A 2 ≤ R. Since G 2,1 and G 2,2 are similar, we directly obtained the bound for |G 2,2 |:|G 2,2 | ≤ 6R h R f R 4 (R + R h )β -3 n 3 2 exp(3R 2 ) x -y 2 Next, we uppper bound |G 2,3 |: |G 2,3 | = | c(x), (diag(diag(h ′′ (A 2 f (x))) -diag(diag(h ′′ (A 2 f (y))))A 2 p(y) j )A 2 p(y) i | ≤ c(x) 2 h ′′ (A 2 (f (x))) -h ′′ (A 2 (f (y))) A 2 2 |p(x) i | 2 ≤ 4(R + R h )L h R 4 β -4 n 2 exp(4R 2 )where the first step follows from the definition of G 2,1 , the second step follows from Fact A.3 and Fact A.4, the last step follows from Part 2 and Part 5 of Lemma E.2, A 2 ≤ R and the assumption that h(x) is L h -lipschitz continuous. Last, we upper bound |G 2,4 |:|G 2,4 | = | c(x) -c(y), diag(diag(h ′′ (A 2 f (y)))A 2 p(y) j )A 2 p(y) i | ≤ c(x) -c(y) 2 h ′′ (A 2 f (y)) A 2 2 |p(x) i | 2 4L h R f R 5 R h β -4 n 2 exp(4R 2 ) x -y 2where the first step follows from the definition of G 2,1 , the second step follows from Fact A.3 and Fact A.4, the last step follows from Part 2 and Part 5 of Lemma E.2, A 2 ≤ R and Part 5 of Lemma E.3. Finally, we have|G 2 (x) -G 2 (y)| = | 4 i=1 G 2,i | ≤ 24R h R f R 4 (R + R h )β -4 n 2 exp(4R 2 ) x -y 2where the first step follows from the definition of G 3,4 , the second step follows from Fact A.3 and Fact A.4, the last step follows from Part 1, Part 2 of Lemma E.2 and Part 6 of Lemma E.3 and A 1, * ,j 2 ≤ R. Finally, we upper bound G 3,5 :|G 3,5 | = | c(x) -c(y), 2Q 2 (y) • f (y) • f (y), A 1, * ,i f (y), A 1, * ,j | ≤ c(x) -c(y) 2 2 Q 2 (y) f (x) 3 2 A 1, * ,j 2 2L h R 4 R f R h β -3 • n 3 2 • exp(3R 2 ) x -y 2where the first step follows from the definition of G 3,4 , the second step follows from Fact A.3 and Fact A.4, the last step follows from Part 1, Part 3 of Lemma E.2 and Part 5 of Lemma E.3 and A 1, * ,j 2 ≤ R. 
Thus, we obtained the bound for |G 3 (x) -G 3 (y)|:", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Proof.Note that|G 6 (x) -G 6 (y)| = | c(x), Q 2 (x) • (A 1, * ,i • f (x) • A 1, * ,j ) -c(y), Q 2 (y) • (A 1, * ,i • f (y) • A 1, * ,j )|For simplicity, we defineG 6,1 : = c(x), Q 2 (x) • (A 1, * ,i • f (x) • A 1, * ,j ) -c(x), Q 2 (x) • (A 1, * ,i • f (y) • A 1, * ,j ) G 6,2 : = c(x), Q 2 (x) • (A 1, * ,i • f (y) • A 1, * ,j ) -c(x), Q 2 (y) • (A 1, * ,i • f (y) • A 1, * ,j ) G 6,3 : = c(x), Q 2 (y) • (A 1, * ,i • f (y) • A 1, * ,j ) -c(y), Q 2 (y) • (A 1, * ,i • f (y) • A 1, * ,j )First, we upper bound |G 6,1 |:", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "4 and Fact A.3, the last step follows from Part 1 and Part 2 of Lemma E.2, Part 6 of Lemma E.3 and A 1, * ,i 2 ≤ R.Then, we upper bound |G 6,3 |:|G 6,3 | = | c(x) -c(y), Q 2 (y) • (A 1, * ,i • f (y) • A 1, * ,j )| ≤ c(x) -c(y", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" } ]
Zhihang Li; Zhao Song; Zifan Wang; Junze Yin
[ { "authors": "Dario Amodei; Rishita Sundaram Ananthanarayanan; Jingliang Anubhai; Eric Bai; Carl Battenberg; Jared Case; Bryan Casper; Qiang Catanzaro; Guoliang Cheng; Chen", "journal": "PMLR", "ref_id": "b0", "title": "Deep speech 2: End-to-end speech recognition in english and mandarin", "year": "2016" }, { "authors": "Sanjeev Arora; Simon Du; Wei Hu; Zhiyuan Li; Ruslan Salakhutdinov; Ruosong Wang", "journal": "NeurIPS", "ref_id": "b1", "title": "On exact computation with an infinitely wide neural net", "year": "2019" }, { "authors": "Sanjeev Arora; Simon Du; Wei Hu; Zhiyuan Li; Ruosong Wang", "journal": "", "ref_id": "b2", "title": "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks", "year": "2019" }, { "authors": "Sanjeev Arora; Anirudh Goyal", "journal": "", "ref_id": "b3", "title": "A theory for emergence of complex skills in language models", "year": "2023" }, { "authors": "Josh Alman; Jiehao Liang; Zhao Song; Ruizhe Zhang; Danyang Zhuo", "journal": "", "ref_id": "b4", "title": "Bypass exponential time preprocessing: Fast neural network training via weight-data correlation preprocessing", "year": "2022" }, { "authors": "Josh Alman; Zhao Song", "journal": "", "ref_id": "b5", "title": "Fast attention requires bounded entries", "year": "2023" }, { "authors": "Ekin Akyürek; Dale Schuurmans; Jacob Andreas; Tengyu Ma; Denny Zhou", "journal": "", "ref_id": "b6", "title": "What learning algorithm is in-context learning? investigations with linear models", "year": "2022" }, { "authors": "Josh Alman; Virginia Vassilevska; Williams ", "journal": "SIAM", "ref_id": "b7", "title": "A refined laser method and faster matrix multiplication", "year": "2021" }, { "authors": "Zeyuan Allen-Zhu; Yuanzhi Li; Zhao Song", "journal": "", "ref_id": "b8", "title": "A convergence theory for deep learning via over-parameterization", "year": "2019" }, { "authors": "Zeyuan Allen-Zhu; Yuanzhi Li; Zhao Song", "journal": "NeurIPS", "ref_id": "b9", "title": "On the convergence rate of training recurrent neural networks", "year": "2019" }, { "authors": "Dzmitry Bahdanau; Kyunghyun Cho; Yoshua Bengio", "journal": "", "ref_id": "b10", "title": "Neural machine translation by jointly learning to align and translate", "year": "2014" }, { "authors": "Sebastien Bubeck; Varun Chandrasekaran; Ronen Eldan; Johannes Gehrke; Eric Horvitz; Ece Kamar; Peter Lee; Yin Tat Lee; Yuanzhi Li; Scott Lundberg; Harsha Nori; Hamid Palangi; Marco Tulio Ribeiro; Yi Zhang", "journal": "", "ref_id": "b11", "title": "Sparks of artificial general intelligence: Early experiments with gpt-4", "year": "2023" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b12", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Jan Van Den Brand; Binghui Peng; Zhao Song; Omri Weinstein", "journal": "", "ref_id": "b13", "title": "Training (overparametrized) neural networks in near-linear time", "year": "2020" }, { "authors": "Jan Van Den; Zhao Brand; Tianyi Song; Zhou", "journal": "", "ref_id": "b14", "title": "Algorithm and hardness for dynamic attention maintenance in large language models", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b15", "title": "CGH +", "year": "" }, { "authors": "Tianle Cai; Ruiqi Gao; Jikai Hou; Siyu Chen; Dong Wang; Di He; Zhihua Zhang; Liwei 
Wang", "journal": "", "ref_id": "b16", "title": "Gram-gauss-newton method: Learning overparameterized neural networks for regression problems", "year": "2019" }, { "authors": " Chatgpt", "journal": "OpenAI Blog", "ref_id": "b17", "title": "Optimizing language models for dialogue", "year": "2022-11" }, { "authors": "William Chan; Navdeep Jaitly; Quoc Le; Oriol Vinyals", "journal": "IEEE", "ref_id": "b18", "title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition", "year": "2016" }, { "authors": "Beidi Chen; Zichang Liu; Binghui Peng; Zhaozhuo Xu; Jonathan Lingjie Li; Tri Dao; Zhao Song; Anshumali Shrivastava; Re Mongoose Christopher", "journal": "", "ref_id": "b19", "title": "A learnable lsh framework for efficient neural network training", "year": "2021" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b20", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Jeffrey Donahue; Lisa Anne Hendricks; Sergio Guadarrama; Marcus Rohrbach; Subhashini Venugopalan; Kate Saenko; Trevor Darrell", "journal": "", "ref_id": "b21", "title": "Long-term recurrent convolutional networks for visual recognition and description", "year": "2015" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b22", "title": "Bert: Pretraining of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Yichuan Deng; Zhihang Li; Zhao Song", "journal": "", "ref_id": "b23", "title": "Attention scheme inspired softmax regression", "year": "2023" }, { "authors": "Yichuan Deng; Zhao Sridhar Mahadevan; Song", "journal": "", "ref_id": "b24", "title": "Randomized and deterministic attention sparsification algorithms for over-parameterized feature dimension", "year": "2023" }, { "authors": "Yichuan Deng; Zhao Song; Omri Weinstein", "journal": "", "ref_id": "b25", "title": "Discrepancy minimization in input sparsity time", "year": "2022" }, { "authors": "Yichuan Deng; Zhao Song; Shenghao Xie", "journal": "", "ref_id": "b26", "title": "Convergence of two-layer regression with nonlinear units", "year": "2023" }, { "authors": "Xiyu Simon S Du; Barnabas Zhai; Aarti Poczos; Singh", "journal": "", "ref_id": "b27", "title": "Gradient descent provably optimizes over-parameterized neural networks", "year": "2018" }, { "authors": "Alex Graves; Abdel-Rahman Mohamed; Geoffrey Hinton", "journal": "Ieee", "ref_id": "b28", "title": "Speech recognition with deep recurrent neural networks", "year": "2013" }, { "authors": "Yeqi Gao; Zhao Sridhar Mahadevan; Song", "journal": "", "ref_id": "b29", "title": "An over-parametrized exponential regression", "year": "2023" }, { "authors": "Yeqi Gao; Zhao Song; Weixin Wang; Junze Yin", "journal": "", "ref_id": "b30", "title": "A fast optimization view: Reformulating single layer attention in llm based on tensor and svm trick, and solving it in matrix multiplication time", "year": "2023" }, { "authors": "Yeqi Gao; Zhao Song; Shenghao Xie", "journal": "", "ref_id": "b31", "title": "In-context learning for attention scheme: from single softmax regression to multiple softmax regression via a tensor trick", "year": "2023" }, { "authors": "Yeqi Gao; Zhao Song; Junze Yin", "journal": "", "ref_id": "b32", "title": "Gradientcoin: A peer-to-peer decentralized large language models", "year": "2023" 
}, { "authors": "Yeqi Gao; Zhao Song; Xin Yang; Ruizhe Zhang", "journal": "", "ref_id": "b33", "title": "Fast quantum algorithm for attention computation", "year": "2023" }, { "authors": "Shivam Garg; Dimitris Tsipras; Percy Liang; Gregory Valiant", "journal": "", "ref_id": "b34", "title": "What can transformers learn in-context? a case study of simple function classes", "year": "2022" }, { "authors": "Baihe Huang; Xiaoxiao Li; Zhao Song; Xin Yang", "journal": "ICML", "ref_id": "b35", "title": "Fl-ntk: A neural tangent kernel-based framework for federated learning convergence analysis", "year": "2020" }, { "authors": "Weihua He; Yongyun Wu; Xiaohua Li", "journal": "IEEE", "ref_id": "b36", "title": "Attention mechanism for neural machine translation: A survey", "year": "2021" }, { "authors": "Ziwei Ji; Matus Telgarsky", "journal": "", "ref_id": "b37", "title": "Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow relu networks", "year": "2019" }, { "authors": "Nikita Kitaev; Lukasz Kaiser; Anselm Levskaya", "journal": "", "ref_id": "b38", "title": "Reformer: The efficient transformer", "year": "2020" }, { "authors": "Alex Krizhevsky; Ilya Sutskever; Geoffrey E Hinton", "journal": "Advances in neural information processing systems", "ref_id": "b39", "title": "Imagenet classification with deep convolutional neural networks", "year": "2012" }, { "authors": "Le François; Gall", "journal": "", "ref_id": "b40", "title": "Powers of tensors and fast matrix multiplication", "year": "2014" }, { "authors": "Yuanzhi Li; Yingyu Liang", "journal": "NeurIPS", "ref_id": "b41", "title": "Learning overparameterized neural networks via stochastic gradient descent on structured data", "year": "2018" }, { "authors": "Yuchen Li; Yuanzhi Li; Andrej Risteski", "journal": "", "ref_id": "b42", "title": "How do transformers learn topic structure: Towards a mechanistic understanding", "year": "2023" }, { "authors": "Minh-Thang Luong; Hieu Pham; Christopher D Manning", "journal": "", "ref_id": "b43", "title": "Effective approaches to attention-based neural machine translation", "year": "2015" }, { "authors": "Jason D Lee; Ruoqi Shen; Zhao Song; Mengdi Wang; Zheng Yu", "journal": "NeurIPS", "ref_id": "b44", "title": "Generalized leverage score sampling for neural networks", "year": "2020" }, { "authors": "Zhihang Li; Zhao Song; Tianyi Zhou", "journal": "", "ref_id": "b45", "title": "Solving regularized exp, cosh and sinh regression problems", "year": "2023" }, { "authors": "Sadhika Malladi; Tianyu Gao; Eshaan Nichani; Alex Damian; Jason D Lee; Danqi Chen; Sanjeev Arora", "journal": "", "ref_id": "b46", "title": "Fine-tuning language models with just forward passes", "year": "2023" }, { "authors": "Alexander Munteanu; Simon Omlor; Zhao Song; David Woodruff", "journal": "PMLR", "ref_id": "b47", "title": "Bounding the width of neural networks via coupled initialization a worst case analysis", "year": "2022" }, { "authors": "Sadhika Malladi; Alexander Wettig; Dingli Yu; Danqi Chen; Sanjeev Arora", "journal": "PMLR", "ref_id": "b48", "title": "A kernel-based view of language model fine-tuning", "year": "2023" }, { "authors": "Samet Oymak; Soltanolkotabi Mahdi", "journal": "IEEE Journal on Selected Areas in Information Theory", "ref_id": "b49", "title": "Toward moderate overparameterization: Global convergence guarantees for training shallow neural networks", "year": "2020" }, { "authors": " Openai", "journal": "", "ref_id": "b50", "title": "", "year": "2023" }, { "authors": 
"Abhishek Panigrahi; Sadhika Malladi; Mengzhou Xia; Sanjeev Arora", "journal": "", "ref_id": "b51", "title": "Trainable transformer in transformer", "year": "2023" }, { "authors": "Abhishek Panigrahi; Nikunj Saunshi; Haoyu Zhao; Sanjeev Arora", "journal": "", "ref_id": "b52", "title": "Task-specific skill localization in fine-tuned language models", "year": "2023" }, { "authors": "Alec Radford; Karthik Narasimhan; Tim Salimans; Ilya Sutskever", "journal": "", "ref_id": "b53", "title": "Improving language understanding by generative pre-training", "year": "2018" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b54", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Clayton Sanford; Daniel Hsu; Telgarsky ", "journal": "", "ref_id": "b55", "title": "Representational strengths and limitations of transformers", "year": "2023" }, { "authors": "Hasim Sak; Andrew W Senior; Françoise Beaufays", "journal": "", "ref_id": "b56", "title": "Long short-term memory recurrent neural network architectures for large scale acoustic modeling", "year": "2014" }, { "authors": "Zhao Song; Xin Yang; Yuanyuan Yang; Tianyi Zhou", "journal": "", "ref_id": "b57", "title": "Faster algorithm for structured john ellipsoid computation", "year": "2022" }, { "authors": "Zhao Song; Junze Yin; Lichen Zhang", "journal": "", "ref_id": "b58", "title": "Solving attention kernel regression problem via pre-conditioner", "year": "2023" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "", "ref_id": "b59", "title": "Very deep convolutional networks for largescale image recognition", "year": "2014" }, { "authors": "Charlie Snell; Ruiqi Zhong; Dan Klein; Jacob Steinhardt", "journal": "", "ref_id": "b60", "title": "Approximating how single head attention learns", "year": "2021" }, { "authors": "Zhao Song; Lichen Zhang; Ruizhe Zhang", "journal": "", "ref_id": "b61", "title": "Training multi-layer over-parametrized neural network in subquadratic time", "year": "2021" }, { "authors": "Mohd Usama; Belal Ahmad; Enmin Song; M Shamim Hossain; Mubarak Alrashoud; Ghulam Muhammad", "journal": "Future Generation Computer Systems", "ref_id": "b62", "title": "Attention-based sentiment analysis using convolutional and recurrent neural network", "year": "2020" }, { "authors": "Petar Veličković; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Lio; Yoshua Bengio", "journal": "", "ref_id": "b63", "title": "Graph attention networks", "year": "2017" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b64", "title": "Attention is all you need", "year": "2017" }, { "authors": "Virginia Vassilevska; Williams ", "journal": "", "ref_id": "b65", "title": "Multiplying matrices faster than coppersmithwinograd", "year": "2012" }, { "authors": "Xinyi Wang; Wanrong Zhu; William Yang; Wang ", "journal": "", "ref_id": "b66", "title": "Large language models are implicitly topic models: Explaning and finding good demonstrations for in-context learning", "year": "2023" }, { "authors": "Kelvin Xu; Jimmy Ba; Ryan Kiros; Kyunghyun Cho; Aaron Courville; Ruslan Salakhudinov; Rich Zemel; Yoshua Bengio", "journal": "PMLR", "ref_id": "b67", "title": "Show, attend and tell: Neural image caption generation with visual attention", "year": "2015" }, { "authors": "Ruiqi 
Zhang; Spencer Frei; Peter L Bartlett", "journal": "", "ref_id": "b68", "title": "Trained transformers learn linear models in-context", "year": "2023" }, { "authors": "Difan Zou; Quanquan Gu", "journal": "NeurIPS", "ref_id": "b69", "title": "An improved analysis of training over-parameterized deep neural networks", "year": "2019" }, { "authors": "Lichen Zhang", "journal": "", "ref_id": "b70", "title": "Speeding up optimizations via data structures: Faster search, sample and maintenance", "year": "2022" }, { "authors": "Amir Zandieh; Insu Han; Majid Daliri; Amin Karbasi", "journal": "", "ref_id": "b71", "title": "Kdeformer: Accelerating transformers via kernel density estimation", "year": "2023" }, { "authors": "Jingzhao Zhang; Sai Praneeth Karimireddy; Andreas Veit; Seungyeon Kim; Sashank Reddi; Sanjiv Kumar; Suvrit Sra", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b72", "title": "Why are adaptive methods good for attention models", "year": "2020" }, { "authors": "Yi Zhang; Orestis Plevrakis; Xingguo Simon S Du; Zhao Li; Sanjeev Song; Arora", "journal": "", "ref_id": "b73", "title": "Over-parameterized adversarial training: An analysis overcoming the curse of dimensionality", "year": "2020" }, { "authors": "Haoyu Zhao; Abhishek Panigrahi; Rong Ge; Sanjeev Arora", "journal": "", "ref_id": "b74", "title": "Do transformers parse while predicting the masked word?", "year": "2023" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin", "journal": "", "ref_id": "b75", "title": "Opt: Open pre-trained transformer language models", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 231.48, 510.02, 149.07, 15.17 ], "formula_id": "formula_0", "formula_text": "D(X) := diag(exp(A 1 XA ⊤ 2 )1 n )" }, { "formula_coordinates": [ 2, 199.8, 555.02, 212.3, 21.01 ], "formula_id": "formula_1", "formula_text": "min X,Y ∈R d×d D(X) -1 exp(A 1 XA ⊤ 2 )A 3 Y -B 2 F ." }, { "formula_coordinates": [ 2, 217.68, 668.06, 176.54, 20.89 ], "formula_id": "formula_2", "formula_text": "min x∈R d exp(Ax), 1 n -1 exp(Ax) -c 2 2 ." }, { "formula_coordinates": [ 3, 72, 181.94, 342.87, 38.09 ], "formula_id": "formula_3", "formula_text": "exp(A 2 φ(A 1 x)), 1 m -1 exp(A 2 φ(A 1 x)) -b 2 2 under conditions min x∈{ x 2 ≤R,x∈R d } ." }, { "formula_coordinates": [ 3, 72, 317.38, 467.95, 70.49 ], "formula_id": "formula_4", "formula_text": "A 1 ∈ R n×d , A 2 ∈ R m×n , b ∈ R m , and x ∈ R d . Let L : R d → R be L(x) := 1 2 • h(A 2 exp(A 1 x), 1 n -1 • exp(A 1 x)) -b 2 2 ." }, { "formula_coordinates": [ 3, 281.88, 417.7, 48.14, 18.33 ], "formula_id": "formula_5", "formula_text": "min x∈R d L(x)." }, { "formula_coordinates": [ 4, 244.08, 276.38, 123.74, 21.01 ], "formula_id": "formula_6", "formula_text": "Pr[ x -x * 2 ≤ ǫ] ≥ 1 -δ." }, { "formula_coordinates": [ 6, 77.4, 281.3, 314.54, 21.49 ], "formula_id": "formula_7", "formula_text": "x 1 := n i=1 |x i |, x 2 := ( n i=1 x 2 i ) 1/2 , and x ∞ := max i∈[n] |x i |." }, { "formula_coordinates": [ 6, 72, 462.86, 285.74, 143.41 ], "formula_id": "formula_8", "formula_text": "Definition 3.1. Given A 1 ∈ R n×d , let u(x) : R d → R n >0 be u(x) := exp(A 1 • x) Definition 3.2. Let α(x) : R d → R >0 be α(x) := u(x), 1 n . Definition 3.3. Let f (x) : R d → R n >0 be f (x) := α(x) -1 • u(x)." }, { "formula_coordinates": [ 6, 200.88, 655.94, 214.95, 74.53 ], "formula_id": "formula_9", "formula_text": "f (x) 1 = α(x) -1 • u(x) 1 = exp(A 1 • x), 1 n -1 • exp(A 1 • x) 1 = (exp(A 1 • x) • 1 ⊤ n ) -1 • exp(A 1 • x) 1 = exp(A 1 • x) -1 1 • exp(A 1 • x) 1 = exp(A 1 • x) -1 1 • exp(A 1 • x) 1 = 1," }, { "formula_coordinates": [ 7, 236.52, 194.02, 144.26, 18.65 ], "formula_id": "formula_10", "formula_text": "h(x) -h(y) 2 ≤ L h • x -y 2 ." }, { "formula_coordinates": [ 7, 72, 245.54, 371.5, 120.13 ], "formula_id": "formula_11", "formula_text": "h ′ (x) -h ′ (y) 2 ≤ L h • x -y 2 . Definition 3.6. Let h : R m → R m (Definition 3.5), A 2 ∈ R m×n , and b ∈ R m . Let c(x) : R d → R m be c(x) := h(A 2 f (x)) -b Definition 3.7. Let L : R d → R >0 be L(x) := 1 2 • c(x) 2 2 ." }, { "formula_coordinates": [ 7, 249.96, 462.74, 111.63, 15.17 ], "formula_id": "formula_12", "formula_text": "L reg (x) := 0.5 W A 1 x 2 2" }, { "formula_coordinates": [ 7, 88.44, 531.59, 277.79, 93.22 ], "formula_id": "formula_13", "formula_text": "• The gradient is dL reg dx = A ⊤ 1 W 2 A 1 x • The Hessian is d 2 L reg dx 2 = A ⊤ 1 W 2 A 1" }, { "formula_coordinates": [ 7, 245.76, 711.82, 120.51, 11.85 ], "formula_id": "formula_14", "formula_text": "L tot (x) := L(x) + L reg (x)" }, { "formula_coordinates": [ 8, 88.44, 246.38, 355.58, 65.41 ], "formula_id": "formula_15", "formula_text": "• A 1, * ,i ∈ R n represents the i-th column vector of A 1 ∈ R n×d for all i ∈ [d] • Let A 1,l, * ∈ R d denote the l-th row vector of A 1 ∈ R n×d for all l ∈ [n] • Let A 2,k, * ∈ R n denote the k-th row vector A 2 ∈ R m×n for each k ∈ [m]" }, { "formula_coordinates": [ 8, 88.44, 336.5, 450.38, 87.97 ], "formula_id": "formula_16", "formula_text": "• Part 1. Let p(x) i ∈ R n be defined as p(x) i := f (x) • A 1, * ,i -f (x), A 1, * ,i • f (x) df (x) dx i = p(x) i • Part 2. 
Let h ′ (A 2 f (x)) ∈ R m denote a length-m vector i-th coordinate is the dh(y i ) dy i y i =(A 2 f (x)) i" }, { "formula_coordinates": [ 8, 88.44, 444.22, 337.56, 191.73 ], "formula_id": "formula_17", "formula_text": "dh(A 2 f (x)) dx i m×1 = diag(h ′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1 • Part 3. dL(x) dx i = c(x) m×1 , diag(h ′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1 • Part 4. Let h ′ (A 2 f (x)) ∈ R m dh ′ (A 2 f (x)) dx i m×1 = diag(h ′′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1" }, { "formula_coordinates": [ 9, 88.44, 72.5, 164.9, 67.97 ], "formula_id": "formula_18", "formula_text": "• Q 2 (x) = diag(h ′ (A 2 f (x))) m×m • A 2 m×n • • q 2 (x) = Q 2 (x) ⊤ n×m c(x) m×1" }, { "formula_coordinates": [ 9, 182.76, 199.7, 246.39, 29.81 ], "formula_id": "formula_19", "formula_text": "g(x) := -A ⊤ 1 d×n (f (x) n×1 q 2 (x), f (x) scalar + diag(f (x)) n×n q 2 (x) n×1 )" }, { "formula_coordinates": [ 9, 72, 265.9, 368.55, 54.78 ], "formula_id": "formula_20", "formula_text": "g(x) i := -A 1, * ,i , f (x) scalar q 2 (x), f (x) scalar + A 1, * ,i , f (x) • q 2 (x) scalar 4.3 Re-oragnizing B(x)" }, { "formula_coordinates": [ 9, 270.96, 376.22, 69.6, 34.73 ], "formula_id": "formula_21", "formula_text": "B(x) = 12 i=1 B i" }, { "formula_coordinates": [ 9, 202.8, 443.66, 206.31, 276.17 ], "formula_id": "formula_22", "formula_text": "B 1 = diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) B 2 = diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ B 3 = f (x)f (x) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) B 4 = f (x) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ B 5 = 2f (x)c(x) ⊤ Q 2 (x)f (x)f (x) ⊤ B 6 = 2f (x)c(x) ⊤ Q 2 (x) diag(f (x)) B 7 = diag(Q 2 (x) ⊤ c(x)) diag(f (x)) B 8 = diag(f (x))A 2 diag(h ′′ (A 2 f (x))) • diag(c(x)) diag(f (x)) B 9 = diag(f (x))A 2 diag(h ′′ (A 2 f (x))) • diag(c(x))f (x)f (x) ⊤ B 10 = f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) • diag(c(x)) diag(f (x)) B 11 = f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) • diag(c(x))f (x)f (x) ⊤ B 12 = diag(f (x)f (x) ⊤ Q 2 (x) ⊤ c(x))." }, { "formula_coordinates": [ 10, 266.16, 109.82, 80.9, 27.55 ], "formula_id": "formula_23", "formula_text": "d 2 L dx 2 = A ⊤ B(x)A • Let max{ h(A 2 f (x)) 2 , h ′ (A 2 f (x)) 2 } ≤ R h . • Assume h(x) -h(y) 2 ≤ L h • x -y 2 • Assume h ′ (x) -h ′ (y) 2 ≤ L h • x -y 2" }, { "formula_coordinates": [ 11, 187.92, 163.82, 235.71, 38.65 ], "formula_id": "formula_24", "formula_text": "∇ 2 L(x) -∇ 2 L(y) ≤ 59(R + R h )n 2 exp(4R 2 )β -4 R 5 R 2 h R f L h x -y 2" }, { "formula_coordinates": [ 11, 186.48, 243.38, 239.06, 76.69 ], "formula_id": "formula_25", "formula_text": "∇ 2 L(x) -∇ 2 L(y) = 6 i=1 (G i (x) -G i (y)) ≤ 59(R + R h )n 2 exp(4R 2 )β -4 R 5 R 2 h R f L h x -y 2 ," }, { "formula_coordinates": [ 11, 283.8, 485.38, 44.43, 18.45 ], "formula_id": "formula_26", "formula_text": "x∈R d L(x)" }, { "formula_coordinates": [ 11, 111.6, 593.18, 91.3, 37.81 ], "formula_id": "formula_27", "formula_text": "-∇L(x * ) = 0 d . -∇ 2 L(x * ) l • I d ." 
}, { "formula_coordinates": [ 11, 239.64, 656.66, 164.91, 21.01 ], "formula_id": "formula_28", "formula_text": "∇ 2 L(y) -∇ 2 L(x) ≤ M • y -x 2" }, { "formula_coordinates": [ 11, 292.92, 711.82, 53.05, 18.65 ], "formula_id": "formula_29", "formula_text": "r 0 M ≤ 0.1l" }, { "formula_coordinates": [ 12, 271.44, 133.3, 69.15, 18.65 ], "formula_id": "formula_30", "formula_text": "g(x) := ∇L(x)" }, { "formula_coordinates": [ 12, 266.88, 191.42, 78.15, 21.01 ], "formula_id": "formula_31", "formula_text": "H(x) := ∇ 2 L(x)" }, { "formula_coordinates": [ 12, 241.2, 294.98, 129.63, 21.01 ], "formula_id": "formula_32", "formula_text": "x t+1 = x t -H(x t ) -1 • g(x t )" }, { "formula_coordinates": [ 12, 203.28, 432.7, 205.34, 18.65 ], "formula_id": "formula_33", "formula_text": "(1 -ǫ 0 ) • H(x t ) H(x t ) (1 + ǫ 0 ) • H(x t )." }, { "formula_coordinates": [ 12, 72, 528.02, 467.74, 83.53 ], "formula_id": "formula_34", "formula_text": "O((nnz(A) + d ω ) poly(log(n/δ))) running time. This algorithm generates a matrix D ∈ R n×n , which is O(d log(n/δ)) sparse diagonal, such that (1 -ǫ 0 )A ⊤ DA A ⊤ DA (1 + ǫ 0 )A ⊤ DA." }, { "formula_coordinates": [ 12, 233.04, 688.42, 145.82, 18.65 ], "formula_id": "formula_35", "formula_text": "r t+1 ≤ 2 • (ǫ 0 + r t /(l -r t )) • r t ," }, { "formula_coordinates": [ 13, 72, 182.86, 97.81, 66.5 ], "formula_id": "formula_36", "formula_text": "• r t+1 ≤ 0.4r t • M • r t+1 ≤ 0.1l7" }, { "formula_coordinates": [ 13, 77.88, 300.38, 462.05, 25.34 ], "formula_id": "formula_37", "formula_text": "1: procedure OurAlgorithm(b ∈ R n , A ∈ R n×d , w ∈ R n , ǫ, δ) ⊲ Theorem 7.1 2:" }, { "formula_coordinates": [ 13, 77.88, 327.5, 132.5, 20.41 ], "formula_id": "formula_38", "formula_text": "3: T ← log( x 0 -x * 2 /ǫ" }, { "formula_coordinates": [ 13, 77.88, 342.82, 462.06, 59.33 ], "formula_id": "formula_39", "formula_text": "for t = 0 → T do 5: D ← B(x t ) + diag(w • w) 6: D ← SubSample(D, A, ǫ 1 = Θ(1), δ 1 = δ/T ) ⊲ Lemma 6.5 7: g ← -A ⊤ 1 (f (x) q 2 (x), f (x) + diag(f (x))q 2 (x))" }, { "formula_coordinates": [ 13, 77.88, 396.26, 105.14, 25.34 ], "formula_id": "formula_40", "formula_text": "H ← A ⊤ DA 9:" }, { "formula_coordinates": [ 13, 88.44, 521.62, 376.94, 104.93 ], "formula_id": "formula_41", "formula_text": "• We have L(x) be established in Definition 3.7 • Suppose A ≤ R • Suppose x 2 ≤ R • x * represents the solution of min x∈R d L(x) • Let l be a scalar such that w 2 i ≥ 12R h L h R(R + R h ) + l/σ min (A) 2 for ∀i ∈ [n]" }, { "formula_coordinates": [ 13, 88.44, 649.22, 238.4, 15.29 ], "formula_id": "formula_42", "formula_text": "• Let M = 59(R + R h )n 2 exp(4R 2 )β -4 R 5 R 2 h R f L h" }, { "formula_coordinates": [ 14, 248.64, 226.1, 147.38, 21.01 ], "formula_id": "formula_43", "formula_text": "x k -x * 2 ≤ 0.4 • x k-1 -x * 2 ." }, { "formula_coordinates": [ 14, 251.76, 279.62, 140.67, 20.89 ], "formula_id": "formula_44", "formula_text": "x T -x * 2 ≤ 0.4 T • x 0 -x * 2" }, { "formula_coordinates": [ 15, 72, 240.74, 355.34, 21.49 ], "formula_id": "formula_45", "formula_text": "A ∈ R n×d , exp(A) denotes the n × d matrix given by exp(A) = ∞ i=0 1 i! A i ." }, { "formula_coordinates": [ 15, 109.08, 267.86, 314.54, 21.49 ], "formula_id": "formula_46", "formula_text": "x 1 := n i=1 |x i |, x 2 := ( n i=1 x 2 i ) 1/2 , and x ∞ := max i∈[n] |x i |." }, { "formula_coordinates": [ 15, 72, 284.26, 467.93, 45.77 ], "formula_id": "formula_47", "formula_text": "A := sup x∈R k Ax 2 / x 2 For two vectors a, b ∈ R n , we define a, b := n i=1 a i b i . 
For two vectors a, b ∈ R n , we use a • b to denote the vector where its i-th entry is a i b i for i ∈ [n]. For x ∈ R n ," }, { "formula_coordinates": [ 15, 88.44, 471.62, 265.71, 199.15 ], "formula_id": "formula_48", "formula_text": "• a, b scalar c n×1 = a ⊤ b scalar c n×1 = c n×1 a ⊤ 1×n b n×1 = c n×1 b ⊤ 1×n a n×1 • a • b n×1 = b • a n×1 = diag(a) n×n b n×1 = diag(b) n×n a n×1 • a ⊤ 1×n (b • c) n×1 = b ⊤ 1×n (a • c) n×1 = c ⊤ 1×n (a • b) n×1 • diag(a • b) n×n = diag(a) n×n diag(b) n×n • diag(a + b) n×1 = diag(a) n×1 + diag(b) n×1 • a, b + c, b = a + c, b = b, a + c = b, a + b, c ." }, { "formula_coordinates": [ 16, 72, 72.02, 402.1, 342.13 ], "formula_id": "formula_49", "formula_text": "• d f (x),g(x) dt = df (x) dt , g(x) + f (x), dg(x) dt • d dt (f (x) + g(x)) = df (x) dt + dg(x) dt • d dt (f (x) • g(x)) = f (x) • dg(x) dt + g(x) • df (x) dt Fact A.3. For two length-n column vectors u, v ∈ R n , we have • u, v ≤ u 2 • v 2 (Cauchy-Schwarz Inequality) • u, v = u • v, 1 n • for all real number a, au 2 = |a| • u 2 • u ⊤ 2 = u 2 • u + v 2 ≤ u 2 + v 2 • u • v 2 ≤ u ∞ • v 2 • diag(u) ≤ u ∞ • u ∞ ≤ u 2 ≤ √ n • u ∞ • u 2 ≤ u 1 ≤ √ n • u 2 • exp(u) ∞ ≤ exp( u ∞ ) ≤ exp( u 2 ) • if u 2 , v 2 ≤ R, then exp(u) -exp(v) 2 ≤ exp(R) • u -v 2 , for all R ≥ 4." }, { "formula_coordinates": [ 16, 72, 440.5, 300.94, 176.33 ], "formula_id": "formula_50", "formula_text": "• For a scalar c ∈ R, we have c • A ≤ |c| • A • A ⊤ = A • A + B ≤ A + B • A • B ≤ A • B • For any vector x, we have Ax 2 ≤ A • x 2 • For two vectors a, b ∈ R n , we have ab ⊤ ≤ a 2 b 2 Fact A.5. For two length-n column vectors u, v ∈ R n , we have • uu ⊤ u 2 2 • I n ." }, { "formula_coordinates": [ 17, 88.44, 298.82, 357.98, 64.33 ], "formula_id": "formula_51", "formula_text": "• Let A 1, * ,i ∈ R n denote the i-th column vector of A 1 ∈ R n×d for all i ∈ [d] • Let A 1,l, * ∈ R d denote the l-th row vector of A 1 ∈ R n×d for all l ∈ [n] • Let A 2,k, * ∈ R n denote the k-th row vector A 2 ∈ R m×n for each k ∈ [m]" }, { "formula_coordinates": [ 17, 88.44, 387.23, 386.43, 278.12 ], "formula_id": "formula_52", "formula_text": "• Part 1. du(x) dx i = u(x) • A 1, * ,i • Part 2. dα(x) dx i = u(x), A 1, * ,i • Part 3. dα(x) -1 dx i = -α(x) -1 • f (x), A 1, * ,i • Part 4. Let p(x) i ∈ R n be defined as p(x) i := f (x) • A 1, * ,i -f (x), A 1, * ,i • f (x) df (x) dx i = p(x) i • Part 5. d f (x), A 1, * ,i dx i = -f (x), A 1, * ,i 2 + f (x), A 1, * ,i • A 1, * ,i" }, { "formula_coordinates": [ 17, 157.32, 699.22, 320.58, 26.73 ], "formula_id": "formula_53", "formula_text": "d f (x), A 1, * ,i dx j = -f (x), A 1, * ,i • f (x), A 1, * ,j + f (x), A 1, * ,i • A 1, * ,j • Part 7. Let h ′ (A 2 f (x)) ∈ R m denote a length-m vector i-th coordinate is the dh(y i ) dy i y i =(A 2 f (x)) i" }, { "formula_coordinates": [ 18, 72, 111.82, 467.91, 370.65 ], "formula_id": "formula_54", "formula_text": "dh(A 2 f (x)) dx i m×1 = diag(h ′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1 • Part 8. dc(x) dx i = dh(A 2 f (x)) dx i • Part 9. dL(x) dx i = c(x) m×1 , diag(h ′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1 • Part 10. Let h ′ (A 2 f (x)) ∈ R m dh ′ (A 2 f (x)) dx i m×1 = diag(h ′′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1 Proof. Proof of Part 1. We have du(x) dx i = d(exp(A 1 • x)) dx i = exp(A 1 x) • d(A 1 x) dx i = exp(A 1 x) • (A 1 • dx dx i ),(1)" }, { "formula_coordinates": [ 18, 279.48, 537.82, 52.94, 26.73 ], "formula_id": "formula_55", "formula_text": "( dx dx i ) i = 1." 
}, { "formula_coordinates": [ 18, 279, 592.18, 53.9, 26.74 ], "formula_id": "formula_56", "formula_text": "( dx dx i ) j = 0." }, { "formula_coordinates": [ 18, 285.12, 646.42, 42.98, 26.73 ], "formula_id": "formula_57", "formula_text": "dx dx i = e i ," }, { "formula_coordinates": [ 18, 265.44, 700.78, 274.47, 26.74 ], "formula_id": "formula_58", "formula_text": "A 1 • dx dx i = A 1, * ,i .(2)" }, { "formula_coordinates": [ 19, 255.72, 95.86, 101.78, 26.61 ], "formula_id": "formula_59", "formula_text": "du(x) dx i = u(x) • A 1, * ,i ." }, { "formula_coordinates": [ 19, 223.2, 161.86, 160.89, 87.09 ], "formula_id": "formula_60", "formula_text": "dα(x) dx i = d u(x), 1 n dx i = du(x) dx i , 1 n + u(x), d1 n dx i = u(x) • A 1, * ,i , 1 n = u(x), A 1, * ,i ," }, { "formula_coordinates": [ 19, 225.96, 317.9, 156.36, 68.17 ], "formula_id": "formula_61", "formula_text": "dα(x) -1 dx i = -α(x) -2 • d dx i α(x) = -α(x) -2 u(x), A 1, * ,i = -α(x) -1 f (x), A 1, * ,i" }, { "formula_coordinates": [ 19, 162.48, 434.66, 287.76, 96.61 ], "formula_id": "formula_62", "formula_text": "df (x) dx i = d(α(x) -1 u(x)) dx i = u(x) • dα(x) -1 dx i + α(x) -1 • du(x) dx i = -α(x) -2 u(x), A 1, * ,i • u(x) + α(x) -1 • u(x) • A 1, * ,i = -f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i" }, { "formula_coordinates": [ 19, 155.28, 594.34, 302.04, 85.37 ], "formula_id": "formula_63", "formula_text": "d f (x), A 1, * ,i dx i = A ⊤ 1, * ,i df (x) dx i = A ⊤ 1, * ,i (-f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i ) = -f (x), A 1, * ,i • A ⊤ 1, * ,i f (x) + A ⊤ 1, * ,i f (x) • A 1, * ,i = -f (x), A 1, * ,i 2 + f (x), A 1, * ,i • A 1, * ,i" }, { "formula_coordinates": [ 19, 72, 698.26, 467.96, 24.47 ], "formula_id": "formula_64", "formula_text": "u, v = u ⊤ v = v ⊤ u. Proof of Part 6. For j ∈ [d], i ∈ [d] and j = i d f (x), A 1, * ,i dx j = A ⊤ 1, * ,i df (x) dx j = A ⊤ 1, * ,i (-f (x), A 1, * ,j • f (x) + f (x) • A 1, * ,j ) = -f (x), A 1, * ,j • A ⊤ 1, * ,i f (x) + A ⊤ 1, * ,i f (x) • A 1, * ,j = -f (x), A 1, * ,j • f (x), A 1, * ,i + A 1, * ,i , f (x) • A 1, * ,j = -f (x), A 1, * ,i • f (x), A 1, * ,j + f (x), A 1, * ,i • A 1, * ,j" }, { "formula_coordinates": [ 20, 88.92, 233.14, 386.64, 174.89 ], "formula_id": "formula_65", "formula_text": "• w = v, u • w . Proof of Part 7. For k ∈ [m], dh(A 2 f (x)) k dx i = h ′ (A 2 f (x)) k • d(A 2 f (x)) k dx i = h ′ (A 2 f (x)) k • d A ⊤ 2,k, * , f (x) dx i = h ′ (A 2 f (x)) k • A ⊤ 2,k, * n×1 , df (x) dx i n×1 = h ′ (A 2 f (x)) k • A ⊤ 2,k, * , -f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i" }, { "formula_coordinates": [ 20, 88.92, 462.7, 397.23, 154.65 ], "formula_id": "formula_66", "formula_text": "dh(A 2 f (x)) dx i m×1 = diag(h ′ (A 2 f (x))) m×m • A 2 m×n • (-f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i ) n×1 Proof of Part 8. 
dc(x) dx i = dh(A 2 f (x)) -b dx i = dh(A 2 f (x)) dx i - db dx i = dh(A 2 f (x)) dx i" }, { "formula_coordinates": [ 20, 116.88, 672.1, 114.03, 55.41 ], "formula_id": "formula_67", "formula_text": "dL(x) dx i = d dx i ( 1 2 c(x) 2 2 ) = (c(x)) ⊤ dc(x) dx i = (c(x)) ⊤ dh(A 2 f (x)) dx i = (c(x)) ⊤ 1×m • diag(h ′ (A 2 f (x))) m×m • A 2 m×n • (-f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i ) n×1 ," }, { "formula_coordinates": [ 21, 204.12, 203.42, 335.79, 28.37 ], "formula_id": "formula_68", "formula_text": "dh ′ (A 2 f (x)) k dx i = h ′′ (A 2 f (x)) k • d(A 2 f (x)) k dx i .(3)" }, { "formula_coordinates": [ 21, 72, 263.38, 467.91, 85.05 ], "formula_id": "formula_69", "formula_text": "h ′ (A 2 f (x)) k • d(A 2 f (x)) k dx i = h ′ (A 2 f (x)) k • A ⊤ 2,k, * , -f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i , which implies that d(A 2 f (x)) k dx i = A ⊤ 2,k, * , -f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i .(4)" }, { "formula_coordinates": [ 21, 129.12, 379.7, 354.98, 28.37 ], "formula_id": "formula_70", "formula_text": "dh ′ (A 2 f (x)) k dx i = h ′′ (A 2 f (x)) k • A ⊤ 2,k, * , -f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i ." }, { "formula_coordinates": [ 21, 123, 438.14, 367.22, 42.65 ], "formula_id": "formula_71", "formula_text": "dh ′ (A 2 f (x)) dx i m×1 = diag(h ′′ (A 2 f (x))) m×m • A 2 m×n • (-f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i ) n×1 ," }, { "formula_coordinates": [ 22, 88.44, 75.11, 301.32, 110.48 ], "formula_id": "formula_72", "formula_text": "• Part 1. d 2 u(x) dx 2 i = A 1, * ,i • u(x) • A 1, * ,i • Part 2. d 2 u(x) dx i dx j = A 1, * ,j • u(x) • A 1, * ,i" }, { "formula_coordinates": [ 22, 237, 222.26, 138.72, 108.85 ], "formula_id": "formula_73", "formula_text": "d 2 u(x) dx 2 i = d dx i ( du(x) dx i ) = d dx i (u(x) • A 1, * ,i ) = A 1, * ,i • du(x) dx i = A 1, * ,i • u(x) • A 1, * ,i" }, { "formula_coordinates": [ 22, 236.28, 396.86, 140.16, 108.61 ], "formula_id": "formula_74", "formula_text": "d 2 u(x) dx i dx j = d dx i ( du(x) dx j ) = d dx i (u(x) • A 1, * ,j ) = A 1, * ,j • du(x) dx i = A 1, * ,j • u(x) • A 1, * ,i" }, { "formula_coordinates": [ 22, 88.44, 678.35, 297, 49.16 ], "formula_id": "formula_75", "formula_text": "• Part 1. d 2 α(x) dx 2 i = u(x), A 1, * i • A 1, * ,i • Part 2. d 2 α(x) dx i dx j = u(x), A 1, * ,i • A 1, * ,j" }, { "formula_coordinates": [ 23, 236.28, 159.26, 137.04, 127.93 ], "formula_id": "formula_76", "formula_text": "d 2 α(x) dx 2 i = d dx i ( dα(x) dx i ) = d u(x), A 1, * ,i dx i = A ⊤ 1, * ,i du(x) dx i = A ⊤ 1, * ,i • u(x) • A 1, * ,i = u(x), A 1, * i • A 1, * ,i" }, { "formula_coordinates": [ 23, 234.6, 352.94, 138.78, 127.57 ], "formula_id": "formula_77", "formula_text": "d 2 α(x) dx i dx j = d dx i ( dα(x) dx j ) = d u(x), A 1, * ,j dx i = A ⊤ 1, * ,j du(x) dx i = A ⊤ 1, * ,j • u(x) • A 1, * ,i = u(x), A 1, * ,i • A 1, * ,j" }, { "formula_coordinates": [ 23, 88.44, 676.31, 382.44, 49.16 ], "formula_id": "formula_78", "formula_text": "• Part 1. d 2 α(x) -1 dx 2 i = 2α(x) -1 • f (x), A 1, * ,i 2 -α(x) -1 f (x), A 1, * ,i • A 1, * ,i • Part 2. 
d 2 α(x) -1 dx i dx j = 2α(x) -1 • f (x), A 1, * ,i f (x), A 1, * ,j -α(x) -1 f (x), A 1, * ,i • A 1, * ,j" }, { "formula_coordinates": [ 24, 106.44, 151.82, 400.23, 127.69 ], "formula_id": "formula_79", "formula_text": "d 2 α(x) -1 dx 2 i = d dx i ( dα(x) -1 dx i ) = d dx i (-α(x) -1 • f (x), A 1, * ,i ) = - dα(x) -1 dx i • f (x), A 1, * ,i -α(x) -1 d f (x), A 1, * ,i dx i = α(x) -1 • f (x), A 1, * ,i 2 -α(x) -1 (-f (x), A 1, * ,i 2 + f (x), A 1, * ,i • A 1, * ,i ) = 2α(x) -1 • f (x), A 1, * ,i 2 -α(x) -1 f (x), A 1, * ,i • A 1, * ,i" }, { "formula_coordinates": [ 24, 72, 340.58, 472.11, 158.53 ], "formula_id": "formula_80", "formula_text": "d 2 α(x) -1 dx i dx j = d dx i ( dα(x) -1 dx j ) = d dx i (-α(x) -1 • f (x), A 1, * ,j ) = - dα(x) -1 dx i • f (x), A 1, * ,j -α(x) -1 d f (x), A 1, * ,j dx i = α(x) -1 • f (x), A 1, * ,i f (x), A 1, * ,j -α(x) -1 (-f (x), A 1, * ,j • f (x), A 1, * ,i + f (x), A 1, * ,i • A 1, * ,j ) = 2α(x) -1 • f (x), A 1, * ,i f (x), A 1, * ,j -α(x) -1 f (x), A 1, * ,i • A 1, * ,j" }, { "formula_coordinates": [ 24, 88.44, 661.67, 432.6, 68.8 ], "formula_id": "formula_81", "formula_text": "• Part 1. d 2 f (x) dx 2 i = 2 f (x), A 1, * ,i 2 f (x) -f (x), A 1, * ,i • A 1, * ,i f (x) -2 f (x), A 1, * ,i f (x) • A 1, * ,i + A 1, * ,i • f (x) • A 1, * ,i • Part 2. d 2 f (x) dx i dx j = 2 f (x), A 1, * ,i f (x), A 1, * ,j f (x) -f (x), A 1, * ,i • A 1, * ,j f (x) -f (x), A 1, * ,j f (x) • A 1, * ,i -f (x), A 1, * ,i f (x) • A 1, * ,j + A 1, * ,i • f (x) • A 1, * ,j • Part 3. dp i (x) dx i = d 2 f (x) dx 2 i • Part 4. dp i (x) dx j = d 2 f (x) dx i dx j" }, { "formula_coordinates": [ 25, 73.2, 306.26, 506.52, 143.41 ], "formula_id": "formula_82", "formula_text": "d 2 f (x) dx 2 i = d dx i ( df (x) dx i ) = d dx i (-f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i ) = - d f (x), A 1, * ,i dx i f (x) -f (x), A 1, * ,i df (x) dx i + A 1, * ,i • df (x) dx i = f (x), A 1, * ,i 2 f (x) -f (x), A 1, * ,i • A 1, * ,i f (x) + f (x), A 1, * ,i 2 f (x) -f (x), A 1, * ,i f (x) • A 1, * ,i -f (x), A 1, * ,i f (x) • A 1, * ,i + A 1, * ,i • f (x) • A 1, * ,i = 2 f (x), A 1, * ,i 2 f (x) -f (x), A 1, * ,i • A 1, * ,i f (x) -2 f (x), A 1, * ,i f (x) • A 1, * ,i + A 1, * ,i • f (x) • A 1, * ,i" }, { "formula_coordinates": [ 25, 73.2, 515.42, 473.67, 158.29 ], "formula_id": "formula_83", "formula_text": "d 2 f (x) dx i dx j = d dx i ( df (x) dx j ) = d dx i (-f (x), A 1, * ,j • f (x) + f (x) • A 1, * ,j ) = - d f (x), A 1, * ,j dx i f (x) -f (x), A 1, * ,j df (x) dx i + A 1, * ,j • df (x) dx i = f (x), A 1, * ,i f (x), A 1, * ,j f (x) -f (x), A 1, * ,i • A 1, * ,j f (x) + f (x), A 1, * ,j f (x), A 1, * ,i f (x) -f (x), A 1, * ,j f (x) • A 1, * ,i -f (x), A 1, * ,i f (x) • A 1, * ,j + A 1, * ,i • f (x) • A 1, * ,j = 2 f (x), A 1, * ,i f (x), A 1, * ,j f (x) -f (x), A 1, * ,i • A 1, * ,j f (x) -f (x), A 1, * ,j f (x) • A 1, * ,i -f (x), A 1, * ,i f (x) • A 1, * ,j + A 1, * ,i • f (x) • A 1, * ,j" }, { "formula_coordinates": [ 26, 88.92, 75.31, 263.43, 76.33 ], "formula_id": "formula_84", "formula_text": "Proof of Part 3 dp(x) i dx i = d dx i df (x) dx i = d 2 f (x) dx 2 i" }, { "formula_coordinates": [ 26, 259.2, 205.54, 93.63, 57.81 ], "formula_id": "formula_85", "formula_text": "dp(x) i dx j = d dx j df (x) dx i = d 2 f (x) dx i dx j" }, { "formula_coordinates": [ 26, 88.44, 377.06, 460.83, 129.61 ], "formula_id": "formula_86", "formula_text": "• B 1 (x) ∈ R n×n such that A ⊤ 1, * ,i B 1 (x)A 1, * ,j = (Q 2 (x) • (-f (x), A 1, * ,i • f (x) + f (x) • A 1, * ,i )) ⊤ • (Q 2 (x) • (-f (x), A 1, * ,j 
• f (x) + f (x) • A 1, * ,j )) • B 2 (x) ∈ R n×n such that A ⊤ 1, * ,i B 2 (x)A 1, * ,j = q 2 (x) ⊤ • (2 f (x), A 1, * ,i f (x), A 1, * ,j f (x) -f (x), A 1, * ,i • A 1, * ,j f (x) -f (x), A 1, * ,j f (x) • A 1, * ,i -f (x), A 1, * ,i f (x) • A 1, * ,j + A 1, * ,i • f (x) • A 1, * ,j )" }, { "formula_coordinates": [ 26, 88.44, 537.11, 449.04, 188.48 ], "formula_id": "formula_87", "formula_text": "• Part 1. d 2 L dx 2 i = Q 2 (x) m×n • p(x) i n×1 2 2 + c(x) m×1 , diag(diag(h ′′ (A 2 f (x))) • A 2 • p(x) i ) • A 2 m×n • p(x) i n×1 + c(x) m×1 , 2 Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 2 -c(x) m×1 , Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 • A 1, * ,i n×1 -c(x) m×1 , 2 Q 2 (x) m×n •(f (x) n×1 • A 1, * ,i n×1 ) • f (x) n×1 , A 1, * ,i n×1 + c(x) m×1 , Q 2 (x) m×n •(A 1, * ,i n×1 • f (x) n×1 • A 1, * ,i n×1 ) • Part 2. d 2 L dx i dx j = Q 2 (x) m×n • p(x) j n×1 , Q 2 (x) m×n • p(x) i n×1 + c(x) m×1 , diag(diag(h ′′ (A 2 f (x))) • A 2 • p(x) j ) • A 2 m×n • p(x) i n×1 + c(x) m×1 , 2 Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 f (x) n×1 , A 1, * ,j n×1 -c(x) m×1 , Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 • A 1, * ,j n×1 -c(x) m×1 , 2 Q 2 (x) m×n •(f (x) n×1 • A 1, * ,j n×1 ) • f (x) n×1 , A 1, * ,i n×1 + c(x) m×1 , Q 2 (x) m×n •(A 1, * ,i n×1 • f (x) n×1 • A 1, * ,j n×1 ) Proof. Proof of Part 1. We can show d 2 L dx 2 i = d dx i ( dL dx i ) = d dx i c(x), Q 2 (x) • p(x) i = d dx i c(x), Q 2 (x) • p(x) i + c(x), d dx i (Q 2 (x)) • p(x) i + c(x), Q 2 (x) • d dx i (p(x) i ) ,(5)" }, { "formula_coordinates": [ 27, 195.6, 379.3, 344.31, 113.45 ], "formula_id": "formula_88", "formula_text": "d dx i c(x), Q 2 (x) • p(x) i = diag(h ′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1 , Q 2 (x) m×n • p(x) i n×1 = Q 2 (x) m×n • p(x) i n×1 , Q 2 (x) m×n • p(x) i n×1 = Q 2 (x) • p(x) i 2 2 ,(6)" }, { "formula_coordinates": [ 27, 88.92, 535.82, 352.1, 138.41 ], "formula_id": "formula_89", "formula_text": "Consider d dx i (Q 2 (x)): we have d dx i (Q 2 (x)) = d dx i (diag(h ′ (A 2 f (x)))A 2 ) = d dx i (diag(h ′ (A 2 f (x)))) • A 2 = diag( d dx i (h ′ (A 2 f (x)))) • A 2 = diag(diag(h ′′ (A 2 f (x))) m×m • A 2 m×n • p(x) i n×1 ) • A 2 m×n ," }, { "formula_coordinates": [ 28, 119.76, 94.78, 420.15, 34.53 ], "formula_id": "formula_90", "formula_text": "c(x), d dx i (Q 2 (x)) • p(x) i = c(x) m×1 , diag(diag(h ′′ (A 2 f (x))) • A 2 • p(x) i ) • A 2 m×n • p(x) i n×1(7)" }, { "formula_coordinates": [ 28, 72, 160.18, 479.19, 200.93 ], "formula_id": "formula_91", "formula_text": "c(x), Q 2 (x) • d dx i (p(x) i ) = c(x), Q 2 (x)• (2 f (x), A 1, * ,i 2 f (x) -f (x), A 1, * ,i • A 1, * ,i f (x) -2 f (x), A 1, * ,i f (x) • A 1, * ,i + A 1, * ,i • f (x) • A 1, * ,i ) = c(x), 2Q 2 (x) f (x), A 1, * ,i 2 f (x) -c(x), Q 2 (x) f (x), A 1, * ,i • A 1, * ,i f (x) -c(x), 2Q 2 (x) f (x), A 1, * ,i f (x) • A 1, * ,i + c(x), Q 2 (x)A 1, * ,i • f (x) A 1, * ,i = c(x), 2Q 2 (x) • f (x) • f (x), A 1, * ,i 2 -c(x), Q 2 (x) • f (x) • f (x), A 1, * ,i • A 1, * ,i -c(x), 2Q 2 (x) • (f (x) • A 1, * ,i ) • f (x), A 1, * ,i + c(x), Q 2 (x) • (A 1, * ,i • f (x) • A 1, * ,i ) ,(8)" }, { "formula_coordinates": [ 28, 88.92, 412.82, 450.99, 276.17 ], "formula_id": "formula_92", "formula_text": "d 2 L dx 2 i = Q 2 (x) m×n • p(x) i n×1 2 2 + c(x) m×1 , diag(diag(h ′′ (A 2 f (x))) • A 2 • p(x) i ) • A 2 m×n • p(x) i n×1 + c(x) m×1 , 2 Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 2 -c(x) m×1 , Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 • A 1, * ,i n×1 -c(x) m×1 , 2 Q 2 (x) m×n •(f (x) n×1 • A 1, * ,i n×1 
) • f (x) n×1 , A 1, * ,i n×1 + c(x) m×1 , Q 2 (x) m×n •(A 1, * ,i n×1 • f (x) n×1 • A 1, * ,i n×1 ) Proof of Part 2. We can show d 2 L dx i dx j = d dx j ( dL dx i ) = d dx j c(x), Q 2 (x) • p(x) i = d dx j c(x), Q 2 (x) • p(x) i + c(x), d dx j (Q 2 (x)) • p(x) i + c(x), Q 2 (x) • d dx j (p(x) i ) ,(9)" }, { "formula_coordinates": [ 29, 195.12, 97.06, 344.67, 91.89 ], "formula_id": "formula_93", "formula_text": "d dx j c(x), Q 2 (x) • p(x) i = diag(h ′ (A 2 f (x))) m×m • A 2 m×n • p(x) j n×1 , Q 2 (x) m×n • p(x) i n×1 = Q 2 (x) m×n • p(x) j n×1 , Q 2 (x) m×n • p(x) i n×1(10)" }, { "formula_coordinates": [ 29, 171.24, 266.62, 270.74, 118.29 ], "formula_id": "formula_94", "formula_text": "d dx j (Q 2 (x)) = d dx j (diag(h ′ (A 2 f (x)))A 2 ) = d dx j (diag(h ′ (A 2 f (x)))) • A 2 = diag( d dx j (h ′ (A 2 f (x)))) • A 2 = diag(diag(h ′′ (A 2 f (x))) m×m • A 2 m×n • p(x) j n×1 ) • A 2 m×n ," }, { "formula_coordinates": [ 29, 118.68, 459.1, 421.11, 34.65 ], "formula_id": "formula_95", "formula_text": "c(x), d dx j (Q 2 (x)) • p(x) i = c(x) m×1 , diag(diag(h ′′ (A 2 f (x))) • A 2 • p(x) j ) • A 2 m×n • p(x) i n×1(11)" }, { "formula_coordinates": [ 29, 94.8, 527.86, 421.92, 199.01 ], "formula_id": "formula_96", "formula_text": "c(x), Q 2 (x) • d dx j (p(x) i ) = c(x), Q 2 (x)• (2 f (x), A 1, * ,i f (x), A 1, * ,j f (x) -f (x), A 1, * ,i • A 1, * ,j f (x) -f (x), A 1, * ,j f (x) • A 1, * ,i -f (x), A 1, * ,i f (x) • A 1, * ,j + A 1, * ,i • f (x) • A 1, * ,j ) = c(x), 2Q 2 (x) f (x), A 1, * ,i f (x), A 1, * ,j f (x) -c(x), Q 2 (x) f (x), A 1, * ,i • A 1, * ,j f (x) -c(x), 2Q 2 (x) f (x), A 1, * ,i f (x) • A 1, * ,j + c(x), Q 2 (x)A 1, * ,i • f (x) • A 1, * ,j = c(x), 2Q 2 (x) • f (x) • f (x), A 1, * ,i f (x), A 1, * ,j -c(x), Q 2 (x) • f (x) • f (x), A 1, * ,i • A 1, * ,i -c(x), 2Q 2 (x) • (f (x) • A 1, * ,j ) • f (x), A 1, * ,i + c(x), Q 2 (x) • (A 1, * ,i • f (x) • A 1, * ,j ) ,(12)" }, { "formula_coordinates": [ 30, 72, 146.9, 472.62, 152.42 ], "formula_id": "formula_97", "formula_text": "d 2 L dx i dx j = Q 2 (x) m×n • p(x) j n×1 , Q 2 (x) m×n • p(x) i n×1 + c(x) m×1 , diag(diag(h ′′ (A 2 f (x))) • A 2 • p(x) j ) • A 2 m×n • p(x) i n×1 + c(x) m×1 , 2 Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 f (x) n×1 , A 1, * ,j n×1 -c(x) m×1 , Q 2 (x) m×n • f (x) n×1 • f (x) n×1 , A 1, * ,i n×1 • A 1, * ,j n×1 -c(x) m×1 , 2 Q 2 (x) m×n •(f (x) n×1 • A 1, * ,j n×1 ) • f (x) n×1 , A 1, * ,i n×1 + c(x) m×1 , Q 2 (x) m×n •(A 1, * ,i n×1 • f (x) n×1 • A 1, * ,j n×1 ) C.6 Re-oragnizing B(x)" }, { "formula_coordinates": [ 30, 266.16, 358.1, 80.9, 27.55 ], "formula_id": "formula_98", "formula_text": "d 2 L dx 2 = A ⊤ B(x)A" }, { "formula_coordinates": [ 30, 155.4, 414.98, 301.11, 210.05 ], "formula_id": "formula_99", "formula_text": "B(x) = diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) + diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ + f (x)f (x) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) + f (x) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ + 2f (x)c(x) ⊤ Q 2 (x)f (x)f (x) ⊤ + 2f (x)c(x) ⊤ Q 2 (x) diag(f (x)) + diag(Q 2 (x) ⊤ c(x)) diag(f (x)) + diag(f (x))A 2 diag(h ′′ (A 2 f (x))) diag(c(x)) diag(f (x)) + diag(f (x))A 2 diag(h ′′ (A 2 f (x))) diag(c(x))f (x)f (x) ⊤ + f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) diag(c(x)) diag(f (x)) + f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) diag(c(x))f (x)f (x) ⊤ + diag(f (x)f (x) ⊤ Q 2 (x) ⊤ c(x))" }, { "formula_coordinates": [ 30, 78.12, 660.7, 451.59, 69.77 ], "formula_id": "formula_100", "formula_text": "Q 2 (x)p(x) j , Q 2 (x)p(x) i = Q 2 (x)(f (x) • A 1, * ,i + f (x), A 1, * ,i • f (x)), Q 2 (x)(f (x) • 
A 1, * ,j + f (x), A 1, * ,j • f (x)) = Q 2 (x)f (x) • A 1, * ,i + Q 2 (x) f (x), A 1, * ,i • f (x), Q 2 (x)f (x) • A 1, * ,j + Q 2 (x) f (x), A 1, * ,j • f (x) = (Q 2 (x)f (x) • A 1, * ,i ) ⊤ Q 2 (x)f (x) • A 1, * ,j + (Q 2 (x)f (x) • A 1, * ,i ) ⊤ Q 2 (x) f (x), A 1, * ,j • f (x) + (Q 2 (x) f (x), A 1, * ,i • f (x)) ⊤ Q 2 (x)f (x) • A 1, * ,j + (Q 2 (x) f (x), A 1, * ,i • f (x)) ⊤ Q 2 (x) f (x), A 1, * ,j • f (x) = A ⊤ 1, * ,i diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x))A 1, * ,j + A ⊤ 1, * ,i diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ A 1, * ,j + A ⊤ 1, * ,i f (x)f (x) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x))A 1, * ,j + A ⊤ 1, * ,i f (x)f (x) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ A 1, * ,j" }, { "formula_coordinates": [ 31, 247.44, 224.9, 117.15, 37.57 ], "formula_id": "formula_101", "formula_text": "F : = diag(h ′′ (A 2 f (x))) M i : = A 2 • p(x) i" }, { "formula_coordinates": [ 31, 72, 287.14, 360.6, 187.77 ], "formula_id": "formula_102", "formula_text": "c(x), diag(F M j )M i = c(x), (F M j ) • M i = c(x) ⊤ (F M j ) • M i = (F M j ) ⊤ c(x) • M i = M ⊤ j F diag(c(x))M i By substitute M i with A 2 • p(x) i , we have M ⊤ j F diag(c(x))M i = (A 2 p(x) j ) ⊤ F diag(c(x))A 2 p(x) i = p(x) ⊤ j A 2 F diag(c(x))A 2 p(x) i Let D := A 2 F diag(c(x)) = A 2 diag(h ′′ (A 2 f (x))) diag(c(x))" }, { "formula_coordinates": [ 31, 72, 504.14, 508.83, 177.17 ], "formula_id": "formula_103", "formula_text": "p(x) ⊤ j A 2 F diag(c(x))A 2 p(x) i = (f (x) • A 1, * ,j -f (x), A 1, * ,j • f (x)) ⊤ D(f (x) • A 1, * ,i -f (x), A 1, * ,i • f (x)) = (f (x) ⊤ • A ⊤ 1, * ,j -f (x), A 1, * ,j • f (x) ⊤ )D(f (x) • A 1, * ,i -f (x), A 1, * ,i • f (x)) = f (x) ⊤ • A ⊤ 1, * ,j Df (x) • A 1, * ,i -f (x) ⊤ • A ⊤ 1, * ,j D f (x), A 1, * ,i • f (x) -f (x), A 1, * ,j • f (x)Df (x) • A 1, * ,i + f (x), A 1, * ,j • f (x)D f (x), A 1, * ,i • f (x) = A ⊤ 1, * ,j diag(f (x))D diag(f (x))A 1, * ,i -A ⊤ 1, * ,j diag(f (x))Df (x)f (x) ⊤ A 1, * ,i -A ⊤ 1, * ,j f (x)f (x)D diag(f (x))A 1, * ,i + A ⊤ 1, * ,j f (x)f (x)Df (x)f (x) ⊤ A 1, * ,i" }, { "formula_coordinates": [ 31, 100.8, 709.46, 413.58, 21.01 ], "formula_id": "formula_104", "formula_text": "c(x), 2Q 2 (x) • f (x) • f (x), A 1, * ,i f (x), A 1, * ,j = 2A ⊤ 1, * ,i f (x)c(x) ⊤ Q 2 (x)f (x)f (x) ⊤ A 1, * ,j" }, { "formula_coordinates": [ 32, 113.52, 96.02, 388.02, 33.29 ], "formula_id": "formula_105", "formula_text": "c(x), Q 2 (x) • f (x) • f (x), A 1, * ,i • A 1, * ,j = c(x) ⊤ Q 2 (x)f (x)f (x) ⊤ A 1, * ,i • A 1, * ,j = A ⊤ 1, * ,i diag(f (x)f (x) ⊤ Q 2 (x) ⊤ c(x))A 1, * ,j" }, { "formula_coordinates": [ 32, 107.64, 161.3, 399.9, 20.89 ], "formula_id": "formula_106", "formula_text": "c(x), 2Q 2 (x) • (f (x) • A 1, * ,j ) • f (x), A 1, * ,i = 2A ⊤ 1, * ,i f (x)c(x) ⊤ Q 2 (x) diag(f (x))A 1, * ,j" }, { "formula_coordinates": [ 32, 124.56, 208.46, 366.06, 20.89 ], "formula_id": "formula_107", "formula_text": "c(x), Q 2 (x) • (A 1, * ,i • f (x) • A 1, * ,j ) = A ⊤ 1, * ,i diag(Q 2 (x) ⊤ c(x)) diag(f (x))A 1, * ,j" }, { "formula_coordinates": [ 32, 266.16, 254.78, 80.9, 27.55 ], "formula_id": "formula_108", "formula_text": "d 2 L dx 2 = A ⊤ B(x)A" }, { "formula_coordinates": [ 32, 155.52, 311.3, 300.87, 210.05 ], "formula_id": "formula_109", "formula_text": "B(x) = diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) + diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ + f (x)f (x) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) + f (x) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ + 2f (x)c(x) ⊤ Q 2 (x)f (x)f (x) ⊤ + 2f (x)c(x) ⊤ Q 2 (x) diag(f (x)) + diag(Q 2 (x) ⊤ c(x)) diag(f (x)) + diag(f (x))A 2 diag(h ′′ (A 2 f (x))) diag(c(x)) 
diag(f (x)) + diag(f (x))A 2 diag(h ′′ (A 2 f (x))) diag(c(x))f (x)f (x) ⊤ + f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) diag(c(x)) diag(f (x)) + f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) diag(c(x))f (x)f (x) ⊤ + diag(f (x)f (x) ⊤ Q 2 (x) ⊤ c(x))" }, { "formula_coordinates": [ 33, 72, 99.34, 445.47, 85.25 ], "formula_id": "formula_110", "formula_text": "-12R h L h R(R + R h )I n B(x) 12R h L h R(R + R h )I n Proof. It follows from Lemma D.2 that we have max{ Q 2 (x) 2 , 2(L h + 1) Q 2 (x) , (R + R h )L h R, R h R(R + R h )} ≤ 12R h L h R(R + R h ) min{ Q 2 (x) 2 , 2(L h + 1) Q 2 (x) , (R + R h )L h R, R h R(R + R h )} ≥ -12R h L h R(R + R h )" }, { "formula_coordinates": [ 33, 175.44, 215.02, 260.61, 18.65 ], "formula_id": "formula_111", "formula_text": "-12R h L h R(R + R h )I n B(x) 12R h L h R(R + R h )I n" }, { "formula_coordinates": [ 33, 88.44, 334.3, 167.62, 11.85 ], "formula_id": "formula_112", "formula_text": "• f (x) 1 = 1 (see Definition 3.3)." }, { "formula_coordinates": [ 33, 88.44, 379.42, 287.19, 304.53 ], "formula_id": "formula_113", "formula_text": "• Let f (x) ≥ 0 n . • Let b ≥ 0 n . • B 1 = diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) • B 2 = diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ • B 3 = f (x)f (x) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) • B 4 = f (x) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ • B 5 = 2f (x)c(x) ⊤ Q 2 (x)f (x)f (x) ⊤ • B 6 = 2f (x)c(x) ⊤ Q 2 (x) diag(f (x)) • B 7 = diag(Q 2 (x) ⊤ c(x)) diag(f (x)) • B 8 = diag(f (x))A 2 diag(h ′′ (A 2 f (x))) diag(c(x)) diag(f (x)) • B 9 = diag(f (x))A 2 diag(h ′′ (A 2 f (x))) diag(c(x))f (x)f (x) ⊤ • B 10 = f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) diag(c(x)) diag(f (x)) • B 11 = f (x)f (x)A 2 diag(h ′′ (A 2 f (x))) diag(c(x))f (x)f (x) ⊤ • B 12 = diag(f (x)f (x) ⊤ Q 2 (x) ⊤ c(x))" }, { "formula_coordinates": [ 34, 88.44, 75.31, 358.77, 631.17 ], "formula_id": "formula_114", "formula_text": "• Part 1. -Q 2 (x) 2 • I n B 1 Q 2 (x) 2 • I n • Part 2. -Q 2 (x) 2 • I n B 2 Q 2 (x) 2 • I n • Part 3. -Q 2 (x) 2 • I n B 3 Q 2 (x) 2 • I n • I n • Part 4. -Q 2 (x) 2 • I n B 4 Q 2 (x) 2 • I n • Part 5. -2(L h + 1) Q 2 (x) • I n B 5 2(L h + 1) Q 2 (x) • I n • Part 6. -2(L h + 1) Q 2 (x) • I n B 5 2(L h + 1) Q 2 (x) • I n • Part 7. -2(L h + 1) Q 2 (x) • I n B 5 2(L h + 1) Q 2 (x) • I n • Part 8. -(R + R h )L h R • I n B 8 (R + R h )L h R • I n • Part 9. -(R + R h )L h R • I n B 9 (R + R h )L h R • I n • Part 10. -(R + R h )L h R • I n B 10 (R + R h )L h R • I n • Part 11. -(R + R h )L h R • I n B 11 (R + R h )L h R • I n • Part 12. -R h R(R + R h ) • I n B 12 R h R(R + R h ) • I n Proof. Proof of Part 1. We know that diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) ≤ Q 2 (x) 2 diag(f (x)) 2 2 ≤ Q 2 (x) 2" }, { "formula_coordinates": [ 35, 88.92, 206.62, 381.99, 98.93 ], "formula_id": "formula_115", "formula_text": "-Q 2 (x) • I n B 2 Q 2 (x) • I n Proof of Part 2. 
We have diag(f (x)) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ ≤ Q 2 (x) 2 f (x) 2 2 diag(f (x)) ≤ Q 2 (x) 2" }, { "formula_coordinates": [ 35, 88.92, 362.98, 378.39, 98.93 ], "formula_id": "formula_116", "formula_text": "-Q 2 (x) • I n B 2 Q 2 (x) • I n Proof of Part 3 We have f (x)f (x) ⊤ Q 2 (x) ⊤ Q 2 (x) diag(f (x)) ≤ Q 2 (x) 2 f (x) 2 2 diag(f (x)) ≤ Q 2 (x) 2" }, { "formula_coordinates": [ 35, 88.92, 517.1, 340.35, 101.17 ], "formula_id": "formula_117", "formula_text": "-Q 2 (x) 2 • I n B 3 Q 2 (x) 2 • I n Proof of Part 4 We have f (x) ⊤ Q 2 (x) ⊤ Q 2 (x)f (x)f (x) ⊤ ≤ Q 2 (x) 2 f (x) 3 2 ≤ Q 2 (x) 2" }, { "formula_coordinates": [ 35, 88.92, 673.46, 303.93, 37.21 ], "formula_id": "formula_118", "formula_text": "-Q 2 (x) 2 • I n B 4 Q 2 (x) 2 • I n Proof of Part 5 We have 2f (x)c(x) ⊤ Q 2 (x)f (x)f (x) ⊤ ≤ 2 f (x) 3 2 c(x) 2 Q 2 (x) ≤ 2(L h + 1) Q 2 (x)" }, { "formula_coordinates": [ 36, 88.92, 191.98, 380.55, 97.73 ], "formula_id": "formula_119", "formula_text": "-2(L h + 1) Q 2 (x) • I n B 5 2(L h + 1) Q 2 (x) • I n Proof of Part 6 We have 2f (x)c(x) ⊤ Q 2 (x) diag(f (x)) ≤ 2 f (x) 2 2 diag(f (x)) Q 2 (x) ≤ 2(L h + 1) Q 2 (x)" }, { "formula_coordinates": [ 36, 88.92, 347.14, 372.27, 132.41 ], "formula_id": "formula_120", "formula_text": "-2(L h + 1) Q 2 (x) • I n B 6 2(L h + 1) Q 2 (x) • I n Proof of Part 7 We have diag(Q 2 (x) ⊤ c(x)) diag(f (x)) ≤ diag(Q 2 (x) ⊤ c(x)) diag(f (x)) ≤ Q 2 (x) ⊤ c(x) f (x) 2 ≤ Q 2 (x) c(x) 2 f (x) 2 ≤ 2(L h + 1) Q 2 (x)" }, { "formula_coordinates": [ 36, 88.92, 550.54, 348.39, 132.05 ], "formula_id": "formula_121", "formula_text": "-2(L h + 1) Q 2 (x) • I n B 7 2(L h + 1) Q 2 (x) • I n Proof of Part 8 We have diag(f (x))A 2 diag(h ′′ (A 2 f (x))) diag(c(x)) diag(f (x)) ≤ A 2 diag(f (x)) 2 diag(c(x)) diag(h ′′ (A 2 f (x))) ≤ R(R + R h ) h ′′ (A 2 f (x)) 2 ≤ R(R + R h )L h" }, { "formula_coordinates": [ 37, 88.92, 99.34, 325.17, 34.85 ], "formula_id": "formula_122", "formula_text": "-(R + R h )L h R • I n B 8 (R + R h )L h R • I n Proof of" }, { "formula_coordinates": [ 37, 192.79, 186.62, 252.8, 55.21 ], "formula_id": "formula_123", "formula_text": "(f (x)f (x) ⊤ Q 2 (x) ⊤ c(x)) ≤ f (x)f (x) ⊤ Q 2 (x) ⊤ c(x) 2 ≤ f (x) 2 2 Q 2 (x) c(x) 2 ≤ R h R(R + R h )" }, { "formula_coordinates": [ 37, 72, 299.26, 345.09, 64.34 ], "formula_id": "formula_124", "formula_text": "-R h R(R + R h ) • I n B 12 R h R(R + R h ) • I n D." }, { "formula_coordinates": [ 37, 88.44, 416.9, 83.74, 20.41 ], "formula_id": "formula_125", "formula_text": "• Let A 1 ∈ R n×d ." }, { "formula_coordinates": [ 37, 88.44, 461.9, 134.62, 20.41 ], "formula_id": "formula_126", "formula_text": "• Let W = diag(w) ∈ R n×n ." }, { "formula_coordinates": [ 37, 88.44, 574.46, 372.87, 109.45 ], "formula_id": "formula_127", "formula_text": "• Part 1. If all i ∈ [n], w 2 i ≥ 12R h L h R(R + R h ) + l/σ min (A 1 ) 2 , then d 2 L dx 2 l • I d • Part 2. 
If all i ∈ [n], w 2 i ≥ 100 + 12R h L h R(R + R h ) + l/σ min (A 1 ) 2 , then (1 -1/10) • (B(x) + W 2 ) W 2 (1 + 1/10) • (B(x) + W 2 )" }, { "formula_coordinates": [ 38, 72, 104.66, 467.79, 69.25 ], "formula_id": "formula_128", "formula_text": "d 2 L dx 2 = A ⊤ 1 B(x)A 1 where B(x) -12R h L h R(R + R h )I n(13)" }, { "formula_coordinates": [ 38, 250.68, 191.06, 289.11, 27.55 ], "formula_id": "formula_129", "formula_text": "d 2 L tot dx 2 = d 2 L reg dx 2 + d 2 L dx 2(14)" }, { "formula_coordinates": [ 38, 231.12, 239.66, 150.39, 43.61 ], "formula_id": "formula_130", "formula_text": "d 2 L tot dx 2 = A ⊤ 1 B(x)A + A ⊤ W 2 A 1 = A ⊤ 1 (B(x) + W 2 )A 1" }, { "formula_coordinates": [ 38, 72, 320.54, 272.19, 70.03 ], "formula_id": "formula_131", "formula_text": "D = B(x) + W 2 Then, d 2 L dx 2 can be rewritten as d 2 L dx 2 = A ⊤ 1 DA 1" }, { "formula_coordinates": [ 38, 213.96, 411.26, 183.57, 62.33 ], "formula_id": "formula_132", "formula_text": "D -12R h L h R(R + R h )I n + w 2 min I n = (-12R h L h R(R + R h ) + w 2 min )I n l σ min (A 1 ) 2 I n" }, { "formula_coordinates": [ 38, 72, 504.86, 207.26, 20.41 ], "formula_id": "formula_133", "formula_text": "w 2 min ≥ -12R h L h R(R + R h ) + l/σ min (A 1 ) 2 ." }, { "formula_coordinates": [ 38, 72, 537.26, 328.83, 32.71 ], "formula_id": "formula_134", "formula_text": "A ⊤ 1 DA 1 σ min (D) • σ min (A 1 ) 2 I d l • I d Thus," }, { "formula_coordinates": [ 39, 72, 138.62, 300.03, 135.01 ], "formula_id": "formula_135", "formula_text": "• Let A 1 ∈ R n×d • Let x ∈ R d where x 2 ≤ R • Let R ≥ 4 • A 1 ≤ R We have exp(A 1 x) 2 ≤ √ n exp(R 2 )" }, { "formula_coordinates": [ 39, 230.04, 295.66, 158.58, 77.69 ], "formula_id": "formula_136", "formula_text": "exp(A 1 x) 2 ≤ √ n • exp(A 1 x) ∞ ≤ √ n • exp( A 1 x ∞ ) ≤ √ n • exp( A 1 x 2 ) ≤ √ n • exp(R 2 )" }, { "formula_coordinates": [ 39, 88.44, 436.1, 142.42, 20.41 ], "formula_id": "formula_137", "formula_text": "• Let A 1 ∈ R n×d , A 2 ∈ R m×n ." }, { "formula_coordinates": [ 39, 88.44, 572.98, 229.18, 41.09 ], "formula_id": "formula_138", "formula_text": "• Let R h > 0. • Let max{ h(A 2 f (x)) 2 , h ′ (A 2 f (x)) 2 } ≤ R h ." }, { "formula_coordinates": [ 39, 88.44, 700.18, 183.99, 26.45 ], "formula_id": "formula_139", "formula_text": "• Part 1. f (x) 2 ≤ β -1 • √ n • exp(R 2 ) • Part 2. c(x) 2 ≤ R + R h • Part 3. Q 2 (x) ≤ R • R h • Part 4. q 2 (x) 2 ≤ R • R h • (R + R h ) • Part 5. For i ∈ [d], p(x) i 2 ≤ 2Rβ -2 • n • exp(2R 2 ) Proof. Proof of Part 1 f (x) 2 = α(x) -1 • u(x) 2 = |α(x) -1 | • u(x) 2 ≤ β -1 • u(x) 2 = β -1 • exp(A 1 x) 2 ≤ β -1 • √ n • exp(R 2 )," }, { "formula_coordinates": [ 40, 88.92, 318.31, 290.67, 73.77 ], "formula_id": "formula_140", "formula_text": "Proof of Part 2 c(x) 2 = h(A 2 f (x)) -b 2 ≤ h(A 2 f (x)) 2 + b 2 ≤ R + R h ," }, { "formula_coordinates": [ 40, 222, 456.86, 167.91, 87.01 ], "formula_id": "formula_141", "formula_text": "Q 2 (x) = A 2 diag(h ′ (A 2 f (x))) ≤ A 2 • diag(h ′ (A 2 f (x))) ≤ A 2 • h ′ (A 2 f (x)) ∞ ≤ A 2 • h ′ (A 2 f (x)) 2 ≤ R • R h ," }, { "formula_coordinates": [ 40, 88.92, 573.14, 286.94, 107.65 ], "formula_id": "formula_142", "formula_text": "A 2 ≤ R and h ′ (A 2 f (x)) 2 ≤ R h . 
Proof of Part 4 q 2 (x) 2 = Q 2 (x) ⊤ c(x) 2 ≤ Q 2 (x) ⊤ c(x) 2 ≤ Q 2 (x) c(x) 2 ≤ R • R h • (R + R h )," }, { "formula_coordinates": [ 41, 185.64, 98.86, 246.15, 88.25 ], "formula_id": "formula_143", "formula_text": "p(x) i = f (x) • A 1, * ,i -f (x), A 1, * ,i • f (x) ≤ f (x) • A 1, * ,i + f (x), A 1, * ,i • f (x) 2 ≤ Rβ -1 • √ n • exp(R 2 ) + f (x) 2 2 A 1, * ,i 2 ≤ Rβ -1 • √ n • exp(R 2 ) + Rβ -2 • n • exp(2R 2 ) ≤ 2Rβ -2 • n • exp(2R 2 )" }, { "formula_coordinates": [ 41, 88.44, 262.7, 152.79, 154.69 ], "formula_id": "formula_144", "formula_text": "• Let A 1 ∈ R n×d , A 2 ∈ R m×n • Let β ∈ (0, 0.1) • Let R ≥ 4 • A 1 ≤ R • exp(A 1 x), 1 n ≥ β • exp(A 1 y), 1 n ≥ β • Let R f := 2β -2 • nR exp(2R 2 )" }, { "formula_coordinates": [ 41, 88.44, 577.9, 203.31, 40.97 ], "formula_id": "formula_145", "formula_text": "• Assume h(x) -h(y) 2 ≤ L h • x -y 2 • Assume h ′ (x) -h ′ (y) 2 ≤ L h • x -y 2" }, { "formula_coordinates": [ 41, 88.44, 642.86, 268.59, 87.61 ], "formula_id": "formula_146", "formula_text": "• Part 1. u(x) -u(y) 2 ≤ R exp(R 2 ) • x -y 2 • Part 2. |α(x) -α(y)| ≤ √ n • exp(A 1 x) -exp(A 1 y) 2 • Part 3. |α(x) -1 -α(y) -1 | ≤ β -2 • |α(x) -α(y)| • Part 4. f (x) -f (y) 2 ≤ R f • x -y 2 • Part 5. c(x) -c(y) 2 ≤ L h • R • R f • x -y 2 • Part 6. Q 2 (x) -Q 2 (y) ≤ R 2 R f L h x -y 2 • Part 7. q 2 (x) -q 2 (y) 2 ≤ 2R 2 R f R h L h (R + R h ) x -y 2 • Part 8. g(x) -g(y) 2 ≤ 7β -2 nL h R h R f R 2 (R + R h ) exp(5R 2 ) x -y 2 • Part 9. For each i ∈ [d], p i (x) -p i (y) 2 ≤ 3RR f β -1 • √ n • exp(R 2 ) x -y 2" }, { "formula_coordinates": [ 42, 209.16, 208.54, 198.51, 71.69 ], "formula_id": "formula_147", "formula_text": "u(x) -u(y) 2 ≤ exp(A 1 x) -exp(A 1 y) 2 ≤ exp(R 2 ) A 1 x -A 1 y 2 ≤ exp(R 2 ) A 1 x -y 2 ≤ R exp(R 2 ) • x -y 2" }, { "formula_coordinates": [ 42, 188.28, 349.18, 235.46, 68.21 ], "formula_id": "formula_148", "formula_text": "|α(x) -α(y)| = | exp(A 1 x), 1 n -exp(A 1 y), 1 n | = | exp(A 1 x) -exp(A 1 y), 1 n | ≤ exp(A 1 x) -exp(A 1 y) 2 • 1 n 2 = √ n • exp(A 1 x) -exp(A 1 y) 2" }, { "formula_coordinates": [ 42, 241.44, 497.54, 298.35, 43.45 ], "formula_id": "formula_149", "formula_text": "α(x) -1 = exp(A 1 x), 1 n -1 ≤ 1 β ,(15)" }, { "formula_coordinates": [ 42, 276.36, 598.18, 258.62, 26.09 ], "formula_id": "formula_150", "formula_text": "α(y) -1 ≤ 1 β . 
(16" }, { "formula_coordinates": [ 42, 534.98, 605.62, 4.81, 10.91 ], "formula_id": "formula_151", "formula_text": ")" }, { "formula_coordinates": [ 42, 187.92, 654.86, 235.94, 38.65 ], "formula_id": "formula_152", "formula_text": "|α(x) -1 -α(y) -1 | = α(x) -1 α(y) -1 • |α(x) -α(y)| ≤ β -2 • |α(x) -α(y)|" }, { "formula_coordinates": [ 43, 77.4, 95.18, 466.11, 56.41 ], "formula_id": "formula_153", "formula_text": "f (x) -f (y) 2 = α(x) -1 exp(A 1 x) -α(y) -1 exp(A 1 y) 2 ≤ α(x) -1 exp(A 1 x) -α(x) -1 exp(A 1 y) 2 + α(x) -1 exp(A 1 y) -α(y) -1 exp(A 1 y) 2 ≤ α(x) -1 exp(A 1 x) -exp(A 1 y) 2 + |α(x) -1 -α(y) -1 | • exp(A 1 y) 2" }, { "formula_coordinates": [ 43, 153.96, 203.18, 385.83, 38.65 ], "formula_id": "formula_154", "formula_text": "α(x) -1 exp(A 1 x) -exp(A 1 y) 2 ≤ β -1 exp(A 1 x) -exp(A 1 y) 2 ≤ β -1 • R exp(R 2 ) • x -y 2(17)" }, { "formula_coordinates": [ 43, 105.36, 279.98, 434.43, 91.69 ], "formula_id": "formula_155", "formula_text": "|α(x) -1 -α(y) -1 | • exp(A 1 y) 2 ≤ β -2 • |α(x) -α(y)| • exp(A 1 y) 2 ≤ β -2 • |α(x) -α(y)| • √ n exp(R 2 ) ≤ β -2 • √ n • exp(A 1 x) -exp(A 1 y) 2 • √ n exp(R 2 ) ≤ β -2 • √ n • R exp(R 2 ) x -y 2 • √ n exp(R 2 ) = β -2 • nR exp(2R 2 ) x -y 2 ,(18)" }, { "formula_coordinates": [ 43, 136.44, 436.82, 344.07, 55.21 ], "formula_id": "formula_156", "formula_text": "f (x) -f (y) 2 ≤ β -1 • R exp(R 2 ) • x -y 2 + β -2 • nR exp(2R 2 ) x -y 2 ≤ 2β -2 • nR exp(2R 2 ) x -y 2 = R f x -y 2 ," }, { "formula_coordinates": [ 43, 184.68, 559.54, 247.35, 84.77 ], "formula_id": "formula_157", "formula_text": "c(x) -c(y) 2 = h(A 2 f (x)) -b -(h(A 2 f (y)) -b) 2 = h(A 2 f (x)) -h(A 2 f (y)) 2 ≤ L h • A 2 f (x) -A 2 f (y) 2 ≤ L h • A 2 • f (x) -f (y) 2 ≤ L h • R • R f • x -y 2 ," }, { "formula_coordinates": [ 43, 156.96, 709.46, 297.99, 21.01 ], "formula_id": "formula_158", "formula_text": "Q 2 (x) -Q 2 (y) = A 2 diag(h ′ (A 2 f (x))) -A 2 diag(h ′ (A 2 f (y))) ≤ A 2 h ′ (A 2 f (x)) -h ′ (A 2 f (y)) 2 ≤ RL h A 2 f (x) -A 2 f (y) 2 ≤ R 2 L h f (x) -f (y) 2 ≤ R 2 R f L h x -y 2 ," }, { "formula_coordinates": [ 44, 131.64, 213.62, 348.63, 125.17 ], "formula_id": "formula_159", "formula_text": "q 2 (x) -q 2 (y) 2 = Q 2 (x) ⊤ c(x) -Q 2 (y) ⊤ c(y) = Q 2 (x) ⊤ c(x) -Q 2 (x) ⊤ c(y) + Q 2 (x) ⊤ c(y) -Q 2 (y) ⊤ c(y) ≤ Q 2 (x)c(x) -Q 2 (x)c(y) + Q 2 (x)c(y) -Q 2 (y)c(y) ≤ Q 2 (x) c(x) -c(y) 2 + Q 2 (x) -Q 2 (y) c(y) 2 ≤ Q 2 (x) • L h RR f x -y + R 2 R f L h x -y 2 c(y) 2 ≤ R 2 R h R f L h • x -y 2 + R 2 R f L h • x -y 2 • (R + R h ) ≤ 2R 2 R f R h L h (R + R h ) x -y 2 ," }, { "formula_coordinates": [ 44, 94.44, 422.98, 422.43, 87.89 ], "formula_id": "formula_160", "formula_text": "g(x) -g(y) 2 = A ⊤ 1 (f (x) q 2 (x), f (x) + diag(f (x))q 2 (x)) -A ⊤ 1 (f (y) q 2 (y), f (y) + diag(f (y))q 2 (y)) 2 = A ⊤ 1 ((f (x) q 2 (x), f (x) + diag(f (x))q 2 (x)) -(f (y) q 2 (y), f (y) + diag(f (y))q 2 (y))) 2 ≤ A 1 (f (x) q 2 (x), f (x) -f (y) q 2 (y), f (y) ) + (diag(f (x))q 2 (x)) -diag(f (y))q 2 (y))) 2 ≤ R( f (x) q 2 (x), f (x) -f (y) q 2 (y), f (y) 2 + diag(f (x))q 2 (x) -diag(f (y))q 2 (y) 2 )" }, { "formula_coordinates": [ 44, 203.4, 581.5, 200.91, 18.65 ], "formula_id": "formula_161", "formula_text": "C 1 : = f (x) q 2 (x), f (x) -f (y) q 2 (y), f(y)" }, { "formula_coordinates": [ 44, 224.76, 646.66, 315.03, 18.65 ], "formula_id": "formula_162", "formula_text": "g(x) -g(y) 2 ≤ R( C 1 2 + C 2 2 ).(19)" }, { "formula_coordinates": [ 44, 199.56, 695.26, 208.47, 35.21 ], "formula_id": "formula_163", "formula_text": "C 1,1 : = f (x) q 2 (x), f (x) -f (x) q 2 (x), f (y) C 1,2 : = f (x) q 
2 (x), f (y) -f (x) q 2 (y), f (y) C 1,3 : = f (x) q 2 (y), f (y) -f (y) q 2 (y), f(y)" }, { "formula_coordinates": [ 45, 240.24, 122.14, 299.55, 11.85 ], "formula_id": "formula_164", "formula_text": "C 1 2 = C 1,1 + C 1,2 + C 1,3 2(20)" }, { "formula_coordinates": [ 45, 178.44, 169.66, 361.35, 53.93 ], "formula_id": "formula_165", "formula_text": "C 1,1 2 ≤ f (x) 2 q 2 (x) 2 f (x) -f (y) 2 ≤ β -1 √ nR h R(R + R h ) exp(5R 2 ) f (x) -f (y) 2 ≤ β -1 √ nR h R f R(R + R h ) exp(5R 2 ) x -y 2 ,(21)" }, { "formula_coordinates": [ 45, 178.08, 279.46, 361.71, 54.05 ], "formula_id": "formula_166", "formula_text": "C 1,2 2 ≤ f (x) 2 q 2 (x) -q 2 (y) 2 f (y) 2 ≤ β -2 n exp(2R 2 ) q 2 (x) -q 2 (y) 2 ≤ 2β -2 n exp(2R 2 )R 2 R f R h L h (R + R h ) x -y 2 ,(22)" }, { "formula_coordinates": [ 45, 183.48, 389.38, 356.31, 54.05 ], "formula_id": "formula_167", "formula_text": "C 1,3 ≤ f (x) -f (y) 2 q 2 (y) 2 f (y) 2 ≤ f (x) -f (y) 2 RR h (R + R h )β -1 √ n exp(R 2 ) ≤ β -1 √ nR h R f R(R + R h ) exp(R 2 ) x -y 2 ,(23)" }, { "formula_coordinates": [ 45, 183.72, 512.86, 356.07, 52.85 ], "formula_id": "formula_168", "formula_text": "C 1 2 = C 1,1 + C 1,2 + C 1,3 2 = C 1,1 2 + C 1,2 2 + C 1,3 2 ≤ 4β -2 nR h R f R(R + R h ) exp(5R 2 )L h x -y 2 ,(24)" }, { "formula_coordinates": [ 45, 107.28, 621.58, 432.51, 104.81 ], "formula_id": "formula_169", "formula_text": "C 2 = diag(f (x))q 2 (x) -diag(f (x))q 2 (y) + diag(f (x))q 2 (y) -diag(f (y))q 2 (y) 2 ≤ diag(f (x))q 2 (x) -diag(f (x))q 2 (y) 2 + diag(f (x))q 2 (y) -diag(f (y))q 2 (y) 2 ≤ f (x) 2 q 2 (x) -q 2 (y) 2 + f (x) -f (y) 2 q 2 (y) 2 ≤ β -1 √ n exp(R 2 ) q 2 (x) -q 2 (y) 2 + f (x) -f (y) 2 RR h (R + R h ) ≤ 2β -1 √ n exp(R 2 )R 2 R f R h L h (R + R h ) x -y 2 + R f x -y 2 RR h (R + R h ) ≤ 3β -1 √ nR 2 R f R h L h (R + R h ) exp(R 2 ) x -y 2 ,(25)" }, { "formula_coordinates": [ 46, 158.52, 153.46, 299.91, 71.69 ], "formula_id": "formula_170", "formula_text": "g(x) -g(y) 2 ≤ R( C 1 2 + C 2 2 ) ≤ 4β -2 nR h R f R(R + R h ) exp(5R 2 )L h x -y 2 + 3β -1 √ nR 2 R f R h L h (R + R h ) exp(R 2 )) x -y 2 ≤ 7β -2 nL h R h R f R 2 (R + R h ) exp(5R 2 ) x -y 2 ," }, { "formula_coordinates": [ 46, 93.96, 282.7, 428.91, 51.65 ], "formula_id": "formula_171", "formula_text": "p(x) i -p(y) i 2 = f (x) • A 1, * ,i -f (x), A 1, * ,i • f (x) -f (x) • A 1, * ,i + f (y), A 1, * ,i • f (y) 2 = (f (x) -f (y)) • A 1, * ,i + ( f (x), A 1, * ,i • f (x) -f (y), A 1, * ,i • f (y)) 2 ≤ (f (x) -f (y)) • A 1, * ,i 2 + f (x), A 1, * ,i • f (x) -f (y), A 1, * ,i • f (y) 2" }, { "formula_coordinates": [ 46, 190.92, 391.9, 234.99, 35.09 ], "formula_id": "formula_172", "formula_text": "(f (x) -f (y)) • A 1, * ,i 2 ≤ A 1, * ,i 2 f (x) -f (y) 2 ≤ RR f x -y 2" }, { "formula_coordinates": [ 46, 96.24, 470.98, 418.95, 85.97 ], "formula_id": "formula_173", "formula_text": "f (x), A 1, * ,i • f (x) -f (y), A 1, * ,i • f (y) 2 = f (x), A 1, * ,i • f (x) -f (x), A 1, * ,i • f (y) + f (x), A 1, * ,i • f (y) -f (y), A 1, * ,i • f (y) 2 ≤ f (x), A 1, * ,i • (f (x) -f (y)) 2 + f (x) -f (y), A 1, * ,i • f (y) 2 ≤ 2 f (x) 2 A 1, * ,i 2 f (x) -f (y) 2 ≤ 2RR f β -1 • √ n • exp(R 2 ) x -y 2" }, { "formula_coordinates": [ 46, 165.96, 619.54, 285.03, 44.69 ], "formula_id": "formula_174", "formula_text": "p(x) i -p(y) i 2 ≤ (2RR f β -1 • √ n • exp(R 2 ) + RR f ) x -y 2 ≤ 3RR f β -1 • √ n • exp(R 2 ) x -y 2 G 1,2 : = Q 2 (x) • p(x) j , Q 2 (x) • p(y) i -Q 2 (x) • p(x) j , Q 2 (y) • p(y) i G 1,3 : = Q 2 (x) • p(x) j , Q 2 (y) • p(y) i -Q 2 (x) • p(y) j , Q 2 (y) • p(y) i G 1,4 : = Q 2 (x) • p(y) j , Q 2 (y) • p(y) 
i -Q 2 (y) • p(y) j , Q 2 (y) • p(y) i Than it's apparent that | Q 2 (x) • p(x) j , Q 2 (x) • p(x) i -Q 2 (y) • p(y) j , Q 2 (y) • p(y) i | = |G 1,1 + G 1,2 + G 1,3 + G 1,4 |" }, { "formula_coordinates": [ 48, 187.32, 205.78, 236.79, 72.53 ], "formula_id": "formula_175", "formula_text": "|G 1,1 | = | Q 2 (x) • p(x) j , Q 2 (x) • (p(x) i -p(y) i ) | ≤ Q 2 (x) • p(x) j 2 Q 2 (x) (p(x) i -p(y) i ) 2 ≤ Q 2 (x) 2 p(x) j 2 p(x) i -p(y) i 2 ≤ 6R 2 h R 4 R f β -3 • n 3 2 • exp(3R 2 ) • x -y 2" }, { "formula_coordinates": [ 48, 169.32, 349.3, 272.91, 70.61 ], "formula_id": "formula_176", "formula_text": "|G 1,2 | = | Q 2 (x) • p(x) j , (Q 2 (x) -Q 2 (y)) • p(y) i | ≤ Q 2 (x)p(x) j 2 Q 2 (x) -Q 2 (y) p(y) i 2 ≤ Q 2 (x) p(x) j 2 2 Q 2 (x) -Q 2 (y) ≤ 2R 2 h R f R 5 L h (R + R h )β -4 • n 2 • exp(4R 2 ) x -y 2" }, { "formula_coordinates": [ 48, 72, 490.78, 394.71, 98.54 ], "formula_id": "formula_177", "formula_text": "|G 1 (x) -G 1 (y)| = |G 1,1 + G 1,2 + G 1,3 + G 1,4 | ≤ |G 1,1 | + |G 1,2 | + |G 1,3 | + |G 1,4 | ≤ 8R 2 h R f R 5 L h (R + R h )β -4 • n 2 • exp(4R 2 ) x -y 2 E.4" }, { "formula_coordinates": [ 48, 155.28, 644.06, 300.87, 20.89 ], "formula_id": "formula_178", "formula_text": "|G 2 (x) -G 2 (y)| ≤ 24R h R f R 4 (R + R h )β -4 n 2 exp(4R 2 ) x -y 2" }, { "formula_coordinates": [ 48, 72, 695.26, 528.62, 35.21 ], "formula_id": "formula_179", "formula_text": "|G 2 (x) -G 2 (y)| = | c(x), diag(diag(h ′′ (A 2 f (x))) • A 2 • p(x) j ) • A 2 • p(x) i -c(y), diag(diag(h ′′ (A 2 f (y))) • A 2 • p(y) j ) • A 2 • p(y) i |" }, { "formula_coordinates": [ 51, 72, 265.58, 391.47, 143.5 ], "formula_id": "formula_180", "formula_text": "|G 3 (x) -G 3 (y)| = | 5 i=1 G i | ≤ 5 i=1 |G i | ≤ 10(R + R h )R 4 R f L h β -3 • n 3 2 • exp(3R 2 ) x -y 2 E.6" }, { "formula_coordinates": [ 51, 152.4, 457.78, 306.63, 27.05 ], "formula_id": "formula_181", "formula_text": "|G 4 (x) -G 4 (y)| ≤ 4(R + R h )R 4 R f L h β -1 • √ n • exp(R 2 ) x -y 2" }, { "formula_coordinates": [ 51, 148.32, 585.71, 314.91, 22.25 ], "formula_id": "formula_182", "formula_text": "|G 5 (x) -G 5 (y)| ≤ 10(R + R h )R 4 R f L h β -3 • n 3 2 • exp(3R 2 ) x -y 2" }, { "formula_coordinates": [ 51, 152.4, 702.22, 306.63, 27.05 ], "formula_id": "formula_183", "formula_text": "|G 6 (x) -G 6 (y)| ≤ 3(R + R h )R 4 R f L h β -1 • √ n • exp(R 2 ) x -y 2" }, { "formula_coordinates": [ 52, 178.44, 230.38, 255.02, 54.05 ], "formula_id": "formula_184", "formula_text": "|G 6,1 | = | c(x), Q 2 (x) • (A 1, * ,i • (f (x) -f (y)) • A 1, * ,j )| ≤ c(x) 2 Q 2 (x) A 1, * ,i 2 2 f (x) -f (y) 2 ≤ (R + R h )R h R 3 R f x -y 2" }, { "formula_coordinates": [ 52, 175.32, 355.42, 261.26, 54.05 ], "formula_id": "formula_185", "formula_text": "|G 6,2 | = | c(x), (Q 2 (x) -Q 2 (y)) • (A 1, * ,i • f (y) • A 1, * ,j )| ≤ c(x) 2 Q 2 (x) -Q 2 (y) A 1, * ,i 2 2 f (x) 2 ≤ (R + R h )R 4 R f L h β -1 • √ n • exp(R 2 ) x -y 2" }, { "formula_coordinates": [ 52, 213.12, 495.86, 195.15, 38.65 ], "formula_id": "formula_186", "formula_text": ") 2 Q 2 (x) A 1, * ,i 2 2 f (x) 2 ≤ L h R 4 R f R h β -1 • √ n • exp(R 2 ) x -y 2" }, { "formula_coordinates": [ 52, 152.16, 564.94, 307.11, 36.29 ], "formula_id": "formula_187", "formula_text": "|G 6 (x) -G 6 (y)| = |G 6,1 + G 6,2 + G 6,3 | ≤ 3(R + R h )R 4 R f L h β -1 • √ n • exp(R 2 ) x -y 2" } ]
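The formula strings in this record lose norm bars, subscripts, and ordering symbols in extraction. For orientation, the basic boundedness claims listed above as Parts 1–5 of the norm-bound entry can be written out as follows; reading ‖·‖ as a spectral norm and ‖·‖₂ as the Euclidean vector norm is an assumption, since the defining lemma is not part of this excerpt.

```latex
% Parts 1-5 of the basic norm bounds, restated with explicit norms
% (assumed: \|\cdot\| = spectral norm, \|\cdot\|_2 = Euclidean norm).
\begin{align*}
\|f(x)\|_2   &\le \beta^{-1}\sqrt{n}\,\exp(R^2), \\
\|c(x)\|_2   &\le R + R_h, \\
\|Q_2(x)\|   &\le R\,R_h, \\
\|q_2(x)\|_2 &\le R\,R_h\,(R + R_h), \\
\|p(x)_i\|_2 &\le 2R\beta^{-2}\,n\,\exp(2R^2) \quad \text{for } i \in [d].
\end{align*}
```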
10.18653/v1/D19-1371
2023-11-26
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b15", "b0", "b15", "b0", "b11", "b5", "b4", "b16", "b17", "b13", "b2", "b10", "b8", "b3", "b6", "b12", "b14", "b7", "b9" ], "table_ref": [], "text": "The structure of documents can vary from one domain and dataset to another, but typically, they tend to be divided into several sections, viz., thematically connected spans of text, such as e.g., introduction, methods, results and conclusion sections for experimental science articles. Furthermore, they can be ordered under potentially multiple classes, giving rise to a multi-labelling classification task.\nOne might be tempted to ignore concatenate all the sections and apply a model [16]. However, ignoring sections and assuming that they have equal importance is a weak assumption for document multi-label classification: in reality different sections contain different information, and do not contribute equally to the task, if at all. In our proposed approach, highlighting the important sections is performed through two layers of feed forward neural networks, which generate one weight per section. We additionally leverage these weights to explain and quantify the importance of each section.\nOur expectation is that the sections which include more substantial information, are assigned higher weights. However, the weights will change per document meaning that for one document, section A might be more important, while to classify another document, section B might be more important. Our work originates from the attention mechanism [1,16] where the goal is to assign higher weights to more important words. Our goal is to assign instead higher weights to more important sections (multi-label classification task). These learned weights can help researchers and content experts to better analyse the performance of the algorithm.\nThe attention mechanism originates in neural machine translation [1] where, given a sentence in a source language, the corresponding sentence in the target language is predicted. As opposed to traditional neural machine translation approaches [12] which treat words in the source sentence equally, attention-based models assign weights to each word in the source sentence to predict the next word in the target sentence. These weights can be interpreted as the importance of each word in the source language.\nDifferent types of attention mechanisms have been proposed so far. Sparse Attention [6,5,17] has been proposed to reduce the complexity and memory consumption of the original attention model. This type of attention focuses on a limited number of pairs for which attention weights should be computed, which yields a sparse matrix. Since there is a strong connection between matrix sparsity and graph neural networks [18], Graph Attention Networks [14,3,11] have been proposed. These approaches, that try to maximize the utility of sparse matrices, suffer from a lack of strong theoretical background but can be used in specific models such as transformers [9] and generative approaches [4]. Performer [7] was proposed to address these issues and decrease the run time of the attention-based models. Self-attention and multi-head attention [13] were proposed to 1) capture the relationship between words in the source sentence and 2) obtain different possible relationships through multi-head attention.\nAll these approaches try to compute the attention weights at a word or sentence level [15]. 
As opposed to the approaches mentioned so far, we try to propose a new Learning Section Weights (LSW) approach. Our proposed LSW approach can measure the contribution of each section of the given article in the downstream task (e.g., classification). In this paper, the downstream task we have considered is multi-label text classification.\nIndeed, as opposed to traditional BERT-based models [8] which treat the text from different sections equally, we propose LSW network to learn the importance of different sections. The LSW network is trained jointly with the model used for classification, where back-propagation [10] will propagate the classification error through the LSW and classifier parameters. Also, our proposed approach helps to have a deeper understanding of the different sections of a given article. Indeed, it brings additional information which helps data analysts and data scientists to draw conclusions based on the learned section weights.\n[2] BERT model. As core baseline, we concatenate all sections and feed them to a BERT-base model to do classification. Thereafter we improve on this baseline by adding feed-forward layers on top of the BERT model. " }, { "figure_ref": [ "fig_1" ], "heading": "CLS1 CLS1", "publication_ref": [], "table_ref": [], "text": "Feed Forward Neural Network (#neurons=256) The drawback with both approaches is that different sections are considered equally. To address this limitation, we propose to learn section weights to highlight more important sections and allow them to contribute to the multi-label classification task based on their importances.\ny1 1 y1 2 y1 K Feed Forward Neural Network (#neurons=256) y2 1 y2 2 y2 K Softmax Layer\nLet S = {s k } K k=1 represent all sections and W = {w k } K k=1 represent the corresponding weights computed per section (representing sections importances). In the remainder, X will denote the set of documents to be classified, while x ∈ X represents a single document. CLS k ∈ R d (d = 768) denotes the representation of section k obtained from the BERT model for document x. f θ : R d → R p (p = 256) is a first linear layer (of parameters θ). g η : R p → R 1 represents the second linear layer to compute section weights (of parameters η) which is also a feed-forward neural network. The output dimension of the second layer (g η ) equals to one as each section weight is a scalar representing the importance and contribution of the underlying section. Each layer is followed by a rectified linear unit (relu) activation followed by a (first) softmax layer to estimate section weights. Formally:\ny 1 k = relu(f θ (x)), y 2 k = relu(g θ (x)),(1)\nw k = exp(y 2 k ) K k=1 exp(y 2 k ) .(2)\nAfter computing Eq. 2, the section weights can be used for weighting each section and then classifying the input document as follows:\ny = K k=1 w k • CLS k , output = softmax(j ω (y))(3)\nwhere y is the summarization of all sections (instead of simply concatenating them), and its dimension is equal to CLS k dimension (768) as w k is a scalar, while j ω : R d → R p → R m (m denotes the number of classes) represents a final stack of feed-forward linear layers (of parameters ω) followed by a second softmax layer that performs the classification. Binary Cross-Entropy has been used as the loss function in our proposed LSW network. The proposed network has the following properties: -Since K k=1 w K = 1 (Eq. 
2), further analysis per document classification can be done through section weights analysis.\n-The contribution of each section in classification is determined by their corresponding section weights meaning that noisy (less useful) sections will contribute less to the multi-label classification. This improves classification results.\nThe architecture of the proposed LSW network is shown in Fig. 1. Please note that in our proposed approach: 1) all sections share the same BERT model, and 2) the BERT model parameters are not frozen, meaning that its trainable parameters are updated through backpropagation." }, { "figure_ref": [], "heading": "Experimental Setups and Results", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss firstly baselines and how we fine-tuned the network then, and secondly the datasets used for our experiments. Finally, we show the results and a few section weights plots which illustrate respectively that our proposed LSW network can achieve both state-of-the-art results and bring explainability by measuring the contribution of each section." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [], "table_ref": [], "text": "Baseline #1 : This architecture is quite similar to our proposed LSW network except for the section weights, which have been removed. Please note that in this case, BERT trainable parameters are not frozen, and they are updated due to classification errors (similar to our proposed LSW).\nBaseline #2 : This baseline concatenates all CLS BERT representations and then applies a classification layer on top of these representations. Please note that BERT trainable parameters are not frozen, and they are updated at training time (similar to our proposed LSW). Baseline #3 : The architecture of this baseline is identical to baseline #1, with the difference that the BERT model parameters have now been frozen (i.e., simulates zero-shot learning)." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "We have used both private (Elsevier) and public (arXiv) randomly sampled scientific article datasets to evaluate the performance of our proposed LSW5 approach. Tab.3 includes information regarding these two datasets. Note that each document contains an average of 200 words, giving rise to datasets comprising more than 20 million tokens each.\nTo fine-tune our proposed LSW network and the baselines mentioned above, we have used 10% of our dataset as validation data. The hyperparameters that we have tuned are: 1) the choice of optimizer, 2) the learning rate, 3) batch-size and 4) the number of epochs. We report the best performing configurations in Tables 1-2." }, { "figure_ref": [ "fig_2" ], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "We have evaluated the performance of our proposed LSW network and baselines mentioned in Sec. 3.1 using six different metrics, including Micro/Macro F1-score, Micro/Macro Precision and Micro/Macro Recall. The results shown in Tab. 4 and Tab. 5 indicate that our proposed algorithm can outperform baselines on the majority of the metrics. Indeed, we can notice that our proposed LSW approach is able to consistently outperform all baseline on all metrics except Macro Precision. The ability to surpass baselines on both arXiv (public) dataset and our private (Elsevier) dataset proves the stability of our proposed LSW approach meaning that LSW is able to handle real world case problems. Moreover, Fig. 
2 indicates that our proposed LSW network can assess the contribution of each section per document independently. As it can be noticed, per document, different sections contribute to the multi-label classification task differently, with the average section weights across all documents showing that, in general, the abstract plays the most important role in the tagging of the documents. " }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we presented our proposed LSW network, which can assess the contribution of each section of an article in the multi-labelling classification downstream task. Indeed, this network can classify the underlying document and add explainability regarding the importance of each section. The section weights are updated and computed using gradient descent and backpropagation, which helps to obtain better results and utilise sections such that the classification error is minimised. The results of utilising this approach on both our Elsevier and arXiv datasets indicate that our proposed LSW network can achieve state-of-the-art results compared to different baselines. However, for future work, we will apply the proposed architecture to different tasks (e.g., clustering)." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We extend our gratitude to the Elsevier Life Sciences Department, whose sponsorship and support made this research possible. We also extend our gratitude to all the many colleagues who contributed with their feedback to earlier drafts of this paper." } ]
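A minimal sketch of the section-weighting mechanism described in the methodology above (Eqs. 1–3): two feed-forward layers map each section's CLS embedding to a scalar, a softmax over sections turns the scalars into weights that sum to one, and the weighted sum of the CLS embeddings feeds the classification head. The layer sizes (768 → 256 → 1) follow the text; the use of PyTorch, the packaging into a single module, and the example class count are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class SectionWeighting(nn.Module):
    """CLS_k (d=768) -> y1_k (p=256) -> scalar y2_k; softmax over sections gives w_k;
    output is sum_k w_k * CLS_k (Eqs. 1-3 of the text)."""

    def __init__(self, d=768, p=256):
        super().__init__()
        self.f_theta = nn.Linear(d, p)  # first feed-forward layer
        self.g_eta = nn.Linear(p, 1)    # second layer, one scalar per section

    def forward(self, cls_embeddings):
        # cls_embeddings: (batch, K_sections, d), one shared-BERT [CLS] vector per section
        y1 = torch.relu(self.f_theta(cls_embeddings))       # (batch, K, p)
        y2 = torch.relu(self.g_eta(y1)).squeeze(-1)         # (batch, K)
        w = torch.softmax(y2, dim=-1)                       # section weights, sum to 1
        pooled = (w.unsqueeze(-1) * cls_embeddings).sum(1)  # (batch, d)
        return pooled, w

class LSWClassifier(nn.Module):
    """Weighted section summary followed by the classification head j_omega."""

    def __init__(self, d=768, p=256, num_classes=52):  # 52 = Elsevier label count, as an example
        super().__init__()
        self.weighting = SectionWeighting(d, p)
        self.j_omega = nn.Sequential(nn.Linear(d, p), nn.ReLU(), nn.Linear(p, num_classes))

    def forward(self, cls_embeddings):
        pooled, w = self.weighting(cls_embeddings)
        logits = self.j_omega(pooled)
        # The text pairs a softmax output with binary cross-entropy; applying
        # nn.BCEWithLogitsLoss to these raw logits is a common multi-label alternative.
        return logits, w
```

Returning the weight vector alongside the logits makes the per-document section-importance analysis (as in the section-weight distributions of Fig. 2) a by-product of inference rather than a separate step.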
Multi-label document classification is a traditional task in NLP. Compared to single-label classification, each document can be assigned multiple classes. This problem is crucially important in various domains, such as tagging scientific articles. Documents are often structured into several sections such as abstract and title. Current approaches treat different sections equally for multi-label classification. We argue that this is not a realistic assumption, leading to sub-optimal results. Instead, we propose a new method called Learning Section Weights (LSW), which leverages the contribution of each distinct section for multi-label classification. Via multiple feed-forward layers, LSW learns to assign weights to each section of a document and to incorporate those weights in the prediction. We demonstrate our approach on scientific articles. Experimental results on public (arXiv) and private (Elsevier) datasets confirm the superiority of LSW compared to state-of-the-art multi-label document classification methods. In particular, LSW achieves a 1.3% improvement in terms of Macro F-1 and a 1.3% improvement in terms of Macro Recall on the publicly available arXiv dataset.
Learning Section Weights for Multi-Label Document Classification
[ { "figure_caption": "Fig. 1 :1Fig. 1: LSW network architecture.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: Distribution of section weights indicating the importance of different sections. (a) shows section weights pertaining to a random document from arXiv dataset while (b) is related to section weights averaged on all arXiv test data. (c) shows section weights averaged on all Elsevier test data.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Tuned hyperparameters used for training on our private (Elsevier) dataset.", "figure_data": "DatasetOptimizer Learning Rate #Epochs MinibatchLSWAdame -5532Baseline #1 Adame -51032Baseline #2 Adame -5532Baseline #3 Adame -5432", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Tuned hyperparameters used for training on arXiv datasets.", "figure_data": "DatasetOptimizer Learning Rate #Epochs MinibatchLSWAdame -51032Baseline #1 Adame -51032Baseline #2 Adame -51032Baseline #3 Adame -51032", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Datasets information.", "figure_data": "Dataset #Documents #Classes SectionsElsevier 120,00052[Abstract, Title, Keywords]arXiv 306,11418[Abstract, Title]", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Multi-label classification results on arXiv dataset. Bold numbers represent the best performances.", "figure_data": "MethodMacro F-1 Macro Precision Macro Recall Micro F-1 Micro Precision Micro RecallLSW94.8%94.9%95.0%96.5%96.4%96.6%Baseline #1 94.4%95.3%93.7%96.3%96.2%96.4%Baseline #2 93.9%94.4%91.2%94.5%92.2%92.3%Baseline #3 89.1%89.3%89.2%90.1%90.0%90.3%", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Multi-label classification results on Elsevier dataset. Bold numbers represent the best performances.", "figure_data": "MethodMacro F-1 Macro Precision Macro Recall Micro F-1 Micro Precision Micro RecallLSW58.0%69.2%52.8%82.2%84.2%80.3%Baseline #1 56.7%70.3%52.7%81.8%83.8%79.9%Baseline #2 57.4%67.1%52.8%80.0%81.0%79.1%Baseline #3 54.8%66.0%51.5%77.7%78.3%76.6%", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" } ]
Maziar Moradi Fard; Paula Sorrolla Bayod; Kiomars Motarjem; Mohammad Alian Nejadi; Saber Akhondi; Camilo Thorne
[ { "authors": "D Bahdanau; K Cho; Y Bengio", "journal": "", "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "year": "2014" }, { "authors": "I Beltagy; K Lo; A Cohan", "journal": "", "ref_id": "b1", "title": "SciBERT: A pretrained language model for scientific text", "year": "2019-11" }, { "authors": "D Busbridge; D Sherburn; P Cavallo; N Y Hammerla", "journal": "", "ref_id": "b2", "title": "Relational graph attention networks", "year": "2019" }, { "authors": "M Chen", "journal": "", "ref_id": "b3", "title": "Short text generation based on adversarial graph attention networks", "year": "2021" }, { "authors": "Z Chen; Y Quan; Z Qu; L Liu; Y Ding; Y Xie", "journal": "", "ref_id": "b4", "title": "Dynamic n: M fine-grained structured sparse attention mechanism", "year": "2022" }, { "authors": "R Child; S Gray; A Radford; I Sutskever", "journal": "", "ref_id": "b5", "title": "Generating long sequences with sparse transformers", "year": "2019" }, { "authors": "K Choromanski; V Likhosherstov; D Dohan; X Song; A Gane; T Sarlos; P Hawkins; J Davis; A Mohiuddin; L Kaiser", "journal": "", "ref_id": "b6", "title": "Rethinking attention with performers", "year": "2020" }, { "authors": "J Devlin; M W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b7", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "E Kacupaj; J Plepi; K Singh; H Thakkar; J Lehmann; M Maleshkova", "journal": "", "ref_id": "b8", "title": "Conversational question answering over knowledge graphs with transformer and graph attention networks", "year": "2021" }, { "authors": "H J Kelley", "journal": "Ars Journal", "ref_id": "b9", "title": "Gradient theory of optimal flight paths", "year": "1960" }, { "authors": "Z Li; Y Zhao; Y Zhang; Z Zhang", "journal": "Knowledge-Based Systems", "ref_id": "b10", "title": "Multi-relational graph attention networks for knowledge graph completion", "year": "2022" }, { "authors": "I Sutskever; O Vinyals; Q V Le", "journal": "Advances in neural information processing systems", "ref_id": "b11", "title": "Sequence to sequence learning with neural networks", "year": "2014" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b12", "title": "Attention is all you need", "year": "2017" }, { "authors": "P Veličković; G Cucurull; A Casanova; A Romero; P Lio; Y Bengio", "journal": "", "ref_id": "b13", "title": "Graph attention networks", "year": "2017" }, { "authors": "H Wei; Z Li; C Zhang; H Ma", "journal": "Computer Vision and Image Understanding", "ref_id": "b14", "title": "The synergy of double attention: Combine sentence-level and word-level attention for image captioning", "year": "2020" }, { "authors": "G Xun; K Jha; Y Yuan; Y Wang; A Zhang", "journal": "Bioinformatics", "ref_id": "b15", "title": "MeSHProbeNet: a selfattentive probe net for MeSH indexing", "year": "2019" }, { "authors": "B Zhang; I Titov; R Sennrich", "journal": "", "ref_id": "b16", "title": "Sparse attention with linear units", "year": "2021" }, { "authors": "J Zhou; G Cui; S Hu; Z Zhang; C Yang; Z Liu; L Wang; C Li; M Sun", "journal": "AI Open", "ref_id": "b17", "title": "Graph neural networks: A review of methods and applications", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 346.58, 241.9, 95.68, 50.67 ], "formula_id": "formula_0", "formula_text": "y1 1 y1 2 y1 K Feed Forward Neural Network (#neurons=256) y2 1 y2 2 y2 K Softmax Layer" }, { "formula_coordinates": [ 3, 225.04, 465.9, 255.55, 12.69 ], "formula_id": "formula_1", "formula_text": "y 1 k = relu(f θ (x)), y 2 k = relu(g θ (x)),(1)" }, { "formula_coordinates": [ 3, 262.48, 481.97, 218.11, 28.16 ], "formula_id": "formula_2", "formula_text": "w k = exp(y 2 k ) K k=1 exp(y 2 k ) .(2)" }, { "formula_coordinates": [ 3, 206.23, 555.63, 274.36, 30.55 ], "formula_id": "formula_3", "formula_text": "y = K k=1 w k • CLS k , output = softmax(j ω (y))(3)" } ]
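The formula strings above flatten fraction bars, summation limits, and superscripts; written out, equations (1)–(3) of this record read as follows. The operands are kept exactly as extracted, including the argument x in both relu terms of (1), even though the running text introduces the second layer as g_η acting on the first layer's output.

```latex
\begin{gather}
y^{1}_{k} = \operatorname{relu}\bigl(f_{\theta}(x)\bigr), \qquad
y^{2}_{k} = \operatorname{relu}\bigl(g_{\theta}(x)\bigr), \tag{1}\\
w_{k} = \frac{\exp\bigl(y^{2}_{k}\bigr)}{\sum_{k=1}^{K} \exp\bigl(y^{2}_{k}\bigr)}, \tag{2}\\
y = \sum_{k=1}^{K} w_{k}\cdot \operatorname{CLS}_{k}, \qquad
\operatorname{output} = \operatorname{softmax}\bigl(j_{\omega}(y)\bigr). \tag{3}
\end{gather}
```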
2023-11-26
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b8", "b1", "b2" ], "table_ref": [], "text": "In recent years, the field of natural language processing (NLP) has witnessed revolutionary advancements, primarily fueled by the advent of large language models like OpenAI's ChatGPT (Brown et al., 2020) (Radford et al., 2019). These models have demonstrated remarkable capabilities in generating human-like text, leading to widespread applications in various domains, including customer service, content creation, and education. However, the increasing sophistication of these models also poses significant challenges, particularly in distinguishing machine-generated text from humangenerated content. The ability to make this distinction is crucial for maintaining the integrity and trustworthiness of digital communication. This paper presents evidence which helps to support the challenge that human-generated sentences can be differentiated from sentences generated from GPT-3.5-Turbo. Leveraging state-ofthe-art machine learning models such as RoBERTa-Base, RoBERTa-Large, and SVM, our approach aims to detect subtle differences in language patterns, stylistic features, and semantic nuances. Previous work in this area has primarily focused on identifying generic characteristics of AI-generated text, often overlooking the specific attributes of advanced models like ChatGPT (Chakraborty et al., 2023). Our work builds upon these foundations but introduces a targeted analysis aligned with the unique linguistic features of ChatGPT-generated content.\nTo develop and evaluate our model, we compiled a diverse dataset comprising thousands of sentences from human sources in domains including Sports, Medical, Twitter reviews, Text comprehension, and Literature spanning various genres and styles. Further, using these sentences to generate similar sentences from the GPT-3.5-Turbo API of ChatGPT. This dataset was used in the pre-processing stage of natural language embeddings and feature extraction before using it for any classification model.\nThe significance of this research lies not only in its immediate application for content verification but also in its broader implications for the field of digital forensics and the ethics of AI in communication. As language models continue to evolve, distinguishing between human and AI-generated content will become increasingly challenging, necessitating ongoing research and development in this area (Crothers et al., 2023).\nIn this paper, we describe our methodology, the architecture of our detection model, and the results of our experiments. We also discuss the broader implications of our findings in context with sentence features and patterns." }, { "figure_ref": [], "heading": "Numerous investigations have been undertaken", "publication_ref": [ "b7", "b4", "b10", "b5", "b9", "b6" ], "table_ref": [], "text": "to discern the features differentiating humangenerated text from machine-generated text. Let's delve into each of these approaches.\nRecent LLMs like ChatGPT can generate text to compose essays, describe art in detail, create AI art prompts, have philosophical conversations, and even code for you. To detect such intricate patterns, a new methodology was proposed in DetectGPT, where they discovered how LLMs operate where the models tend to create text that falls into specific patterns, particularly in regions where the model's calculations show negative curvature. 
DetectGPT utilizes log probabilities from the LLM and random perturbations that it generates to determine whether the text is machine-generated (Mitchell et al., 2023). It is particularly good at identifying fake news articles made by models like GPT-NeoX and outperforms most zero-shot methods. They test LLM performance with six diverse datasets: XSum for fake news detection, SQuAD for academic essays, Reddit WritingPrompts for creative writing, WMT16 in English and German, and Pub-MedQA for long-form expert answers. The future work that authors in DetectGPT wanted to explore was to see how watermarking algorithms worked with detection algorithms like DetectGPT. (Guo et al., 2023) proposed an HC3 (Human ChatGPT Comparison Corpus) dataset, which consists of nearly 40K questions and their corresponding human/ChatGPT answers, and developed a ChatGPT detection model to differentiate human and Chatgpt generated text. The detector is primarily created by fine-tuning RoBERTa model on the dataset, and the authors propose two methods for training it. The first method uses only the pure answered text, while the second method utilizes the question-answer text pairs for joint model training.\nIn 2019, (Solaiman et al., 2019) built a detector for GPT2-generated output by fine-tuning RoBERTa using outputs from the largest GPT2 Model, which was with 1.5 billion parameters and was capable of detecting if the text was a machinegenerated text or not. They conducted this research by using RoBERTa-base with 125 million parameters and RoBERTa-large with 356 million parameters as the foundation for their sequence classifier. RoBERTa, distinct from GPT-2 in terms of architecture and tokenizer, is a masked and non-generative language model. They achieved an accuracy of approximately 95% for this research. (Kirchenbauer et al., 2023) proposed a watermarking framework for language models. Watermarking basically means embedding signals in the machine-generated text so it is not detected by the human eye, but detected algorithmically and helps to decrease potential harm from these LLM models. Kirchenbauer tested this technique on an Open Pretrained Transformer (OPT) model with multibillion parameters. The authors used an approach of categorizing watermark tokens into green and red lists for distinct patterns. This approach helps in use cases like plagiarism check and copyright protection. To mimic diverse language modeling situations in their dataset, they extracted random text portions from the news-like section of the C4 dataset (Raffel et al., 2020). Each extracted string has a fixed number of tokens removed from the end, forming a baseline completion, while the rest of the tokens serve as the prompt.\nOn the other hand, on social media applications like Twitter, which are vulnerable to generating misinformation and fake news, (Kumarage et al., 2023) authors proposed a novel stylometric detection algorithm to detect AI-generated tweets that are human-like. Authors use BERT and ensemble approach, incorporating stylometric features as their baseline study. they test their models on two datasets: one we made to mimic human-to-AI author changes on a Twitter user's timeline and the publicly available TweepFake dataset. They use stylometric features to analyze text for stylistic signals, categorizing them into phraseology, punctuation, and linguistic diversity. 
Using this with a model like RoBERTa which classifies on top of the extracted features, they found that this method performs the best with XGBoost compared to logistic regression and random forest classifiers.\nThere are also tools that are developed, like (GPTZero, 2023) which is a tool used for analyzing text, employing two primary metrics, namely perplexity and burstiness, to differentiate between machine-generated and human text. It offers a publicly accessible API that generates a confidence score indicating the likelihood of a given text being machine-generated or not.\nWe plan to investigate the potential impact of different data sources and types of machinegenerated text on the AUCROC of our model. It may help us understand the limitations of our approach and identify areas for future research.\nBy building on the existing work in this field and extending it in these ways, we hope to contribute to developing more accurate and effective methods for distinguishing between human-generated and machine-generated text (ChatGPT). The model can provide a valuable addition to the literature on identifying fake and synthetic data, as it addresses a specific type of data generation method and can help improve the accuracy of existing methods. To do this, we build a custom dataset by combining multiple types of text like PubMed data, SQuAD, Twitter feed data, Football commentary, and novels." }, { "figure_ref": [], "heading": "Proposed Approach", "publication_ref": [], "table_ref": [], "text": "This section presents a nuanced and multi-faceted approach for discerning text's origin, specifically focusing on the distinction between humangenerated and ChatGPT-generated sentences. This requires training and fine-tuning state-of-the-art NLP models on a diverse dataset of humangenerated text and ChatGPT-generated text. The ultimate goal is to enable effective moderation and filtering of text content across different platforms and applications, thus ensuring the safety and integrity of online communication. The models should be able to accurately identify text generated by ChatGPT across multiple domains, including Sports, Medical, Twitter reviews, Text comprehension, and Literature. The research aims to provide insights into the decision-making process of the models and the characteristics of ChatGPT-generated text that distinguish it from human-generated text." }, { "figure_ref": [ "fig_0" ], "heading": "Dataset Description", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "The dataset used in this study is a combination of five different sources of human-generated text, including Twitter Sentiment, Football Commentary, Project Gutenberg, PubMedQA, and SQuAD datasets. The purpose of using these datasets is to improve the model's ability to understand and analyze both short-form and long-form text and complex medical jargon.\nWe split the Football Commentary dataset into 100-word and 150-word sentences, truncated SQuAD questions to a maximum of 200 words, and extracted 200-word sentences from the Project Gutenberg dataset to form custom literature sentences.\nThe merged human-generated text resulted in 2,534,498 sentences, which were grouped into categories of 10-200 words with 5-word increments. 
Normalization was performed to ensure 16% of sentences in each category, resulting in a final dataset of 400,015 sentences as shown in figure 1.\nTo generate machine text, we rephrased the selected sentences using the GPT-3.5-turbo chatgpt API from OpenAI, with a prompt to ensure that the length of the rephrased sentence is the same as the original sentence as shown in Table 1. The total number of rows in our dataset is 800,030, consisting of both human and machine generated text, with sentences ranging from 10 to 200 words in length. We truncated sentences longer than 200 words and removed those with fewer than 10 words." }, { "figure_ref": [], "heading": "Prompt", "publication_ref": [], "table_ref": [], "text": "ChatGPT Response Please rephrase this Rephrased sentence sentence make sure the from Chatgpt words length is equal to the given sentence. " }, { "figure_ref": [], "heading": "Model Description", "publication_ref": [], "table_ref": [], "text": "Our methodology encompasses three distinct cases, each tailored to leverage the strengths of a specific model. In the first case, we employ Support Vector Machines (SVM) to conduct a meticulous analysis, followed by using RoBERTa-base in the second case and RoBERTa-large in the third case.\nThe models should be able to accurately identify text generated by ChatGPT across multiple domains, including Sports, Medical, Twitter reviews, Text comprehension, and Literature. The research aims to provide insights into the decision-making process of the models and the characteristics of ChatGPT-generated text that distinguish it from human-generated text." }, { "figure_ref": [], "heading": "SVM", "publication_ref": [ "b3" ], "table_ref": [], "text": "In the case of the SVM classifier, a powerful tool in machine learning, we employed a radial basis function (RBF) kernel to discern patterns within the feature space. The RBF kernel is known for capturing complex relationships in non-linear data, making it particularly well-suited for distinguishing between human-generated and ChatGPT-generated sentences. We used the TF-IDF (Term Frequency-Inverse Document Frequency) vectorization tech- The choice of the SVM classifier with an RBF kernel and TF-IDF stems from its status as a wellestablished and widely-used baseline approach in the domain of text classification. This approach has been successfully employed in previous text classification studies, as demonstrated by its use in studies such as (Das and Chakraborty, 2018).\nIt serves as a benchmark against which the performance of more complex models can be evaluated. In contrast to the RoBERTa models, the SVM-RBF-TF-IDF approach is lightweight, requiring less time and computational resources for both training and inference. This characteristic renders it more amenable to scenarios with resource constraints. Additionally, the SVM model proves to be robust, showcasing competitive performance compared to other machine learning models of similar size and complexity in the realm of text classification.\nHowever, it is essential to note that while the SVM model offers efficiency and robustness, it does not attain state-of-the-art performance levels achieved by the more advanced RoBERTa models, as detailed in the subsequent sections of our analysis." }, { "figure_ref": [], "heading": "RoBERTa-base", "publication_ref": [ "b7" ], "table_ref": [], "text": "In our second case, we employed the RoBERTabase model, a transformer-based architecture renowned for its exceptional natural language processing capabilities. 
To tailor this pre-trained language model to our specific binary classification task-distinguishing between human-generated and ChatGPT-generated sentences we augmented RoBERTa-base with a fully connected (FC) layer followed by a sigmoid activation function at the end. This modification allowed us to transform the contextualized representations learned by RoBERTa into a binary classification decision. The RoBERTa model, with its deep bidirectional architecture and extensive pre-training on vast corpora, captures intricate linguistic patterns and semantic relationships, enhancing our ability to discern nuanced differences in text origins. Adding the FC+sigmoid layer enables the model to map these representations to a binary decision space, facilitating accurate classification. This hybrid approach leverages the strengths of transformer-based language models while tailoring them to the specific demands of our binary classification task.\nThe RoBERTa-base model, augmented with an (FC+sigmoid) layer, is chosen for its excellence in NLP tasks, supported by its deep bidirectional architecture. It showcases versatility and is successfully utilized in text detection tasks by studies like (Mitchell et al., 2023) and [DistilBERT]. With 125 million parameters, RoBERTa-base balances complexity and efficiency compared to RoBERTa-large (355 million parameters). This choice facilitates an exploration of model size-performance trade-offs, offering valuable insights into transformer-based models for binary classification tasks.\nThe RoBERTa-base model, enhanced with an (FC+sigmoid) layer, proves effective in our binary classification, nearing state-of-the-art performance observed in similar studies. Its adeptness in capturing intricate linguistic nuances is countered by its larger size (125 million parameters), demanding more training and inference resources. While excelling in accuracy, its computational complexity introduces challenges in efficiency when compared to the lightweight SVM model. This highlights the nuanced trade-offs between model sophistication, performance, and resource requirements within the context of our classification task.gths of transformer-based language models while tailoring them to the specific demands of our binary classification task." }, { "figure_ref": [], "heading": "RoBERTa-large", "publication_ref": [ "b7" ], "table_ref": [], "text": "In our third case, we employed the formidable RoBERTa-large model, a transformer-based ar-chitecture renowned for its extensive depth and superior natural language understanding capabilities. Augmented with an additional fully connected (FC) layer and a sigmoid activation function at the end, RoBERTa-large was tailored for our binary classification task, distinguishing between human-generated and ChatGPT-generated sentences. With a substantial parameter count of 355 million, RoBERTa-large surpasses its base counterpart in both depth and complexity.\nRoBERTa-large, enhanced with an appended (FC+sigmoid) layer, represents a powerful extension of its base counterpart and proves instrumental in our binary classification task. Acknowledged as a highly proficient transformer for NLP tasks, this model, with its extensive depth and attention mechanisms, has been a preferred choice in recent studies addressing machine-generated text detection tasks, as demonstrated by its implementation in studies such as (Mitchell et al., 2023) and [Dis-tilBERT]. 
The RoBERTa-large model, with its 355 million parameters, boasts an intricate understanding of contextual relationships within textual data, enhancing its ability to discern subtle distinctions between human-generated and ChatGPT-generated sentences. Notably, it has exhibited outstanding performance in similar studies, achieving state-ofthe-art results on our dataset. However, the heightened complexity and larger size of RoBERTa-large introduce computational challenges, requiring substantial training and inference resources. This trade-off between enhanced capabilities and increased computational demands prompts a nuanced consideration of its suitability for our specific classification task, especially in comparison to the more lightweight SVM and RoBERTa-base models.\nThis section outlines the experimental configuration for both the dataset and the model." }, { "figure_ref": [], "heading": "Dataset Experiments", "publication_ref": [], "table_ref": [], "text": "For our experimental approach, we organized our datasets into distinct training, testing, and validation sets, each comprising sentences ranging in length from 10 to 200 words. To ensure a granular examination of model performance across different sentence lengths, we further divided the data within each set into specific ranges, such as 10-14 words, 15-19 words, and so on.\nThis stratification allowed us to scrutinize the models' proficiency in handling varying sentence lengths, offering insights into their adaptability across a spectrum of linguistic contexts. systematically categorizing the data based on sentence length ranges, our experimental design aimed to comprehensively evaluate model robustness and effectiveness in capturing nuances across diverse sentence lengths." }, { "figure_ref": [], "heading": "Model Experiments", "publication_ref": [], "table_ref": [], "text": "Furthermore, to comprehensively assess the discriminatory power of each model across different sentence length ranges, we recorded the Area Under the Receiver Operating Characteristic curve (AUC-ROC) for each specific range and the cumulative performance across all ranges. This recording strategy allowed us to capture the variations in model performance at distinct sentence lengths and derive an aggregate measure of effectiveness.\nThe AUC-ROC analysis is a robust metric, offering a holistic understanding of each model's ability to discriminate between human-generated and ChatGPT-generated sentences within the specified length categories. Combining results across all ranges, our evaluation framework thoroughly examines the models' overall discriminatory performance, contributing valuable insights to sentence-length-specific classification. even in longer sentence ranges, indicating its proficiency across diverse linguistic contexts. These results offer valuable insights into the models' efficacy at different scales of textual complexity, emphasizing the performance variations observed across various sentence length intervals. These scores provide a detailed breakdown of each model's performance within distinct sentence length intervals, offering valuable insights into their discriminatory abilities across varying linguistic contexts." 
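A small sketch of the per-range evaluation just described: predictions are grouped by the word count of the input sentence, and an AUC-ROC score is computed inside each bucket as well as over the full test set. The bucket boundaries and function names are illustrative (the reported tables mix 5- and 10-word ranges); the scoring call itself is scikit-learn's standard roc_auc_score.

```python
from collections import defaultdict
from sklearn.metrics import roc_auc_score

def auc_by_length(texts, labels, scores, step=5, lo=10, hi=200):
    """Group (label, score) pairs into word-count buckets of width `step`
    and report AUC-ROC per bucket plus the overall AUC-ROC."""
    buckets = defaultdict(list)
    for text, y, s in zip(texts, labels, scores):
        n_words = len(text.split())
        if lo <= n_words <= hi:
            start = lo + ((n_words - lo) // step) * step
            buckets[f"{start}-{start + step - 1}"].append((y, s))

    per_range = {}
    for key, pairs in sorted(buckets.items(), key=lambda kv: int(kv[0].split("-")[0])):
        ys, ss = zip(*pairs)
        if len(set(ys)) > 1:  # AUC-ROC is undefined when only one class is present
            per_range[key] = roc_auc_score(ys, ss)
    return per_range, roc_auc_score(labels, scores)
```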
}, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [ "b7" ], "table_ref": [], "text": "In our research, we effectively create a wellconstructed labeled dataset encompassing texts originated by both humans and ChatGPT (GPT-3.5 turbo) from five distinct sources. The sentences in the dataset exhibit a varied length range, spanning from 10 to 200 words. Additionally, we have designed and trained multiple classifiers to differentiate between texts generated by humans and ChatGPT.\nIn the future, when creating datasets, we can aim to add a wider range of text from more sources. This approach will make the data more versatile, suitable for different fields, and capable of han-dling texts of different lengths. To improve the results of classification, progress can be achieved by developing and training more advanced models. Considering the increasing development of Language Model Models (LLMs), our work can be expanded to incorporate all noteworthy LLMs, such as Llama, Orca, Falcon, and Palm, for both dataset creation and detection purposes. For further model improvement, we can design a system to identify phrases within document sentences, aiming to distinguish between machine-generated and human-generated text. Finally, exploring zero-shot and one-shot learning systems like (Mitchell et al., 2023) can eventually aid in saving resources and time needed for training complex classifiers." } ]
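To make the two detector families of the Proposed Approach concrete, two minimal sketches follow. The first is the classical baseline: TF-IDF features fed to an SVM with an RBF kernel, written as a scikit-learn pipeline (the hyperparameters shown are library defaults, not the values tuned in the paper).

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

# TF-IDF + RBF-kernel SVM baseline; probability=True exposes scores for AUC-ROC.
svm_detector = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("svm", SVC(kernel="rbf", probability=True)),
])
# svm_detector.fit(train_texts, train_labels)
# scores = svm_detector.predict_proba(test_texts)[:, 1]
```

The second sketch places a single fully connected layer and a sigmoid on top of a pre-trained RoBERTa encoder, mirroring the description of the RoBERTa-base and RoBERTa-large classifiers; pooling the first token position and the exact head shape are assumptions, since the text does not spell out those details.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class RobertaDetector(nn.Module):
    """RoBERTa encoder + FC + sigmoid for binary human-vs-ChatGPT classification."""

    def __init__(self, model_name="roberta-base"):  # or "roberta-large"
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.fc = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        cls = hidden[:, 0]  # representation of the <s> (CLS-like) position
        return torch.sigmoid(self.fc(cls)).squeeze(-1)  # probability of "machine-generated"

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# batch = tokenizer(list_of_sentences, return_tensors="pt", padding=True, truncation=True)
# probs = RobertaDetector()(batch["input_ids"], batch["attention_mask"])  # train with nn.BCELoss
```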
Our research addresses the crucial challenge of discerning text produced by Large Language Models (LLMs) from human-generated text, a capability that matters for many applications. Amid ongoing debate about whether such detection is attainable, we present supporting evidence for its feasibility. We evaluated our models on multiple datasets, including Twitter Sentiment, Football Commentary, Project Gutenberg, PubMedQA, and SQuAD, confirming the efficacy of the detection approaches. These datasets were sampled under constraints spanning a wide range of domains and sentence lengths, laying a foundation for future research. We evaluate text generated by GPT-3.5-Turbo against detectors based on SVM, RoBERTa-base, and RoBERTa-large. Our findings show that detection performance depends primarily on the length of the input sentence.
Machine-Generated Text Detection using Deep Learning
[ { "figure_caption": "Figure 1 :1Figure 1: Dataset distribution across different range of lengths", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: SVM Pipeline.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: RoBERTa Pipeline.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Performance of SVM over texts of different lengths.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Performance of RoBERTa-base over texts of different lengths.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Performance of RoBERTa-large over texts of different lengths.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Example Generating Rephrase Data.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "AUC-ROC comparison for models.", "figure_data": "RangeSVMRoBERTa-RoBERTa-baselarge10-140.7991190.9324740.93466215-190.8265360.9456340.94932720-240.8545380.9547750.95701825-290.8849570.9603580.96495930-340.878530.961520.97051835-390.8390020.950690.95266640-490.856170.9533830.95698850-590.8399410.961730.96963960-690.8694630.9727580.97336870-790.828540.9711840.97256780-890.8071120.9772580.98270490-990.8687760.9859650.991071100-109 0.9516410.9972990.999021109-119 0.81349810.966667120-129 0.5671050.93751130-139 0.84259311140-159 0.9243760.9977170.990476160-1790.9911180-199 0.98437510.5Table 3: AUCROC scores achieved w.r.t ranges.and Val sets. Moving to transformer-based mod-els, RoBERTa-base exhibits elevated performancecompared to SVM, while RoBERTa-large furtherenhances this performance across the Train, Test,and Val sets. These scores provide a quantitativeoverview of each model's effectiveness in distin-guishing between human-generated and ChatGPT-generated sentences across different datasets, high-lighting the robust discriminatory capabilities ofthe RoBERTa models.For our second experiment, we recorded AUC-ROC scores for each range length of sentencesacross three different models: SVM, RoBERTa-base, and RoBERTa-large. The table 3 displays theAUC-ROC scores for each model within specificsentence length ranges.As sentence lengths increase, RoBERTa mod-els consistently outperform SVM, showcasing ro-bustness in capturing nuanced patterns. In table 4RoBERTa-large maintains competitive F1 scores", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "F1 scores achieved w.r.t ranges.", "figure_data": "RangeSVMRoB-base RoB-large10-140.774232 0.9216760.92211215-190.8241980.946170.94837420-240.8547330.95430.95432325-290.875496 0.9544390.95649530-340.901613 0.9686020.97293735-390.8690930.954520.95417940-490.866435 0.9521790.95754250-590.875376 0.9660870.97339260-690.921739 0.9807810.97858470-790.9411760.991110.99354480-890.954955 0.9965040.99761990-990.963504 0.9980380.998717100-109 0.884438 0.9955270.9665109-119 0.96315110.998805120-129 0.959459 0.9983610.996774130-1390.964211140-159 0.760638 0.8878050.630081160-179 0.98989911180-199 0.98412710.989247", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" } ]
Raghav Gaggar; Ashish Bhagchandani; Harsh Oza
[ { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Souradip Chakraborty; Amrit Singh Bedi; Sicheng Zhu; Bang An; Dinesh Manocha; Furong Huang", "journal": "", "ref_id": "b1", "title": "On the possibilities of ai-generated text detection", "year": "2023" }, { "authors": "Evan Crothers; Nathalie Japkowicz; L Herna; Viktor", "journal": "IEEE Access", "ref_id": "b2", "title": "Machine-generated text: A comprehensive survey of threat models and detection methods", "year": "2023" }, { "authors": "Bijoyan Das; Sarit Chakraborty", "journal": "GPTZero Website", "ref_id": "b3", "title": "An improved text sentiment classification model using tf-idf and next word negation", "year": "2018" }, { "authors": "Biyang Guo; Xin Zhang; Ziyuan Wang; Minqi Jiang; Jinran Nie; Yuxuan Ding; Jianwei Yue; Yupeng Wu", "journal": "", "ref_id": "b4", "title": "How close is chatgpt to human experts? comparison corpus, evaluation, and detection", "year": "2023" }, { "authors": "John Kirchenbauer; Jonas Geiping; Yuxin Wen; Jonathan Katz; Ian Miers; Tom Goldstein", "journal": "", "ref_id": "b5", "title": "A watermark for large language models", "year": "2023" }, { "authors": "Tharindu Kumarage; Joshua Garland; Amrita Bhattacharjee; Kirill Trapeznikov; Scott Ruston; Huan Liu", "journal": "", "ref_id": "b6", "title": "Stylometric detection of aigenerated text in twitter timelines", "year": "2023" }, { "authors": "Eric Mitchell; Yoonho Lee; Alexander Khazatsky; Christopher D Manning; Chelsea Finn", "journal": "", "ref_id": "b7", "title": "Detectgpt: Zero-shot machine-generated text detection using probability curvature", "year": "2023" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b8", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b9", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Irene Solaiman; Miles Brundage; Jack Clark; Amanda Askell; Ariel Herbert-Voss; Jeff Wu; Alec Radford; Gretchen Krueger; Jong Wook Kim; Sarah Kreps", "journal": "", "ref_id": "b10", "title": "Release strategies and the social impacts of language models", "year": "2019" } ]
[]
10.18653/v1/2021.emnlp-main.75
2023-11-26
[ { "figure_ref": [ "fig_6" ], "heading": "INTRODUCTION", "publication_ref": [ "b28", "b38", "b10", "b25", "b34", "b44", "b76", "b71", "b5", "b25", "b28", "b52", "b56", "b59", "b47", "b51", "b83", "b3", "b11", "b2", "b4", "b46", "b61", "b81", "b40" ], "table_ref": [], "text": "Recent approaches for ranking documents have focused heavily on contextual transformer-based models for both retrieval [29,39] and re-ranking [11,26,35,45,77]. To further improve the effectiveness of contextual ranking models, earlier works have explored negative sampling techniques [72], pre-training approaches [6], and different architectural variants [26,29]. One largely under-explored area is the use of data augmentation in neural information retrieval (IR) to learn effective and robust ranking models.\nData augmentation helps improve the generalization and robustness of highly-parameterized models by creating new training examples through transformations applied to the original data. A key benefit of data augmentation is that it can improve sample efficiency, meaning that a model can achieve improved performance with limited amounts of training data. This is because data augmentation effectively increases the size of the training dataset, allowing a model to learn from a wider range of examples. Additionally, data augmentation methods result in robust models allowing for better zero-shot transfer [53,57]. Augmentation techniques have been successfully used to help train more robust models, particularly when using smaller datasets in computer vision [60], speech recognition [48], spoken language understanding [52], and dialog system [84]. However, the use of data augmentation for document ranking has not been investigated in detail to the best of our knowledge until recently [4,12].\nContextual models are first pre-trained on large amounts of language data followed by taskspecific fine-tuning. However, popular contextualized models are over-parameterized with more than 100 million parameters and might over-fit the training data when the task-specific fine-tuning data is small. Many real-world ranking tasks can have smaller query workloads and therefore necessitate sample efficient training like data augmentation [3,5,47,62]. However, simply augmenting training data with existing pointor pairwise ranking losses does not lead to performance improvements. We show that our data augmentation techniques using existing pointwise ranking losses, i.e. crossentropy losses, result in degradation of performance (cf. Figure 3). This can be attributed to a known lack of robustness to noisy labels [82] and the possibility of poor margins [41], leading to reduced generalization performance." }, { "figure_ref": [], "heading": "Contrastive learning for rankings with Data Augmentation", "publication_ref": [ "b32" ], "table_ref": [], "text": "Towards improving the ranking performance in limited data setting we first propose both unsupervised and supervised data augmentation methods. Both cases involve creating new query-document pairs from existing instances. We do not perturb the query, only the document. Unsupervised data augmentation methods include adding new query document pairs where documents are relevant extractive pieces of text from an existing relevant document determined by lexical (BM25-based) or semantic (embedding-based) similarity. For supervised augmentation, we use rationale-selection [33] approach specifically devised for document ranking. 
This approach selects relevant portions of an existing relevant document in a supervised manner given a query.\nSecondly, we propose contrastive learning objectives for document ranking that can exploit the newly augmented training instances. A key idea in contrastive learning is to learn the input representation of an instance or anchor such that its positive instances are embedded closer to each other, and the negative samples are farther apart. In this work, we construct augmented querydocument pairs from existing positive instances by multiple augmentation strategies. We extend the idea of contrastive learning to the document ranking task by considering query-document pairs belonging to the same query as positive instances, unlike in vision and NLP tasks, where all instances with the same class label can potentially become positive pairs.\nOur key contribution to this work is the effective combination of data augmentation and contrastive learning to improve sample efficiency and robustness." }, { "figure_ref": [], "heading": "Results and Key Takeaways", "publication_ref": [], "table_ref": [], "text": "To this end, we systematically explore existing contrastive learning objectives and augmentation strategies on a host of contextual language models -BERT, RoBERTa and DistilBERT in multiple low-data ranking settings -from 100 to 100, 000 training instances. We do not intend to engineer a state-of-art ranking model for document ranking but instead focus on optimization strategies that work well in low-data settings. We find that using the right combination of augmentation technique and loss objective even when you have only 1k training instances leads to a 83% improvement in ndcg@10 for DistilBERT. We find that even larger models like BERT which tend to be more sample efficient in comparison see a 9% improvement in low data settings. When transferring augmented models to out-of-domain datasets, we once again see drastic improvements -RoBERTa sees sample efficiency gains ranging from 18.8% to sometimes a high of 134% on various BEIR datasets with no additional fine-tuning. " }, { "figure_ref": [], "heading": "Contributions", "publication_ref": [], "table_ref": [], "text": "In sum, we make the following contributions in this work:\n• We propose and study different data augmentation and contrastive loss approaches for document ranking task. • We also show the impact of model size on ranking performance using augmented data of different sizes. • We show the performance of different data augmentation and ranking losses in in-domain and out-of-domain (BEIR) settings." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "The related work on the topic at hand can be broadly categorized into three main areas of research, document ranking using contextual models, different data augmentation techniques on text-based tasks, and finally, we investigate different loss functions used in text ranking and analyse their relationship with contrastive loss." }, { "figure_ref": [], "heading": "Contextual Models for Ad-hoc Document Retrieval", "publication_ref": [ "b12", "b41", "b44", "b48", "b33", "b24", "b25", "b10", "b32", "b57", "b5", "b16", "b17", "b31", "b43", "b64", "b53" ], "table_ref": [], "text": "The task of text ranking often involves two steps: a fast retrieval step followed by a more involved re-ranking step. The re-ranking step is particularly important because it can significantly improve the performance of the text ranking task. 
In this paper, the focus is on improving the performance of the re-ranking stage which typically involves the use of contextual models.\nContextual models, such as BERT [13] and RoBERTa [42], have shown promising improvements in the document ranking task. The input, i.e., query document pairs, can be encoded using two major paradigms for training a contextual re-ranker: joint encoding and independent encoding. In the joint encoding paradigm, the most common way to apply contextual models for document re-ranking is to jointly encode the query and document using an over-parameterized language model [45,49]. On the other hand, the second paradigm encodes the document and the query independently of each other. Models that implement independent query and document encoding are referred to as dual encoders, bi-encoders, or two-tower models.\nWhile dual encoders are typically used in the retrieval phase, recent proposals have used them in the re-ranking phase as well [34]. It is important to note that a common problem in both approaches is the upper bound on the acceptable input length of contextual models, which restricts their applicability to shorter documents. When longer documents do not fit into the model, they are chunked into passages or sentences to fit within the token limit, either by using transformerkernels [25,26], truncation [11], or careful pre-selection of relevant text [33,58]. As pre-training is an important part of these models, several approaches and pre-training objectives have been proposed specifically for ranking [6,17,18,32].\nRanking using LLM's: Recently a lot of work has been done on re-ranking using LLM's in a zero-shot setup. There have been works using list-wise approach [44,65], Pairwise Ranking Prompting [54] for re-ranking passages. All the above approaches use either Flan-UL2 model with 20B parameters or ChatGPT (GPT-3.5-turbo,GPT-4) with more than 175B parameters. To best of our knowledge there have been no works on document ranking using LLM's in a zero-shot or instruct tuned setup.\nIn this work, the focus is on joint encoding models for document ranking, and simple document truncation is employed whenever longer documents exceed the overall input upper bound. By doing so, the aim is to improve the performance of the re-ranking stage and the text ranking task overall." }, { "figure_ref": [], "heading": "Data Augmentation", "publication_ref": [ "b6", "b9", "b18", "b42", "b55", "b77", "b82", "b30", "b60", "b63", "b45", "b79", "b49", "b75", "b73", "b74", "b7", "b67", "b37", "b72", "b22", "b27", "b71", "b68", "b36", "b3", "b11" ], "table_ref": [], "text": "Data augmentation is a powerful technique that has a significant impact on different segments, including text, speech, image, vision, and more. Researchers have proposed new data augmentation strategies and explored their influence on deep learning models in different fields, such as speech recognition, spoken language understanding, dialog systems, image recognition, and text classification. Some of the proposed techniques include GridMask [7], AutoAugment [10], and [19,43,56,78,83].\nFor text-related tasks, various data augmentation techniques have been proposed, such as named entity recognition, sentiment analysis, text classification, and text generation. Data augmentation has been shown to help boost the performance of several downstream natural language processing (NLP) and text-related tasks. 
For instance, data augmentation using pre-trained transformer models has been shown to improve the performance of named entity recognition, language inference, text categorization, classification, and query-based multi-document summarization tasks [31,61,64]. A proposed framework called Text Attack [46] combines data augmentation, adversarial attacks, and training in NLP. One of the most common data augmentation techniques in NLP is the use of word embeddings, which involves converting words into numerical vectors. Different techniques can be used to augment word embeddings, such as word substitution, word deletion, and word insertion. Another proposed approach for text classification involves replacing words with synonyms [80] and inserting or deleting words in the input text.\nIn recent times, data augmentation methods have gained popularity in the context of retrieval tasks. Such techniques have shown promising results for question retrieval [50], query translation [76], question-answering [74,75], cross-language sentence selection [8], machine reading [68], and query expansion [38]. For instance, Yangetal [73] proposed a cross-momentum contrastive learning [23] based scheme for open-domain question answering. Dense retriever models [28,72] have recently been proposed, which sample negative documents to train the dense retrievers in a contrastive way. However, these methods do not pay attention to the uniform nature of contrastive learning [69]. In contrast, a contrastive dual learning based method [37] for dense retrieval takes care of uniformity. Most of these approaches concentrate on negative samples and aim to train an effective dense retriever framework. Works like InPars [4] and PromptAgator [12] focus on using large generative models to generate queries from sampled documents to increase training data for dense retrieval tasks." }, { "figure_ref": [], "heading": "Contrastive Learning", "publication_ref": [ "b29", "b23", "b50", "b62", "b70", "b13", "b19", "b26", "b21", "b50", "b62", "b69", "b62", "b27", "b54", "b84", "b39", "b71", "b78" ], "table_ref": [], "text": "Contrastive losses with data augmentation have been widely studied in unsupervised learning settings, with augmentation of instances treated as positive samples and other random instances serving as negative samples. However, recent research has explored ways to incorporate label information for more precise supervision signals during data augmentation [30]. Various methods have used this approach to learn representations from unsupervised data [24,51,63,71], achieving superior performance compared to other approaches [14,20]. By generating training instances from original ones using different data augmentation strategies, a contrastive loss can help bring the representation of related entities closer together in the embedding space. For a comprehensive overview, a recent survey on supervised and self-supervised contrastive learning is recommended [27]. Recent research has applied supervised contrastive learning (SCL) to fine-tuning pre-trained language models, but with limited success [22]. There are various other contrastive losses used in text retrieval, which include InfoNCE loss [51], N-Pair loss [63], centroid triplet loss [70], and lifted structured loss [63]. We discuss in detail the contrastive losses in Section 5.\nAnother line of work that frequently utilizes contrastive loss functions is dense passage retrieval. Karpukhin et al. 
[28] introduced the DPR model, which uses dual-encoders to independently compute representations of queries and documents. The loss used to train DPR models is similar to the ones mentioned above; it makes use of in-batch negatives, i.e., it takes the documents from all instances in a given training batch into account and contrasts them with the relevant document. Several other approaches [55,85] have adopted this training objective. Subsequent works have focused on providing better negative sampling techniques to replace the in-batch negatives [40,72,79]. In principle, our data augmentation techniques can be applied in the context of dual-encoder models for retrieval, although we focus on cross-encoders for re-ranking in this work." }, { "figure_ref": [], "heading": "Extension from previous work", "publication_ref": [ "b1", "b1", "b32", "b1" ], "table_ref": [], "text": "This work is an extension of our previous work [2] titled \"Supervised Contrastive Learning Approach for Contextual Ranking\". In [2] we predominantly explored simple data augmentation techniques with supervised contrastive loss (SCL) for document ranking. In this work, we considerably increase the scope of our investigations to include more involved supervised data augmentation schemes like [33]. Also included in this paper is a detailed investigation of other metric losses like centroid triplet loss, noise contrastive losses, and neighborhood-based unsupervised losses. Different from [2], we also study how model size impacts model performance when trained on different sizes of augmented and non-augmented data. Finally, we showcase the benefits of data augmentation in zero-shot transfer settings to test the robustness and generalization of the rankers learned using augmented training data." }, { "figure_ref": [], "heading": "DOCUMENT (RE-)RANKING USING CONTEXTUAL LANGUAGE MODELS", "publication_ref": [ "b10", "b48", "b57", "b76", "b12", "b41", "b58" ], "table_ref": [], "text": "Our task is to train a model for document re-ranking. Ranking models usually provide a relevance score when given a query-document pair (𝑞, 𝑑) as input. This score can then be used to rank documents based on their relevance to the given query.\nFormally, the training set comprises pairs 𝑞 𝑖 , 𝑑 𝑖 𝑁 𝑖=1 , where 𝑞 𝑖 is a query and 𝑑 𝑖 is a document that is either relevant or irrelevant based on its label 𝑦 𝑖 . The aim is to train a ranker 𝑅 that predicts a relevance score ŷ ∈ [0; 1] given a query 𝑞 and a document 𝑑: 𝑅 : (𝑞, 𝑑) ↦ → ŷ\nAfter training, the ranking model 𝑅 can be used to re-rank a set of documents obtained in the initial retrieval process by a lightweight, typically term-frequency-based, retriever with respect to a query. This is a common practice for ranking tasks, where the documents are initially retrieved and then reranked by a more sophisticated and computationally expensive model. Recent studies have shown that pre-trained contextual language models have exhibited promising performance in document ranking tasks [11,49,58,77]. These cross-attention models jointly model queries and documents. In this study, three different joint modeling approaches based on BERT [13], RoBERTa [42] and DistilBERT [59] are considered, and their performance is evaluated under different contrastive loss setup with different amounts of data augmentation. 
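To make the joint (cross-encoder) scoring described in this section concrete, the following is a minimal sketch of how a query-document pair could be scored and an initial candidate list re-ranked. It is illustrative only and not the exact code used in this work; the HuggingFace `transformers` API, the placeholder checkpoint name, and the toy candidate list are assumptions, and in practice the sequence-classification head would have to be fine-tuned for relevance first.

```python
# Minimal sketch of cross-encoder re-ranking (assumption: a HuggingFace
# sequence-classification checkpoint fine-tuned for relevance is available).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # placeholder; any BERT/RoBERTa/DistilBERT variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)
model.eval()

def score(query: str, document: str) -> float:
    """Jointly encode (query, document) and return a scalar relevance score."""
    inputs = tokenizer(query, document, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits        # shape: (1, 1)
    return torch.sigmoid(logits).item()        # map to [0, 1]

def rerank(query: str, candidates: list[str]) -> list[tuple[str, float]]:
    """Re-rank a first-stage candidate list by descending relevance score."""
    scored = [(doc, score(query, doc)) for doc in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Example usage with toy candidates retrieved by a first-stage ranker.
print(rerank("how many players are in a cricket team",
             ["A cricket team consists of eleven players.",
              "Football is played with a round ball."]))
```

Passing the query and document together to the tokenizer produces exactly the joint input format discussed next, with the document truncated whenever the pair exceeds the model's length limit.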
All three models share the same input format: a pair of query 𝑞 and document 𝑑 is fed into the model as\n[CLS] 𝑞 [SEP] 𝑑 [SEP]\nTo account for the input length limitations of the models, long documents may need to be truncated to fit the sequence length the model is pre-trained with.\nTraditionally, there are two main methods to train ranking models, which are pointwise and pairwise. Let us assume of a mini batch of 𝑁 training examples {𝑥 𝑖 , 𝑦 𝑖 } 𝑖=1,...,𝑁 . The pointwise training method considers the document ranking task as a binary classification problem, where each training instance 𝑥 𝑖 = (𝑞 𝑖 , 𝑑 𝑖 ) is a query-document pair and 𝑦 𝑖 ∈ 0, 1 is a relevance label. The predicted score of 𝑥 𝑖 is denoted as ŷ𝑖. The cross-entropy loss function is defined as follows:\nL Point = - 1 𝑁 𝑁 ∑︁ 𝑖=1 (𝑦 𝑖 • log ŷ𝑖 + (1 -𝑦 𝑖 ) • log(1 -ŷ𝑖 ))\nIn the pairwise training method, each training example contains a query and two documents, 𝑥 𝑖 = (𝑞 𝑖 , 𝑑 + 𝑖, 𝑑 -𝑖), where the former is more relevant to the query than the latter. The pairwise loss function is defined as follows:\nL Pair = 1 𝑁 𝑁 ∑︁ 𝑖=1 max 0, 𝑚 -ŷ+ 𝑖 + ŷ- 𝑖\nwhere ŷ+ 𝑖 and ŷ-𝑖 are the predicted scores of 𝑑 + 𝑖 and 𝑑 - 𝑖 , respectively, and 𝑚 is the loss margin. The pairwise method is commonly used for ranking tasks as it takes into account the relative ordering between documents." }, { "figure_ref": [], "heading": "DATA AUGMENTATION FOR DOCUMENT RANKING", "publication_ref": [], "table_ref": [], "text": "We intend to use data augmentation in the context of a document ranking task to improve the quality of the training data and increase the diversity of the examples presented to the model. In this section, we propose extractive methods to create new query-document pairs from the instances already in the training data set.\nTo enhance the training data, we utilize augmentation techniques to form 𝑑 + 𝑎 from 𝑑 + for each triple (𝑞, 𝑑 + , 𝑑 -) in the training set. Creating an augmented instance is extractive since it involves selecting relevant sentences to the corresponding query, followed by random sampling of an irrelevant document 𝑑 - 𝑎 . The resulting augmented training instances are then appended to their respective batch, effectively doubling the size of each batch.\nThe document is treated as a sequence of sentences 𝑠 𝑖 , denoted as 𝑑 = (𝑠 1 , 𝑠 2 , ..., 𝑠 |𝑑 | ). A queryspecific selector is employed to choose a fixed number of sentences from the document, based on the distribution 𝑝 (𝑠 | 𝑞, 𝑑), encoding the relevance of the sentence given the input query 𝑞. This distribution is used to select an extractive, query-dependent summary, denoted as 𝑑 ′ ⊆ 𝑑. The augmentation process is detailed in Algorithm 1. The function that creates the augmented document (line 4) is defined as The sentences in the augmented documents are ordered by score, i.e., the original order is not preserved. Note that the scoring function score(𝑞, 𝑠) in Eq. ( 1) represents an augmentation strategy. We present both unsupervised (heuristic) and supervised (predictive) augmentation strategies in the following sections.\naugment(𝑑, 𝑞, 𝑘) = 𝑘-argmax 1≤𝑖 ≤ |𝑑 | score(𝑞, 𝑠 𝑖 )(1)" }, { "figure_ref": [], "heading": "Unsupervised augmentation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Term-matching-based (BM25). BM25 (Best Matching 25", "publication_ref": [], "table_ref": [], "text": ") is a variant of the popular tf-idf weighting scheme that is commonly used to rank documents by relevance to a query. 
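As a reference point for the loss combinations introduced later, the two standard ranking objectives defined in Section 3 (pointwise cross-entropy and pairwise hinge) can be sketched in PyTorch as follows. This is a minimal sketch, not the authors' implementation; it assumes the scores are raw logits from a cross-encoder, so the pointwise variant applies the sigmoid internally via `binary_cross_entropy_with_logits`.

```python
# Minimal PyTorch sketch of the pointwise and pairwise ranking objectives
# (assumption: y_hat are scalar relevance logits produced by a cross-encoder).
import torch
import torch.nn.functional as F

def pointwise_loss(y_hat: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy over (query, document) pairs.

    y_hat: logits of shape (N,); y: binary relevance labels of shape (N,).
    Note: with_logits applies the sigmoid internally.
    """
    return F.binary_cross_entropy_with_logits(y_hat, y.float())

def pairwise_loss(y_hat_pos: torch.Tensor, y_hat_neg: torch.Tensor,
                  margin: float = 1.0) -> torch.Tensor:
    """Hinge loss over (query, d+, d-) triples: mean of max(0, m - y+ + y-)."""
    return torch.clamp(margin - y_hat_pos + y_hat_neg, min=0).mean()

# Toy example: three scored pairs and two scored triples.
scores = torch.tensor([2.1, -0.3, 0.7])
labels = torch.tensor([1, 0, 1])
print(pointwise_loss(scores, labels))

pos_scores = torch.tensor([2.1, 0.7])
neg_scores = torch.tensor([-0.3, 0.4])
print(pairwise_loss(pos_scores, neg_scores))
```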
BM25 works by computing a score for each document in a corpus based on its relevance to a query. The score is calculated by combining several factors, including the frequency of the query terms in the document, the inverse document frequency of the query terms, and the length of the document, and is then normalized by the average document length in the corpus. We use BM25 scores between the query $q$ and the sentences $s_i$ to determine the best sentences: $\mathrm{score}_{\mathrm{BM25}}(q, s_i) = \mathrm{BM25}(q, s_i)$.\nInverse document frequencies are computed over the complete corpus." }, { "figure_ref": [], "heading": "Semantic-similarity-based (GloVe).", "publication_ref": [], "table_ref": [], "text": "GloVe works by constructing a co-occurrence matrix of word pairs based on their co-occurrence frequency in a corpus. The co-occurrence matrix is then factorized using matrix factorization techniques, such as singular value decomposition (SVD), to generate low-dimensional embeddings for each word. Unlike other word embedding methods, such as Word2Vec, GloVe is designed to capture both the global co-occurrence statistics and the local context of the words. It does this by weighting the importance of word co-occurrences based on their frequency and using a logarithmic function to down-weight highly frequent co-occurrences. We use GloVe to obtain representations for the query $q$ and each sentence $s_i$. Both the query and the sentence are represented as the average over their constituent word embeddings. We use semantic (cosine) similarity scores between the query $q$ and the sentences $s_i$ to determine the best sentences for a given query, i.e., $\mathrm{score}_{\mathrm{GloVe}}(q, s_i) = \langle \mathrm{E}_{\mathrm{GloVe}}(q), \mathrm{E}_{\mathrm{GloVe}}(s_i) \rangle$, where $\langle \cdot, \cdot \rangle$ is the dot product and $\mathrm{E}_{\mathrm{GloVe}}(\cdot)$ computes the average of the embeddings of all tokens in a sequence." }, { "figure_ref": [], "heading": "Supervised Augmentation", "publication_ref": [ "b32", "b35", "b80", "b32" ], "table_ref": [], "text": "Unlike unsupervised augmentation, where sentence selection is based on lexical and semantic similarities, we also consider sentence selection based on supervised training data. Recently, in the area of explainable information retrieval, select-and-rank approaches have been proposed that, given a query-document pair, select an extractive piece of text from the document as a potentially relevant signal [33,36,81]. Specifically, in the selection phase, sentences relevant to a given query are extracted. This is followed by the ranking phase, where relevance estimation is performed only on the extracted sentences. The key idea is that typically only a small part of the document is relevant, and the selection phase filters out non-relevant text. The supervision signal is obtained from the training data, and a combination of the Gumbel-max trick and reservoir sampling is used to train the selector network [33]. The output of the selection phase can be considered a query-based extractive summary and can be utilized for our data augmentation needs. We consider two supervised sentence selection approaches: linear sentence selection and attention-based sentence selection.\nLet a query $q = t^q_1, \ldots, t^q_{|q|}$ and a document $d = t^d_1, \ldots, t^d_{|d|}$ be sequences of (embedded) tokens, respectively. Furthermore, let $s_{ij}$ be a sentence within the document, such that $s_{ij} = t^d_i, \ldots, t^d_j$. In the following, we describe how the two selection approaches score a sentence $s_{ij}$ w.r.t. a query $q$."
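Before turning to the two supervised selectors, here is a minimal sketch of the two unsupervised scoring functions above combined with the top-k selection of Eq. (1). The `rank_bm25` package and a pre-loaded `{token: vector}` GloVe dictionary are assumptions made for illustration; also, unlike in the paper, the IDF statistics in this sketch are computed only over the sentences of the current document rather than over the whole corpus.

```python
# Minimal sketch of the two unsupervised sentence selectors and the top-k
# extraction of Eq. (1). Assumptions: the rank_bm25 package is available and
# `glove` is a pre-loaded {token: vector} dictionary (e.g. glove.6B.300d).
import numpy as np
from rank_bm25 import BM25Okapi

def bm25_scores(query: str, sentences: list[str]) -> np.ndarray:
    """score_BM25(q, s_i) for every sentence of one document.

    Note: IDF here is estimated over this document's sentences only; the
    paper computes inverse document frequencies over the complete corpus.
    """
    tokenized = [s.lower().split() for s in sentences]
    bm25 = BM25Okapi(tokenized)
    return bm25.get_scores(query.lower().split())

def glove_embed(text: str, glove: dict, dim: int = 300) -> np.ndarray:
    """Average of the GloVe vectors of all tokens in the text."""
    vecs = [glove[t] for t in text.lower().split() if t in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def glove_scores(query: str, sentences: list[str], glove: dict) -> np.ndarray:
    """score_GloVe(q, s_i) = <E(q), E(s_i)> for every sentence."""
    q_vec = glove_embed(query, glove)
    return np.array([glove_embed(s, glove) @ q_vec for s in sentences])

def augment(sentences: list[str], scores: np.ndarray, k: int = 4) -> str:
    """Eq. (1): keep the k highest-scoring sentences, ordered by score."""
    top = np.argsort(-scores)[:k]
    return " ".join(sentences[i] for i in top)
```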
}, { "figure_ref": [], "heading": "Linear sentence selection.", "publication_ref": [], "table_ref": [], "text": "The linear sentence selector is a simple non-contextual model, i.e., each sentence within the document is scored independently. This approach is similar to the GloVebased augmentation, however, this model has been trained specifically on a ranking task. The query and each sentence are first represented as the average of their token embeddings, respectively. After averaging, each representation is passed through a single-layer feed-forward network. The final score of a sentence is then computed as the dot product of its representation and the query representation. Formally, a sequence of tokens, 𝑡 = 𝑡 1 , ..., 𝑡 |𝑡 | , is represented as\nEnc(𝑡) = 𝑠 𝑖 ∈𝑡 (𝑊 𝑡 𝑖 + 𝑏) |𝑡 | ,\nwhere 𝑊 and 𝑏 are trainable parameters of the feed-forward layer. The score of a sentence 𝑠 𝑖 𝑗 w.r.t. the query 𝑞 is then computed as score Lin (𝑞, 𝑠 𝑖 𝑗 ) = ⟨Enc(𝑞), Enc(𝑠 𝑖 𝑗 )⟩, where ⟨•, •⟩ is the dot product." }, { "figure_ref": [], "heading": "Attention-based sentence selection.", "publication_ref": [ "b65" ], "table_ref": [], "text": "The Attention-based selector computes sentence-level representations based on the QA-LSTM model [66]. Query and document are first contextualized by passing their token embeddings through a shared bi-directional LSTM:\n𝑞 LSTM = Bi-LSTM(𝑞), 𝑑 LSTM = Bi-LSTM(𝑑).\nThe query representation q is obtained by applying element-wise max-pooling over 𝑞 LSTM :\nq = Max-Pool(𝑞 LSTM )\nFor each hidden representation 𝑑 LSTM 𝑖 , attention to the query is computed as The final score of a sentence is computed as the cosine similarity of its representation and the query representation:\n𝑚 𝑖 = 𝑊 1 ℎ 1 + 𝑊 2 q, ℎ 𝑖 = 𝑑 LSTM 𝑖 exp (𝑊 3 tanh (𝑚 𝑖 )) ,\nscore Att (𝑞, 𝑠 𝑖 𝑗 ) = cos( q, ŝ𝑖 𝑗 )." }, { "figure_ref": [], "heading": "Creating Augmented Training Batches", "publication_ref": [ "b21" ], "table_ref": [], "text": "To preserve the randomness in the data while augmenting the training set, the creation of minibatches is a crucial step in our approach. It has been demonstrated in prior studies that the quality of the augmented data plays an important role in the performance of self-supervised contrastive learning [22].\nOur approach begins with retrieving the top-𝑘 documents per query using a first stage retrieval method. From this top-𝑘 set, we create the training dataset by collecting all positive query-document training instances. For each positive pair, we randomly sample one irrelevant document to serve as the negative instance. To ensure randomness, we shuffle the resulting set of (𝑞, 𝑑 + , 𝑑 -) triples. It should be noted that, for pointwise training, we create two query-document pairs from each triple." }, { "figure_ref": [], "heading": "CONTRASTIVE LEARNING WITH AUGMENTED DATA FOR DOCUMENT RANKING", "publication_ref": [], "table_ref": [], "text": "Augmented training data can be used to train a model with no alteration to the pointwise and pairwise cross entropy loss objective. We instead propose the usage of contrastive learning to better leverage augmented data. Augmented query document pairs are not clean training samples by definition and should be treated accordingly during training.\nIn this section, we detail different contrastive learning objectives which we used to train our ranking models. We do not propose a new contrastive loss objective but instead focus on combining existing losses correctly for ranking tasks." 
}, { "figure_ref": [ "fig_2" ], "heading": "Ranking-based Supervised Contrastive loss", "publication_ref": [], "table_ref": [], "text": "The aim of the Supervised Contrastive loss (SCL loss) is to capture the similarities between relevant document parts for a given query and contrast them against examples from non-relevant queries. The ranking model outputs the query-document representation Φ(•) ∈ R 𝑡 (e.g., the [CLS] output for BERT-based models) which can be used to compute similarities. The SCL loss function includes the adjustable scalar temperature parameter 𝜏 > 0 that controls the distance between relevant and non-relevant examples, and the scalar weighting hyper-parameter 𝜆, which is tuned for each downstream task and setting.\nThe SCL loss is formulated as\nL SCL = 𝑁 ∑︁ 𝑖=1 - 1 𝑁 + 𝑁 + ∑︁ 𝑗=1 1 𝑞 𝑖 =𝑞 𝑗 , 𝑖≠𝑗, 𝑦 𝑖 =𝑦 𝑗 =1 log exp Φ (𝑥 𝑖 ) • Φ 𝑥 𝑗 /𝜏 𝑁 𝑘=1 1 𝑖≠𝑘 exp (Φ (𝑥 𝑖 ) • Φ (𝑥 𝑘 ) /𝜏)(2)\nwhere 𝑁 + is the total number of positive examples (i.e., relevant query-document pairs) in the batch.\nThe loss function enforces that the positive pair with the same query should be embedded close to each other, rather than a pair of documents that are relevant for different queries. This is important since it ensures that the representations for the \"relevant parts\" of the same query are close to each other. The final ranking SCL loss is\nL RankingSCL = (1 -𝜆)L Ranking + 𝜆L SCL(3)\nWe illustrate the RankingSCL loss in Figure 2 (left figure) using a pointwise ranking loss. It shows the two components working together; the ranking loss separates the pairs of positive and negative documents, while the contrastive loss moves all positive documents in the batch closer to each other." }, { "figure_ref": [], "heading": "Ranking-based Centroid Triplet Loss", "publication_ref": [ "b69" ], "table_ref": [], "text": "The triplet loss function [70] enforces the learning of a distance metric that satisfies the following property: for a given triplet of data points (𝐴, 𝑃, 𝑁 ), where 𝐴 is an anchor point (same class), 𝑃 is a positive example for A, and 𝑁 is a negative example for 𝐴, the distance between 𝐴 and 𝑃 should be smaller than the distance between 𝐴 and 𝑁 by a certain margin. This property is known as the triplet constraint. The objective is to minimize the distance between 𝐴 -𝑃, while maximizing the distance to the 𝑁 sample.\nThe loss function is formulated as follows:\nL triplet = ∥ 𝑓 (𝐴) -𝑓 (𝑃) ∥ 2 2 -∥ 𝑓 (𝐴) -𝑓 (𝑁 ) ∥ 2 2\n+ 𝛼 + where [𝑧] + = max(𝑧, 0), 𝑓 denotes embedding function learned during training stage and 𝛼 is a margin parameter.\nIn the case of Centroid Triplet Loss (CTL), instead of comparing the distance of an anchor 𝐴 to the positive and negative instances, CTL measures the distance between 𝐴 and class centroids 𝑐 𝑃 and 𝑐 𝑁 representing either the same class as the anchor or a different class respectively. In case of a document ranking task, for a given query 𝑞, let {𝑑 + 𝑖 , 𝑑 - 𝑖 } 𝑖=1,...,𝑁 be a set of relevant and non-relevant documents respectively to that given query. Where 𝑑 + 𝑖 can be positives or augmented positives. Let 𝑐 𝑃 be the centroid of the relevant class (relevant and augmented relevant documents) and 𝑐 𝑁 be centroid of the non-relevant class. 
Then CTL is therefore formulated as:\nL CTriplet = 𝑑 + 𝑖 -𝑐 𝑃 2 2 -∥𝑑 - 𝑖 -𝑐 𝑁 ∥ 2 2 + 𝛼 𝑐 +(4)\nL RankingCTriplet = (1 -𝜆)L Ranking + 𝜆L triplet(5)" }, { "figure_ref": [], "heading": "Ranking-based InfoNCE", "publication_ref": [ "b50" ], "table_ref": [], "text": "The InfoNCE loss function [51] encourages similar samples to have similar representations and dissimilar samples to have dissimilar representations. This is achieved by comparing the similarity between positive pairs (similar samples) and negative pairs (dissimilar samples) using the crossentropy loss. The InfoNCE loss function is a variant of the standard cross-entropy loss that has been modified to account for the varying number of negative samples used in the contrastive learning framework.\nInfoNCE is particularly effective in situations where the number of negative samples is large. Additionally, the use of the cross-entropy loss makes the InfoNCE loss easy to optimize using standard optimization algorithms.\nIn document ranking given a query q i and corresponding relevant document d + i , the positive sample should be drawn from the conditional distribution 𝑝 (x | d + i ), while 𝑁 -1 negative samples are drawn from the proposal distribution 𝑝 (x), independent from the context 𝑑 + 𝑖 . For simplicity, let us label all the documents for the query 𝑞 𝑖 as 𝐷 = {d 𝑖 } 𝑁 𝑖=1 among which only one of them d pos is a positive sample. The probability of we detecting the positive sample correctly is:\n𝑝 (𝐶 = pos | 𝐷, d + i ) = 𝑝 𝑑 pos | d + i 𝑖=1,...,𝑁 ;𝑖≠pos 𝑝 (x 𝑖 ) 𝑁 𝑗=1 𝑝 x 𝑗 | d + i 𝑖=1,...,𝑁 ;𝑖≠𝑗 𝑝 (x 𝑖 ) = 𝑝 (xpos |𝑑 + 𝑖 ) 𝑝 (xpos) 𝑁 𝑗=1 𝑝 (x𝑗 |d + i ) 𝑝 (x𝑗 ) = 𝑓 x pos , d + i 𝑁 𝑗=1 𝑓 x 𝑗 , d + i\nwhere the scoring function is 𝑓 (x,\nd + i ) ∝ 𝑝 (x|d + i )\n𝑝 (x) . The InfoNCE loss optimizes the negative log probability of classifying the positive sample correctly:\nL InfoNCE = -E log 𝑓 (x, d + i ) x ′ ∈𝑋 𝑓 x ′ , d + i (6)\nThe fact that 𝑓 (𝑥, 𝑑 + 𝑖 ) estimates the density ratio\n𝑝 (𝑥 |𝑑 + 𝑖 )\n𝑝 (𝑥 ) has a connection with mutual information optimization. To maximize the the mutual information between input 𝑥 and context vector 𝑑 + 𝑖 , we have:\n𝐼 (x; d + i ) = ∑︁ x,d + i 𝑝 (x, d + i ) log 𝑝 (x, d + i ) 𝑝 (x)𝑝 (d + i ) = ∑︁ x,d + i 𝑝 (x, d + i ) log 𝑝 (x | d + i ) 𝑝 (x)\nwhere the logarithmic term in below is estimated by 𝑓 . \n𝑝 (x | d + i ) 𝑝 (x) L RankingInfoNCE = (1 -𝜆)L Ranking + 𝜆L InfoNCE(7\nL NCA = ∑︁ 𝑖 log ∑︁ 𝑗 ∈𝐶 𝑖 𝑝 𝑖 𝑗 = ∑︁ 𝑖 log (𝑝 𝑖 )(8)\nwhere 𝑝 𝑖 is the probability of calculating the document 𝑑 𝑖 to the relevant class as neighbouring point 𝑑 𝑗 is defined as:\n𝑝 𝑖 = ∑︁ 𝑗 ∈𝐶 𝑖 𝑝 𝑖 𝑗\nWe define the 𝑝 𝑖 𝑗 using a softmax over Euclidean distances in the transformed space:\n𝑝 𝑖 𝑗 = exp -𝐴𝑑 𝑖 -𝐴𝑑 𝑗2\n𝑘≠𝑖 exp -∥𝐴𝑑 𝑖 -𝐴𝑑 𝑘 ∥ 2 where A is the transformation matrix, 𝑑 𝑖 is relevant to the query 𝑞 𝑖 and 𝑑 𝑘 is the k-Nearest relevant Neighbor of 𝑑 𝑖 in the input space.\nL RankingNCA = (1 -𝜆)L Ranking + 𝜆L NCA(9)" }, { "figure_ref": [], "heading": "Combining Ranking Loss with Contrastive Loss", "publication_ref": [], "table_ref": [], "text": "We propose a simple linear combination of standard ranking losses with contrastive losses described above for the augmented training samples. The overall ranking losses are then given by\nL ContrastiveRanking = (1 -𝜆)L Ranking + 𝜆L Contrastive ,\nwhere\nL Ranking ∈ {L Point , L Pair } and L Contrastive ∈ {L SCL , L triplet , L InfoNCE , L NCA }.\nWe use the following terminology in the paper: linear interpolation of Pointwise and SCL is referred to as RankingSCL. 
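As an illustration of the interpolation just described, the sketch below combines a pointwise cross-entropy term with a supervised contrastive term over the query-document representations, in the spirit of Eq. (2) and Eq. (3). It is a simplified sketch rather than the authors' code: representations are L2-normalized for numerical stability, positives are identified as "same query, both relevant", and the normalization is done per anchor instead of with the global N+ of Eq. (2).

```python
# Minimal PyTorch sketch of RankingSCL = (1 - lambda) * L_point + lambda * L_SCL.
# Assumptions: `reps` are the [CLS] vectors Phi(x_i) of one batch, `scores` the
# scalar relevance logits, `labels` binary relevance labels, and `query_ids`
# an integer id per query so that positives can be matched within the batch.
import torch
import torch.nn.functional as F

def scl_loss(reps, labels, query_ids, tau=0.3):
    """Supervised contrastive term over a batch of query-document representations."""
    reps = F.normalize(reps, dim=-1)                 # simplification: cosine-style similarities
    sim = reps @ reps.t() / tau                      # (N, N) pairwise similarities
    n = reps.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=reps.device)
    sim = sim.masked_fill(eye, -1e9)                 # exclude the anchor itself
    # positives: same query, both labelled relevant, and not the anchor itself
    same_query = query_ids.unsqueeze(0) == query_ids.unsqueeze(1)
    both_pos = labels.bool().unsqueeze(0) & labels.bool().unsqueeze(1)
    pos_mask = same_query & both_pos & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)   # denominator over k != i
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob * pos_mask).sum(dim=1) / pos_count   # per-anchor average
    has_pos = pos_mask.any(dim=1)                    # batches built as in Sec. 4.3 contain positives
    return per_anchor[has_pos].mean()

def ranking_scl(scores, reps, labels, query_ids, lam=0.2, tau=0.3):
    """Eq. (3): (1 - lambda) * pointwise ranking loss + lambda * SCL term."""
    l_rank = F.binary_cross_entropy_with_logits(scores, labels.float())
    return (1 - lam) * l_rank + lam * scl_loss(reps, labels, query_ids, tau)
```

The same interpolation pattern carries over to the other contrastive terms; only the second summand changes.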
Although all of the aforementioned losses are contrastive, each of them differs subtly in the way it shapes the learned representation space. The supervised contrastive loss (Eq. (2)) and the centroid triplet loss rely on label information and are therefore used for supervised learning tasks where labeled data is available, while NCA and InfoNCE can also be used for unsupervised or self-supervised learning tasks where labeled data is not available." }, { "figure_ref": [], "heading": "EXPERIMENTAL SETUP", "publication_ref": [], "table_ref": [], "text": "In this section, we describe the setup we used to answer the research questions. Note that we focus on the re-ranking task and not the retrieval task." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [], "table_ref": [], "text": "We conduct experiments on in-domain and out-of-domain benchmarks to showcase the utility and performance benefits of data augmentation." }, { "figure_ref": [], "heading": "6.1.1", "publication_ref": [ "b8" ], "table_ref": [], "text": "In-Domain Benchmark -TREC-DL. In this study, we utilize the dataset provided by the TREC Deep Learning track in 2019. We evaluate our proposed model on Doc'19, comprising 200 distinct queries. The training and development sets are obtained from MS MARCO, which contains a total of 367K queries. For the retrieval of the top 100 documents for each query, we use Indri [9]." }, { "figure_ref": [], "heading": "Out of Domain Benchmark -BEIR.", "publication_ref": [ "b66" ], "table_ref": [], "text": "The BEIR benchmark is a collection of datasets used for benchmarking and evaluating the performance of information retrieval (IR) models [67]. The datasets are drawn from various domains, such as news articles, scientific papers, and product reviews. It consists of 17 different datasets." }, { "figure_ref": [], "heading": "Ranking Models", "publication_ref": [], "table_ref": [], "text": "We use three cross-attention models for our experiments (BERT, RoBERTa, and DistilBERT), evaluated on Doc'19 and BEIR. This leads to a large experimental space for exploration, which we have synthesized into key insights in our results. For instance, the number of models trained on Doc'19 is around 1050, with 208 best combinations chosen for reporting. Given the large number of models, it is difficult to report all combinations of results and their respective hyperparameters. We therefore report the best models in the paper; a subset of the results and hyperparameters can be found in Appendix A." }, { "figure_ref": [], "heading": "Batch Creation and Hyperparameters", "publication_ref": [], "table_ref": [], "text": "In Section 4.3, we described our approach to creating batches for supervised contrastive learning, where we start with positive query-document pairs from the top-$k$ retrieved set and randomly sample negative pairs to form the original dataset. We also use a selector to generate augmented versions of documents. For evaluating our approach, we experimented with different sizes of query-document pairs for MS MARCO, including 1k, 2k, 10k, and 100k. For instance, the 1k dataset contains 500 positive and 500 negative pairs in the original dataset, to which we add 1k more pairs through the augmentation process, resulting in a total of 2k query-document pairs. This pattern holds for the other three sizes as well. It is worth noting that we only augment the training data; the validation and test sets remain unaltered.\nHyperparameters. 
We have two hyperparameters in our models with RankingSCL: the temperature (𝜏), and the degree of interpolation (𝜆) as in RankingSCL [Eq. ( 3)]. For all other losses ([Eq. ( 5)][Eq. ( 7)][Eq. ( 9)]) we only have one hyperparameter (𝜆) for interpolation. We use the MS MARCO development set to determine the best combination of hyperparameters. These parameters are different for different ranking models and augmentation strategies. For example, in TREC-DL, BERT ranking model using Attention data augmentation and RankingCTriplet loss objective returns the best score on the validation set at 𝜆 = 0.3. In all our experiments we use a batch size of 16. A brief of the hyperparameters used is given in Appendix A.1." }, { "figure_ref": [ "fig_6" ], "heading": "EXPERIMENTAL RESULTS", "publication_ref": [ "b2" ], "table_ref": [], "text": "The results section is divided into 3 major subsections. Section 7.1 presents the results of our in-domain experiments where we strictly observe how augmentation and contrastive learning can benefit sample efficiency and performance for the same dataset. Then we discuss the out-of-domain performance of the models born out of our training regimen. More specifically, we study the zero-shot transfer of our models to other ranking datasets that vary in topicality and type of query in Section 7.2. Finally, we discuss the failure cases, limitations, and drawbacks of our approach in Section 7. 3 The key research questions we seek answers for are as follows:\nThe key research questions we seek answers for are as follows:\nRQ I. Does data augmentation require a different training paradigm? RQ II. Which data augmentation technique provides the highest sample efficiency gains? RQ III. Which contrastive loss objective leads to the highest gains when data augmentation type and budget are fixed? RQ IV. Does our training regimen impact various model sizes differently? RQ V. Do augmented models transfer better than their non-augmented counterparts? We first verify the impact of our proposed approach by comparing 3 types of models -a model with data augmentation and SCL loss (SCL) versus a model trained with only Pointwise loss (baseline) versus a model trained with Pointwise loss and augmented data (CE). We also repeated the experiment with a pairwise objective and found the same trends. The SCL model is trained with augmented data using the PointwiseRankingSCL loss objective, i.e. pointwise ranking loss interpolated with the SCL objective. In Figure 3 (in terms of nDCG@10) of the CE and PointwiseRankingSCL model over the baseline (zero line) (trained without data augmentation) with BM25 augmentation and pointwise objective. We see that only using Pointwise loss with augmented data performs worse than baseline. We find that PointwiseRankingSCL more effectively utilizes augmented data to learn better representations which are reflected in consistent improvements over the baseline. As the size of the training set increases, the detrimental effect of data augmentation on the standard CE loss objective diminishes but does not increase sample efficiency. The PointwiseRankingSCL model on the other hand sees gains in efficiency for all models. Augmenting a dataset of 100 and 1K examples leads to a 8.1% and 6% improvement over the baseline respectively for DistilBERT only when using our RankingSCL loss. This establishes that using data augmentation with traditional ranking loss functions is detrimental to ranking performance.\nInsight 1. 
A different training paradigm is required to take advantage of the data augmentation techniques we propose. In particular, adding a contrastive learning objective leads to gains between 1.3% and 10.2% in terms of sample efficiency whereas training without an additional objective and the augmented dataset results in lowered sample efficiency." }, { "figure_ref": [], "heading": "RQ II. Which data augmentation technique provides the highest sample efficiency gains?", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "To answer RQ II, we extend the previous experiment with more data augmentation techniques. In Table 1 we compare 2 different classes of augmentation techniques: data augmentation using supervised methods (Linear, Attention) and unsupervised methods (BM25, GloVe). We use nDCG@10 as the key performance indicator. We trained various models using the RankingSCL loss objective on different dataset sizes and different contextual models (BERT, RoBERTa, DistilBERT). Our first major observation is that augmentation helps across the board. As dataset size increases, all methods show steady improvement which is not surprising but the magnitude of relative improvement over the baseline is large in nearly all cases. With dataset of 100 instances attention based approach performs 20% better than baseline on all models. With 1k samples to train on, attention based augmentation with RankingSCL leads to a near 70% relative improvement for DistilBERT. BERT achieves the highest absolute performance on the 1k and 2k samples using attention based augmentation. The exact choice of augmentation technique is not always clear although supervised techniques tend to generally outperform unsupervised techniques. Supervised techniques benefit from being exposed to a ranking oriented sentence selection task beforehand.\nFor RoBERTa in particular, we see a different technique winning for each dataset. However, when observing the absolute ndcg@10 measurements, we find that in all cases except for 1k, supervised methods are either comparable or better by a large margin. Unsupervised methods, while simpler, are particularly poor for lean models like DistilBERT. We believe that supervised methods are usually more capable of constructing meaningful augmentations. Unsupervised methods create noisy samples (see Table 2) by either ignoring semantics (BM25 -syntactic matching) or not being aware of the ranking task (GloVe)." }, { "figure_ref": [], "heading": "Method Augmented Document", "publication_ref": [ "b1", "b0" ], "table_ref": [ "tab_6", "tab_14", "tab_6" ], "text": "Query: 293327 -how many players are in a cricket team Document: D863798 BM25 The quota system was introduced as part of South Africa's re-admission into international cricket [. . . ]\nGloVe South Africa have announced a commitment to a minimum of five black players in their squad for the World Cup [. . . ]\nLinear Salute Michael Jordan at 40Is this the end for Warne? We also observe that the relative improvements over baseline are greater in the case of smaller datasets compared to the larger datasets (which ratifies the findings from [2]) for both classes of approaches. In the case of RoBERTa and DistilBERT we see an 80% improvement (statistically significant) over the baseline for the 2K dataset when using supervised augmentation. 
For the 100k dataset, surprisingly unsupervised methods are key to achieving sample efficiency for DistilBERT which requires further investigations into the interplay between model size and the ability to learn effectively from augmented data.\nAttention A team\nThe difference between linear and attention is more evident when observing smaller datasets (100, 1k and 2k). For BERT and DistilBERT we see large sample efficiency gains when using supervised approaches for the 1k dataset but attention results in the largest gains: 3.7% vs 9.1% for BERT and 31% vs 70% for DistilBERT when compared to linear. The same can be observed for 100 instances where Attention gains are 22.7% vs 29.3% in case of BERT and 25.8% vs 20.7% in case of DistilBERT in comparison with Linear Insight 2. Supervised augmentation techniques, specifically Attention leads to higher sample efficiency rates especially for smaller datasets (100, 1K, 2K) when training with RankingSCL. RQ III. Which contrastive loss objective leads to the highest gains when data augmentation type and budget are fixed? Different contrastive loss functions have different properties that may make them more or less effective for our particular problem. When the data augmentation type and training budget are fixed, we find that the contrastive loss objective which leads to the highest gains depends on specific characteristics of the dataset and the model being used. To this extent we experiment with 4 different contrastive losses (RankingCTriplet, RankingNCA, RankingInfoNCE, RankingSCL) with fixed supervised augmentation techniques (Attention, Linear) for different models (BERT, RoBERTa, DistilBERT). The results are shown in Table 3 and Appendix Table 8.\nIn Table 3 we compare different loss objectives while using attention augmentation. It is clear that RankingCTriplet outperforms all other losses across varying dataset sizes. It shows more than 85% improvement over baseline (95% statistically significant) in the case of RoBERTa and DistilBERT when trained on the 2K dataset. Additionally, the performance of DistilBERT matches the performance of BERT and RoBERTa when training with the 100K dataset. Notice that for RoBERTa, the choice of the loss function is crucial to sample efficiency on the 1k dataset -with RankingCTriplet we see a 6.6% improvement.\nThe results also show that increasing the training set size generally leads to better performance across all models and loss functions. However, the rate of improvement varies depending on the model and loss function. For example, for RoBERTa, RankingCTriplet shows the largest percentage change when increasing the training set size from 1k to 2k, while for DistilBERT, RankingNCA shows the largest percentage change.\nRankingCTriplet loss has 2 unique properties -the margin hyperparameter and the usage of centroids rather than the actual data points. Max margin based losses are commonly used for ranking tasks [1] which makes this style of contrastive loss more suited to our problem. Additionally, since the augmentation techniques can be noisy, the usage of the centroid dampens it by averaging over all augmentations of the anchor. RankingCTriplet loss is generally used when there is more intra-class variation which is the case in document ranking where positive documents can vary drastically based on the query at hand. 
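To make the centroid-based objective discussed here more tangible, below is a minimal sketch of a centroid triplet term for a single query. It follows the standard centroid-triplet formulation, in which each relevant document is pulled towards the centroid of the (original and augmented) relevant documents and pushed away from the centroid of the non-relevant ones; the batching and the exact anchor definition of Eq. (4) may differ in the actual implementation, so treat this as an illustration rather than the paper's code.

```python
# Minimal PyTorch sketch of a centroid triplet term for one query.
# Assumptions: `pos_reps` holds representations of original + augmented
# relevant documents, `neg_reps` of non-relevant documents, each (n, dim).
import torch
import torch.nn.functional as F

def centroid_triplet_loss(pos_reps, neg_reps, margin=1.0):
    c_pos = pos_reps.mean(dim=0, keepdim=True)            # centroid of the relevant class
    c_neg = neg_reps.mean(dim=0, keepdim=True)            # centroid of the non-relevant class
    d_to_own = ((pos_reps - c_pos) ** 2).sum(dim=1)       # squared distance to own centroid
    d_to_other = ((pos_reps - c_neg) ** 2).sum(dim=1)     # squared distance to other centroid
    return F.relu(d_to_own - d_to_other + margin).mean()  # hinge [.]_+ with margin alpha

def ranking_ctriplet(ranking_loss, pos_reps, neg_reps, lam=0.3, margin=1.0):
    """Eq. (5): interpolate a standard ranking loss with the centroid triplet term."""
    return (1 - lam) * ranking_loss + lam * centroid_triplet_loss(pos_reps, neg_reps, margin)
```

Averaging over all (possibly noisy) augmented positives before measuring distances is exactly the dampening effect described in the surrounding discussion.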
Other loss functions are more suited to classification problems and do not translate as well to our problem setting.\nOur empirical findings suggest that pairing data augmentation with the right loss function is imperative to maximizing sample efficiency. Comparing RankingSCL loss to RankingCTriplet we see consistent gains across models and data sizes in terms of nDCG@10 and relative improvement to the baseline. Even for 100k data points, we see gains of 3.1% to 7.3% when using RankingCTriplet which is considerably higher than all other losses. Insight 3. Empirically Centroid Triplet Loss (RankingCTriplet) is the superior choice for all models for all dataset sizes when using attention based augmentation." }, { "figure_ref": [ "fig_7" ], "heading": "RQ IV. Does our training regimen impact various model sizes differently?", "publication_ref": [], "table_ref": [], "text": "To look at the impact of model sizes on ranking performance we conduct baseline and augmented data experiments on DistilBERT, BERT and BERT-Large by varying the dataset sizes (100, 1K, 2K, 10K, 100K). For the baseline, we use non-augmented data same as all experiments above. For augmented experiments, we use Attention augmentation with RankingCTriplet loss. The performance results (nDCG@10) results are shown in Figure 4.\nWe observe that larger BERT-based language models tend to perform better on smaller datasets, indicating that increasing the model size can compensate for the limited amount of data. Furthermore, we already showed, augmented models outperform their non-augmented counterparts (Insight 1). However, it is worth noting that as the dataset size increases, the performance gap between smaller and larger models narrows, implying that increasing model size may have diminishing returns in the presence of large datasets.\nFor DistilBERT, we see that data augmentation leads to large improvements. For practitioners with limited computing budgets, data augmentation provides a significant method to improve the performance of leaner models. Fine tuning lean models on smaller ranking datasets is not always straightforward. We experimented with several hyperparameters to improve the performance of the DistilBERT baseline for 1k and 2k. However, with augmentation minimal hyperparameter tuning led to gains of over 50% in sample efficiency. Hard-to-tune models can benefit directly from our approach due to more informative samples and a better loss objective. " }, { "figure_ref": [], "heading": "Out-of-domain (BEIR) Experiments", "publication_ref": [ "b14" ], "table_ref": [ "tab_7", "tab_9" ], "text": "RQ V. Do augmented models transfer better than their non-augmented counterparts? Zero-shot out-of-domain transfer for ranking datasets is challenging due to the diversity of topical domains and the information needs of users. Table 4 shows the performance of three different BERT-based models (BERT, RoBERTa, and DistilBERT) we trained using our proposed paradigm on 6 different datasets against the current SOTA model (SPLADE [15]) on the BEIR benchmark without any further fine-tuning on each dataset.\nWe find that our augmented models are significantly better than their corresponding baselines making them not only sample efficient but also more robust. 
RoBERTa in particular sees large gains The BEIR datasets also varied in the type of queries -our training dataset Doc'19 has simple factual questions whereas DBPedia for instance has entity specific attribute queries and Robust has topical keywords as queries (Table 5 shows anecdotal evidence).\nOur augmentation with contrastive learning exposes the model to more general matching patterns since the model has to not only learn how to estimate relevance between a question and a document but also a sentence and a passage selected by our augmentation. This helps improve the overall semantic understanding of the model which greatly aids transfer. These results hold for all model types which makes a strong case for our augmentation models to be used as universal ranking models when computing and training data is severely limited.\nThis result shows that we can save costs and increase efficiency by training a single model that performs well across IR datasets with limited amounts of out-of-domain training data. Practitioners need not invest in further fine tuning or deploying individual models for each dataset. Insight 5. Augmented models are better suited for zero-shot transfer because our approach improves the model's ability to estimate relevance between two pieces of text by training on more diverse examples." }, { "figure_ref": [ "fig_9" ], "heading": "Limitations", "publication_ref": [ "b32" ], "table_ref": [], "text": "In our experiments, we considered 3 contextual models, 4 losses, 4 augmentation techniques, and 4 dataset sizes. We ran experiments with all combinations of dataset size, loss and augmentation. We had 2 key questions to verify our insights from the previous sections:\n• Is attention based augmentation always the right choice irrespective of the loss function? From Figure 5 it is apparent that attention augmentation and RankingCTriplet are not always the best choice. Even though the best performance at 1k, 2k and 10k is from the proposed combination, we see certain combinations being close to or surpassing it in the case of 100k (BM25 + RankingSCL).\nIn the case of 1k and 2k, RankingCTriplet performs considerably better than all other losses for unsupervised augmentation. When the dataset size increases however, the differences are much smaller. When using a 100K dataset, the choice of the loss function and augmentation is not clear. RankingSCL performs relatively poorly ,especially for unsupervised augmentation in 1k and 2k but slightly outperforms our proposed combination for 100k. RankingCTriplet does not result in the best performance in low data regimes when using unsupervised augmentation.\nIn the previous section, we studied the impact of various losses on attention augmentation. Attention and linear augmentation techniques are both supervised selectors from [33]. In both cases, the objective of the selector is to pick the most relevant sentence. The linear selector pools word embeddings using a max operation so positional context is lost. This design choice leads to attention outperforming linear irrespective of the loss function and dataset size. For 1k and 2k, linear is outperformed by GloVe and BM25. This could mean that selecting noisy sentences for contrastive learning is worse than simple term matching when paired with the correct loss. Supervised augmentation alone is not sufficient to gain the best performance for 1k and 2k. It must be paired with the right loss. 
RankingCTriplet in general is seemingly a good choice for a loss function If we consider the simplicity of implementation, the combination of BM25 and RankingInfoNCE is a good choice. It rivals Attention and RankingCTriplet in all datasets above 1k. For 1k however, attention based augmentation makes the largest difference since all losses except RankingNCA exhibit large improvements. The value of attention based augmentation compared to other augmentations diminishes as the data size grows. While augmentation still leads to overall sample efficiency gains, the choice of augmentation is not as crucial only if paired with the right loss function. Empirically trying to detect which combination works the best for a practitioner's dataset however goes against the spirit of efficiency. Our proposed combination of RankingCTriplet and Attention displays the clearest trend even if there are a few cases where it is not the best." }, { "figure_ref": [], "heading": "DISCUSSION AND CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we empirically explored the impact of data augmentation on sample efficiency for ranking problems. Our central premise is that data augmentation when paired with a contrastive learning objective leads to significant improvements in performance with the same number of training instances. Our experimental space includes both supervised and unsupervised augmentation techniques and 4 different contrastive learning objectives. We found that a combination of attention-based supervised augmentation and RankingCTriplet provides the highest sample efficiency gains for a range of models and dataset sizes. We see large benefits for smaller models on smaller dataset sizes which is an important step toward their wider adoption. DistilBERT sees gains of up to 85% when fine-tuned using our proposed setup. We also observe that not all models exhibit the same level of gains with BERT gaining between 1% and 10% depending on the size of the dataset. Another benefit of augmented training is the drastic improvement in zero-shot transfer. We showed that our best-augmented models improve performance by large margins compared to their non-augmented counterparts. RoBERTa on average sees a near 60% improvement when augmented and can rival fine-tuned SOTA models.\nIn conclusion, we believe the approach we propose is only the first step towards sample efficient training of ranking models with contrastive losses. Augmentation and adjusting loss objectives are cheaper alternatives for most practitioners instead of gathering expensive training data. There remain several areas of future work -for instance, augmentation techniques that can operate on queries is still under-explored. We still lack a clear understanding of the impact of various contrastive losses on the type of augmentation. Further research is needed to identify why a specific loss function benefits from a particular type of augmentation. We are also yet to explore the usage of synthetic training data in this context. Generative models have been used in the past to augment datasets with new queries, which we can also leverage in future work.\nA APPENDIX Here we add additional details relating to experimental setup and also show additional results. We have a large experimental space. 
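The size of this space can be tallied with a few lines of arithmetic; the snippet below only reproduces the count of trained models stated in the appendix and does not correspond to additional experiments.

datasets, models, losses, augmentations = 5, 3, 4, 4
augmented_runs = datasets * models * losses * augmentations   # 240 augmented configurations
baseline_runs = datasets * models                             # 15 non-augmented baselines
print(augmented_runs + baseline_runs)                         # 255 models in total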
Here are the number models best trained " }, { "figure_ref": [], "heading": "A.2 Results", "publication_ref": [], "table_ref": [ "tab_13" ], "text": "Here in Table 7 we compare the performance of full MS MARCO document dataset with 100K dataset. Both the datasets are augmented using BM25 augmentation and the loss used here is RankingSCL. We can see that the performance difference between the two datasets is not large. The absolute values start converging as the dataset grows." }, { "figure_ref": [], "heading": "Model full 100K", "publication_ref": [], "table_ref": [], "text": "BERT 0.648(▲2.4%) 0.602(▲3.6%) RoBERTa 0.652(▲11.2%) 0.598(▲2.9%) DistilBERT 0.653(▲6.6%) 0.641(▲5.7%) " }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work is supported by the European Union -Horizon 2020 Program under the scheme \"INFRAIA-01-2018-2019 -Integrating Activities for Advanced Communities\", Grant Agreement n.871042, \"SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics\" (http://www.sobigdata.eu). This work is supported in part by the Science and Engineering Research Board, Department of Science and Technology, Government of India, under Project SRG/2022/001548. Koustav Rudra is a recipient of the DST-INSPIRE Faculty Fellowship [DST/INSPIRE/04/2021/003055] in the year 2021 under Engineering Sciences." } ]
Contextual ranking models have delivered impressive performance improvements over classical models in the document ranking task. However, these highly over-parameterized models tend to be data-hungry and require large amounts of data even for fine-tuning. In this paper, we propose data-augmentation methods for effective and robust ranking performance. One of the key benefits of data augmentation is improved sample efficiency, i.e., learning effectively when only a small amount of training data is available. We propose supervised and unsupervised data-augmentation schemes that create additional training data from parts of the relevant documents in the query-document pairs. We then adapt a family of contrastive losses to the document ranking task so that they can exploit the augmented data to learn an effective ranking model. Our extensive experiments on subsets of the MS MARCO and TREC-DL test sets show that data augmentation, along with the ranking-adapted contrastive losses, results in performance improvements under most dataset sizes. Apart from sample efficiency, we conclusively show that data augmentation results in robust models when transferred to out-of-domain benchmarks. Our performance improvements in in-domain and, more prominently, in out-of-domain benchmarks show that augmentation regularizes the ranking model and improves its robustness and generalization capability.
Data Augmentation for Sample Efficient and Robust Document Ranking
[ { "figure_caption": "Fig. 1 .1Fig. 1. Training a ranking model with augmented data using different contrastive loss objectives", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "𝑞 1 ,1..., 𝑡 𝑞 |𝑞 | and document 𝑑 = 𝑡 𝑑 1 , ..", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. On the left we describe RankingSCL loss (Eq. (3)) and on the right Centroid triplet Loss (Eq. (4))", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": ") 5 . 454Neighbourhood Component Analysis (NCA) NCA[21] is a distance-based classification algorithm that learns a metric for measuring the similarity between data points. The metric is learned based on a training set of labeled examples and is used to classify new, unseen examples. The basic idea behind NCA is to learn a linear transformation of the input data that maximizes the accuracy of a k-Nearest Neighbor (k-NN) classifier. The transformation is learned by minimizing a loss function that measures the classification error of the k-NN classifier. In context of ranking, given a query 𝑞 𝑖 and corresponding relevant documents 𝑑 𝑖 and 𝑑 𝑗 , the NCA loss function is defined as follows:", "figure_data": "", "figure_id": "fig_3", "figure_label": "54", "figure_type": "figure" }, { "figure_caption": "2 )2encourages the features of positive examples from the same class to be similar while making sure that the features of negative examples from different classes are dissimilar. • Centroid Triplet Loss (Eq.4) is a contrastive loss function that is designed to encourage a larger margin between the distances of the positive and negative examples compared to the distance between the anchor and the centroid of the positive examples. This means that the loss function places a greater emphasis on making sure that the positive examples are tightly clustered around their centroid, while the negative examples are kept farther away. • Neighborhood Component Analysis (NCA) (Eq.8) is a metric learning algorithm that is designed to learn a linear transformation that maximizes the accuracy of the k-nearest neighbors (KNN) classifier. The goal of NCA is to find a transformation that reduces the distance between examples that belong to the same class and increases the distance between examples from different classes. • InfoNCE (InfoMax Contrastive Estimation) (Eq.6) is based on the concept of maximizing mutual information between positive examples and minimizing it between negative examples.", "figure_data": "", "figure_id": "fig_4", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "7. 11In-Domain Experiments RQ I. Does data augmentation require a different training paradigm?", "figure_data": "", "figure_id": "fig_5", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Relative nDCG@10 improvement of CE (cross entropy) model and SCL model over Baseline model. CE model is trained on an augmented dataset using pointwise loss, SCL is trained on an augmented dataset with PointwiseRankingSCL loss and the Baseline model is trained on a non-augmented dataset using pointwise loss. 
The dataset used here is Doc'19 and BM25 augmentation strategy.", "figure_data": "", "figure_id": "fig_6", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Insight 4 .4Larger models (BERT-Large) out perform smaller models (BERT, DistilBERT) in the case of smaller datasets (100, 1k, 2k, 10k) with augmentation primarily benefiting the smallest model (DistilBERT).", "figure_data": "", "figure_id": "fig_7", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Performance of models of different model sizes compared to their respective baselines. The values with \"base_\" represents the base model.", "figure_data": "", "figure_id": "fig_8", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. DistilBERT nDCG@10 performance on difference size datasets with different augmentation and losses.", "figure_data": "", "figure_id": "fig_9", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": ": 5 Datasets * (3 Models * 4 Loss Function * 4 Augmentation Techniques) + (5 Datasets * 3 Models) Baseline = 255 models • Datasets: 100, 1K, 2k, 10K, 100K • Models: BM25, RoBERTa, DistilBERT • Losses: RankingSCL, RankingCTriplet, RankingNCA, RankingInfoNCE • Augmentation techniques: BM25, GloVe, Attention, Linear This does not include intermediate models, models trained with different hyperparameters. That would increase the number of models trained by a factor of 5 on average.", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Algorithm 1: Training data augmentation Input: training batch 𝐵, number of sentences per augmented document 𝑘 𝑎 Output: augmented training batch 𝐵 ′ 1 𝐵 ′ ← empty list 2 foreach (𝑞, 𝑑 + , 𝑑 -) in 𝐵 do // keep the original example", "figure_data": "5𝑑 -𝑎 ← random irrelevant document6append (𝑞, 𝑑 + 𝑎 , 𝑑 -𝑎 ) to 𝐵 ′7 end8 return 𝐵 ′", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "at any level should be based on the best 11 players available, not on what colour you are.", "figure_data": "Query: 428365 -is there a natural muscle relaxerDocument: D101612", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Anecdotal augmented training instances from Doc'19. BM25 and GloVe are unsupervised methods that select paragraphs from the original document denoted here by it's ID. Linear and Attention are supervised techniques that select relevant sentences based on the query from the same document.", "figure_data": "", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "nDCG@10 values for different models (BERT, RoBERTa, DistilBERT) on the BEIR dataset for evaluating Out-of-distribution performance. The model used for zero-shot performance are trained on 100K Attention dataset with RankingCTriplet loss. 
Statistically significant improvements at a level of 95% and 90% are indicated by * and # respectively[16].The best results for each dataset and each model is in bold.", "figure_data": "DatasetsBERTRoBERTaDistilBERT SPLADESciFact0.678(▲3.1%)0.676(▲134%) * 0.526(▲14.2%) *0.699Baseline0.6580.2880.461FiQA0.315(▲4.4%) *0.339(▲50.6%) * 0.211(▲20.5%) *0.351Baseline0.3020.2250.108DBPedia0.481(▼ -0.4%) * 0.514(▲36.3%) * 0.435(▲16.2%) *0.442Baseline0.4830.3770.236TREC-COVID 0.694(▲2.7%) *0.721(▲21.8%) * 0.603(▲2.3%) *0.711Baseline0.6760.5920.589NFCorpus0.260(▲0.9%)0.282(▲18.8%) * 0.260(▲2.2%) *0.345Baseline0.2570.2370.255Robust040.442(▲8.8%) #0.429(▲30.3%) * 0.386(▲17%) *0.458Baseline0.4060.3290.330Average0.4780.4940.4040.501", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Medical what is known about those infected with Covid-19 but are asymptomatic?,what evidence is there for the value of hydroxychloroquine in treating Covid-19? NFCorpus Science Is apple cider vinegar good for you?, How can you believe in any scientific study?, organotins, oxen meat", "figure_data": "DatasetDomain QueriesRobust04Newsprice fixing, Russian food crisis, ADD diagnosis treatment, ModernSlaveryFiQAFinance How are various types of income taxed differently in the USA?, Lookingfor good investment vehicle for seasonal work and savings, Understand-ing the T + 3 settlement days rule, How should I prepare for the nextfinancial crisis?DbpediaEntity south korean girl groups, electronic music genres, digital music notationformats, FIFA world cup national team winners since 1974Trec-Covid", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Example queries from BEIR. Each dataset varies in topicality and query type.", "figure_data": "", "figure_id": "tab_9", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "• Is RankingCTriplet loss the best choice irrespective of augmentation technique?", "figure_data": "Effect of Data Augmentation on Re-ranking Performance for DistilBERT model0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60BM25Glove1KLinear Attention0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60BM25Glove2KLinear Attention0.50 0.52 0.54 0.56 0.58 0.60 0.62 0.64BM25Glove10KLinear Attention SCL InfoNCEBM25 Triplet 0.64 0.62 0.60 0.58 0.56 0.54 0.52 0.50NCAGlove100KLinear Attention", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Learning rates used for training models for different datasets", "figure_data": "", "figure_id": "tab_12", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Comparing nDCG@10 values for different models (BERT, RoBERTa, DistilBERT) on the full MS MARCO and 100K dataset with BM25 and RankingSCL. 
The percentage improvements in brackets are improvements from their respective baselines.", "figure_data": "Losses RankingSCL RankingInfoNCE RankingCTriplet RankingNCABERT1k0.554(▲3.8%)0.572(▲7.0%)0.541(▲1.2%)0.558(▲4.4%)2k0.592(▲5%)0.584(▲3.8%)0.590(▲4.8%)0.603(▲7.1%)10k0.600(▲1.4%)0.607(▲2.7%)0.620(▲4.7%)0.605(▲2.2%)100k0.628(▲2.2%)0.610(▼ -0.9%)0.637(▲3.6%)0.626(▲1.8%)RoBERTa1k0.288(▼ -3%) 0.287(▼ -3.5%) *0.271(▼ -9%) #0.273(▼ -8%)2k0.516(▲75.6%) * 0.427(▲45.4%) *0.421(▲43.3%) *0.470(▲59.9%) *10k0.594(▲6.6%) *0.570(▲2.3%)0.609(▲9.3%) *0.588(▲5.6%) *100k0.637(▲10.2%) 0.619(▲7.1%)0.615(▲6.5%)0.632(▲9.4%)DistilBERT1k0.287(▲31.1%) * 0.293(▲33.8%) *0.302(▲37.9%) *0.240(▲9.7%)2k0.517(▲85.7%) * 0.389(▲39.9%) *0.400(▲43.8%) *0.426(▲53.1%) *10k0.545(▼ -3.6%) 0.573(▲1.4%)0.569(▲0.7%)0.565(▲0.0%)100k0.582(▼ -4%)0.600(▼ -1.1%)0.620(▲2.3%)0.574(▲5.3%)", "figure_id": "tab_13", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "nDCG@10 performance of different language models (BERT, RoBERTa, and DistilBERT) on different loss functions (RankingSCL, RankingInfoNCE, RankingCTriplet, and RankingNCA) at different training set sizes (1k, 2k, 10k, and 100k) with Linear data augmentation. Statistically significant improvements at a level of 95% and 90% are indicated by * and # respectively[16].The best results for each dataset and each model is in bold", "figure_data": "", "figure_id": "tab_14", "figure_label": "8", "figure_type": "table" } ]
Abhijit Anand; Jurek Leonhardt; Koustav Rudra; Avishek Anand
[ { "authors": "Shivani Agarwal; Michael Collins", "journal": "Springer", "ref_id": "b0", "title": "Maximum margin ranking algorithms for information retrieval", "year": "2010-03-28" }, { "authors": "Abhijit Anand; Jurek Leonhardt; Koustav Rudra; Avishek Anand", "journal": "", "ref_id": "b1", "title": "Supervised Contrastive Learning Approach for Contextual Ranking", "year": "2022" }, { "authors": "Paheli Bhattacharya; Kripabandhu Ghosh; Saptarshi Ghosh; Arindam Pal; Parth Mehta; Arnab Bhattacharya; Prasenjit Majumder", "journal": "", "ref_id": "b2", "title": "FIRE 2019 AILA track: Artificial intelligence for legal assistance", "year": "2019" }, { "authors": "Luiz Bonifacio; Hugo Abonizio; Marzieh Fadaee; Rodrigo Nogueira", "journal": "", "ref_id": "b3", "title": "Inpars: Unsupervised dataset generation for information retrieval", "year": "2022" }, { "authors": "Kiran Butt; Abid Hussain", "journal": "Library Philosophy and Practice", "ref_id": "b4", "title": "Evaluation of Scholarly Information Retrieval Using Precision and Recall", "year": "2021" }, { "authors": "Wei-Cheng Chang; Felix X Yu; Yin-Wen Chang; Yiming Yang; Sanjiv Kumar", "journal": "", "ref_id": "b5", "title": "Pre-training tasks for embeddingbased large-scale retrieval", "year": "2020" }, { "authors": "Pengguang Chen; Shu Liu; Hengshuang Zhao; Jiaya Jia", "journal": "", "ref_id": "b6", "title": "Gridmask data augmentation", "year": "2020" }, { "authors": "Yanda Chen; Chris Kedzie; Suraj Nair; Petra Galuščáková; Rui Zhang; Douglas W Oard; Kathleen Mckeown", "journal": "", "ref_id": "b7", "title": "Cross-language Sentence Selection via Data Augmentation and Rationale Training", "year": "2021" }, { "authors": "Nick Craswell; Mitra Bhaskar; Emine Yilmaz; Daniel Campos", "journal": "", "ref_id": "b8", "title": "TREC-2019-Deep-Learning", "year": "2019" }, { "authors": "Barret Ekin D Cubuk; Dandelion Zoph; Vijay Mane; Quoc V Vasudevan; Le", "journal": "", "ref_id": "b9", "title": "Autoaugment: Learning augmentation strategies from data", "year": "2019" }, { "authors": "Zhuyun Dai; Jamie Callan", "journal": "", "ref_id": "b10", "title": "Deeper Text Understanding for IR with Contextual Neural Language Modeling", "year": "2019" }, { "authors": "Zhuyun Dai; Y Vincent; Ji Zhao; Yi Ma; Jianmo Luan; Jing Ni; Anton Lu; Kelvin Bakalov; Keith B Guu; Ming-Wei Hall; Chang", "journal": "", "ref_id": "b11", "title": "Promptagator: Few-shot dense retrieval from 8 examples", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b12", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2018" }, { "authors": "Jeff Donahue; Karen Simonyan", "journal": "", "ref_id": "b13", "title": "Large scale adversarial representation learning", "year": "2019" }, { "authors": "Thibault Formal; Benjamin Piwowarski; Stéphane Clinchant", "journal": "", "ref_id": "b14", "title": "SPLADE: Sparse lexical and expansion model for first stage ranking", "year": "2021" }, { "authors": "Luke Gallagher", "journal": "", "ref_id": "b15", "title": "Pairwise t-test on TREC Run Files", "year": "2019" }, { "authors": "Luyu Gao; Jamie Callan", "journal": "", "ref_id": "b16", "title": "Condenser: a Pre-training Architecture for Dense Retrieval", "year": "2021" }, { "authors": "Luyu Gao; Jamie Callan", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval", 
"year": "2022" }, { "authors": "Xiang Gao; K Ripon; Mukul R Saha; Abhik Prasad; Roychoudhury", "journal": "IEEE", "ref_id": "b18", "title": "Fuzz testing based data augmentation to improve robustness of deep neural networks", "year": "2020" }, { "authors": "Spyros Gidaris; Praveer Singh; Nikos Komodakis", "journal": "", "ref_id": "b19", "title": "Unsupervised representation learning by predicting image rotations", "year": "2018" }, { "authors": "Jacob Goldberger; Geoffrey E Hinton; Sam Roweis; Russ R Salakhutdinov", "journal": "Advances in neural information processing systems", "ref_id": "b20", "title": "Neighbourhood components analysis", "year": "2004" }, { "authors": "Beliz Gunel; Jingfei Du; Alexis Conneau; Ves Stoyanov", "journal": "", "ref_id": "b21", "title": "Supervised contrastive learning for pre-trained language model fine-tuning", "year": "2020" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b22", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Devon Hjelm; Alex Fedorov; Samuel Lavoie-Marchildon; Karan Grewal; Phil Bachman; Adam Trischler; Yoshua Bengio", "journal": "", "ref_id": "b23", "title": "Learning deep representations by mutual information estimation and maximization", "year": "2018" }, { "authors": "Sebastian Hofstätter; Hamed Zamani; Bhaskar Mitra; Nick Craswell; Allan Hanbury", "journal": "", "ref_id": "b24", "title": "Local Self-Attention over Long Text for Efficient Document Retrieval", "year": "2020" }, { "authors": "Sebastian Hofstätter; Markus Zlabinger; Allan Hanbury", "journal": "", "ref_id": "b25", "title": "Interpretable & time-budget-constrained contextualization for re-ranking", "year": "2020" }, { "authors": "Ashish Jaiswal; Ramesh Ashwin; Mohammad Zaki Babu; Debapriya Zadeh; Fillia Banerjee; Makedon", "journal": "Technologies", "ref_id": "b26", "title": "A survey on contrastive self-supervised learning", "year": "2021" }, { "authors": "Vladimir Karpukhin; Barlas Oğuz; Sewon Min; Patrick Lewis; Ledell Wu; Sergey Edunov; Danqi Chen; Wen-Tau Yih", "journal": "", "ref_id": "b27", "title": "Dense passage retrieval for open-domain question answering", "year": "2020" }, { "authors": "Omar Khattab; Matei Zaharia", "journal": "", "ref_id": "b28", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT", "year": "2020" }, { "authors": "Prannay Khosla; Piotr Teterwak; Chen Wang; Aaron Sarna; Yonglong Tian; Phillip Isola; Aaron Maschinot; Ce Liu; Dilip Krishnan", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b29", "title": "Supervised contrastive learning", "year": "2020" }, { "authors": "Varun Kumar; Ashutosh Choudhary; Eunah Cho", "journal": "", "ref_id": "b30", "title": "Data augmentation using pre-trained transformer models", "year": "2020" }, { "authors": "Carlos Lassance; Hervé Déjean; Stéphane Clinchant", "journal": "", "ref_id": "b31", "title": "An Experimental Study on Pretraining Transformers from Scratch for IR", "year": "2023" }, { "authors": "Jurek Leonhardt; Koustav Rudra; Avishek Anand", "journal": "ACM Transactions on Information Systems", "ref_id": "b32", "title": "Extractive Explanations for Interpretable Text Ranking", "year": "2021" }, { "authors": "Jurek Leonhardt; Koustav Rudra; Megha Khosla; Abhijit Anand; Avishek Anand", "journal": "", "ref_id": "b33", "title": "Fast Forward Indexes for Efficient Document Ranking", "year": "2021" }, { "authors": 
"Canjia Li; Andrew Yates; Sean Macavaney; Ben He; Yingfei Sun", "journal": "", "ref_id": "b34", "title": "PARADE: Passage Representation Aggregation for Document Reranking", "year": "2020" }, { "authors": "Minghan Li; Diana Nicoleta Popa; Johan Chagnon; Yagmur Gizem Cinar; Eric Gaussier", "journal": "ACM Transactions on Information Systems", "ref_id": "b35", "title": "The power of selecting key blocks with local pre-ranking for long document information retrieval", "year": "2023" }, { "authors": "Yizhi Li; Zhenghao Liu; Chenyan Xiong; Zhiyuan Liu", "journal": "", "ref_id": "b36", "title": "More Robust Dense Retrieval with Contrastive Dual Learning", "year": "2021" }, { "authors": "Yijiang Lian; Zhenjun You; Fan Wu; Wenqiang Liu; Jing Jia", "journal": "", "ref_id": "b37", "title": "Retrieve Synonymous keywords for Frequent Queries in Sponsored Search in a Data Augmentation Way", "year": "2020" }, { "authors": "Jimmy Lin", "journal": "ACM", "ref_id": "b38", "title": "The Neural Hype and Comparisons Against Weak Baselines", "year": "2019" }, { "authors": "Erik Lindgren; Sashank Reddi; Ruiqi Guo; Sanjiv Kumar", "journal": "Curran Associates, Inc", "ref_id": "b39", "title": "Efficient Training of Retrieval Models using Negative Cache", "year": "2021" }, { "authors": "Weiyang Liu; Yandong Wen; Zhiding Yu; Meng Yang", "journal": "", "ref_id": "b40", "title": "Large-margin softmax loss for convolutional neural networks", "year": "2016" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b41", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Shayne Longpre; Yu Wang; Christopher Dubois", "journal": "", "ref_id": "b42", "title": "How Effective is Task-Agnostic Data Augmentation for Pretrained Transformers?", "year": "2020" }, { "authors": "Xueguang Ma; Xinyu Zhang; Ronak Pradeep; Jimmy Lin", "journal": "", "ref_id": "b43", "title": "Zero-Shot Listwise Document Reranking with a Large Language Model", "year": "2023" }, { "authors": "Sean Macavaney; Andrew Yates; Arman Cohan; Nazli Goharian", "journal": "", "ref_id": "b44", "title": "Contextualized Word Representations for Document Re-Ranking", "year": "2019" }, { "authors": "Eli John X Morris; Jin Yong Lifland; Jake Yoo; Di Grigsby; Yanjun Jin; Qi", "journal": "", "ref_id": "b45", "title": "Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp", "year": "2020" }, { "authors": "Markus Mühling; Nikolaus Korfhage; Kader Pustu-Iren; Joanna Bars; Mario Knapp; Hicham Bellafkir; Markus Vogelbacher; Daniel Schneider; Angelika Hörth; Ralph Ewerth", "journal": "International Journal on Digital Libraries", "ref_id": "b46", "title": "VIVA: visual information retrieval in video archives", "year": "2022" }, { "authors": "Thai-Son Nguyen; Sebastian Stueker; Jan Niehues; Alex Waibel", "journal": "IEEE", "ref_id": "b47", "title": "Improving sequence-to-sequence speech recognition training with on-the-fly data augmentation", "year": "2020" }, { "authors": "Rodrigo Nogueira; Kyunghyun Cho", "journal": "", "ref_id": "b48", "title": "Passage Re-ranking with BERT", "year": "2019" }, { "authors": "Helmi Satria; Nugraha ; Suyanto Suyanto", "journal": "IEEE", "ref_id": "b49", "title": "Typographic-based data augmentation to improve a question retrieval in short dialogue system", "year": "2019" }, { "authors": "Aaron Van Den Oord; Yazhe Li; Oriol Vinyals", 
"journal": "", "ref_id": "b50", "title": "Representation learning with contrastive predictive coding", "year": "2018" }, { "authors": "Baolin Peng; Chenguang Zhu; Michael Zeng; Jianfeng Gao", "journal": "", "ref_id": "b51", "title": "Data augmentation for spoken language understanding via pretrained models", "year": "2020" }, { "authors": "Libo Qin; Minheng Ni; Yue Zhang; Wanxiang Che", "journal": "", "ref_id": "b52", "title": "CoSDA-ML: multi-lingual code-switching data augmentation for zero-shot cross-lingual NLP", "year": "2021" }, { "authors": "Zhen Qin; Rolf Jagerman; Kai Hui; Honglei Zhuang; Junru Wu; Jiaming Shen; Tianqi Liu; Jialu Liu; Donald Metzler; Xuanhui Wang", "journal": "", "ref_id": "b53", "title": "Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting", "year": "2023" }, { "authors": "Yingqi Qu; Yuchen Ding; Jing Liu; Kai Liu; Ruiyang Ren; Wayne Xin Zhao; Daxiang Dong; Hua Wu; Haifeng Wang", "journal": "", "ref_id": "b54", "title": "RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering", "year": "2021" }, { "authors": "Roberta Raileanu; Max Goldstein; Denis Yarats; Ilya Kostrikov; Rob Fergus", "journal": "", "ref_id": "b55", "title": "Automatic data augmentation for generalization in deep reinforcement learning", "year": "2020" }, { "authors": "Arij Riabi; Thomas Scialom; Rachel Keraron; Benoît Sagot; Djamé Seddah; Jacopo Staiano", "journal": "", "ref_id": "b56", "title": "Synthetic Data Augmentation for Zero-Shot Cross-Lingual Question Answering", "year": "2021" }, { "authors": "Koustav Rudra; Avishek Anand", "journal": "", "ref_id": "b57", "title": "Distant supervision in BERT-based adhoc document retrieval", "year": "2020" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b58", "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "Connor Shorten; M Taghi; Khoshgoftaar", "journal": "Journal of Big Data", "ref_id": "b59", "title": "A survey on image data augmentation for deep learning", "year": "2019" }, { "authors": "Connor Shorten; M Taghi; Borko Khoshgoftaar; Furht", "journal": "Journal of big Data", "ref_id": "b60", "title": "Text Data Augmentation for Deep Learning", "year": "2021" }, { "authors": "Jaspreet Singh; Wolfgang Nejdl; Avishek Anand", "journal": "", "ref_id": "b61", "title": "History by diversity: Helping historians search news archives", "year": "2016" }, { "authors": "Kihyuk Sohn", "journal": "Advances in neural information processing systems", "ref_id": "b62", "title": "Improved deep metric learning with multi-class n-pair loss objective", "year": "2016" }, { "authors": "Lichao Sun; Congying Xia; Wenpeng Yin; Tingting Liang; Philip S Yu; Lifang He", "journal": "", "ref_id": "b63", "title": "Mixup-Transformer: Dynamic Data Augmentation for NLP Tasks", "year": "2020" }, { "authors": "Weiwei Sun; Lingyong Yan; Xinyu Ma; Pengjie Ren; Dawei Yin; Zhaochun Ren", "journal": "", "ref_id": "b64", "title": "Is ChatGPT Good at Search? 
Investigating Large Language Models as Re-Ranking Agent", "year": "2023" }, { "authors": "Ming Tan; Bing Cicero Dos Santos; Bowen Xiang; Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b65", "title": "Improved Representation Learning for Question Answer Matching", "year": "2016" }, { "authors": "Nandan Thakur; Nils Reimers; Andreas Rücklé; Abhishek Srivastava; Iryna Gurevych", "journal": "", "ref_id": "b66", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models", "year": "2021" }, { "authors": "Hoang Van; Vikas Yadav; Mihai Surdeanu", "journal": "", "ref_id": "b67", "title": "Cheap and Good? Simple and Effective Data Augmentation for Low Resource Machine Reading", "year": "2021" }, { "authors": "Tongzhou Wang; Phillip Isola", "journal": "PMLR", "ref_id": "b68", "title": "Understanding contrastive representation learning through alignment and uniformity on the hypersphere", "year": "2020" }, { "authors": "Barbara Mikołaj ; Wieczorek; Jacek Rychalska; Dąbrowski", "journal": "Springer", "ref_id": "b69", "title": "On the unreasonable effectiveness of centroids in image retrieval", "year": "2021" }, { "authors": "Zhirong Wu; Yuanjun Xiong; Stella X Yu; Dahua Lin", "journal": "", "ref_id": "b70", "title": "Unsupervised feature learning via non-parametric instance discrimination", "year": "2018" }, { "authors": "Lee Xiong; Chenyan Xiong; Ye Li; Kwok-Fung Tang; Jialin Liu; Paul Bennett; Junaid Ahmed; Arnold Overwijk", "journal": "", "ref_id": "b71", "title": "Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval", "year": "2020" }, { "authors": "Nan Yang; Furu Wei; Binxing Jiao; Daxing Jiang; Linjun Yang", "journal": "", "ref_id": "b72", "title": "xmoco: Cross momentum contrastive learning for open-domain question answering", "year": "2021" }, { "authors": "Wei Yang; Yuqing Xie; Luchen Tan; Kun Xiong; Ming Li; Jimmy Lin", "journal": "", "ref_id": "b73", "title": "Data augmentation for bert fine-tuning in open-domain question answering", "year": "2019" }, { "authors": "Yinfei Yang; Ning Jin; Kuo Lin; Mandy Guo; Daniel Cer", "journal": "", "ref_id": "b74", "title": "Neural Retrieval for Question Answering with Cross-Attention Supervised Data Augmentation", "year": "2020" }, { "authors": "Liang Yao; Baosong Yang; Haibo Zhang; Boxing Chen; Weihua Luo", "journal": "", "ref_id": "b75", "title": "Domain transfer based data augmentation for neural query translation", "year": "2020" }, { "authors": "Akkalyoncu Zeynep; Wei Yilmaz; Haotian Yang; Jimmy Zhang; Lin", "journal": "", "ref_id": "b76", "title": "Cross-domain modeling of sentencelevel evidence for document retrieval", "year": "2019" }, { "authors": "Yi Zeng; Han Qiu; Gerard Memmi; Meikang Qiu", "journal": "Springer", "ref_id": "b77", "title": "A data augmentation-based defense method against adversarial attacks in neural networks", "year": "2020" }, { "authors": "Jingtao Zhan; Jiaxin Mao; Yiqun Liu; Jiafeng Guo; Min Zhang; Shaoping Ma", "journal": "Association for Computing Machinery", "ref_id": "b78", "title": "Optimizing Dense Retrieval Model Training with Hard Negatives", "year": "2021" }, { "authors": "Xingyu Zhang; Tong Xiao; Yidong Chen; Qun Liu", "journal": "", "ref_id": "b79", "title": "Text Augmentation for Neural Machine Translation: A Review", "year": "2021" }, { "authors": "Zijian Zhang; Koustav Rudra; Avishek Anand", "journal": "", "ref_id": "b80", "title": "Explain and predict, and then predict again", "year": "2021" }, { "authors": 
"Zhilu Zhang; R Mert; Sabuncu", "journal": "", "ref_id": "b81", "title": "Generalized cross entropy loss for training deep neural networks with noisy labels", "year": "2018" }, { "authors": "Zhun Zhong; Liang Zheng; Guoliang Kang; Shaozi Li; Yi Yang", "journal": "", "ref_id": "b82", "title": "Random erasing data augmentation", "year": "2020" }, { "authors": "Qingqing Zhu; Xiwei Wang; Chen Chen; Junfei Liu", "journal": "IEEE", "ref_id": "b83", "title": "Data Augmentation for Retrieval-and Generation-Based Dialog Systems", "year": "2020" }, { "authors": "Yutao Zhu; Jian-Yun Nie; Zhicheng Dou; Zhengyi Ma; Xinyu Zhang; Pan Du; Xiaochen Zuo; Hao Jiang", "journal": "Association for Computing Machinery", "ref_id": "b84", "title": "Contrastive Learning of User Behavior Sequence for Context-Aware Document Ranking", "year": "2021" } ]
[ { "formula_coordinates": [ 6, 195.68, 189.38, 94.65, 8.01 ], "formula_id": "formula_0", "formula_text": "[CLS] 𝑞 [SEP] 𝑑 [SEP]" }, { "formula_coordinates": [ 6, 139.12, 296.04, 207.45, 26.64 ], "formula_id": "formula_1", "formula_text": "L Point = - 1 𝑁 𝑁 ∑︁ 𝑖=1 (𝑦 𝑖 • log ŷ𝑖 + (1 -𝑦 𝑖 ) • log(1 -ŷ𝑖 ))" }, { "formula_coordinates": [ 6, 169.6, 369.86, 141.2, 26.64 ], "formula_id": "formula_2", "formula_text": "L Pair = 1 𝑁 𝑁 ∑︁ 𝑖=1 max 0, 𝑚 -ŷ+ 𝑖 + ŷ- 𝑖" }, { "formula_coordinates": [ 6, 148.91, 649.05, 291.86, 11.45 ], "formula_id": "formula_3", "formula_text": "augment(𝑑, 𝑞, 𝑘) = 𝑘-argmax 1≤𝑖 ≤ |𝑑 | score(𝑞, 𝑠 𝑖 )(1)" }, { "formula_coordinates": [ 8, 191.52, 399.38, 102.86, 23.24 ], "formula_id": "formula_4", "formula_text": "Enc(𝑡) = 𝑠 𝑖 ∈𝑡 (𝑊 𝑡 𝑖 + 𝑏) |𝑡 | ," }, { "formula_coordinates": [ 8, 199.07, 536.3, 87.2, 28.15 ], "formula_id": "formula_5", "formula_text": "𝑞 LSTM = Bi-LSTM(𝑞), 𝑑 LSTM = Bi-LSTM(𝑑)." }, { "formula_coordinates": [ 8, 200.97, 590.64, 84.91, 11.19 ], "formula_id": "formula_6", "formula_text": "q = Max-Pool(𝑞 LSTM )" }, { "formula_coordinates": [ 8, 179.1, 628.06, 127.08, 27.19 ], "formula_id": "formula_7", "formula_text": "𝑚 𝑖 = 𝑊 1 ℎ 1 + 𝑊 2 q, ℎ 𝑖 = 𝑑 LSTM 𝑖 exp (𝑊 3 tanh (𝑚 𝑖 )) ," }, { "formula_coordinates": [ 10, 111.07, 203.04, 329.7, 31.47 ], "formula_id": "formula_8", "formula_text": "L SCL = 𝑁 ∑︁ 𝑖=1 - 1 𝑁 + 𝑁 + ∑︁ 𝑗=1 1 𝑞 𝑖 =𝑞 𝑗 , 𝑖≠𝑗, 𝑦 𝑖 =𝑦 𝑗 =1 log exp Φ (𝑥 𝑖 ) • Φ 𝑥 𝑗 /𝜏 𝑁 𝑘=1 1 𝑖≠𝑘 exp (Φ (𝑥 𝑖 ) • Φ (𝑥 𝑘 ) /𝜏)(2)" }, { "formula_coordinates": [ 10, 166.72, 327.95, 274.05, 9.8 ], "formula_id": "formula_9", "formula_text": "L RankingSCL = (1 -𝜆)L Ranking + 𝜆L SCL(3)" }, { "formula_coordinates": [ 10, 137.77, 504.38, 185.31, 13.18 ], "formula_id": "formula_10", "formula_text": "L triplet = ∥ 𝑓 (𝐴) -𝑓 (𝑃) ∥ 2 2 -∥ 𝑓 (𝐴) -𝑓 (𝑁 ) ∥ 2 2" }, { "formula_coordinates": [ 10, 154, 642.78, 286.77, 18.4 ], "formula_id": "formula_11", "formula_text": "L CTriplet = 𝑑 + 𝑖 -𝑐 𝑃 2 2 -∥𝑑 - 𝑖 -𝑐 𝑁 ∥ 2 2 + 𝛼 𝑐 +(4)" }, { "formula_coordinates": [ 11, 150.36, 101.54, 290.41, 9.8 ], "formula_id": "formula_12", "formula_text": "L RankingCTriplet = (1 -𝜆)L Ranking + 𝜆L triplet(5)" }, { "formula_coordinates": [ 11, 55.61, 311.37, 368.97, 42.97 ], "formula_id": "formula_13", "formula_text": "𝑝 (𝐶 = pos | 𝐷, d + i ) = 𝑝 𝑑 pos | d + i 𝑖=1,...,𝑁 ;𝑖≠pos 𝑝 (x 𝑖 ) 𝑁 𝑗=1 𝑝 x 𝑗 | d + i 𝑖=1,...,𝑁 ;𝑖≠𝑗 𝑝 (x 𝑖 ) = 𝑝 (xpos |𝑑 + 𝑖 ) 𝑝 (xpos) 𝑁 𝑗=1 𝑝 (x𝑗 |d + i ) 𝑝 (x𝑗 ) = 𝑓 x pos , d + i 𝑁 𝑗=1 𝑓 x 𝑗 , d + i" }, { "formula_coordinates": [ 11, 198.52, 360.8, 50.85, 17.2 ], "formula_id": "formula_14", "formula_text": "d + i ) ∝ 𝑝 (x|d + i )" }, { "formula_coordinates": [ 11, 167.41, 396.72, 273.36, 28.97 ], "formula_id": "formula_15", "formula_text": "L InfoNCE = -E log 𝑓 (x, d + i ) x ′ ∈𝑋 𝑓 x ′ , d + i (6)" }, { "formula_coordinates": [ 11, 246.92, 431.63, 24.92, 8.93 ], "formula_id": "formula_16", "formula_text": "𝑝 (𝑥 |𝑑 + 𝑖 )" }, { "formula_coordinates": [ 11, 111.42, 470.81, 261.33, 32.26 ], "formula_id": "formula_17", "formula_text": "𝐼 (x; d + i ) = ∑︁ x,d + i 𝑝 (x, d + i ) log 𝑝 (x, d + i ) 𝑝 (x)𝑝 (d + i ) = ∑︁ x,d + i 𝑝 (x, d + i ) log 𝑝 (x | d + i ) 𝑝 (x)" }, { "formula_coordinates": [ 11, 152.18, 522.9, 285.07, 51.29 ], "formula_id": "formula_18", "formula_text": "𝑝 (x | d + i ) 𝑝 (x) L RankingInfoNCE = (1 -𝜆)L Ranking + 𝜆L InfoNCE(7" }, { "formula_coordinates": [ 12, 165.88, 128.67, 274.89, 23.41 ], "formula_id": "formula_19", "formula_text": "L NCA = ∑︁ 𝑖 log ∑︁ 𝑗 ∈𝐶 𝑖 𝑝 𝑖 𝑗 = ∑︁ 𝑖 log (𝑝 𝑖 )(8)" }, { "formula_coordinates": [ 12, 218.78, 193.25, 47.03, 23.41 ], "formula_id": 
"formula_20", "formula_text": "𝑝 𝑖 = ∑︁ 𝑗 ∈𝐶 𝑖 𝑝 𝑖 𝑗" }, { "formula_coordinates": [ 12, 176.29, 246.56, 115.98, 23.21 ], "formula_id": "formula_21", "formula_text": "𝑝 𝑖 𝑗 = exp -𝐴𝑑 𝑖 -𝐴𝑑 𝑗2" }, { "formula_coordinates": [ 12, 166.72, 322.75, 274.05, 9.8 ], "formula_id": "formula_22", "formula_text": "L RankingNCA = (1 -𝜆)L Ranking + 𝜆L NCA(9)" }, { "formula_coordinates": [ 12, 136.64, 397.92, 212.83, 9.8 ], "formula_id": "formula_23", "formula_text": "L ContrastiveRanking = (1 -𝜆)L Ranking + 𝜆L Contrastive ," }, { "formula_coordinates": [ 12, 148.97, 432.77, 188.09, 24.92 ], "formula_id": "formula_24", "formula_text": "L Ranking ∈ {L Point , L Pair } and L Contrastive ∈ {L SCL , L triplet , L InfoNCE , L NCA }." }, { "formula_coordinates": [ 17, 50.81, 200.71, 84.26, 8.84 ], "formula_id": "formula_25", "formula_text": "Attention A team" } ]
10.48550/ARXIV.2204.02311
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b33", "b27", "b6", "b16", "b17", "b21", "b25" ], "table_ref": [], "text": "Transformer-based (Vaswani et al., 2017) large scale language models trained with general corpus have shown tremendously improvement of generalization in particular with in-context few-shot learning in recent years (Shoeybi et al., 2019;Brown et al., 2020;Rae et al., 2021;Chowdhery et al., 2022;Hoffmann et al., 2022). Despite of the impressive capability of text generation, training and serving these giant models are non-trivial even with the recent progress of hardware and software (Jouppi et al., 2017;Lepikhin et al., 2021;Patterson et al., 2021). One of the major challenges is that the processing of each input requires to activate all the parameters of a model, which often leads to trillions of floating point operations (FLOPs) per prediction. This imposes a big burden on both model training and inference 1 Work done when Dewen did his internship at Google Research and when Nan and Tao were at Google. 2 Google Brain. Correspondence to: Nan Du <dunan@apple.com>, Zhifeng Chen <zhifengc@google.com>." }, { "figure_ref": [], "heading": "Preprint version.", "publication_ref": [ "b35", "b22", "b4" ], "table_ref": [], "text": "since we have no control over the amount of computation that can be assigned to each input example.\nIn contrast, it is commonly believed that human cognition (Stanovich & West, 2000;Levy, 2008) uses varying cognitive efforts to operate and learn depending on the 'hardness' of the input. Specifically, one may only need small efforts (lower computational cost) to process 'easy' examples, like the commonly used stop-words, punctuation, patches in the background of an image, etc., but allow additional efforts (more computational cost) for 'hard' examples, e.g., a rare abstract concept for reasoning, when they are truly needed. Therefore, allocating the same computational power of a large model uniformly for processing all samples tend to be wasteful and less efficient. Such issue might be even more exacerbated when training large models using real-world data corpus in that the redundancy of trivial examples will be more pronounced as more and more data are used.\nConditional computation (Bengio et al., 2013;2015) is the paradigm where only a small subset of the model parameters are activated based on the input representation, thereby reducing the amount of computation needed per example. However, due to the discreteness of the decisions based on each input, training neural networks with conditionally activated components end-to-end differentiably and efficiently is still challenging.\nIn this paper, we develop a simple framework, referred to as the SkipLayer, which allows an input to skip any layer that can be wrapped inside it conditioned on the contextual representation. More specifically, SkipLayer-based models can be trained end-to-end differentiably while at the same time the discrete decisions during the forward pass can still be respected, which enables us to precisely control the performance-compute tradeoff through external constraint. Moreover, because the discrete decisions can be preserved during the forward pass, we also develop an efficient implementation so that the additional computation can be further saved in both pretraining and inference for the given target budget. 
We then apply SkipLayer to the Transformer architecture (Vaswani et al., 2017) to demonstrate the potential efficacy of the method for decoder-only language model pretraining and decoding. Finally, we extensively validate our method on a suite of well established NLP benchmarks ranging from open-domain QA tasks, reading comprehen- sion, common sense reasoning, to natural language inference tasks. SkipLayer-based models have shown strong 1-shot performance with controllable computation tradeoff between model quality and decoding efficiency compared to a variety of competitive baselines." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we elaborate on our proposed SkipLayer framework, its efficient implementation, and the application to Transformer-based language models." }, { "figure_ref": [], "heading": "SkipLayer", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Let X o", "publication_ref": [ "b9", "b21", "b21", "b9", "b21" ], "table_ref": [], "text": "[layer] = F [layer] (X|W) denote a parameterized layer (or module) of a neural network with input X and output X o given an optional set of weights represented by W. For instance, a plain FeedForward layer (FFN) can be denoted as\nX o FFN = F FFN (X|{W i , W o })\nwhere W i ∈ R m×h and W o ∈ R h×m are the input and output weight, respectively.\nA SkipLayer F SL is designed to wrap an existing layer such that\nX o SL = F SL F [layer] (X)|W G (1) = F [layer] (X) ⊙ G(X|W G ) + X ⊙ (1 -G(X|W G )),\nwhere G(X|W G ) ∈ {0, 1} is a router function with the learnable weight W G . Figure 1(a) shows the overall framework.\nGiven a batch X ∈ R B×T ×d of B sequences, each of length T and the embedding dimension d, for each token input\nX[b, t] ∈ R d , b ≤ B, t ≤ T , we have that X o SL [b, t] = F [layer] (X[b, t]) , if G(X[b, t]) = 1. X[b, t],\notherwise.\n(2)\nTherefore, as shown in Figure 1(a), any existing layer applied to the input in a pointwise way (e.g., FFN) can be easily embedded inside a SkipLayer. Based on the context, if the router decides to skip, the input will be connected directly to the output, otherwise it will go through the embedded layer logic.\nRouter Function. Central to the SkipLayer is the router function G(X|W G ) which is learned to assign only a subset of inputs to the embedded layer for the best model performance under a given budget. For a batch of input tokens X ∈ R B×T ×d , the router outputs a binary mask matrix\nM = G(X|W G ) ∈ {0, 1} B×T , W G ∈ R d×2 .(3)\nThere are several choices for designing G(X|W G ). One choice is the Sigmoid function G(X|W G ) = σ(XW G ) that independently normalizes each value to be within the continuous range (0, 1) as the soft approximation to the binary masking. Although this approximation is easy to differentiate, it needs an additional threshold to produce the binary decision rule of Equation 2.\nThe second design choice is the Top-K (K = 1) routing which is widely used in the works of (Du et al., 2022;Lepikhin et al., 2021), giving G(X|W G ) = Top-1 (XW G ) = argmax XW G . In order to address the indifferentiability of the argmax operator, as discussed in (Lepikhin et al., 2021), for each input token X[b, t], we first normalize the dot-product scores by g = Softmax(X[b, t]W G ) ∈ R 2 , and let\nX o SL [b, t] = g[1] • F [layer] (X[b, t]) , if argmax g = 1. g[0] • X[b, t], otherwise,(4)\nsuch that the gradients can be backpropgated through the coefficients g. 
Unfortunately, in our experiments, we find that the Top-1 formulation cannot precisely control the sparsity of the model (which is crucial to the efficiency) since g is still a soft approximation of the binary decision rule.\nWe thus formulate the router with the Straight-Through Gumbel-Softmax trick shown in Figure 1(b). In the forward pass, sampled binary values are returned for the gating G(X[b, t]) as in Equation 2. In the backward pass, the soft probabilities are used as g in Equation 4for the gradients to be propagated back to update the router weights. Because during the forward pass we are able to directly calculate the percentage of the tokens that are not skipped based on the binary masking of Equation 3, we can better control the density of the model.\nRouter Capacity. The binary mask of the router output in Equation 3is the assignment of a subset of tokens in a batch to the embedded layer inside a SkipLayer. For simplicity, suppose each sequence in a batch of size B has the same sequence length T . Then, the ratio r = i,j M [i,j] B×T is the percentage (or probability) that a token is assigned to the layer, which is also referred to as the capacity. Consider P as a global budget of how many input tokens can be assigned to a layer. Following (Du et al., 2022;Lepikhin et al., 2021), we introduce an auxiliary loss term ℓ aux = L i (r i -P ) 2 where r i is the capacity of layer i ≤ L, so that each layer will respect the budget constraint. The overall loss function of the model will be L = ℓ nll + λ • ℓ aux where ℓ nll is the negative log-likelihood of predicting the next token on average. By optimizing L, on the one hand, the layer capacity will be pushed to be closer to the target probability P . On the other hand, the ℓ aux term will continuously improve the model's predictive accuracy. Since the ℓ aux term will enforce only P percent of tokens in a batch to go with the layer, in order to reduce the first term ℓ nll , 'hard' examples that lead to large marginal reduction on average will be prioritized while 'easy' examples that already achieve low perplexity will be skipped in order to save FLOPs.\nThe router capacity enables the flexibility of controlling the performance-computation trade-off. Specifically, we can increase the number of layers in total while at the same time reduce target probability P to keep the average number of activated layers roughly the same. This effectively separates the increase of model capacity from the computation cost per prediction, and makes it possible to trade off the increased model capacity for better prediction. During serving, we are then able to load the model that can best utilize the accelerators' memory, lead to the highest prediction quality, and only mildly increase the computation cost while still meeting the latency requirement simultaneously." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Efficient Implementation", "publication_ref": [], "table_ref": [], "text": "The major advantage of SkipLayer-based models is that the number of inputs computed by each layer is different across the entire stack of layers and continuously varies during training. At the same time, this dynamic characteristic is also challenging for implementation on TPU where computations of tensors with static shapes are often preferred. 
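A minimal PyTorch sketch of such a router is given below, assuming the straight-through Gumbel-Softmax routing described above together with the per-layer capacity penalty (r - P)^2. It is meant only as an illustration of the mechanism, not the authors' TPU implementation, and all module and variable names are our own.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipRouter(nn.Module):
    # Token-level binary router: returns hard skip/compute decisions in the
    # forward pass while gradients flow through the Gumbel-Softmax probabilities.
    def __init__(self, d_model, target_p=0.5, tau=1.0):
        super().__init__()
        self.w_g = nn.Linear(d_model, 2, bias=False)
        self.target_p = target_p
        self.tau = tau

    def forward(self, x):                                       # x: (B, T, d)
        logits = self.w_g(x)                                    # (B, T, 2)
        g = F.gumbel_softmax(logits, tau=self.tau, hard=True)   # straight-through one-hot
        mask = g[..., 1]                                        # (B, T); 1 = run the wrapped layer
        capacity = mask.mean()                                  # fraction r of non-skipped tokens
        aux_loss = (capacity - self.target_p) ** 2              # per-layer capacity penalty
        return mask, aux_loss

class SkipLayer(nn.Module):
    def __init__(self, layer, d_model, target_p=0.5):
        super().__init__()
        self.layer = layer
        self.router = SkipRouter(d_model, target_p)

    def forward(self, x):
        mask, aux_loss = self.router(x)
        m = mask.unsqueeze(-1)                                  # (B, T, 1)
        out = self.layer(x) * m + x * (1.0 - m)                 # compute or pass through
        return out, aux_loss

The overall loss would then combine the negative log-likelihood with the sum of the per-layer auxiliary terms, weighted by lambda. Note that this dense formulation still applies the wrapped layer to every token and masks the result afterwards, which corresponds to the basic masking implementation discussed next; the gather/scatter variant avoids that wasted computation.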
The basic implementation is to first apply the given layer logic in Figure 1(a) to the entire batch and then multiply the output batch with the mask given by Equation 3 so that the skipped tokens will not be used in the layer. However, this masking mechanism is computational expensive since we should not spend the same computations on the skipped inputs as those on the non-skipped ones especially when the skip ratio is high. based on dynamic gather and scatter. The idea is illustrated in Figure 2 where we focus on the sparse computation of a FFN layer since it is a widely used component and often computationally intensive. The overall algorithm includes three major steps." }, { "figure_ref": [], "heading": "We thus develop an efficient SkipLayer implementation", "publication_ref": [], "table_ref": [], "text": "1. All inputs are marked as skip or non-skip based on results of the router in Equation 3.\n2. All non-skipped inputs (in the blue rectangles) are gathered and evenly partitioned into groups. Although each group will be fed into the FFN for computation sequentially, all the elements in the same group will be gathered, computed, and scattered in parallel.\n3. The compute results from the non-skipped inputs will be scattered back to the final outputs of this FFN layer, while the skipped inputs will be directly written into the final outputs without any computation.\nThe group size (the number of inputs in a group), denoted as Gsize, is a hyper-parameter that controls how many tokens will be processed by the FFN in parallel. Because the number of non-skipped inputs in a batch is dynamic and unknown in advance, Gsize affects the training efficiency. When Gsize is too large, e.g., there is only one single group, this group may include too many skipped inputs, leading to sub-optimal performance. When Gsize is too small, it will produce too many groups of small size, and the computation will be close to being sequential. Thus, there will be little parallelism, and the overheads maybe even larger than the basic masking implementation. In practice, we often set Gsize ∝ P • BT where P is the target probability, B is the batch size, and T is the sequence length. " }, { "figure_ref": [], "heading": "Algorithm 1 Forward pass of SkipLayer", "publication_ref": [], "table_ref": [], "text": "Data: A batch of tokens X ∈ R B×T ×d , target probability P .\nGet the mask M by Equation 3.\nGet the key and value projection\nK ← Fkey(X), V ← Fval(X). for b ≤ B, t ≤ T do if M [b, t] = 1 then\nGet the query projection q ← Fquery(X[b, t])\nx ′ ← FAttn (FLN(X[b, t])|K, q, V ) • M [b, t] + X[b, t] XSL[b, t] ← FFFN (FLN(x ′ )) + x ′ else XSL[b, t] ← X[b, t]) • (1 -M [b, t]) end end ℓaux ← ( b,t M [b, t]/(B • T ) -P ) 2 return XSL, ℓaux" }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "SkipLayer for Transformer-based Models", "publication_ref": [], "table_ref": [], "text": "In this section, we focus in particular on applying SkipLayer to Transformer-based decoder-only language models in the setup of in-context learning. A Transformer layer mainly includes the self-attention, layer normalization, and FFN as the sub-layers, and can be represented as\nX l ′ = F Attn F LN (X l ) + X l , X l+1 = F FFN F LN (X l ′ ) + X l ′ .\n(5)\nSkipLayer can be applied to a single Transformer layer shown in Figure 3. We propose to wrap the entire Transformer layer into a SkipLayer to preserve the atomicity of the self-attention (F Attn ) → FFN (F FFN ) structure. 
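Since the FFN sub-layer is where most of the savings come from, a hedged sketch of the gather, grouped compute, and scatter path described above is shown below. It uses dynamic shapes for readability, whereas a TPU implementation would pad each group to a static size; the helper name, the toy FFN and the group size are illustrative.

import torch

def skip_ffn_gather_scatter(x, mask, ffn, group_size=1024):
    # x:    (N, d) flattened tokens (N = B * T)
    # mask: (N,)   1.0 for tokens routed to the FFN, 0.0 for skipped tokens
    # ffn:  callable mapping (n, d) -> (n, d)
    out = x.clone()                                            # skipped tokens pass through
    idx = torch.nonzero(mask, as_tuple=False).squeeze(-1)      # indices of non-skipped tokens
    if idx.numel() == 0:
        return out
    gathered = x[idx]                                          # gather non-skipped tokens
    chunks = []
    for start in range(0, gathered.shape[0], group_size):
        # Groups could be processed in parallel on different cores; the loop is
        # only for clarity here.
        chunks.append(ffn(gathered[start:start + group_size]))
    out[idx] = torch.cat(chunks, dim=0)                        # scatter results back
    return out

# Toy usage with roughly a quarter of the tokens routed to the FFN.
d = 8
ffn = torch.nn.Sequential(torch.nn.Linear(d, 4 * d), torch.nn.ReLU(), torch.nn.Linear(4 * d, d))
x = torch.randn(16, d)
mask = (torch.rand(16) < 0.25).float()
y = skip_ffn_gather_scatter(x, mask, ffn, group_size=4)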
However, the F Attn layer and the F FFN layer have slightly different skipping implementations. The F FFN layer is often the most computationally intensive component of a Transformer model, but can be applied to a batch of tokens in a pointwise manner. Therefore, each input token of a batch can activate the F FFN layer independently with the probability P . Because F FFN consumes most of the computation in a Transformer layer, we can thus have big savings in FLOPs when the activation probability P is small.\nThe self-attention layer F Attn consumes much less computation relative to the F FFN layer. However, F Attn cannot apply to a batch of tokens in the pointwise way because tokens need to attend to each other to compute their own attention output. If most tokens in a batch are skipped when P is small, the left non-skipped tokens will lose most of the context of the respective sequences they belong to. We also empirically observe lower predictive quality when this simple skipping mechanism is applied. Therefore, we propose the following partial skipping mechanism. As shown in Figure 3, when the input tokens are skipped, their key and value projections are still preserved (Line 2 in Algorithm 1) since they are part of the context and are needed for the rest non-skipped tokens to further attend to. However, because the skipped tokens do not require attention calculations by themselves, we can still omit their query projections. It is worth to mention that Line 3-11 in Algorithm 1 can be computed in parallel using our efficient implementation in Section 2.2 to increase training speed.\nAlgorithm 2 shows the greedy decoding logic of one SkipLayer-based Transformer layer. The router makes the skipping decision by picking the most likely outcome. The key and value projections will be computed and saved in the decoding cache (Line 2). Only when the router activates the current layer, the query projection will be computed for current token, and then the decoding cache K and V which contain the key and value projections of previous decoding steps will be used to compute the self attention. Otherwise, there will be no further computations from this layer.\nAlgorithm 2 SkipLayer per decoding step\nData: The current state x ∈ R d , key and value cache K, V 1 m = G(x|WG) = argmax x ⊤ WG 2 K ← Fkey(x), V ← Fval(x) 3 if m = 1 then 4\nGet the query projection q ← Fquery(x)\n5 x ′ ← FAttn (FLN(x)|K, q, V ) + x 6 xSL ← FFFN (FLN(x ′ )) + x ′ 7 else 8 xSL ← x 9 end 10 return: xSL" }, { "figure_ref": [], "heading": "Experiment Setup", "publication_ref": [ "b9", "b19", "b26", "b26", "b18", "b34" ], "table_ref": [ "tab_0" ], "text": "We focus on training decoder-only language models. This section elaborates our training setup, hyperparameters, base- Dataset. The pretraining dataset has 1.6 trillion tokens that are representative of a wide range of natural language use cases. An in-house classifier is trained to classify between a collection of curated text and other web pages so that we are able to estimate the content quality of a webpage.\nA high-quality filtered subset of webpages are combined with books, Wikipedia pages, conversations, forums, and news to create the final dataset which is the same as (Du et al., 2022) for training.\nModel Training. We have trained several variants of SkipLayer-based models and baselines shown in Table 1.\nThe model dimension of all the models is 1,536 and the hidden dimension of the FFN has 8× of the model dimension.\nThe hidden dim of each attention head is 64. 
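As a reference for the decoding behaviour of Algorithm 2 above, the per-step logic can be summarized in a short sketch: the key and value projections are always computed and appended to the cache, while the query projection, attention and FFN run only when the router activates the layer. The function below is an illustrative simplification (layer-norm placement is condensed and all argument names are our own), not the actual serving code.

import torch

def skiplayer_decode_step(x, w_g, k_proj, v_proj, q_proj, attend, ffn, ln, cache):
    # x:     (d,) state of the current token at this layer
    # w_g:   (d, 2) router weight
    # cache: dict holding "k" and "v" tensors of shape (t, d); initialise them
    #        as torch.zeros(0, d) before the first decoding step.
    # Key/value projections are always computed so that later (non-skipped)
    # tokens can still attend to this position.
    cache["k"] = torch.cat([cache["k"], k_proj(x).unsqueeze(0)], dim=0)
    cache["v"] = torch.cat([cache["v"], v_proj(x).unsqueeze(0)], dim=0)

    # At decoding time the router picks the most likely outcome (argmax).
    if torch.argmax(x @ w_g).item() == 1:
        q = q_proj(x)
        x_attn = attend(q, cache["k"], cache["v"]) + x         # self-attention + residual
        return ffn(ln(x_attn)) + x_attn                        # FFN + residual
    return x                                                   # layer skipped entirely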
n params is the total number of trainable model parameters, L is the total number of Transformer layers, P is the probability of activating a layer, and Eff-L is the effective number of layers activated on average. The sequence length is set to 1,024 tokens during training, and the batch size includes 256 sequences. We set the learning rate to 0.1 for the first 10K training steps and then decay following an inverse square root schedule. We use Adafactor optimizer with first-moment decay β 1 = 0 and second-moment decay β 2 = 0.99. The dropout rate is set to 0 as the processed tokens are token from an extremely large training corpus. We use the SentencePiece (Kudo & Richardson, 2018) subword tokenizer with a vocabulary of size of 32K. During training, we use float32 for model weights and bfloat16 for activations. Aside from the general negative log-likelihood loss, we add the auxiliary loss discussed in Section 2.1 to control the skip ratio of our SkipLayer model, the auxiliary loss weight λ is set to 0.1. Finally, Gsize is set to 1,024 for all SkipLayer-based models.\nModel Evaluation. To directly evaluate the effectiveness of SkipLayer-based models, we mainly follow the 1-shot learning protocol suggested by (Radford et al., 2018), which is widely used for evaluating the generalization quality of pre-trained language models. We evaluate each example in the development set of a benchmark. For each benchmark, only one example will be randomly drawn from that task's training set as the only demonstration and context, which will be then concatenated with the evaluation example with two newlines in between, and then fed into the model. We use exactly the same prompting format as (Radford et al., 2018) for each downstream benchmark.\nBenchmarks. We use 24 datasets including four natural language generative (NLG) tasks and 20 natural language understanding (NLU) tasks for evaluations. For NLG tasks, we compare the decoded sequence of tokens by the models to the ground-truth and report the Exact Match (EM) accuracy. These tasks are TriviaQA, NQS, WebQS, and SQuADv2. Greedy decoding is used for each task. All NLU tasks are formulated into the form of selecting one correct answer from multiple candidate options. The prediction is based on the maximum log-likelihood of each option given the context log P (option|context) normalized by the token length of each option. These NLU tasks include ANLI (R1, R2, R3), ARC (Easy, Challenge), BoolQ, CB, COPA, Hellaswag, Openbookqa, PIQA, Race (middle, high), ReCord, RTE, Storycloze, WIC, Winograd, Winogrande and WSC273. Finally, we use the average of the scores across all datasets to report the overall 1-shot performance of models on both NLG and NLU tasks.\nBaselines. We consider the following baselines to study the effectiveness of SkipLayer-based models.\n• Standard base model (STD). The standard Transformer dense models without any skipping operations.\n• WideFFN. Because the FFN often consumes the major computation and has big impact to the predictive performance (Kocsis et al., 2022), WideFFN is thus designed to further double the hidden dimension of the FFN component. We then apply SkipLayer only to the FFN component without changing the total number of layers. 
As a consequence, when the skipping probability P = 50%, the compute FLOPs per prediction does not change due to the 2x FFN size.\n• HighwayNet (Srivastava et al., 2015) is among the first few works that propose to learn a gating function that could help to train very deep networks efficiently.\nWe also apply this idea to the FFN component of the Transformer layer as one baseline.\n• Random Gating (Rand). Random gating method is the baseline where the learned gating function of a SkipLayer model is replaced by a pure random function without learning. This baseline is designed to evaluate the importance of the learned gating function to a model's predictive performance." }, { "figure_ref": [ "fig_6", "fig_4", "fig_4", "fig_6", "fig_6", "fig_4", "fig_7" ], "heading": "Full Results", "publication_ref": [], "table_ref": [], "text": "We have conducted comprehensive evaluation of the effectiveness of SkipLayer and report the quantitative results compared to different baselines in this section.\nHow does SkipLayer perform in 1-shot learning? SkipLayer allows each input to selectively activate a particular layer depending on the context. By keeping the average number of activated layers constant while increasing the total number of layers of a language model, we expect that the increased model capacity can improve the predictive quality of few-shot learning. Thus, in Figure 4(a-b), we first report the average 1-shot performance of different SkipLayer models. In Figure 4(a), the y-axis is the average 1-shot performance of all the NLG tasks, and the x-axis (log-scale) is the compute FLOPs per token during the forward-pass. The black line first shows the performance of the standard baseline models of 6, 12, 24 and 48 layers, respectively. For each baseline model of a particular number of layers, e.g., 6L, we report the performance of the respective SkipLayer models of comparable compute FLOPs. For example, the three yellow dots represent the SkipLayer models of 12 layers (12L) with 50% density, 24 layers (24L) with 25% density, and 48 layers (48L) with 12.5% density, respectively. Compared to the 6L baseline model, all these three SkipLayer models (in yellow) have the same average number of activated layers of six (Effective number of layers) denoted as Eff06. Similarly, the orange line (Eff12) and red line (Eff24) denote the SkipLayer models with 12 and 24 average number of activated layers, accordingly.\nIn all the cases, it first shows that as the SkipLayer models become deeper and sparser (keeping the same activated number of layers), it keeps improving the few-shot learning performance at the modest cost of increased FLOPs per token prediction. For instance, SkipLayer (24L, 25%) has 51.5% performance gain at the cost of 18.2% increased compute FLOPs compared to the 6L baseline. Similarly, SkipLayer (48L, 25%) and SkipLayer (96L, 25%) have 28% and 22% at the cost of 19.2% and 20% compared to the 12L and 24L baselines, respectively. Moreover, in Figure 4(a) we can also observe that even though SkipLayer (48L, 12.5%) has less compute FLOPs compared to the 12L baseline, it has achieved pretty close 1-shot performance. Similarly, SkipLayer (96L, 12.5%) has even better 1-shot performance compared to the 24L baseline by using less compute FLOPs. This verifies that we are able to trade off model capacity for better predictive quality. 
Likewise, Figure 4(b) shows similar patterns that increased model capacity can lead to better predictive quality across the NLU tasks while at the cost of modest increased FLOPs. Does SkipLayer decode and train efficiently? As shown in Figure 4(a-b), deeper and sparser SkipLayer models have consistent performance improvement in few-shot learning. We are also interested in studying if they are able to decode and train fast. Figure 4(c) compares the decoding time per token of different models using a single TPU v3 chip. It shows that SkipLayer (12, 50%) has nearly the same per-step decoding time as the baseline 6L. SkipLayer (24, 50%) also has similar speed as the baseline 12L. SkipLayer (48, 50%) has 8% 1-shot performance gain at the cost of 6% increase in the per-step decoding time compared to the baseline 24L.\nAs the models become deeper, the per-step decoding time also increases. However, we may find some good tradeoff between quality and speed. For example, SkipLayer (96, 25%) has achieved 20% decoding efficiency with a tiny quality loss of only 0.5% compared to the baseline 48L. When all the models become deeper in Figure 5(b), the standard baseline 12L scales better than WideFFN, HighwayNet and the Random gating method. However, in both cases, SkipLayer based models have achieved the best performance compared to all the other baselines. HighwayNet performs the worst among all the models we trained, one possible reason could be Highway network is designed for FFN only architectures, it works like a weighted residual branch added to the wrapped layer, which helps the training stability of very deep FFN only networks. However, residual branch is quite normal in today's Transformer models, self-attention and FFN both have residuals. In our implementation, we replace the original residual with the highway network residual which leads to performance degradation. Figure 5(c-d) further compare the average 1-shot performance of different methods across all the NLU tasks. With a similar trend observed before, WideFFN has a scaling performance closer to the standard baseline models of 12 layers, both of which significantly outperform the HighwayNet and the Random gating method, and SkipLayer based models lead to the best performance overall when the models scale up.\nDoes the learned gating matter? The gating function inside a SkipLayer enables each input example to activate a subset of layers of the model based on the context. We have already observed that this flexibility of switching model parameters improve the predictive accuracy of the model when it becomes deeper and sparser. We wonder how much gain the learned routing function can contribute to the predictive performance of the SkipLayer-based models. We approach this question by comparing the SkipLayer-based models with the respective Random gating baselines where the gating is not learned by varying the density of the model side-by-side in Figure 6. Because each token activates a layer randomly and independently, Random gating baselines have the similar compute FLOPs per token prediction as the SkipLayer-based models given the same model density. However, as shown in Figure 6, for the same model density, the learned gating of SkipLayer-based models performs significantly better than the respective methods using random gating. 
Even though Random gating baselines also have the flexibility of switching model parameters per token prediction, Figure 6 verifies that the learned gating functions are much more effective at improving prediction accuracy. To understand what kind of tokens skip more layers than others during greedy decoding, we collected the skipping statistics of the decoding results of 500 sampled questions in TriviaQA using our SkipLayer (12L, 50%) model. In Figure 7, we plot a bubble chart of 300 frequently used tokens in the decoding results according to their average number of skipped layers (one token may have different skipping patterns under different contexts). Larger dots in the figure indicate that more layers are skipped. We can observe that the tokens that skip the most are mainly functional words like \"and\", \"to\", \"ed\" or \"ing\". These tokens can be easily inferred from the previous context and thus do not need much computation to decode. In contrast, tokens that skip fewer layers are usually independent words like \"No\" or \"Paris\". This shows that our SkipLayer model can successfully identify such tokens and assign the appropriate amount of computation accordingly." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b4", "b39", "b41", "b31", "b24", "b7", "b10", "b30", "b34", "b40", "b2", "b20", "b0", "b32", "b15", "b21", "b13", "b9", "b1", "b23", "b43", "b28", "b36", "b14", "b8", "b12", "b42", "b12" ], "table_ref": [], "text": "Conditional Computation (Bengio et al., 2013;2015) is a paradigm where only a subset of model parameters is activated per input example. Early-exit is one kind of implementation of conditional computation, where external classifiers equipped with confidence-based thresholding are used to exit early without going through the whole stack of layers (Wang et al., 2017;Xin et al., 2020;Schwartz et al., 2020;Liu et al., 2020;Dabre et al., 2020;Elbayad et al., 2020;Schuster et al., 2022). Unlike these approaches, where computation is always activated for the bottom layers, SkipLayer-based models allow each input to explore 2^L different compute paths for a model with L stacked layers.\nAn alternative technique is to enable the model to 'learn' how to activate its different sub-layers. Due to the discreteness of the activation decisions, soft approximations and RL-based implementations have been explored in the vision (Srivastava et al., 2015;Wang et al., 2018) and NLP (Bapna et al., 2020) communities. Our approach is closer to this second, learning-based approach but differs in that SkipLayer does not require the soft approximation during the forward pass, giving computational savings not just in inference but also during training. This difference is crucial since pretraining language models is often time consuming and costly.\nAdditionally, concurrent works such as CODA (Lei et al., 2023) and CoLT5 (Ainslie et al., 2023) have applied a similar token selection method to activate Transformer layers. However, these works only apply conditional activation in the encoder layers of an encoder-decoder model such as T5.\nMixture-of-Experts models have recently been proposed to improve model efficiency (Shazeer et al., 2017;Gross et al., 2017;Lepikhin et al., 2021;Fedus et al., 2021;Roller et al., 2021;Du et al., 2022;Artetxe et al., 2021;Lewis et al., 2021;Zhou et al., 2022;Rajbhandari et al., 2022) by sparsely activating a subset of experts in an MoE layer.\nOur approach is orthogonal to MoE models in that an MoE layer can be easily wrapped by SkipLayer for additional efficiency.
Moreover, SkipLayer can apply conditional computation to both the self-attention and the Feed-Forward (FFN) component of a Transformer layer, whereas MoE models mainly focus on conditionally activating the FFN component in a MoE layer.\nStructural Dropout (Tompson et al., 2015;Ghiasi et al., 2018;Dai et al., 2019;Fan et al., 2019;Zeng et al., 2021) randomly drops a group of weights, e.g., a layer (Fan et al., 2019), during training to achieve better generalization and robustness for pruning during inference. However, the amount of computation during inference is still uniform per example. In contrast, SkipLayer-based models learn the skipping patterns from the data which shows better performance than the random skipping baseline, and potentially assign non-uniform amount of computation to each example during inference." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "We propose a new general method named SkipLayer for dynamically skipping the execution of arbitrary layers based on the input context using a simple routing algorithm. This method enables heterogeneous computation for tokens at different complexity or importance so that more computation resources can be used for improving the predictive quality of harder tokens. Our model demonstrates significant 1-shot performance improvement across 24 NLP tasks compared to other competitive baselines with only a small extra cost for inference." } ]
Overparameterized large-scale language models have impressive generalization performance in in-context few-shot learning. However, most language models allocate the same amount of parameters or computation to each token, disregarding the complexity or importance of the input data. We argue that in language model pretraining, a variable amount of computation should be assigned to different tokens, and this can be efficiently achieved via a simple routing mechanism. Different from conventional early-stopping techniques, where tokens can only exit early at the lower layers, we propose a more general method that dynamically skips the execution of a layer (or module) for any input token with a binary router. In our extensive evaluation across 24 NLP tasks, we demonstrate that the proposed method can significantly improve the 1-shot performance compared to other competitive baselines at only a mild extra cost for inference.
Learning to Skip for Language Modeling
[ { "figure_caption": "Figure 1 .1Figure 1. (a) Overview of our SkipLayer framework. The router can choose to activate or skip the embeded layer logic based on the input context. (b) Straight-Through Gumbel-Softmax is used for the router. In the forward pass, binary variables are sampled. During the backward pass, gradients can be backpropagated to update the router.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "FFNFigure 2 .2Figure 2. Illustration of efficient SkipLayer implementation. We focus on the sparse computation of FFN, non-skip tokens are gathered based on indices generated by the router and then fed into the FFN as groups. Gsize is a hyper-parameter that controls the group size. The results are scattered to the final output.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Overview of our SkipLayer for Transformer-based models, example of a single layer. LN is the layer normalization layer, query, key and value refers to the computation of query, key and value projections in the self-attention layer. Attn is the attention computation. Residual connections in the self attention layer and FFN layer are ignored for simplicity.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Average 1-shot performance of different SkipLayer-based models for comparable effective FLOPs per token prediction over the NLG tasks (a) and NLU tasks (b). (c) Comparisons of the decoding time per token between the SkipLayer-based models (SL) and the respective baseline models (STD). (d) Comparisons of the training speed among models of different density.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .Figure 6 .56Figure 5. Average 1-shot NLG and NLU performance of different methods with 6 (a-b) and 12 (c-d) effective number of activated layers, respectively.", "figure_data": "", "figure_id": "fig_5", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Figure 44Figure 4(d) further shows the training speed on a single TPU v4 of different SkipLayer models. In general, the training speed decreases as more FLOPs are used per prediction.Compared to the respective full dense baselines, SkipLayer (24, 50%) has 18% speed gains, and SkipLayer (48, 12.5%) has 3x gains.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Bubble chart of the skipping behaviour of tokens during greedy decoding on TriviaQA dataset using our SkipLayer (12L, 50%) model. Each dot represents a token, larger dot size means more layers are skipped. Black/Red texts show some tokens that skip the most/least layers.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Architectures and sizes of the models trained in our experiments. 
All trained model share the same learning hyperparameters.", "figure_data": "Modeln params PLEff-LStandard (6L)408M 06SkipLayer (12, 50%) SkipLayer (24, 25%)766M 50% 1.47B 25%12 246SkipLayer (48, 12.5%) 2.92B 12.5% 48Standard (12L)766M 012SkipLayer (24, 50%) SkipLayer (48, 25%)1.47B 50% 2.92B 25%24 4812SkipLayer (96, 12.5%) 5.79B 12.5% 96Standard (24L)1.47B 024SkipLayer (48, 50%)2.92B 50%4824SkipLayer (96, 25%)5.79B 25%96lines, benchmarks, and evaluation protocol.", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Dewen Zeng; Nan Du; Tao Wang; Yuanzhong Xu; Tao Lei; Zhifeng Chen; Claire Cui
[ { "authors": "J Ainslie; T Lei; M De Jong; S Ontañón; S Brahma; Y Zemlyanskiy; D Uthus; M Guo; J Lee-Thorp; Y Tay", "journal": "", "ref_id": "b0", "title": "Colt5: Faster long-range transformers with conditional computation", "year": "2023" }, { "authors": "M Artetxe; S Bhosale; N Goyal; T Mihaylov; M Ott; S Shleifer; X V Lin; J Du; S Iyer; R Pasunuru; G Anantharaman; X Li; S Chen; H Akin; M Baines; L Martin; X Zhou; P S Koura; B O'horo; J Wang; L Zettlemoyer; M Diab; Z Kozareva; V Stoyanov", "journal": "", "ref_id": "b1", "title": "Efficient large scale language modeling with mixtures of experts", "year": "2021" }, { "authors": "A Bapna; N Arivazhagan; O Firat", "journal": "", "ref_id": "b2", "title": "Controlling computation versus quality for neural sequence models", "year": "2020" }, { "authors": "E Bengio; P Bacon; J Pineau; D Precup", "journal": "", "ref_id": "b3", "title": "Conditional computation in neural networks for faster models", "year": "2015" }, { "authors": "Y Bengio; N Léonard; A C Courville", "journal": "", "ref_id": "b4", "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "year": "2013" }, { "authors": "T Brown; B Mann; N Ryder; M Subbiah; J D Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell; S Agarwal; A Herbert-Voss; G Krueger; T Henighan; R Child; A Ramesh; D Ziegler; J Wu; C Winter; C Hesse; M Chen; E Sigler; M Litwin; S Gray; B Chess; J Clark; C Berner; S Mccandlish; A Radford; I Sutskever; D Amodei", "journal": "", "ref_id": "b5", "title": "Language models are few-shot learners", "year": "" }, { "authors": "A Chowdhery; S Narang; J Devlin; M Bosma; G Mishra; A Roberts; P Barham; H W Chung; C Sutton; S Gehrmann; P Schuh; K Shi; S Tsvyashchenko; J Maynez; A Rao; P Barnes; Y Tay; N Shazeer; V Prabhakaran; E Reif; N Du; B Hutchinson; R Pope; J Bradbury; J Austin; M Isard; G Gur-Ari; P Yin; T Duke; A Levskaya; S Ghemawat; S Dev; H Michalewski; X Garcia; V Misra; K Robinson; L Fedus; D Zhou; D Ippolito; D Luan; H Lim; B Zoph; A Spiridonov; R Sepassi; D Dohan; S Agrawal; M Omernick; A M Dai; T S Pillai; M Pellat; A Lewkowycz; E Moreira; R Child; O Polozov; K Lee; Z Zhou; X Wang; B Saeta; M Diaz; O Firat; M Catasta; J Wei; K Meier-Hellstern; D Eck; J Dean; S Petrov; N Fiedel", "journal": "", "ref_id": "b6", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "R Dabre; R Rubino; A Fujita", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Balancing cost and benefit with tied-multi transformers", "year": "2020" }, { "authors": "Z Dai; M Chen; X Gu; S Zhu; P Tan", "journal": "", "ref_id": "b8", "title": "Batch dropblock network for person re-identification and beyond", "year": "2019" }, { "authors": "N Du; Y Huang; A M Dai; S Tong; D Lepikhin; Y Xu; M Krikun; Y Zhou; A W Yu; O Firat; B Zoph; L Fedus; M P Bosma; Z Zhou; T Wang; E Wang; K Webster; M Pellat; K Robinson; K Meier-Hellstern; T Duke; L Dixon; K Zhang; Q Le; Y Wu; Z Chen; C Cui; Glam", "journal": "PMLR", "ref_id": "b9", "title": "Efficient scaling of language models with mixture-of-experts", "year": "2022-07-23" }, { "authors": "M Elbayad; J Gu; E Grave; M Auli", "journal": "", "ref_id": "b10", "title": "Depth-adaptive transformer", "year": "2020" }, { "authors": " Openreview", "journal": "", "ref_id": "b11", "title": "", "year": "2020" }, { "authors": "A Fan; E Grave; A Joulin", "journal": "", "ref_id": "b12", "title": "Reducing transformer depth on demand with structured dropout", 
"year": "2019" }, { "authors": "W Fedus; B Zoph; N Shazeer", "journal": "", "ref_id": "b13", "title": "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity", "year": "2021" }, { "authors": "G Ghiasi; T.-Y Lin; Q V Le", "journal": "Advances in neural information processing systems", "ref_id": "b14", "title": "Dropblock: A regularization method for convolutional networks", "year": "2018" }, { "authors": "S Gross; M Ranzato; A Szlam", "journal": "", "ref_id": "b15", "title": "Hard mixtures of experts for large scale weakly supervised vision", "year": "2017" }, { "authors": "J Hoffmann; S Borgeaud; A Mensch; E Buchatskaya; T Cai; E Rutherford; D De Las Casas; L A Hendricks; J Welbl; A Clark; T Hennigan; E Noland; K Millican; G Van Den Driessche; B Damoc; A Guy; S Osindero; K Simonyan; E Elsen; O Vinyals; J W Rae; L Sifre", "journal": "", "ref_id": "b16", "title": "An empirical analysis of compute-optimal large language model training", "year": "2022" }, { "authors": "N P Jouppi; C Young; N Patil; D Patterson; G Agrawal; R Bajwa; S Bates; S Bhatia; N Boden; A Borchers", "journal": "", "ref_id": "b17", "title": "In-datacenter performance analysis of a tensor processing unit", "year": "2017" }, { "authors": "P Kocsis; P Súkeník; G Brasó; M Nießner; L Leal-Taixé; I Elezi", "journal": "", "ref_id": "b18", "title": "The unreasonable effectiveness of fullyconnected layers for low-data regimes", "year": "2022" }, { "authors": "T Kudo; J Richardson", "journal": "", "ref_id": "b19", "title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018" }, { "authors": "T Lei; J Bai; S Brahma; J Ainslie; K Lee; Y Zhou; N Du; V Y Zhao; Y Wu; B Li; Y Zhang; M.-W Chang", "journal": "", "ref_id": "b20", "title": "Conditional adapters: Parameter-efficient transfer learning with fast inference", "year": "2023" }, { "authors": "D Lepikhin; H Lee; Y Xu; D Chen; O Firat; Y Huang; M Krikun; N Shazeer; Z Chen", "journal": "", "ref_id": "b21", "title": "GShard: Scaling giant models with conditional computation and automatic sharding", "year": "2021" }, { "authors": "R Levy", "journal": "Cognition", "ref_id": "b22", "title": "Expectation-based syntactic comprehension", "year": "2008" }, { "authors": "M Lewis; S Bhosale; T Dettmers; N Goyal; L Zettlemoyer", "journal": "", "ref_id": "b23", "title": "Base layers: Simplifying training of large, sparse models", "year": "2021" }, { "authors": "W Liu; P Zhou; Z Wang; Z Zhao; H Deng; Q Ju", "journal": "", "ref_id": "b24", "title": "FastBERT: a self-distilling BERT with adaptive inference time", "year": "2020-07" }, { "authors": "D Patterson; J Gonzalez; Q Le; C Liang; L.-M Munguia; D Rothchild; D So; M Texier; J Dean", "journal": "", "ref_id": "b25", "title": "Carbon emissions and large neural network training", "year": "2021" }, { "authors": "A Radford; J Wu; R Child; D Luan; D Amodei; I Sutskever", "journal": "", "ref_id": "b26", "title": "Language models are unsupervised multitask learners", "year": "2018" }, { "authors": "J W Rae; S Borgeaud; T Cai; K Millican; J Hoffmann; H F Song; J Aslanides; S Henderson; R Ring; S Young; E Rutherford; T Hennigan; J Menick; A Cassirer; R Powell; G Van Den Driessche; L A Hendricks; M Rauh; P Huang; A Glaese; J Welbl; S Dathathri; S Huang; J Uesato; J Mellor; I Higgins; A Creswell; N Mcaleese; A Wu; E Elsen; S M Jayakumar; E Buchatskaya; D Budden; E Sutherland; K Simonyan; M Paganini; L Sifre; L Martens; X L Li; A Kuncoro; A 
Nematzadeh; E Gribovskaya; D Donato; A Lazaridou; A Mensch; J Lespiau; M Tsimpoukelli; N Grigorev; D Fritz; T Sottiaux; M Pajarskas; T Pohlen; Z Gong; D Toyama; C De Masson D'autume; Y Li; T Terzi; V Mikulik; I Babuschkin; A Clark; D De Las Casas; A Guy; C Jones; J Bradbury; M Johnson; B A Hechtman; L Weidinger; I Gabriel; W S Isaac; E Lockhart; S Osindero; L Rimell; C Dyer; O Vinyals; K Ayoub; J Stanway; L Bennett; D Hassabis; K Kavukcuoglu; G Irving", "journal": "", "ref_id": "b27", "title": "Scaling language models: Methods, analysis & insights from training gopher", "year": "2021" }, { "authors": "S Rajbhandari; C Li; Z Yao; M Zhang; R Y Aminabadi; A A Awan; J Rasley; Y He; Deepspeed-Moe", "journal": "PMLR", "ref_id": "b28", "title": "Advancing mixture-of-experts inference and training to power next-generation AI scale", "year": "2022-07-23" }, { "authors": "S Roller; S Sukhbaatar; J Weston", "journal": "", "ref_id": "b29", "title": "Hash layers for large sparse models", "year": "" }, { "authors": "T Schuster; A Fisch; J P Gupta; M Dehghani; D Bahri; V Q Tran; Y Tay; D Metzler", "journal": "", "ref_id": "b30", "title": "Confident adaptive language modeling", "year": "2022" }, { "authors": "R Schwartz; G Stanovsky; S Swayamdipta; J Dodge; N A Smith", "journal": "", "ref_id": "b31", "title": "The right tool for the job: Matching model and instance complexities", "year": "2020-07" }, { "authors": "N Shazeer; A Mirhoseini; K Maziarz; A Davis; Q V Le; G E Hinton; J Dean", "journal": "", "ref_id": "b32", "title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer", "year": "2017" }, { "authors": "M Shoeybi; M Patwary; R Puri; P Legresley; J Casper; B Catanzaro", "journal": "", "ref_id": "b33", "title": "Megatron-lm: Training multi-billion parameter language models using gpu model parallelism", "year": "2019" }, { "authors": "R K Srivastava; K Greff; J Schmidhuber; Highway; Networks", "journal": "", "ref_id": "b34", "title": "", "year": "2015" }, { "authors": "K E Stanovich; R F West", "journal": "Behavioral and Brain Sciences", "ref_id": "b35", "title": "Individual differences in reasoning: Implications for the rationality debate?", "year": "2000" }, { "authors": "J Tompson; R Goroshin; A Jain; Y Lecun; C Bregler", "journal": "", "ref_id": "b36", "title": "Efficient object localization using convolutional networks", "year": "2015" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; L U Kaiser; I Polosukhin", "journal": "", "ref_id": "b37", "title": "Attention is all you need", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b38", "title": "", "year": "2017" }, { "authors": "X Wang; Y Luo; D Crankshaw; A Tumanov; J E Gonzalez", "journal": "", "ref_id": "b39", "title": "IDK cascades: Fast deep learning by learning not to overthink", "year": "2017" }, { "authors": "X Wang; F Yu; Z Dou; T Darrell; J E Gonzalez", "journal": "Springer", "ref_id": "b40", "title": "Skipnet: Learning dynamic routing in convolutional networks", "year": "2018" }, { "authors": "J Xin; R Tang; J Lee; Y Yu; J Lin; Deebert", "journal": "", "ref_id": "b41", "title": "Dynamic early exiting for accelerating bert inference", "year": "2020" }, { "authors": "Y Zeng; T Dai; B Chen; S.-T Xia; J Lu", "journal": "Pattern Recognition", "ref_id": "b42", "title": "Correlation-based structural dropout for convolutional neural networks", "year": "2021" }, { "authors": "Y Zhou; T Lei; H Liu; N Du; Y Huang; V Zhao; A Dai; Z Chen; Q Le; J Laudon", 
"journal": "", "ref_id": "b43", "title": "Mixtureof-experts with expert choice routing", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 66.95, 554.97, 119.85, 12.52 ], "formula_id": "formula_0", "formula_text": "X o FFN = F FFN (X|{W i , W o })" }, { "formula_coordinates": [ 2, 56.17, 616.88, 233.94, 26.97 ], "formula_id": "formula_1", "formula_text": "X o SL = F SL F [layer] (X)|W G (1) = F [layer] (X) ⊙ G(X|W G ) + X ⊙ (1 -G(X|W G ))," }, { "formula_coordinates": [ 2, 307.44, 68.64, 213.16, 44.95 ], "formula_id": "formula_2", "formula_text": "X[b, t] ∈ R d , b ≤ B, t ≤ T , we have that X o SL [b, t] = F [layer] (X[b, t]) , if G(X[b, t]) = 1. X[b, t]," }, { "formula_coordinates": [ 2, 334.22, 277.3, 207.89, 11.72 ], "formula_id": "formula_3", "formula_text": "M = G(X|W G ) ∈ {0, 1} B×T , W G ∈ R d×2 .(3)" }, { "formula_coordinates": [ 2, 312.01, 488.75, 230.1, 37.57 ], "formula_id": "formula_4", "formula_text": "X o SL [b, t] = g[1] • F [layer] (X[b, t]) , if argmax g = 1. g[0] • X[b, t], otherwise,(4)" }, { "formula_coordinates": [ 4, 55.44, 369.92, 229.2, 27.92 ], "formula_id": "formula_5", "formula_text": "K ← Fkey(X), V ← Fval(X). for b ≤ B, t ≤ T do if M [b, t] = 1 then" }, { "formula_coordinates": [ 4, 55.44, 409.16, 224.6, 86.24 ], "formula_id": "formula_6", "formula_text": "x ′ ← FAttn (FLN(X[b, t])|K, q, V ) • M [b, t] + X[b, t] XSL[b, t] ← FFFN (FLN(x ′ )) + x ′ else XSL[b, t] ← X[b, t]) • (1 -M [b, t]) end end ℓaux ← ( b,t M [b, t]/(B • T ) -P ) 2 return XSL, ℓaux" }, { "formula_coordinates": [ 4, 103.75, 603.17, 137.39, 33.29 ], "formula_id": "formula_7", "formula_text": "X l ′ = F Attn F LN (X l ) + X l , X l+1 = F FFN F LN (X l ′ ) + X l ′ ." }, { "formula_coordinates": [ 4, 299.35, 536.47, 219.11, 49.04 ], "formula_id": "formula_8", "formula_text": "Data: The current state x ∈ R d , key and value cache K, V 1 m = G(x|WG) = argmax x ⊤ WG 2 K ← Fkey(x), V ← Fval(x) 3 if m = 1 then 4" }, { "formula_coordinates": [ 4, 296.23, 587.43, 148.28, 61.73 ], "formula_id": "formula_9", "formula_text": "5 x ′ ← FAttn (FLN(x)|K, q, V ) + x 6 xSL ← FFFN (FLN(x ′ )) + x ′ 7 else 8 xSL ← x 9 end 10 return: xSL" } ]
2023-11-26
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14" ], "table_ref": [], "text": "Graphics primitives are building blocks used to create complex visual scenes in computer graphics. These primitives can be thought of as functions that map positional or directional information from R m to attributes in R n . The quality and performance characteristics of the mathematical representation are crucial for visual fidelity: we desire representations that remain fast and compact while capturing high-frequency, local detail. Recently, we have seen a trend in representing graphics primitives with neural networks as demonstrated by advancements in occupancy networks [1,2], signed distance functions [3,4] and radiance fields [5]. While neural networks such as multi-layer perceptrons (MLPs) have shown great potential in modeling graphics primitives, they struggle to capture high-frequency details due to their inherent smoothness.\nTo overcome this limitation, neural networks are often paired with various encoding techniques, which map input into higher dimensionality to allow a finer representation of high-frequency details. Implicit encodings, such as frequency encoding used by authors in NeRF [5], encodes scalar positions as a multi-resolution sequence of sine and cosine functions. These type of encodings normally don't carry any trainable parameters and therefore require a larger neural network to achieve the same level of fidelity. Conversely, explicit encodings rely on additional grid-based structures with trainable parameters. Recent examples of grid representation includes dense grid [6,7], sparse grid [8,9], octree [10], tensor decomposition [11], planar factorization [12,13,14], and multi-resolution grid [15]. These configurations aim to exchange a reduced memory footprint for a lower computational cost with the use of a smaller neural network.\nNonetheless, incorporating grid-based structures inevitably introduces excessive computation, as evaluating an input in n dimensions necessitates evaluating 2 n neighboring vertices. Drawing inspiration from simplex noise, we leverage simplex structures, defined as the polygon with the minimum number of vertices tiling an n-dimensional space. The use of simplex-based structures proves more advantageous than grid-based structures in encoding for two main reasons:\nFewer variables In dense-grid encoding, the number of variables required increases exponentially with the dimensionality of the graphics primitives, making it impractical for high-dimensional problems. In contrast, a simplex-based structure uses only the n+1 vertices in n-dimension regardless of the dimensionality. This would lead to a significant improvement in computation speed.\nFewer artifacts In simplex-based structure, the vertices of the simplex are typically well-separated and represent different combinations of variables. This reduces the correlation between variables, which in turn reduces the likelihood of artifacts arising due to the interactions between variables. Simplex-based encoding can more easily handle nonlinearities due to its ability to adapt to the shape of the solution space. 
In contrast, the dense-grid shape is fixed, leading to artifacts at discontinuities or sharp edges.\nIn later sections, we further review the properties of grid and simplex structures used in noise algorithms (Section 3), present our proposed simplex-based encoding method (Section 4), and describe an implementation of the simplex-based structure backed by multi-resolution hash encoding, a state-of-the-art method for graphics primitives (Section 5). We then verify performance and feasibility with multi-dimensional experiments on various tasks (Section 6). We finally conclude with a discussion and future work (Section 7), together with mathematical derivations supporting the feasibility of the proposed algorithm (Section 8)." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Perlin noise", "publication_ref": [ "b15" ], "table_ref": [], "text": "Perlin noise [16], also known as classical noise, is a procedural generation algorithm first introduced by Ken Perlin in the 1980s. Due to its natural appearance and simple implementation, it has been widely adopted in computer graphics for generating visual content including texture, terrain, smoke, etc. In n-dimensional space, Perlin noise can be viewed as a pseudo-random mapping R^n → R and can be calculated with the following three steps as shown in Figure 1: (a) Grid subdivision. Given an input coordinate x ∈ R^n, we determine a grid cell that contains x. The cell is an n-dimensional hypercube with 2^n vertices in Z^n spanned by ⌊x⌋ and ⌈x⌉. (b) Noise evaluation. For each vertex x_i ∈ Z^n of the hypercube, we generate a pseudo-random gradient vector of unit length g_i ∈ R^n, ||g_i|| = 1, and a displacement vector d_i = x − x_i. We then use the dot product p_i = g_i • d_i of the two vectors as the noise value. (c) Linear interpolation. Finally, we use n-linear interpolation to obtain the final noise scalar at x. When higher-order smoothness is desired, instead of using n-quadratic or n-cubic interpolation, we can apply a more efficient smoothing function to the 2^n dot products on the vertices. In particular, Perlin noise uses the smoother-step function,\nS_2(x) = 6x^5 − 15x^4 + 10x^3,  0 ≤ x ≤ 1,  (1)\nwhich is C^2 smooth and has vanishing derivatives at the endpoints. Considering the procedures mentioned above, Perlin noise requires evaluation at the 2^n vertices of the containing grid cell and calculation of 2^n − 1 weighted sums during interpolation. Therefore, the algorithm scales as O(2^n), which grows exponentially with dimension." }, { "figure_ref": [ "fig_1" ], "heading": "Simplex noise", "publication_ref": [ "b16" ], "table_ref": [], "text": "Perlin noise, while very useful, suffers from exponential scaling across dimensions and directional artifacts in image reconstruction [17]. These shortcomings inspire us to investigate a different reconstruction primitive: simplex noise. Rather than placing each input point into a cubic grid based on the integer parts of its coordinate values, the input point is placed onto a simplicial grid, which is derived by dividing n-dimensional space into a regular grid of shapes with a minimum number of vertices (triangles in 2D, tetrahedrons in 3D, and so on). It is important to note that the number of simplex vertices is n + 1, where n is the number of dimensions.\nCompared to Perlin noise, simplex noise can be generated with lower computational overhead, especially in higher dimensions.
Simplex noise scales to higher dimensions with much less computational cost: the complexity is O(n^2) (depending on the sorting algorithm) for n dimensions. Simplex noise also inherently produces fewer noticeable directional artifacts and is more isotropic (meaning it looks the same from all directions) than Perlin noise. Both of these advantages over Perlin noise are crucial for a robust and computationally efficient image reconstruction algorithm. To generate simplex noise in n-dimensional space, the following four steps are performed, as shown in Figure 2.\n(a) Coordinate skewing: The coordinate axes in n dimensions are skewed such that the coordinate vector aligns with the simplex grid. In a 2D example, the x-y Cartesian coordinate is translated to a new u-v plane. The coordinate translation formula is given below,\nx′ = x + 1_n · F_n Σ_i x_i,  F_n = (√(n+1) − 1) / n,  (2)\nThis has the effect of rearranging a hyper-cubic coordinate that has been squashed along its main diagonal, such that the distance between the points (0, 0, ..., 0) and (1, 1, ..., 1) becomes equal to the distance between the points (0, 0, ..., 0) and (1, 0, ..., 0).\n(b) Simplicial subdivision: Once the input coordinate is determined in the translated coordinate system, the surrounding lattice points of the input in the simplex grid are calculated via the following steps. First, take the floor and ceiling of the input coordinates. An input with coordinates (x, y, z, ...) in the skewed coordinate system lies in a simplex within the cell spanned by (⌊x⌋, ⌊y⌋, ⌊z⌋, ...) and (⌈x⌉, ⌈y⌉, ⌈z⌉, ...). Then, the coordinates (x_i, y_i, z_i, ...) are sorted in decreasing order. Starting with (⌊x⌋, ⌊y⌋, ⌊z⌋, ...), successively add 1 at the index of the next largest coordinate until all n + 1 simplex vertices are found.\n(c) Noise evaluation: At each vertex of the grid, a random gradient vector is assigned. To generate a noise value at a given point in space, the algorithm first determines which simplex shape contains the point, and then interpolates the gradient vectors at the vertices of that simplex to obtain a weighted sum. The resulting value is then scaled and smoothed to produce the final noise value.\n(d) Kernel summation: The input in the simplex coordinate is skewed back into the Cartesian coordinate system using the formula below,\nx = x′ − 1_n · G_n Σ_i x′_i,  G_n = (1 − 1/√(n+1)) / n.  (3)\nNote that the unskewed coordinate is precisely the coordinate of the original input in the orthogonal axes." }, { "figure_ref": [], "heading": "Proposed Method", "publication_ref": [], "table_ref": [], "text": "To optimize the performance of sampling and interpolation, it may be beneficial to replace n-cubes with n-simplices, as simplices are the polytopes with the fewest vertices in each respective dimension. For example, a simplex is a triangle in two-dimensional space and a tetrahedron in three-dimensional space. However, regular simplices that have equal edges cannot tile space beyond two dimensions. Furthermore, indexing the vertices of equilateral triangles in two dimensions is no easy task without careful manipulation. Fortunately, the simplex noise algorithm provides a solid foundation for such operations. By making some adaptations, we can apply this methodology to a wider range of tasks, including parameterizing graphics primitives. In doing so, we can make full use of simplex-based structures to reduce computational costs and optimize memory usage during sampling and interpolation.
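To make the skewing step concrete, here is a small NumPy sketch of the transforms in Equations 2 and 3; the function names are illustrative, not part of the reference implementation. It checks that unskewing inverts skewing, that the cube diagonal and an axis edge have equal length in the unskewed grid, and that the skew maps 1_n to √(n+1) · 1_n, which is the scale factor used later.

```python
import numpy as np

def skew(x):
    # Equation 2: x' = x + F_n * (sum_i x_i) * 1_n, with F_n = (sqrt(n+1) - 1) / n.
    n = x.shape[-1]
    F = (np.sqrt(n + 1.0) - 1.0) / n
    return x + F * x.sum(axis=-1, keepdims=True)

def unskew(xp):
    # Equation 3: x = x' - G_n * (sum_i x'_i) * 1_n, with G_n = (1 - 1/sqrt(n+1)) / n.
    n = xp.shape[-1]
    G = (1.0 - 1.0 / np.sqrt(n + 1.0)) / n
    return xp - G * xp.sum(axis=-1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for n in (2, 3, 5):
        x = rng.random((4, n))
        # The two transforms are exact inverses of each other.
        assert np.allclose(unskew(skew(x)), x)
        # After unskewing, the cube diagonal (0,...,0)-(1,...,1) has the same length
        # as the edge (0,...,0)-(1,0,...,0), as stated for the skewed grid.
        diag = unskew(np.ones((1, n)))
        edge = unskew(np.eye(n)[:1])
        assert np.allclose(np.linalg.norm(diag), np.linalg.norm(edge))
        # Skewing stretches the all-ones vertex to sqrt(n+1) per axis (scale adjustment S_n).
        assert np.allclose(skew(np.ones((1, n))), np.sqrt(n + 1.0))
    print("skew/unskew checks passed")
```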
To show the correctness of our proposed method in arbitrary dimensions, we derive and prove the following theorems and lemmas (please refer to Appendix A for detailed proofs):\nFigure 3: Illustration of the proposed method in three dimensions — (a) 3-orthoscheme, (b) coordinate skewing, (c) simplicial subdivision.\nTheorem 1. S = {x ∈ R^n : 0 ≤ x_1 ≤ ⋯ ≤ x_n ≤ 1} is an n-simplex. Remark 2. Let π denote a permutation of {1, ⋯, n}; then S_π = {x ∈ R^n : 0 ≤ x_π(1) ≤ ⋯ ≤ x_π(n) ≤ 1} is an n-simplex. The n-simplex has n + 1 vertices v_1, ⋯, v_{n+1}, where all entries of v_i at indices {π_1, ⋯, π_{i−1}} are 1 and the rest are 0. Additionally, v_1 is 0^T and v_{n+1} is 1^T.\nTheorem 3. All S_π are congruent.\nLemma 4. After the coordinate transformation, all possible S′_π are still congruent. Theorem 5. An n-hypercube can be triangulated into n! disjoint congruent simplices." }, { "figure_ref": [], "heading": "Coordinate skewing", "publication_ref": [], "table_ref": [], "text": "Similar to the simplex noise algorithm, we first apply the coordinate transformation according to Equation 2. This skewing operation can also be rewritten as\nx′ = (I_n + F_n C_n) x,  F_n = (√(n+1) − 1) / n,  (4)\nwhere I_n is the n × n identity matrix and C_n is the n × n all-ones matrix, so the sampled point x ∈ R^n is multiplied by a constant matrix with 1 + F_n on the diagonal and F_n elsewhere. As such, coordinate skewing is an affine transformation that preserves the parallelism of planes and transforms the unit n-hypercube into an n-parallelepiped.\nThis transformation is crucial as it reduces distortion in the simplex cells, resulting in a more balanced spatial division. Without this step, each cell would be an n-orthoscheme, which is a generalization of a right triangle to higher dimensions.\nThe transformation leads to equilateral triangles in two-dimensional space and tetrahedrons with congruent isosceles faces in three-dimensional space. More specifically, this tetrahedron has 4 edges of one length and another 2 edges of a second length, which can be derived from the proof of Lemma 4. We can also calculate the ratio of the two edge lengths, which is √3/2.\nThus, by using a coordinate transformation, we aim to avoid axis-aligned artifacts and increase quality in representing graphics primitives." }, { "figure_ref": [], "heading": "Simplicial subdivision", "publication_ref": [], "table_ref": [], "text": "After transforming the input coordinate into simplex-based coordinates, we need to locate the coordinates of the neighboring vertices in the n-dimensional hypercube. According to Theorem 5, in n-dimensional space the hypercube can be split into n! disjoint and congruent simplices. While congruency is not preserved under a general affine transformation, we prove in Lemma 4 that under our coordinate skewing the n! simplices are still congruent. Assuming the given point is sampled within the hypercube, to locate the cell containing it we need to determine in which of the n! simplex cells it resides. For this step, we use a subdivision scheme similar to that of the simplex noise algorithm. Through direct sorting, we can find the corresponding vertices of the cell, as shown in the proof of Remark 2. For a given point x inside the unit hypercube, we let x_1, ⋯, x_n denote the sorted entries of x in descending order and record their respective original indices. In code, we sort the input entries while preserving their indices.
Then, we can obtain the n + 1 vertices starting from 0^T as the base vertex v_1. By adding 1 to the base vertex at the index of the next largest entry of x, we obtain the next vertex and use that as the new base. The process is repeated until we finally reach vertex v_{n+1}, which should always be 1^T." }, { "figure_ref": [ "fig_2" ], "heading": "Barycentric interpolation", "publication_ref": [], "table_ref": [], "text": "In order to learn the mapping for graphics primitives, instead of generating random gradient vectors as in noise algorithms, we retrieve learnable vectors at each vertex and interpolate them to obtain the final feature for the queried point. To maintain an interpolation scheme similar to the original trilinear interpolation, we do not adopt kernel summation from the simplex noise algorithm. Instead, we derive our barycentric interpolation as an efficient alternative, which is illustrated in Figure 4.\nIn geometry, a barycentric coordinate system is a coordinate system in which the location of a point is specified by reference to a simplex, which makes it a natural choice for our simplex-based structure. The barycentric coordinates can be found by expressing the point inside a simplex as a convex combination of the neighboring n + 1 vertices; the coefficients of this combination are the local barycentric coordinates. Since the coordinate skewing is affine, the barycentric coordinates are preserved, and the weights are given in the proof of Theorem 1 as follows,\nw_1 = 1 − x_1,  w_2 = x_1 − x_2,  ⋯,  w_n = x_{n−1} − x_n,  w_{n+1} = x_n,  (5)\nwhere the entries of x are already sorted in descending order from the previous step. Compared to the computationally expensive n-linear interpolation, the use of a simpler formula for the weights in our proposed algorithm also enables a more efficient implementation. " }, { "figure_ref": [], "heading": "Example Algorithm", "publication_ref": [ "b14" ], "table_ref": [], "text": "Here we provide example pseudocode of our proposed method. The function takes in a point x inside the unit hypercube, retrieves values at its neighboring simplex vertices, and returns their interpolated values. With slight modifications, this example can be adapted using CUDA for more advanced scenarios including processing inputs in parallel, handling feature vectors, etc. For a detailed demonstration in Python and sample outputs for each phase of our proposed method, please refer to Appendix B.\n4 Implementation\nFigure 5: Illustration of the proposed method in 2D in combination with multi-resolution hash encoding.\nTo validate our proposed structure for representing graphics primitives, we adopt the structural backbone from Instant-NGP [15] and replace its explicit grid-based structure with a simplex-based structure. The demonstration of our implementation in 2D is given in Figure 5. " }, { "figure_ref": [], "heading": "Scale Adjustment", "publication_ref": [], "table_ref": [], "text": "In the multiresolution setup, sampling still takes place inside the unit n-cube. The difference is that the n-cube is subdivided according to each level, or equivalently the input coordinate is scaled up accordingly. Then, by simply taking its floor and ceiling, we can identify its local parallelepiped and continue with our proposed simplex algorithm within the unit n-cube.\nAdditionally, due to coordinate skewing, the vertex 1_n^T is now √(n+1) · 1_n^T in the new coordinate system. 
The resulting parallelepiped is smaller than the original hypercube and cannot cover our sampling volume entirely. Therefore, we need to use an adjustment scale of S_n = √(n+1) to avoid accessing points outside the simplicial grids. The choice of whether to use a dense grid or a hash table can be task-specific for grid-based structures. However, using a dense grid to back a simplicial grid can be extremely inefficient. This is because only a portion of the vertices is accessed when we sample inside the unit n-cube, which leads to significant memory wastage if all vertex features are stored. Figure 6 provides a visual illustration of this in 2D. Assigning an order to the unused vertices to address this issue can cause unnecessary overhead, especially as it varies with grid size and dimension. To solve this problem, we have chosen to use a hash table. This approach avoids the need for explicit ordering, resulting in a more efficient and scalable implementation of simplicial grids." }, { "figure_ref": [ "fig_13", "fig_13" ], "heading": "Hash Table Selection", "publication_ref": [], "table_ref": [], "text": "To determine the size of the hash table for a given dimension and level, we need to calculate the percentage of unused vertices. As the level increases, the volume covered by the used simplex vertices converges to the volume of the hypercube. Assuming the grid is infinitely dense at an infinite level, the ratio of the two volumes would be 1. Then the percentage of vertices used in the simplex grid can be approximated by the volume ratio of the hypercube and the parallelepiped. The volume of the distorted n-cube, which is a parallelepiped as discussed in coordinate skewing, can be determined by\nV = | det(unskew(v_1, v_2, . . . , v_n))|,\nwhere, using the coordinate unskewing in Equation 3, we can obtain the vector coordinates in the original space. Hence,\nV = |S_n^n det(I_n − G_n C_n)| = S_n^n (1 − n G_n) = S_n^{n−1} = (n + 1)^{(n−1)/2},\nwhere I_n is the identity matrix and C_n is the all-ones matrix. Then, the ratio of the parallelepiped to the unit n-cube is 1/V. As shown in Figure 10a, this ratio decays exponentially. However, we would expect this ratio to be higher at lower levels due to discretization. We would like to derive an estimate that gives us a varying percentage at different levels. The result is reported in Figure 10b. As the level increases, the ratio quickly converges to the theoretical lower bound. Therefore, in practice, we could use a hash table whose size follows the theoretical ratio to achieve similar quality compared to the collision-free implementation. We could also fix the hash table size and scale our level by (n + 1)^{(n−1)/(2n)}, which is how we implemented it when comparing with the baseline methods to guarantee equal memory size." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b14" ], "table_ref": [], "text": "To compare simplex-based and grid-based structures in multi-resolution hash encoding [15], we employ a variety of tasks with increasing input dimensionality to provide comprehensive results. Our evaluation process involves comparing the training performance and benchmark kernel run-time of dense-grid and simplex multi-resolution backbone methods. Through these experiments, we showcase the versatility of the simplex-based structure and highlight its superiority over traditional grid-based encoding methods in various applications. We applied our method to the following tasks:\n1. 
Gigapixel image: the network learns the mapping from 2D coordinates to the RGB colors of a high-resolution image.\n2. Volumetric Rendering: the network learns the mapping from 5D coordinates to trace rays in a given 3D space. (The volume ratio between the unit hypercube and the transformed parallelepiped is used for calculating the lower bound.) 3. High-dimensional analysis: the network learns the mapping from n-dimensional coordinates to a predetermined noise value." }, { "figure_ref": [ "fig_9" ], "heading": "Gigapixel Image", "publication_ref": [ "b17", "b14" ], "table_ref": [], "text": "Learning the 2D-to-RGB mapping from image coordinates to colors has become a popular benchmark for testing a model's ability to represent high-frequency detail. Recent breakthroughs in adaptive coordinate networks have shown impressive results when fitting very large images (up to a billion pixels) with high fidelity at even the smallest scales [18,15]. We attempt to replicate the same experiment, aiming to represent high-fidelity images with a hash table and an implicit MLP in seconds to minutes. We begin by using the Tokyo gigapixel photograph as a reference image (Figure 8) and utilize simplex-based encoding to represent the image with hash maps and MLP parameters. Initially, the hash map features and the MLP weights are randomly initialized, so the trained image appears noise-like. Through progressive back-propagation against the ground-truth xy-to-RGB references, the network converges to the reference image with barely perceptible differences. After 10,000 iterations, we are able to represent a 439M-pixel image with only 7.9M trainable parameters, reaching a stunning 1.7 degrees of freedom in Figure 8.\nPractically, the simplex-based and grid-based encodings yielded nearly identical runtime and PSNR scores for this image over-fitting task. With 16 multi-resolution levels and hash tables of size 2^19, after 10,000 iterations the simplex-based encoding obtains a PSNR of 29.94, whereas the grid-based encoding has a PSNR of 29.82. Additionally, 10,000 iterations take 16.34 seconds for grid-based encoding and 14.94 seconds for simplex-based encoding. On 2D tasks like image overfitting, the grid-based structure requires information from 4 neighboring vertices, whereas the simplex-based structure interpolates only 3 neighboring vertices, at the extra computational overhead of coordinate skewing and simplicial subdivision. While we do not expect any runtime improvement over instant-NGP, we would like to first use this experiment to verify the feasibility of simplex encoding. We use our analogous dense structure to check whether we can produce similar results in both speed and quality. We then adopt the multi-resolution structure and compare the results with instant-NGP." }, { "figure_ref": [ "fig_11" ], "heading": "Volumetric Rendering", "publication_ref": [ "b7" ], "table_ref": [], "text": "A more useful application is volumetric rendering, which computes the pixel color of a ray by integrating over transmittance and density. Given a ray vector r parameterized by distance t and viewing direction d, volumetric rendering computes its final pixel color C(r) by\nC(r) = ∫ T(t) σ(r(t)) c(r(t), d) dt,  T(t) = exp(−∫ σ(r(s)) ds),  (6)\nwhere σ is the density and T is the transmittance. Unlike ray tracing and rasterization, volumetric rendering is inherently differentiable, which enables us to learn 3D shapes with only 2D supervision. This was first used for 3D reconstruction in NeRF [8], where the 3D scene is represented by a neural network. 
The network takes in a 3D position and a 2D direction and outputs the volume density at that position as well as the RGB radiance at that position viewed from the given angle. To render an image, the network is queried at multiple discrete points along each ray to obtain their densities and colors. Using the volumetric rendering equation, the samples are composited into the final ray color. Finally, the L2 loss is computed against the ground-truth pixel color, and through gradient descent the network learns the 3D scene. Note that in the volumetric rendering equation, the transmittance T in Equation 6 depends on previously sampled densities and cannot be evaluated in parallel. This sequential dependency means that many samples must be evaluated along each ray, which makes our implementation especially beneficial: as the number of samples increases, the runtime of the entire algorithm approaches the theoretical bound, which is a 2× speedup in 3D (4 vertices per simplex vs. 8 vertices per grid cell). With repeated sampling, the computational overhead of sorting coordinates in the Simplicial Subdivision phase (refer to Section 4.2) becomes negligible. This effect is shown in Figure 9. This can allow faster training and rendering for NeRF without any loss in quality." }, { "figure_ref": [ "fig_12" ], "heading": "Kernel Analysis", "publication_ref": [], "table_ref": [], "text": "To investigate the performance of our core implementation, we compare the kernel run-time against the baseline implementation on both CPU and GPU. For the CPU, we used an Intel Core i7-8700K CPU with 6 cores and 12 threads, running at 2.6 GHz with 16 GB of RAM (2019 MacBook Pro 16-inch). We implemented both the baseline method (grid implementation) and our proposed method (simplex hash implementation) on this CPU and measured the kernel run-time of both methods for inputs of various dimensions. We used C++ for our implementation and measured the kernel run-time using the chrono library.\nFor each dimension, we use 2^27 cells and randomly sample 2^10 data points inside the n-dimensional structure. In order to produce results at the seconds level, we perform the computation for each method 1000 times. Note that the side length of the grid is 2^(27/n). For a 3-dimensional input, for example, the side length is\n2^(27/3) = 2^9.\nFor each dimension, we run the experiments 5 times to calculate the average kernel run-time for each method. The experiment result is summarized in Figure 10. According to the graph, the kernel run-time of simplices scales much better with dimension. The simplex-based encoding scales linearly because its number of vertices also scales linearly with dimension. On the other hand, the kernel run-time of the grid-based structure scales exponentially, matching our observation of the exponential growth of the number of vertices with respect to dimension. This gives us a huge competitive advantage over grid-based encoding in high-dimensional tasks such as NeRF and SDF.
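To summarize the lookup that the kernels above benchmark, the following is a minimal NumPy sketch of a single-level simplex feature lookup combining the coordinate skewing, simplicial subdivision, barycentric interpolation, and hashed storage described earlier. It is not the C++/CUDA implementation; the hashing scheme (per-axis primes combined with XOR) and all names are assumptions for illustration only.

```python
import numpy as np

def hash_vertex(v, table_size):
    # Spatial hash of an integer lattice vertex (assumed scheme: per-axis primes, XOR-combined).
    primes = (1, 2654435761, 805459861, 3674653429, 2097192037)
    h = 0
    for i, c in enumerate(v.tolist()):
        h ^= (int(c) * primes[i % len(primes)]) & 0xFFFFFFFFFFFFFFFF
    return h % table_size

def simplex_lookup(x, table, level_scale):
    """Interpolate hashed features at the n+1 simplex vertices containing x in [0,1)^n."""
    n = x.shape[0]
    F = (np.sqrt(n + 1.0) - 1.0) / n
    # Coordinate skewing (Eq. 2) applied to the level-scaled input.
    xs = x * level_scale
    xs = xs + F * xs.sum()
    base = np.floor(xs).astype(np.int64)
    frac = xs - base
    # Simplicial subdivision: sort fractional parts in descending order, keeping indices.
    order = np.argsort(-frac)
    s = frac[order]
    # Barycentric weights (Eq. 5): w1 = 1 - x_1, w_i = x_{i-1} - x_i, w_{n+1} = x_n.
    w = np.empty(n + 1)
    w[0] = 1.0 - s[0]
    w[1:n] = s[:-1] - s[1:]
    w[n] = s[-1]
    # Walk from the base vertex, adding 1 at the index of the next largest entry.
    out = np.zeros(table.shape[1])
    v = base.copy()
    for i in range(n + 1):
        out += w[i] * table[hash_vertex(v, table.shape[0])]
        if i < n:
            v[order[i]] += 1
    return out

# Minimal usage: a 3D lookup against a random feature table with 2 features per entry.
rng = np.random.default_rng(0)
table = rng.standard_normal((2**14, 2)).astype(np.float32)
print(simplex_lookup(rng.random(3), table, level_scale=16))
```

Only n + 1 table reads are issued per query regardless of dimension, which is the source of the linear scaling observed in the kernel analysis, compared with the 2^n reads required by a grid cell.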
Through repetitive experiments and benchmarking, we show that our approach scales exceptionally well with dimensionality, making it particularly well-suited for tasks such as volumetric rendering. We believe that the simplicity, efficiency, and versatility of our approach make it an exciting avenue for future research in graphics primitives and beyond.\nAssume, to the contrary, that two different simplices S π , S π * with their two corresponding permutations π, π * intersect each other. Then ∃x ∈ R n , s.t. x is strictly in the interior of both S π and S π * . By sorting the entries of x, if the order satisfy both constraints from S π , S π * , there must exist two entries with the same value. Therefore as the inequality constraints are not strictly satisfied, x has to be on the surface of both S π , S π * . Contradiction.\nSince there are n! such permutations, there are n! simplices with the disjoint interior contained by the hypercube. Together with Lemma 4, the hypercube can be triangulated into n! disjoint congruent simplices. n ← length(points) " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Appendix A. Mathematical proofs to simplex-based structures\nProof. Let M be an upper triangular (n + 1) × (n + 1) matrix with only 0 and 1. Then for any x ∈ S, the solution to\n, where the first i + 1 entries of v i are 1 and the rest are 0. Then M can also be expressed as\n. The solution shows that x can be written as a linear combination of the n + 1 points. Addi tionally, given that 0 ≤ x 1 ≤ • • • ≤ x n ≤ 1, every entry of λ is no less than 0. With the addition constraint i λ i = 1, we conclude that x can be written as a convex combination of the n + 1 points and hence is inside the convex hull\nThen, we have\nIn conclusion, S = C. Since the n + 1 vertices v 1 , • • • , v n+1 are affinely independent points, its convex hull C is a n-simplex and so is S.\n, where all entries at indices {π 1 , • • • , π i-1 } of v i are 1 and the rest are 0. Additionally, v 1 is 0 T and v n+1 is 1 T . Theorem 3. All possible S π are congruent.\nProof. For any S π , consider its n + 1 vertices v 1 , • • • , v n+1 as defined in Remark 2. Any two vertices of S π is an edge and it has n(n+1) 2 edges with different lengths. Let d k denote the distance between two difference vertices v i and v j , where i > j and k = i -j. As shown in Remark 2, consecutive vertices only differ by 1 at one entry, and we have\nSimilarly, vertices that differ by k in their order have k more (or less) 1 and\nwhere k is from 1 to n. Since all simplices have the same edges with each other, they are congruent regardless of the order of π. Lemma 4. All possible S ′ π in the transformed coordinate system are still congruent in the original coordinate system.\nProof. We first obtain the vertex coordinates of S ′ π in the original coordinate system using coordinate unskewing in Equation 3. For its vertex v ′ i , every entry is subtracted by (i -1)G n . Using similar notation as above,\nTherefore, for any S ′ π , it contains n + 1 -k edges with length d k where k is from 1 to n. Since all transformed simplices still have the same edges with each other, they are congruent regardless of the order of π. Theorem 5. A n-cube can be triangulated into n! disjoint congruent simplices.\nProof. Based on Remark 2, the hypercube [0, 1] n fully contains all S π where π ranges over all possible n! permutations of {1, 2, ..., n}." } ]
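The claims above — that the unit hypercube splits into n! congruent simplices and that the simplex containing a point is determined by the sort order of its coordinates — can be sanity-checked numerically. Below is a small, self-contained script for the unskewed unit cube; the dimension n is a free choice for the check.

```python
import itertools
import math
import numpy as np

n = 3  # dimension of the hypercube

def simplex_vertices(perm):
    """Vertices of S_pi: start at the origin and set coordinates to 1 in the order given by perm."""
    v = np.zeros(n)
    verts = [v.copy()]
    for axis in perm:
        v = v.copy()
        v[axis] = 1.0
        verts.append(v)
    return np.stack(verts)  # (n+1, n)

perms = list(itertools.permutations(range(n)))
assert len(perms) == math.factorial(n)

# 1) Volumes: each simplex has volume 1/n!, so together they fill the unit cube.
vols = [abs(np.linalg.det(simplex_vertices(p)[1:] - simplex_vertices(p)[0])) / math.factorial(n)
        for p in perms]
assert np.isclose(sum(vols), 1.0)

# 2) Congruence: every simplex has the same multiset of squared edge lengths.
def edge_lengths(verts):
    return sorted(float(np.sum((a - b) ** 2)) for a, b in itertools.combinations(verts, 2))

ref = edge_lengths(simplex_vertices(perms[0]))
assert all(np.allclose(edge_lengths(simplex_vertices(p)), ref) for p in perms)

# 3) Membership: a random point lies in the simplex given by the descending sort
#    order of its coordinates (ties have measure zero), with valid barycentric weights.
x = np.random.rand(n)
perm = tuple(np.argsort(-x))
verts = simplex_vertices(perm)
f = x[list(perm)]
w = np.concatenate([[1.0 - f[0]], -np.diff(f), [f[-1]]])  # barycentric weights
assert np.all(w >= -1e-12) and np.isclose(w.sum(), 1.0)
assert np.allclose(w @ verts, x)                          # convex combination recovers x
print("all checks passed for n =", n)
```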
Grid-based structures are commonly used to encode explicit features for graphics primitives such as images, signed distance functions (SDFs), and neural radiance fields (NeRFs) due to their simple implementation. However, in n-dimensional space, calculating the value of a sampled point requires interpolating the values of its 2^n neighboring vertices. This exponential scaling with dimension leads to significant computational overhead. To address this issue, we propose a simplex-based approach for encoding graphics primitives. The number of vertices in a simplex-based structure increases only linearly with dimension, making it a more efficient and generalizable alternative to grid-based representations. Exploiting the non-axis-aligned property of the simplicial structure, we derive and prove a coordinate-transformation, simplicial-subdivision, and barycentric-interpolation scheme for efficient sampling, which resembles the transformation procedure of the simplex noise algorithm. Finally, we use hash tables to store multi-resolution features of all points of interest in the simplicial grid, which are passed into a tiny fully connected neural network to parameterize graphics primitives. We implemented the simplex-based structure encoding algorithm in C++ and CUDA following the methods outlined in our approach. In the 2D image fitting task, the proposed method fits a gigapixel image in 9.4% less time than the baseline method proposed by Instant-NGP, while maintaining the same quality and compression rate. In the volumetric rendering setup, we observe a maximum 41.2% speedup when the samples are dense enough.
EFFICIENT ENCODING OF GRAPHICS PRIMITIVES WITH SIMPLEX-BASED STRUCTURES
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of Perlin noise in 2D.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of Simplex noise in 2D.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Illustration of linear interpolation for cube and tetrahedron.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "( a )aHashing of voxel vertices Find the surrounding neighbors at L different resolution levels on a 2D plane and assign indices to their corners by hashing their integer coordinates. (b) Lookup Look up the corresponding n-dimensional feature vectors from the hash tables H L for all resulting corner indices. (c) Barycentric Interpolation Perform barycentric interpolation on neighboring coordinates according to the relative position of x within the respective l-th voxel. (d) Concatenation Concatenate the result of each level as well as any auxiliary inputs, producing the encoded MLP input. (e) Neural Network Backpropagate loss function through the MLP (e) the concatenation (d), the linear interpolation (c), and then accumulated in the looked-up feature vectors.", "figure_data": "", "figure_id": "fig_3", "figure_label": "a", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Only partial entries (colored in green) would be accessed and interpolated when sampling within the unit square (colored in red).", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "where v 1 ,1v 2 , . . . , v n are the n-dimensional vectors that define the edges of the parallelepiped, and | • | denotes the absolute value of the determinant. In the skewed space, the vectors are on the axis and (v 1 , v 2 , . . . , v n ) is the identity matrix times S n . By performing coordinate unskewing operation as Equation", "figure_data": "", "figure_id": "fig_5", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Theoretical percentage of the memory usage of simplex structures, which is (n + 1) -n-1", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Analytical percentage of the memory usage of simplex structures with different levels. The dashed lines correspond to the theoretical lower bounds in each dimension. Note that the results were calculated by sampling and may slightly underestimate.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Analysis of memory usage for simplicial structures with different dimensions and levels.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Optimization results from fitting an RGB image with 439M pixels (21450 × 21450). We use the same configurations with 7.9M trainable parameters (7.87M + 7k). 
Tokyo gigapixel photograph ©Trevor Dobson (CC BY-NC-ND 2.0)", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "T(t)σ(r(t))c(r(t), d)dt, T (t) = exp(-t tn σ(r(s))ds),", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "2Figure 9 :9Figure 9: Analysis of memory usage for simplicial structures with different dimensions and levels.", "figure_data": "", "figure_id": "fig_11", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Analysis of memory usage for simplicial structures with different dimensions and different levels. Both graph exhibits the same pattern with dimension", "figure_data": "", "figure_id": "fig_12", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Appendix B. Trilinear and barycentric interpolation Algorithm 11Trilinear Inpoterlation 1: function TRILINEAR INTERPOLATION(x, points) 2:", "figure_data": "", "figure_id": "fig_13", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "i = x i + 1 T i • F i x i ▷ Coordinate Skewing start-weight = 1 initialize w = w 1 , w 2 , w 3 , ...w n f = bubble-sort(f 1 , f 2 , f 3 , ...f n ) ⌊x⌋ = Floored coordinates x 1 , x 2 , x 3 , ...x n if i = 0 then coordinate i = the index of largest coordinate in ⌊x⌋ w i = start-weight -coordinate i start-weight = f 1 else coordinate i = the index of i-th largest coordinate in ⌊x⌋ x coordinatei += 1 w i = start-weight -coordinate i ▷ Simplicial Subdivision end if end for g = 2 features for point x x-feature = w • g ▷ Return feature for point x", "figure_data": "3:if n = 1 then4:return points[0].value5:end if6:i ← 08:i ← i + 19:end while10:14: end functionAlgorithm 2 Barycentric Interpolationfor i do in n+1 dimensions √ n+1-1 F n = n", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" } ]
Yibo Wen; Yunfan Yang
[ { "authors": "Lars Mescheder; Michael Oechsle; Michael Niemeyer; Sebastian Nowozin; Andreas Geiger", "journal": "", "ref_id": "b0", "title": "Occupancy networks: Learning 3d reconstruction in function space", "year": "2019" }, { "authors": "Zhiqin Chen; Hao Zhang", "journal": "", "ref_id": "b1", "title": "Learning implicit fields for generative shape modeling", "year": "2019" }, { "authors": "Jeong Joon Park; Peter Florence; Julian Straub; Richard Newcombe; Steven Lovegrove", "journal": "", "ref_id": "b2", "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "year": "2019-06" }, { "authors": "Delio Vicini; Sébastien Speierer; Wenzel Jakob", "journal": "Transactions on Graphics (Proceedings of SIGGRAPH)", "ref_id": "b3", "title": "Differentiable signed distance function rendering", "year": "2022-07" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "", "ref_id": "b4", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Cheng Sun; Min Sun; Hwann-Tzong Chen", "journal": "", "ref_id": "b5", "title": "Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction", "year": "2022" }, { "authors": "Stephen Lombardi; Tomas Simon; Jason Saragih; Gabriel Schwartz; Andreas Lehrmann; Yaser Sheikh", "journal": "ACM Trans. Graph", "ref_id": "b6", "title": "Neural volumes: Learning dynamic renderable volumes from images", "year": "2019-07" }, { "authors": "Peter Hedman; P Pratul; Ben Srinivasan; Jonathan T Mildenhall; Paul Barron; Debevec", "journal": "", "ref_id": "b7", "title": "Baking neural radiance fields for real-time view synthesis", "year": "2021" }, { "authors": "Sara Fridovich; -Keil ; Alex Yu; Matthew Tancik; Qinhong Chen; Benjamin Recht; Angjoo Kanazawa", "journal": "", "ref_id": "b8", "title": "Plenoxels: Radiance fields without neural networks", "year": "2022" }, { "authors": "Alex Yu; Ruilong Li; Matthew Tancik; Hao Li; Ren Ng; Angjoo Kanazawa", "journal": "", "ref_id": "b9", "title": "PlenOctrees for real-time rendering of neural radiance fields", "year": "2021" }, { "authors": "Anpei Chen; Zexiang Xu; Andreas Geiger; Jingyi Yu; Hao Su", "journal": "", "ref_id": "b10", "title": "Tensorf: Tensorial radiance fields", "year": "2022" }, { "authors": "Ang Cao; Justin Johnson", "journal": "", "ref_id": "b11", "title": "Hexplane: a fast representation for dynamic scenes", "year": "2023" }, { "authors": "Eric R Chan; Connor Z Lin; Matthew A Chan; Koki Nagano; Boxiao Pan; Shalini De Mello; Orazio Gallo; Leonidas Guibas; Jonathan Tremblay; Sameh Khamis; Tero Karras; Gordon Wetzstein", "journal": "", "ref_id": "b12", "title": "Efficient geometryaware 3D generative adversarial networks", "year": "2021" }, { "authors": "Sara Fridovich-Keil; Giacomo Meanti; Frederik Warburg; Benjamin Recht; Angjoo Kanazawa", "journal": "", "ref_id": "b13", "title": "K-planes: Explicit radiance fields in space, time, and appearance", "year": "2023" }, { "authors": "Thomas Müller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "ACM Trans. Graph", "ref_id": "b14", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2022-07" }, { "authors": "Ken Perlin", "journal": "SIGGRAPH Comput. 
Graph", "ref_id": "b15", "title": "An image synthesizer", "year": "1985-07" }, { "authors": "Ken Perlin", "journal": "", "ref_id": "b16", "title": "Chapter 2 noise hardware", "year": "" }, { "authors": "N P Julien; David B Martel; Connor Z Lindell; Eric R Lin; Marco Chan; Gordon Monteiro; Wetzstein", "journal": "", "ref_id": "b17", "title": "Acorn: Adaptive coordinate networks for neural scene representation", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 222.76, 674.33, 317.9, 11.72 ], "formula_id": "formula_0", "formula_text": "S 2 (x) = 6x 5 -15x 4 + 10x 3 , 0 ≤ x ≤ 1,(1)" }, { "formula_coordinates": [ 3, 211.76, 618.35, 328.91, 34.48 ], "formula_id": "formula_1", "formula_text": "x ′ = x + 1 T n • F n i x i , F n = √ n + 1 -1 n ,(2)" }, { "formula_coordinates": [ 4, 205.35, 208.69, 335.32, 34.48 ], "formula_id": "formula_2", "formula_text": "x = x ′ -1 T n • G n i x ′ i , G n = 1 -1/ √ n + 1 n .(3)" }, { "formula_coordinates": [ 4, 71.67, 621.4, 468.33, 39.62 ], "formula_id": "formula_3", "formula_text": "Theorem 1. S = {x ∈ R n : 0 ≤ x 1 ≤ • • • ≤ x n ≤ 1} is a n-simplex. Remark 2. Let π denote a permutation of {1, • • • , n}, then S π = {x ∈ R n : 0 ≤ x π(1) ≤ • • • ≤ x π(n) ≤ 1} is a n-simplex. The n-simplex has n + 1 vertices v 1 , • • • , v n+1" }, { "formula_coordinates": [ 5, 193.29, 131.36, 347.38, 40.52 ], "formula_id": "formula_4", "formula_text": "x ′ =    1 + F n • • • 1 . . . . . . . . . 1 • • • 1 + F n    x, F n = √ n + 1 -1 n ,(4)" }, { "formula_coordinates": [ 5, 203.59, 261.39, 14.17, 19.28 ], "formula_id": "formula_5", "formula_text": "√ 3 2 ." }, { "formula_coordinates": [ 5, 174.75, 615.6, 365.92, 9.65 ], "formula_id": "formula_6", "formula_text": "w 1 = 1 -x 1 , w 2 = x 1 -x 2 , • • • , w n = x n-1 -x n , w n+1 = x n ,(5)" }, { "formula_coordinates": [ 6, 155.9, 560.63, 314.37, 8.64 ], "formula_id": "formula_7", "formula_text": "(a) (b) (c) (d) (e)" }, { "formula_coordinates": [ 7, 72, 419.71, 153.94, 9.68 ], "formula_id": "formula_8", "formula_text": "V = | det(unskew(v 1 , v 2 , . . . , v n ))|," }, { "formula_coordinates": [ 7, 70.83, 450.89, 469.17, 23.82 ], "formula_id": "formula_9", "formula_text": "V = |S n n det(I n -G n C n )|=S n n (1 -nG n )=S n-1 n = (n + 1) n-1 2" }, { "formula_coordinates": [ 10, 412.87, 238.11, 44.28, 17.67 ], "formula_id": "formula_11", "formula_text": "3 √ 2 27 = 2 9 ." } ]
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14" ], "table_ref": [], "text": "Medical image analysis, particularly segmentation, is a critical area in healthcare, aiding in the accurate diagnosis and treatment of various diseases. Despite its importance, this task faces significant challenges in terms of data privacy and the heterogeneity of data sources [1,2,3].\nFederated Learning (FL) [4] has emerged as a key technology in the context of data privacy and decentralization. It allows for collaborative model training across multiple data sources without the need to share raw data, which is crucial in handling sensitive medical data. Typically, FL begins model training from a random initialization. While ran-Correspondence: g.yang@imperial.ac.uk dom initialization is effective in various scenarios, it presents challenges, notably in handling non-IID (independently and identically distributed) data, a common scenario in medical datasets where data distribution varies significantly across different sources [5]. Pre-training is commonly applied in computer vision, natural language processing, and many other application domains to speed up convergence and boost accuracy for downstream tasks [6,7]. Despite the ample research on pretraining, its impacts on FL model initialization for medical image analysis remain largely unexplored [8,9]. The concept of Foundation Model (FM) [10], such as the Segment Any-thing Model (SAM) [11], offers a new perspective. These models are pre-trained on vast datasets and carry extensive knowledge, which can be beneficial in addressing the non-IID challenge in FL [12,13]. This work explores the potential of using FM, specifically SAM-Med2D [14], a variant of SAM fine-tuned for medical imaging, as an instructive teacher for FL model initialization. The objective is to assess whether the pre-trained knowledge of FM can help mitigate the non-IID dilemma in FL, thus enhancing model performance and efficiency.\nHowever, the direct application of large FM like SAM-Med2D in FL poses significant challenges, primarily due to its size and consequent resource intensiveness. FM is GPUheavy and can lead to substantial communication costs during the FL process, posing a significant hurdle for efficient FL deployment. To address this issue, we propose the usage of knowledge distillation technique [15]. The idea is to distill the knowledge from the large FM into a smaller, more manageable model, which can then serve as the initialization model in FL. This approach aims to leverage the strengths of FM while mitigating its resource and communication demands.\nIn summary, this work aims to investigate the impact of using FM, particularly SAM-Med2D, as an instructive teacher for model initialization in FL within the domain of medical image segmentation. Our study on chest x-ray lung segmentation reveals three findings:\n• FM instructed initialization can serve as a good starting point for FL, enabling models to converge faster and achieve better performance, without numerous communication rounds, thus avoiding huge communication cost. • Utilizing knowledge distillation, where the FM acts as the \"teacher,\" enhances the performance and generalization capabilities of simpler \"student\" model within the FL system. • Starting from FM-instructed initialization also mitigates the impact of non-IID data." 
}, { "figure_ref": [], "heading": "METHOD", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Federated Learning", "publication_ref": [ "b3", "b11", "b12" ], "table_ref": [], "text": "In the context of FL, the training data are distributed among\nK clients. Each client k has a local private dataset D k = {(x i , y i )} |D k | i=1\n, where |D k | is the number of data samples within client k, x i is the data sample, and y i is the groundtruth. FL aims to learn a global model parameterized by θ that solves the following optimization problem and minimizes its risk on all clients:\nmin θ L(θ) = K k=1 |D k | |D| L k (θ),\nwhere\nL k (θ) = 1 |D k | |D k | i=1 ℓ(x i , y i ; θ).(1)\nHere, D represents the training data of all clients, ℓ is the loss function on a single data sample, L k signifies the empirical risk on client k, and L denotes the empirical risk on D.\nFederated Averaging (FedAvg) [4], the most widely used and standard method in FL, consists of alternating between concurrent multiple local stochastic gradient descent update at each client and the model aggregation update at the central server over several communication rounds. The local client update and global server aggregation in FL can be defined as:\nClient: θ (t) k = arg min θ L k (θ), initialized by θ (t-1) , (2) Server: θ (t) = K k=1 |D k | |D| θ (t) k .(3)\nHere, θ\nk represents the local model parameters of client k after the t-th communication round, θ (t) denotes the global model parameters after the t-th round of communication. The objective in the local training phase is to minimize the empirical risk per client, often employing several epochs of stochastic gradient descent updates. Subsequently, the global aggregation phase computes an element-wise average of all local models.\nVanilla FedAvg, starting from a random initialization, demonstrates a vulnerability to the heterogeneity in client data [12,13]. The non-IID nature of client data can lead to significant divergence in the local models both from each other and from the global optimum. Such divergence often results in a clear decline in the model performance. In this study, we employ FedAvg to explore the influence of different initialization strategies in FL, aiming to understand how various initial conditions impact the convergence and effectiveness of FedAvg in handling non-IID data heterogeneity, with a specific focus on the medical image segmentation task." }, { "figure_ref": [], "heading": "Knowledge Distillation for FM", "publication_ref": [ "b14", "b13", "b7", "b8" ], "table_ref": [], "text": "Knowledge distillation [15] is an effective technique for transferring knowledge from a larger, well-trained teacher model to a smaller and efficient student model. This approach is particularly valuable in scenarios where deploying a large model is computationally prohibitive. SAM-Med2D [14], a foundational model for segmenting medical 2D images, exemplifies the power of large-scale training. It has been trained on a massive dataset of 4.6 million images and 19.7 million masks, endowing it with exceptional zero-shot generalization capabilities. However, the sheer size of SAM-Med2D makes it impractical for direct use in FL, which often requires lightweight models due to computational and bandwidth constraints.\nInspired by [8,9], we employ knowledge distillation to transfer the rich insights from SAM-Med2D to a more compact model, SM-lite. 
This process involves using the outputs of SAM-Med2D to guide the learning process of SM-lite. We freeze SAM-Med2D and utilize its outputs to provide extensive pre-trained domain knowledge, ensuring robust and accurate feature learning in the student model.\nWe employ a proxy dataset D P to conduct knowledge distillation and use Kullback-Leibler (KL) divergence to measure and minimize the discrepancy between the predictions of SAM-Med2D (teacher θ T ) and SM-lite (student θ S ). The KL divergence provides a way to quantify the difference in the predicted probability distributions of the two models, focusing on aligning the student model's predictions with those of the teacher. This alignment is achieved by optimizing the student model to minimize the KL divergence between the two sets of predictions. The process can be formalized as follows:\nmin θ S E x∼D P [D KL [σ (g(x; θ T )) ∥ σ (g(x; θ S ))]] .(4)\nHere, g(•) is the logits output, and σ(•) is the non-linear activation.\nIn addition to the distillation loss, we also incorporate the ground truth segmentation masks into the loss function. This dual-loss approach, combining distillation loss with traditional supervised learning loss, ensures that SM-lite not only learns to mimic the teacher model's predictions but also adheres to the groundtruth. The complete function can be expressed as a weighted sum of the distillation loss and the segmentation loss (dice loss) against the groundtruth:\nL = αL distill + (1 -α)L segment (y, σ (g(x; θ S ))). (5)\nHere, L distill is the distillation loss, L segment is the segmentation loss, and α is a hyperparameter that balances the two types of loss." }, { "figure_ref": [], "heading": "Initialization Strategies", "publication_ref": [], "table_ref": [], "text": "We consider three initialization strategies in our study: random initialization, pre-training initialization, and FMinstructed initialization." }, { "figure_ref": [], "heading": "Random Initialization", "publication_ref": [], "table_ref": [], "text": "Random initialization in FL is characterized by starting the model training with weights that are randomly assigned. This strategy, devoid of any preliminary knowledge about the data or task, is fundamental in its approach and execution. Its inherent randomness can lead to variability in performance, especially in non-IID data scenarios, often resulting in extended training times and potential convergence issues. We use random initialization to serve as a baseline for comparison against more informed initialization strategies." }, { "figure_ref": [], "heading": "Pre-training Initialization", "publication_ref": [], "table_ref": [], "text": "Pre-training initialization involves using a proxy dataset D P to pre-train the model before it is incorporated into FL. This process begins by selecting a representative dataset that closely aligns with the target task but is not part of the distributed datasets in the FL network. The model is initially trained on the proxy dataset to acquire a basic understanding of the task, which includes learning general features and patterns relevant to medical image segmentation. This foundational training phase aims to provide the model with a head start when it enters the FL cycle, potentially leading to quicker convergence and improved performance compared to models that begin with random weights." 
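For concreteness, a minimal PyTorch-style sketch of the distillation objective in Eqs. (4)-(5) of Section 2.2 is given below, with α = 0.6 as used in our experiments. The teacher call is simplified (SAM-Med2D additionally receives a box prompt derived from the ground-truth mask, as described above), and the helper names are illustrative rather than taken from a released implementation.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on per-pixel foreground probabilities."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

def distillation_step(student, teacher, images, masks, optimizer, alpha=0.6):
    """One optimization step of L = alpha * KL(teacher || student) + (1 - alpha) * Dice."""
    teacher.eval()
    with torch.no_grad():                  # the teacher (e.g. SAM-Med2D) stays frozen
        t_logits = teacher(images)         # (B, 1, H, W) segmentation logits
    s_logits = student(images)

    # KL divergence between the per-pixel binary predictive distributions (Eq. 4)
    t_prob = torch.sigmoid(t_logits)
    s_prob = torch.sigmoid(s_logits)
    t_dist = torch.stack([t_prob, 1 - t_prob], dim=-1).clamp_min(1e-8)
    s_dist = torch.stack([s_prob, 1 - s_prob], dim=-1).clamp_min(1e-8)
    kl = (t_dist * (t_dist.log() - s_dist.log())).sum(-1).mean()

    loss = alpha * kl + (1 - alpha) * dice_loss(s_prob, masks)  # Eq. (5)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```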
}, { "figure_ref": [], "heading": "FM-instructed Initialization", "publication_ref": [], "table_ref": [], "text": "FM-instructed initialization represents a more advanced strategy, leveraging the vast pre-trained knowledge encapsulated in an FM like SAM-Med2D. The process involves distilling the knowledge from SAM-Med2D into a smaller model SMlite as described in Section 2.2, which is then used as the starting point in FL. This distillation process is crucial; it involves training SM-lite on a proxy dataset D P , ensuring that it learns to mimic the high-level feature representations and predictions of the SAM-Med2D model. The resulting SMlite model embodies the extensive learning and generalization capabilities of SAM-Med2D but in a more compact and FL-friendly form. This strategy is potentially beneficial for dealing with the non-IID data challenge in FL, as it provides the model with a robust and comprehensive understanding of medical image segmentation tasks right from the beginning." }, { "figure_ref": [], "heading": "EXPRIMENTS AND RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset and Implementation Details", "publication_ref": [ "b15" ], "table_ref": [], "text": "We utilize the COVID-19 chest X-ray dataset1 , which contains 6,504 images of AP/PA chest X-ray annotated with pixel-level polygonal lung segmentations. All images are resized to 256×256. Data augmentation, including rotation, flipping, and adjustments in brightness and contrast, are employed to enhance model generalizability [16]. The dataset is partitioned into three distinct segments: 1,504 images form the proxy dataset D P , 500 images are reserved for testing, and the remaining 4,500 images are distributed across 3 clients. To simulate the non-IID conditions, we design age skew and quantity skew. Age skew refers to the imbalance in the age distribution of training data across 3 clients, while quantity skew refers to the imbalance in the number of training data across 3 clients.\nIn the knowledge distillation process, the bounding box of groundtruth is used as the box prompt for SAM-Med2D, and α is set to 0.6 to balance the loss function. Our FL model is based on the U-Net. Training involves 100 communication rounds, with each round consisting of 5 epochs. We use an Adam optimizer with an initial learning rate of 1e-4. We also compare FedAvg employing various initialization strategies against centralized and standalone learning to assess their performance gap. We use Dice score to evaluate the segmentation performance. All the experiments are implemented in PyTorch and trained on 4 NVIDIA RTX 3090 GPUs. " }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Results: From Random to Pre-training and FMinstructed Initailization", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Reduce the performance gap between FL and centralized learning. The comparison of segmentation performance under different initialization strategies reveals that both Pretraining and FM-instructed initialization significantly reduce the performance gap between federated and centralized learning. As shown in Table 1, while the random initialization lags substantially behind the centralized approach with a Dice score of 74.67 in federated settings, both Pre-training and FM-instructed initialization markedly narrow this gap, achieving Dice scores of 83.92 and 85.38, respectively. 
The latter score approaches the centralized learning of 89.41 more closely, suggesting that the FM-instructed initialization can further enhance the performance of the model. Narrow the performance gap between non-IID and IID. We compare FedAvg employing various initialization strategies under IID and non-IID (age skew and quantity skew) data splits. Figure 2 indicates that both Pre-training and FM-instructed initialization demonstrate potential capabilities to mitigate the non-IID issue. When comparing non-IID and IID scenarios, Pre-training and FM-instructed initialization show marginal decreases, unlike the random initialization which exhibits a clear performance drop, emphasizing the robustness of Pre-training and FM-instructed initialization against data heterogeneity. Specifically, in non-IID (age skew) conditions, the decrement in performance is substantially less pronounced for FM-instructed initialization compared to Pre-training initialization, suggesting that the extensive pre-trained knowledge embedded within FM offers an advantage in managing data heterogeneity. Faster convergence to better performance. Figure 3 presents the training dynamics of various initialization strategies, with a particular emphasis on the accelerated convergence rates facilitated by pre-training and, more notably, FM-instructed initializations. It indicates a steep curve for Pre-training and FM-instructed initialization, reaching higher Dice scores in fewer rounds (less than 40 rounds) compared to random initialization strategy (more than 40 rounds). This rapid convergence verifies the efficiency of more informed initialization strategies and further suggests that the FMinstructed initialization enhances the federated model's ability to start with a strong pre-trained knowledge base, leading to quickly achieving optimal performance." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "Our study focuses on the impact of FM-instructed initialization in FL. We find that FM-instructed initialization can serve as a good teacher to help address the suboptimal performance issue in FL. This study represents a pioneering effort to combine the strengths of FM with FL, potentially setting a new standard for model initialization in FL settings." }, { "figure_ref": [], "heading": "COMPLIANCE WITH ETHICAL STANDARDS", "publication_ref": [], "table_ref": [], "text": "This research study was conducted retrospectively using the COVID-19 Chest X-ray dataset made available in open access. Ethical approval was not required as confirmed by the license attached with the open access dataset." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This study was supported in part by the ERC IMI (101005122), the H2020 (952172), the MRC (MC/PC/21013), the Royal Society (IEC\\NSFC\\211235), the NVIDIA Academic Hardware Grant Program, the SABER project supported by Boehringer Ingelheim Ltd, Wellcome Leap Dynamic Resilience, and the UKRI Future Leaders Fellowship (MR/V023799/1)." } ]
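As a reference for how the initialization strategies above plug into the FedAvg procedure of Eqs. (2)-(3) in Section 2.1, a minimal PyTorch-style sketch of one training run is given below, matching the reported setup of 100 communication rounds, 5 local epochs, and Adam with a 1e-4 learning rate. The loss and helper names are illustrative, not the exact experimental code.

```python
import copy
import torch
import torch.nn.functional as F

def local_update(global_state, model, loader, epochs=5, lr=1e-4, device=None):
    """Client step of Eq. (2): start from the global weights and minimize the local risk."""
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    model.load_state_dict(global_state)
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            loss = F.binary_cross_entropy_with_logits(model(x), y)  # stand-in segmentation loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return {k: v.detach().cpu() for k, v in model.state_dict().items()}

def fedavg_aggregate(client_states, client_sizes):
    """Server step of Eq. (3): |D_k|/|D|-weighted average of the client models."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for k in avg:
        if avg[k].is_floating_point():
            avg[k] = sum(s[k] * (n / total) for s, n in zip(client_states, client_sizes))
    return avg

def run_fedavg(model, client_loaders, rounds=100):
    """Alternate local updates and server aggregation for a fixed number of rounds."""
    global_state = {k: v.detach().cpu() for k, v in model.state_dict().items()}
    sizes = [len(loader.dataset) for loader in client_loaders]
    for _ in range(rounds):
        states = [local_update(global_state, model, loader) for loader in client_loaders]
        global_state = fedavg_aggregate(states, sizes)
    model.load_state_dict(global_state)
    return model
```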
In medical image analysis, Federated Learning (FL) stands out as a key technology that enables privacy-preserved, decentralized data processing, crucial for handling sensitive medical data. Currently, most FL models employ random initialization, which has been proven effective in various instances. However, given the unique challenges posed by non-IID (independently and identically distributed) data in FL, we propose a novel perspective: exploring the impact of using the foundation model with enormous pre-trained knowledge, such as the Segment Anything Model (SAM), as an instructive teacher for FL model initialization in medical image segmentation task. This work for the first time attempts to utilize the foundation model as an instructive teacher for initialization in FL, assessing its impact on the performance of FL models, especially in non-IID data scenarios. Our empirical evaluation on chest x-ray lung segmentation showcases that FL with foundation model instructed initialization not only achieves faster convergence but also improves performance in complex data contexts. These findings offer a new perspective for model initialization in FL.
WHERE TO BEGIN? FROM RANDOM TO FOUNDATION MODEL INSTRUCTED INITIALIZATION IN FEDERATED LEARNING FOR MEDICAL IMAGE SEGMENTATION
[ { "figure_caption": "Fig. 1 .1Fig. 1. Where to begin? From random initialization to foundation model instructed initialization. Considering the high communication costs and GPU resource demands of directly employing the foundation model, it is used as an instructive teacher for the initialization in FL.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. The segmentation performance of various initialization strategies for FedAvg trained on IID and non-IID data.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. The training dynamics of various initialization strategies for FedAvg (age skew).", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "The Segmentation performance of various initialization strategies for FedAvg (age skew).", "figure_data": "InitializationFederated Centralized StandaloneRandom74.6789.4167.20Pre-training83.92--FM-instructed85.38--80 9078.11 non-IID (Age Skew) non-IID (Quantity Skew) 83.92 IID Data81.3385.7485.3883.7987.15Dice7074.6770.926050RandomPre-trainingFM-instructed", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Ming Li; Guang Yang
[ { "authors": "Xin Liu; Yiting Fan; Shuang Li; Meixiang Chen; Ming Li; William Kongto Hau; Heye Zhang; Lin Xu; Alex Pui; -Wai Lee", "journal": "American Journal of Physiology-Heart and Circulatory Physiology", "ref_id": "b0", "title": "Deep learning-based automated left ventricular ejection fraction assessment using 2-d echocardiography", "year": "2021" }, { "authors": "Ming Li; Chengjia Wang; Heye Zhang; Guang Yang", "journal": "Computers in biology and medicine", "ref_id": "b1", "title": "Mv-ran: Multiview recurrent aggregation network for echocardiographic sequences segmentation and full cardiac cycle analysis", "year": "2020" }, { "authors": "Ming Li; Weiwei Zhang; Guang Yang; Chengjia Wang; Heye Zhang; Huafeng Liu; Wei Zheng; Shuo Li", "journal": "Springer", "ref_id": "b2", "title": "Recurrent aggregation learning for multi-view echocardiographic sequences segmentation", "year": "2019" }, { "authors": "Brendan Mcmahan; Eider Moore; Daniel Ramage; Seth Hampson; Blaise Aguera Y Arcas", "journal": "PMLR", "ref_id": "b3", "title": "Communication-efficient learning of deep networks from decentralized data", "year": "2017" }, { "authors": "Ming Li; Guang Yang", "journal": "", "ref_id": "b4", "title": "Data-free distillation improves efficiency and privacy in federated thorax disease analysis", "year": "2023" }, { "authors": "Shizhong Dong; Zhifan Gao; Shanhui Sun; Xin Wang; Ming Li; Heye Zhang; Guang Yang; Huafeng Liu; Shuo Li", "journal": "BMVC", "ref_id": "b5", "title": "Holistic and deep feature pyramids for saliency detection", "year": "2018" }, { "authors": "Ming Li; Shizhou Dong; Zhifan Gao; Cheng Feng; Huahua Xiong; Wei Zheng; Dhanjoo Ghista; Heye Zhang; Victor Hugo; C De Albuquerque", "journal": "Applied Soft Computing", "ref_id": "b6", "title": "Unified model for interpreting multi-view echocardiographic sequences without temporal information", "year": "2020" }, { "authors": "Sahib Julka; Michael Granitzer", "journal": "", "ref_id": "b7", "title": "Knowledge distillation with segment anything (sam) model for planetary geological mapping", "year": "2023" }, { "authors": "Jingqian Wu; Rongtao Xu; Zach Wood-Doughty; Changwei Wang", "journal": "", "ref_id": "b8", "title": "Segment anything model is a good teacher for local feature learning", "year": "2023" }, { "authors": "Rishi Bommasani; Drew A Hudson; Ehsan Adeli; Russ Altman; Simran Arora; Sydney Von Arx; Jeannette Michael S Bernstein; Antoine Bohg; Emma Bosselut; Brunskill", "journal": "", "ref_id": "b9", "title": "On the opportunities and risks of foundation models", "year": "2021" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b10", "title": "Segment anything", "year": "2023" }, { "authors": "Hong-You Chen; Cheng-Hao Tu; Ziwei Li; Han Wei Shen; Wei-Lun Chao", "journal": "", "ref_id": "b11", "title": "On the importance and applicability of pre-training for federated learning", "year": "2023" }, { "authors": "John Nguyen; Jianyu Wang; Kshitiz Malik; Maziar Sanjabi; Michael Rabbat", "journal": "", "ref_id": "b12", "title": "Where to begin? 
on the impact of pre-training and initialization in federated learning", "year": "2023" }, { "authors": "Junlong Cheng; Jin Ye; Zhongying Deng; Jianpin Chen; Tianbin Li; Haoyu Wang; Yanzhou Su; Ziyan Huang; Jilong Chen; Lei Jiang", "journal": "", "ref_id": "b13", "title": "Sam-med2d", "year": "2023" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b14", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Ming Li; Yingying Fang; Zeyu Tang; Chibudom Onuorah; Jun Xia; Javier Del Ser; Simon Walsh; Guang Yang", "journal": "IEEE Transactions on Emerging Topics in Computational Intelligence", "ref_id": "b15", "title": "Explainable covid-19 infections identification and delineation using calibrated pseudo labels", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 54.43, 576.02, 243.78, 27.3 ], "formula_id": "formula_0", "formula_text": "K clients. Each client k has a local private dataset D k = {(x i , y i )} |D k | i=1" }, { "formula_coordinates": [ 2, 139.32, 659.65, 115.87, 30.55 ], "formula_id": "formula_1", "formula_text": "min θ L(θ) = K k=1 |D k | |D| L k (θ)," }, { "formula_coordinates": [ 2, 131.74, 688.39, 166.46, 38.09 ], "formula_id": "formula_2", "formula_text": "L k (θ) = 1 |D k | |D k | i=1 ℓ(x i , y i ; θ).(1)" }, { "formula_coordinates": [ 2, 324.74, 192.02, 234.25, 52.97 ], "formula_id": "formula_3", "formula_text": "Client: θ (t) k = arg min θ L k (θ), initialized by θ (t-1) , (2) Server: θ (t) = K k=1 |D k | |D| θ (t) k .(3)" }, { "formula_coordinates": [ 3, 81.13, 240.95, 217.07, 15.28 ], "formula_id": "formula_5", "formula_text": "min θ S E x∼D P [D KL [σ (g(x; θ T )) ∥ σ (g(x; θ S ))]] .(4)" }, { "formula_coordinates": [ 3, 79.86, 398.08, 218.34, 9.81 ], "formula_id": "formula_6", "formula_text": "L = αL distill + (1 -α)L segment (y, σ (g(x; θ S ))). (5)" } ]
2023-11-27
[ { "figure_ref": [ "fig_0", "fig_2" ], "heading": "Introduction", "publication_ref": [ "b15", "b38", "b3", "b11", "b26", "b30", "b34", "b13", "b22", "b33", "b42", "b13", "b5" ], "table_ref": [], "text": "Humans possess the remarkable ability to deconstruct intricate concepts into their fundamental components and then ingeniously recombine these elements to generate novel concepts. For instance, when presented with images of two distinct bird species, say artic tern and song sparrow, we can vividly imagine the appearance of a hybrid species with artic tern's head and song sparrow's body (Fig. 1 and2). This capability is likely to find potential applications in digital asset creation and biodiversity analysis [16,39]. However, existing generative models [4,12,27,31,33,35] are still too limited to conduct such fine-grained synthesis tasks.\nRecent personalization methods are applicable for this type of imagination task when they are adapted and integrated properly [2, 14,23,34,43]. Given a small set of selected images that depict the same concept, they learn a dedicated token in the textual representation space (i.e., textual inversion [14]), which could be flexibly used as a proxy in various text descriptions to generate consistent images. They focus on learning the concepts as a whole and putting them in diverse backgrounds and contexts. This ability has been further extended to a single shot setting where only one image of the target concept is provided for learning [2]. However, we observe a limitation in these methods regarding the reuse of their learned concepts for new concept generation at the fine-grained level (Fig. 4).\nIn this work, we introduce a novel unsupervised generation task -virtual creatures generation. Given a set of unlabeled images from the target concepts (e.g., 200 bird species), we aim to train a text-to-image (T2I) generative model that can create new hybrid concepts in diverse backgrounds and contexts. To realize this task, we further formulate a DreamCreature method. It is capable of leveraging off-the-shelf image encoders (e.g., DINO [6]) to identify the underlying sub-concepts (e.g., body parts of a specific bird species such as wings, head, tail). It further adapts the T2I model to generate these sub-concepts through textual inversion. To further improve sub-concept fidelity and disentanglement, we introduce a projector for mapping subconcept tokens to the text embedding space, complemented by tailored attention loss regularization. This attention loss serves a dual purpose: it not only ensures accurate positioning of sub-concepts in the cross-attention map but also enforces that each image region is occupied by no more than one sub-concept.\nOur contributions are as follows: (i) We introduce a more challenging fine-grained image generation task -virtual creatures generation, that needs to create new hybrid concepts based on learned sub-concepts. This not only reveals the limitations of existing generative models but also expands the scope of generative AI. (ii) We propose a novel method called DreamCreature, capable of automatically discovering the underlying sub-concepts in an unsupervised manner and allowing flexible generation of new concepts with faithful holistic structures and photorealistic appearance. (iii) To benchmark this generation task, we introduce two quantitative metrics. 
Extensive experiments on CUB-200-2011 (birds) and Stanford Dogs datasets show that DreamCreature excels over prior art alternatives in both qualitative and quantitative assessments. (iv) Finally, DreamCreature opens up new avenues for creative applications such as innovative consumer product design and nuanced property modifications, showcasing the practical utility and versatility of the learned sub-concepts." }, { "figure_ref": [ "fig_0" ], "heading": "Related Work", "publication_ref": [ "b26", "b30", "b34", "b23", "b36", "b43", "b45", "b46", "b20", "b25", "b2", "b8", "b16", "b44", "b0", "b13", "b33", "b40", "b6", "b12", "b27", "b35", "b9", "b14", "b39" ], "table_ref": [], "text": "Text-to-image synthesis and personalization. State-ofthe-art large text-to-image (T2I) diffusion models [4, 11, 12, 27,31,33,35] have surpassed conventional methods [24,29,37,44,46,47] in generating high-quality images from unstructured text prompts. These advanced generative models have been widely applied to global [5, 21,26] and localized [3,9,17,45] image editing tasks. However, the effectiveness of T2I models is constrained by the user's ability to articulate their desired image through text. These models face challenges in faithfully replicating visual characteristics from a reference set and generating innovative interpretations in diverse contexts, even with detailed textual descriptions.\nTo address this limitation, various personalization techniques have been developed. These techniques obtain a new single-word embedding from multiple images [1,14,34,41] or multiple new word embeddings for various subjects within a single image [2] through inversion. However, most of these approaches focus on extracting the complete subjects as presented but not creating new subjects. We address this limitation in this work by introducing an unsupervised task of creating novel concepts/subjects from existing ones by meaningful recombination (see Fig. 2).\nCreative editing and generation. Creativity involves generating innovative ideas or artifacts across various domains [7]. Extensive research has explored the integration of creativity into Generative Adversarial Networks (GANs) [13,28,36] and Variational Autoencoders (VAEs) [8,10]. For example, DoodlerGAN [15] learns and combines finelevel part components to create sketches of new species.\nA recent study by [40] demonstrated decomposing personalized concepts into distinct visual aspects, creatively recombined through diffusion models. InstructPix2Pix [5] allows creative image editing through instructions, while ConceptLab [32] aims to identify novel concepts within a specified category, deviating from existing concepts.\nIn contrast, our focus is on training a text-to-image generative model that creatively generates new concepts by seamlessly composing sub-concepts from different existing concepts in diverse backgrounds." }, { "figure_ref": [ "fig_1" ], "heading": "Methodology", "publication_ref": [ "b13", "b18" ], "table_ref": [], "text": "Overview. Given an unlabeled image dataset for target concepts (e.g., 200 bird species [42]), we aim to train a text-to-image generative model that can create new hybrid concepts in diverse backgrounds and contexts. To that end, we propose a novel DreamCreature method, as depicted in Fig 3.\nIt starts by discovering the underlying sub-concepts (e.g., body parts of each species) in a two-tier hierarchy, as detailed in Sec. 3.1. Note that each target concept is composed of a set of sub-concepts. 
Paired with the training images {x}, this semantic hierarchy subsequently serves as the supervision to fine-tune a pre-trained text-to-image model, say a latent diffusion model [33], denoted as {ϵ θ , τ θ , E, D}, where ϵ θ represents the diffusion denoiser, τ θ the text encoder, and E/D the autoencoder respectively. We adopt the textual inversion technique [14]. Concretely, we learn a set of pseudo-words p * for each sub-concept in the word embedding space with:\nL ldm = E z,t,p,ϵ ||ϵ -ϵ θ (z t , t, τ θ (y p )|| 2 2 ,(1)\np * = argmin p L ldm ,(2)\nwhere ϵ ∼ N (0, 1) denotes the unscaled noise, t is the time step, z = E(x) is the latent representation of the image, z t is the latent noise at time t, and y p is the text condition that includes p as part of the text tokens. L ldm is a standard diffusion loss [19] to reconstruct the sub-concepts. As each target concept is composed of a set of sub-concepts, its reconstruction is achieved by the reconstruction of the associated set of sub-concepts." }, { "figure_ref": [ "fig_1" ], "heading": "Sub-concepts Discovery", "publication_ref": [ "b5" ], "table_ref": [], "text": "To minimize the labeling cost, we develop a scalable process to reveal the underlying semantic hierarchy with subconcepts in an unsupervised fashion. We leverage the offthe-shelf vision model for image decomposition and clustering. Specifically, given an image x i , we employ DINO [6] to extract the feature map F = {F i = dino(x i )} N i . We then conduct three-level hierarchical clustering (see Fig. 3): 1. At the top level, k-means is applied with two clusters on the feature maps F to obtain the foregrounds and backgrounds B. 2. At the middle level, k-means is further applied on the foregrounds to acquire M clusters each representing a class-agnostic sub-concept, such as the head of birds. 3. At the bottom level, we further group each of the M clusters as well as the background cluster B into K splits. Each split refers to further fine meaning, such as the head of a specific bird species, or a specific background style. After the above structural analysis, each region of an image will be tagged with the corresponding cluster index. We represent these cluster tags as follows:\np = (0, k 0 ), (1, k 1 ), ..., (M, k M ),(3)\nwhere the first pair refers to the background style, and the following M pairs denote the combinations of M subconcepts (e.g., head, body, wings) each associated with a specific concept (e.g., sparrow), and k ∈ {1, . . . , K}. This description will be used as the textual prompt in model training, such as \"a photo of a [p]\". Please refer to the supplementary material for more examples of the autodiscovered semantic hierarchy. This process also yields the segmentation mask of each m-th sub-concept, which we define as S m ." }, { "figure_ref": [], "heading": "Sub-concepts Projection", "publication_ref": [ "b13", "b13" ], "table_ref": [], "text": "In contrast to prior text inversion studies [14], our task requires mastering a greater quantity-specifically, (M + 1)K-of word tokens derived from a collection of selfdiscovered concepts and sub-concepts marked by inherent imperfections (such as partial overlap and over splitting). This makes the learning task more demanding. 
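Before turning to the projection design, the sketch below makes the three-level discovery procedure of Sec. 3.1 concrete, using an off-the-shelf DINO ViT and scikit-learn k-means. The feature-extraction call, the foreground-selection heuristic, and the tensor shapes are assumptions for illustration rather than the exact pipeline.

```python
import numpy as np
import torch
from sklearn.cluster import KMeans

@torch.no_grad()
def extract_patch_features(images, dino):
    """Per-patch DINO features for a batch of images -> (B, num_patches, D)."""
    feats = dino.get_intermediate_layers(images, n=1)[0][:, 1:, :]  # drop the CLS token
    return feats.cpu().numpy()

def discover_subconcepts(all_feats, M=5, K=256):
    """Three-level clustering: foreground/background -> M parts -> K fine-grained splits."""
    flat = all_feats.reshape(-1, all_feats.shape[-1])
    # Level 1: separate foreground from background patches.
    fg_bg = KMeans(n_clusters=2, n_init=10).fit_predict(flat)
    # Placeholder heuristic for picking the foreground cluster; in practice this can be
    # decided e.g. by which cluster dominates central image regions.
    fg_id = int(np.argmax([flat[fg_bg == c].mean() for c in (0, 1)]))
    # Level 2: split foreground patches into M class-agnostic parts (head, wings, ...).
    fg = flat[fg_bg == fg_id]
    parts = KMeans(n_clusters=M, n_init=10).fit_predict(fg)
    # Level 3: split each part (and the background) into K species-level sub-concepts.
    codes = {}
    for m in range(M):
        codes[m + 1] = KMeans(n_clusters=K, n_init=3).fit_predict(fg[parts == m])
    codes[0] = KMeans(n_clusters=K, n_init=3).fit_predict(flat[fg_bg != fg_id])  # background styles
    # Mapping the cluster ids back to their patches yields the masks S_m and the
    # per-image tags (m, k_m) used in the prompt of Eq. (3).
    return codes
```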
To enhance the learning process, we propose a neural network f comprising a two-layer MLP with ReLU activation:\ny_p = f(e(p)), (4)\nwhere y_p is subsequently used as the input to the text encoder τ θ and e ∈ R^{MK×D} is a learnable word-embedding dictionary that maps p to its respective embeddings. Our design demonstrates quicker convergence than directly learning the final word embeddings e(.) [14] (see Sec. 4.3). This could be attributed to the entanglement of word embeddings in the conventional design, where there is no information exchange among them during optimization. This lack of communication leads to lower data efficiency and slower learning. It is worth noting that the conventional design is a special case of our approach in which f is an identity function."
}, {
"figure_ref": [ "fig_6" ],
"heading": "Model Training",
"publication_ref": [ "b22", "b33", "b19", "b16", "b19" ],
"table_ref": [],
"text": "Fine-tuning the T2I model, rather than solely learning pseudo-words, has been shown to achieve better reconstruction of target concepts, as demonstrated in [23,34]. However, this comes with a significant training cost. Thus, we apply LoRA (low-rank adaptation) [20] to the cross-attention blocks for efficient training. We then minimize the diffusion loss L_ldm (Eq. (1)) to learn both the pseudo-words and the LoRA adapters.\nWhen training with only L_ldm, entanglement happens between parts, as evident from the attention maps in the cross-attention blocks of the denoiser ϵ θ (see Sec. 4.3). This entanglement arises from the correlation between sub-concepts (e.g., a bird head code is consistently paired with a bird body code to represent the same species). To address this issue, we introduce an entropy-based attention loss as regularization:\nL_attn = E_{z,t,m} [ -( S_m log Â_m + (1 - S_m) log(1 - Â_m) ) ], (5)\nĀ_m = (1/L) Σ_l A_{l,m},  Â_{m,i,j} = Ā_{m,i,j} / Σ_k Ā_{k,i,j}, (6)\nwhere A ∈ [0, 1]^{M×HW} represents the cross-attention maps between the m-th sub-concept and the noisy latent z_t, L represents the number of selected attention maps, Â ∈ [0, 1]^{M×HW} represents the averaged and normalized cross-attention map over all sub-concepts, and S_m ∈ {0, 1}^{M×HW} serves as the mask indicating the location of the m-th part. In cases where the sub-concept is not present in the image (e.g., occluded), we set both S_m and Â_m to 0 to exclude them. Thus, the overall learning objective is defined as:\nL_total = L_ldm + λ_attn L_attn, (7)\nwhere λ_attn = 0.01. We focus on attention maps at the resolution of 16 × 16, where rich semantic information is captured [17]. Normalization is performed at each location so that the attention over sub-concepts sums to 1 at every patch. This attention loss aims to maximize the attention of a specific sub-concept at a particular location, which implicitly minimizes the attention of the other sub-concepts.\nTable 1. Comparing our and competing methods in design properties (token learning | diffusion fine-tuning | attention loss | projector). *: we fine-tune the added LoRA [20] adapter rather than the entire diffusion model ϵ θ due to resource limits. MSE is the mean-square-based attention loss used in [2].\nTextual Inversion [14]: ✓ | ✗ | ✗ | ✗\nDreamBooth [34]: ✓ | LoRA* | ✗ | ✗\nCustomDiffusion [23]: ✓ | K/V | ✗ | ✗\nBreak-a-scene [2]: ✓ | LoRA* | MSE | ✗\nOurs: ✓ | LoRA | Eq. 5 | ✓\nCompared to the mean-square-based attention loss [2], this intuitively ensures that a sub-concept only appears once at a particular location, facilitating stronger disentanglement from other sub-concepts during the denoising operation.
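A compact PyTorch rendering of the regularizer in Eqs. (5)-(7) is sketched below, assuming the 16 × 16 cross-attention maps of the M sub-concept tokens have already been gathered from the L selected layers; the tensor names are illustrative.

```python
import torch

def attention_loss(attn_maps, masks, eps=1e-8):
    """Entropy-based attention loss of Eq. (5).

    attn_maps: (L, M, H*W) cross-attention maps of the M sub-concept tokens,
               collected from the L selected 16x16 attention layers.
    masks:     (M, H*W) binary masks S_m marking where each sub-concept lives
               (all-zero rows for sub-concepts absent from the image).
    """
    A_bar = attn_maps.mean(dim=0)                            # Eq. (6): average over layers -> (M, H*W)
    A_hat = A_bar / (A_bar.sum(dim=0, keepdim=True) + eps)   # normalize over sub-concepts per location
    present = masks.sum(dim=1) > 0                           # exclude occluded / missing parts
    A_hat, S = A_hat[present], masks[present].float()
    bce = -(S * (A_hat + eps).log() + (1 - S) * (1 - A_hat + eps).log())
    return bce.mean()

def total_loss(ldm_loss, attn_maps, masks, lambda_attn=0.01):
    """Eq. (7): diffusion reconstruction loss plus the weighted attention regularizer."""
    return ldm_loss + lambda_attn * attention_loss(attn_maps, masks)
```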
When generating a subconcept for a particular location, the diffusion model ϵ θ should only attend to the sub-concept instead of other nonrelated sub-concepts. We have validated the effectiveness of this design empirically (see Fig. 7)." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b21", "b13", "b33", "b22" ], "table_ref": [], "text": "Datasets. We demonstrate virtual creature generation on two fine-grained object datasets: CUB-200-2011 (birds)\n[42] which contains 5994 training images, and Stanford Dogs [22] which contains 12000 training images.\nImplementation. For virtual creature generation, we assess the model's ability to combine up to 4 different parts from 4 distinct species/images. We set M = 5 for bird generation (head, front body/breast area, wings, legs, tail) and M = 7 for dog generation (forehead, eyes, mouth/nose, ears, neck, body/tail, legs). For both datasets, K is set as 256, ensuring sufficient coverage of all fine-grained classes (i.e., 200 for birds and 120 for dogs). We randomly generate 500 images by sampling 500 sets of sub-concepts. For each image, we randomly replace an original sub-concept with any sub-concept from another 500 non-overlapping sets of sub-concepts. The resulting set of sub-concepts may take the form of \"(0, Competitors. We compare our method with the recent personalization methods: Textual Inversion [14], Dream-Booth [34], Custom Diffusion [23], Break-a-scene [2]. These personalization methods were designed to take single or multiple images with associated labeled concepts as input. For fair comparison, we adapted them to our problem setting by inputting the same unsupervised associated concepts as DreamCreature (obtained as in Sec. 3.1). Thus, the text prompt for each image is \"a photo of [p]\" where p is expressed in Eq. ( 3). We employ the official implementations released by the original authors for training. We summarize the main design properties of all competitors in Tab. 1.\nk A ) (1, k B ) (2, k C ) ... (M, k D )\"," }, { "figure_ref": [ "fig_2" ], "heading": "Virtual Creature Generation Evaluation", "publication_ref": [], "table_ref": [], "text": "Evaluation metrics. To assess a model's ability to disentangle and composite sub-concepts, we introduce two metrics: (a) exact matching rate (EMR) and (b) cosine similarity (CoSim) between the k-means embeddings of the subconcepts of real and generated images. Utilizing the pretrained k-means from Sec. 3.1, we predict the sub-concepts of generated images. EMR quantifies how accurately the cluster index of sub-concepts of generated images matches the sub-concepts of the corresponding real images whereas CoSim measures the cosine similarity between the k-means centroid vector that the sub-concept belongs to between generated and real images. These metrics assess the model's ability to follow the input sub-concepts and accurately reconstruct them, with perfect disentanglement indicated by EMR of 1 and CoSim of 1. A detailed algorithm is provided in the supplementary material.\nQuantitative results. Our findings, as shown in Fig. 4, can be summarized as follows: (i) As the number of composited sub-concepts increases, EMR and CoSim decrease, reflecting the challenge of composing multiple diverse subconcepts. (ii) Break-a-scene and DreamCreature achieve notably higher EMR and CoSim scores, thanks to disentanglement through attention loss minimization. 
(iii) DreamCreature outperforms Break-a-scene significantly, owing to its dedicated token projector and tailored attention loss designs (especially in the case of bird generation).
Qualitative results. In Fig. 5, we visualize the results of composing 4 different sub-concepts. While all images appear realistic, most methods struggle to assemble all 4 sub-concepts. In contrast, our method successfully combines 4 different sub-concepts from 4 different species, demonstrating the superior ability of our approach to sub-concept composition. We also visualize additional examples of our method in the supplementary material. Furthermore, we explore the versatility of the adapted model by generating images with simple styles such as pencil drawing. While most methods successfully incorporate specific styles into the generated image, Custom Diffusion often fails to do so, possibly due to the unconstrained finetuning of the cross-attention components K/V." }, { "figure_ref": [], "heading": "Conventional Generation Evaluation", "publication_ref": [ "b17", "b29", "b5", "b33" ], "table_ref": [], "text": "Our method works for traditional image generation, i.e., reconstructing target concepts (e.g., specific bird species). Evaluation metrics. We measure generated image quality using FID [18] to assess model performance in terms of image distribution. We also compute the average pairwise cosine similarity between CLIP [30]/DINO [6] embeddings of generated and real class-specific images following [34]. Each generated image is conditioned on the sub-concepts of the corresponding real image. This results in 5,994 generated images for birds and 12,000 generated images for dogs. Quantitative results. In Tab. 2 and Tab. 3, we summarize the performance of respective methods on bird and dog generation, respectively. We highlight four observations: (i) Textual Inversion performs quite well compared to DreamBooth, CustomDiffusion, and Break-a-scene in terms of FID, CLIP, and DINO scores, although it does not fine-tune the diffusion model ϵ_θ. This may be due to the potential risk of overfitting when fine-tuning ϵ_θ, especially when learning a vast array of new concepts over many update iterations. It is also not uncommon to carefully tune the learning rate and the training iterations in these models when fine-tuning new concepts (e.g., only 800-1000 steps of updates to learn a new concept in [2]). (ii) Nonetheless, fine-tuning the diffusion model ϵ_θ can help improve the ability to follow prompts, as shown by increased EMR and CoSim scores (e.g., an EMR improvement of at least 5% for DreamBooth). (iii) Break-a-scene has a better ability to reconstruct the sub-concepts as shown by EMR and CoSim; this is due to the attention loss explicitly forcing the sub-concept to focus on the respective semantic region. (iv) DreamCreature achieves the best performance in DINO, EMR, and CoSim scores (e.g., 7% better in EMR compared to Break-a-scene). This indicates that not only does our image-generation ability perform comparably well with Textual Inversion, but DreamCreature is also able to disentangle the sub-concept learning so that it can follow the prompt instructions more accurately to generate the sub-concepts in a cohort. In Fig. 6, we present generated images from different methods, with CustomDiffusion exhibiting high-contrast images, possibly due to unconstrained fine-tuning on the cross-attention components K/V and resulting in worse FID scores."
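For reference, the EMR and CoSim numbers reported above can be computed per sample roughly as in the sketch below, which mirrors the per-sample evaluation algorithm described in the supplementary material. Here `subconcept_predictor` (the DINO-feature k-means part predictor of Sec. 3.1) and `pipeline` (the fine-tuned text-to-image pipeline) are placeholders for illustration, not actual API names.

```python
import numpy as np

def emr_and_cosim(real_image, subconcept_predictor, pipeline):
    """Single-sample EMR / CoSim for conventional generation.

    subconcept_predictor: wraps the feature extractor + k-means of Sec. 3.1;
                          .predict() returns M+1 cluster indices for an image and
                          .get_centroids() returns their (M+1, D) centroid vectors.
    pipeline:             fine-tuned pipeline conditioned on the sub-concept prompt
                          "a photo of a [0,k_0] ... [M,k_M]".
    """
    p_real = np.asarray(subconcept_predictor.predict(real_image))   # (M+1,) cluster ids
    gen_image = pipeline(p_real)                                     # regenerate from the same codes
    p_gen = np.asarray(subconcept_predictor.predict(gen_image))     # (M+1,) cluster ids

    real_embs = subconcept_predictor.get_centroids(p_real)          # (M+1, D)
    gen_embs = subconcept_predictor.get_centroids(p_gen)            # (M+1, D)

    emr = float(np.mean(p_real == p_gen))                           # exact matching rate
    cos = np.sum(real_embs * gen_embs, axis=1) / (
        np.linalg.norm(real_embs, axis=1) * np.linalg.norm(gen_embs, axis=1) + 1e-8)
    return emr, float(np.mean(cos))                                  # CoSim
```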
}, { "figure_ref": [ "fig_6" ], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "Component analysis. In Fig. 7, we evaluate the effect of our proposed components (token projection and attention loss) on creating virtual bird species. (i) Removing the sub-concept projector outlined in Eq. ( 4) degrades the generation quality as evidenced by a higher FID score (12.86 → 16.36) even though both EMR and CoSim remain. (ii) By replacing our L attn with the MSE loss as proposed in [2], we observe significant deterioration in both EMR and CoSim. (iii) Finally, incorporating both the projector and our attention loss performs the best. This improvement highlights the necessity of incorporating interactions between multiple sub-concepts to achieve more effective subconcept disentanglement and optimization.\nCross-attention visualization. Our attention loss plays a crucial role in token disentanglement. We demonstrate the impact of this loss in the Appendix E.3, where we observe significantly enhanced disentanglement after explicitly guiding attention to focus on distinct semantic regions.\nConvergence analysis. We present a visual comparison of images generated by various methods under the conventional setting in the Appendix E.4, spanning from the initial to the final stages of training. Notably, our DreamCreature demonstrates an ability to learn new concepts at even the early stages of training.\nTransferability and Creative Asset Creation. (i) In Fig. 8a, we demonstrate that not only it can compose subconcepts within the domain of the target concepts (e.g., birds), but it can also transfer the learned sub-concepts to and combine with other domains (e.g., cat). This enables the creation of unique combinations, such as a cat with a dog's ear. (ii) Leveraging the prior knowledge embedded in Stable Diffusion, DreamCreature can also repurpose learned sub-concepts to design innovative digital assets. An example of this is the generation of a bird-shaped robot adorned with various sub-concepts, as depicted in Fig. 8b. These examples showcase DreamCreature's immense potential for diverse and limitless creative applications. Please see the supplementary material for more examples." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We introduced a novel task: Virtual Creature Generation, focusing on training a text-to-image generative model capable of seamlessly composing parts from various species to create new classes. We addressed the challenge of learning sub-concepts by proposing DreamCreature, an innovative method with token projection and tailored attention loss regularization. DreamCreature can seamlessly compose different sub-concepts from different images, creating species that do not exist by mixing them. Extensive experiments demonstrated DreamCreature's superior performance in both qualitative and quantitative evaluation. Moreover, the learned sub-concepts demonstrate strong transferability and significant potential for creative asset generation. We hope that our DreamCreature will empower artists, designers, and enthusiasts to bring the creatures of their dreams to reality." }, { "figure_ref": [], "heading": "Limitations and Future Works", "publication_ref": [ "b42" ], "table_ref": [], "text": "It is worth noting that the accuracy of obtained sub-concepts may be affected by using a self-supervised pre-trained feature extractor. 
Future work may explore the incorporation of encoders, such as [43], to improve sub-concept accuracy. We also observed challenges in composing relatively small sub-concepts, like tails and legs, which require further investigation. Additionally, we are also exploring crossdomain generation, i.e., combining learned sub-concepts from different datasets to create creatures with even more diverse sub-concepts. " }, { "figure_ref": [ "fig_0" ], "heading": "C. Examples of our sub-concept discovery", "publication_ref": [ "b13" ], "table_ref": [], "text": "In Fig. 9, we display a few examples of our obtained segmentation masks and associated sets of sub-concepts. We visualize the word embeddings of learned tokens of Textual Inversion [14], Break-a-scene [2] and our Dream-Creature for birds generation (CUB-200-2011 [42]) through tSNE [38] in Fig. 12. In our DreamCreature, the word embeddings are the projected embeddings through Eq. ( 4) We can see that our projected version has a better semantic meaning such that the sub-concept embeddings are clustered together by their semantic meaning (e.g., head). We believe this is one of the reasons that our DreamCreature outperforms previous methods in which we can compose all sub-concepts seamlessly yet with higher quality." }, { "figure_ref": [ "fig_10" ], "heading": "E.2. Attention loss weight", "publication_ref": [], "table_ref": [], "text": "In Fig. 13 and Tab. 4, we summarize the results of our ablation study on the impact of λ attn . We observed that λ attn = 0.01 frequently yields the best EMR and CoSim scores, while also delivering comparable FID scores. Consequently, we have adopted this value as the default in our experiments. " }, { "figure_ref": [ "fig_11" ], "heading": "E.3. Cross-attention visualization", "publication_ref": [], "table_ref": [], "text": "Our attention loss plays a crucial role in token disentanglement. We demonstrate the impact of this loss in Fig. 14, where we observe significantly enhanced disentanglement after explicitly guiding attention to focus on distinct semantic regions. " }, { "figure_ref": [ "fig_12" ], "heading": "E.4. Convergence analysis", "publication_ref": [], "table_ref": [], "text": "We present a visual comparison of images generated by various methods under the conventional setting in Fig. 15, " }, { "figure_ref": [], "heading": "DreamCreature: Crafting Photorealistic Virtual Creatures from Imagination", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Implementation Details", "publication_ref": [ "b19" ], "table_ref": [], "text": "We conducted training on a single GeForce RTX 3090 GPU with a batch size of 2 over 100 epochs. AdamW [25] optimizer was employed with a constant learning rate of 0.0001 and weight decay of 0.01. Only random horizontal flip augmentation is used. 512 × 512 image resolution is applied. We adopted the LoRA design [20] from diffusers library 2 , in which the low-rank adapters were added to the QKV and out components of all cross-attention modules.\nRegarding the attention loss (see Eq. ( 5)), we selected cross-attention maps with a feature map size of 16 × 16. The specific layers chosen for this purpose were as follows: " }, { "figure_ref": [], "heading": "B. Implementation of EMR and CoSim", "publication_ref": [], "table_ref": [], "text": "" } ]
ANY Concept Composing (mixing sub-concepts from ANY * species) DreamCreature (Ours) … … …
DreamCreature: Crafting Photorealistic Virtual Creatures from Imagination
[ { "figure_caption": "Figure 2 .2Figure 2. Integrating a specific sub-concept (e.g., body, head, or even background (BG) of a source concept B to the target concept A.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Overview of our DreamCreature. (Left) Discovering sub-concepts within a semantic hierarchy involves partitioning each image into distinct parts and forming semantic clusters across unlabeled training data. (Right) These clusters are organized into a dictionary, and their semantic embeddings are learned through a textual inversion approach. For instance, a text description like \"a photo of a [Head,42] [Wing,87]...\" guides the optimization of the corresponding textual embedding by reconstructing the associated image. To promote disentanglement among learned concepts, we minimize a specially designed attention loss, denoted as Lattn.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Quantitative comparisons of virtual creature generation in terms of EMR and CoSim.Learnable Fine Tune Disentanglement Projector TokenTextual Inversion [14] ✓ ✗ ✗ ✗ DreamBooth [34] ✓ LoRA* ✗ ✗ CustomDiffusion [23] ✓ K/V ✗ ✗ Break-a-scene [2] ✓ LoRA* MSE ✗ Ours ✓ LoRAEq. 5 ✓", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "representing a composition from species A, B, C, and D. Stable Diffusion v1.5 [33] is used. Please see the supplementary material for further training details.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .Figure 6 .56Figure 5. Visual comparison on 4-species (specified on the top row) hybrid generation. The last column indicates generated images with different styles (i.e., DSLR, Van Gogh, Oil Painting, Pencil Drawing).", "figure_data": "", "figure_id": "fig_4", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Method Birds: CUB-200-2011 FID CLIP DINO EMR CoSim Textual Inversion [14] 10.10 0.784 0.607 0.305 0.842 DreamBooth [34] 12.94 0.775 0.594 0.355 0.856 Custom Diffusion [23] 37.61 0.694 0.504 0.338 0.833 Break-a-Scene [2] 20.05 0.742 0.549 0.390 0.854 DreamCreature (Ours) 12.86 0.783 0.618 0.460 0.882", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "FullFigure 7 .7Figure 7. Ablation on our token projection and attention loss under the virtual creature generation setting on CUB-200-2011 birds.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8. (a) A cat with beagle's ear; a cat with cardinal's body; a lion with cardinal's body. 
(b) A samoyed with papillon's ear with Christmas theme; A robot design inspired by red header woodpecker's head and blue jay's body; A cup design inspired by white pelican's head and red bellied Woodpecker's wing texture.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "p_input[rand_pop] = p_real[rand_pop] 18 19 gen_x = pipeline(p_input) 20 p_gen = subconcept_predictor.predict(gen_x) # (M+1) 21 22 p_input_embs = subconcept_predictor.get centroids( p_input) # (M+1,D) 23 p_gen_embs = subconcept_predictor.get centroids( p_gen) # (M+1,D) 24 25 EMR = average(p_input == p_gen) 26 CoSim = average(cossim(p_input_embs, p_gen_embs)) Our evaluation algorithms for the Exact Matching Rate (EMR) and Cosine Similarity (CoSim) between generated 2 https://github.com/huggingface/diffusers/blob/ main/examples/text_to_image/train_text_to_image_ lora.py Algorithm 2: EMR and CoSim for conventional generation 1 # subconcept_predictor: obtained via Sec 3.1 2 # pipeline: diffusion generation pipeline 3 # real_x: real image (512x512x3) 4 # M: number of sub-concepts 5 # D: number of dino feature dimension 6 7 # obtain the prompt of the real image (Eq.3) 8 p_real = subconcept_predictor.predict(real_x) # (M +1) 9 gen_x = pipeline(p_real) 10 p_gen = subconcept_predictor.predict(gen_x) # (M+1) 11 12 # an example of \"p\" is [4, 222, 55, 23, 98, 22] 13 # in the \"pipeline\", we prepend word template like \"a photo of a \" 14 # e.g., \"a photo of a [0,4] [1,222] ... [M,K]\" 15 # the token [ * , * ] will be replaced by its embedding computed via Eq.4 16 17 p_real_embs = subconcept_predictor.get centroids( p_real) # (M+1,D) 18 p_gen_embs = subconcept_predictor.get centroids( p_gen) # (M+1,D) 19 20 EMR = average(p_real == p_gen) 21 CoSim = average(cossim(p_real_embs, p_gen_embs))and real images are presented in Algorithms 1 and 2, respectively. Each algorithm is designed to evaluate a single sample. For the evaluations in Section 4.1, we computed the average results over 500 iterations using Algorithm 1. Similarly, for Section 4.2, we averaged the outcomes over 5,994 and 12,000 iterations for the CUB-200-2011 (birds) and Stanford Dogs datasets, respectively.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "( 0 ,Figure 9 .Figure 12 .0912Figure 9. Three example outputs of our sub-concept discovery.", "figure_data": "", "figure_id": "fig_9", "figure_label": "0912", "figure_type": "figure" }, { "figure_caption": "Figure 13 .13Figure 13. Ablation on the effect of λattn for virtual creature generation on CUB-2011 birds. Different colors represent different numbers of composited sub-concepts.", "figure_data": "", "figure_id": "fig_10", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 .14Figure 14. Cross-attention map of each sub-concept (top) without and (bottom) with our attention loss.", "figure_data": "", "figure_id": "fig_11", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 .15Figure 15. 
Generated images over different stages of training.", "figure_data": "", "figure_id": "fig_12", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Quantitative comparison for conventional generation", "figure_data": "MethodDogs: Stanford DogsFIDCLIP DINO EMR CoSimTextual Inversion [14] 23.36 0.652 0.532 0.218 0.754DreamBooth [34]22.65 0.660 0.563 0.275 0.777Custom Diffusion [23] 42.41 0.593 0.491 0.253 0.755Break-a-Scene [2]24.20 0.633 0.532 0.300 0.775DreamCreature (Ours) 16.92 0.669 0.573 0.358 0.796", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Quantitative comparison for conventional generation.", "figure_data": "", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation on the effect of λattn for conventional generation on CUB-200-2011 birds.", "figure_data": "λ attn0.10.01 0.001 0.0001 0.00001FID (↓)19.08 12.86 12.44 11.7811.64EMR (↑)0.339 0.460 0.445 0.4250.397CoSim (↑) 0.851 0.882 0.880 0.8780.872", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" } ]
Kam Woh Ng; Xiatian Zhu; Yi-Zhe Song; Tao Xiang
[ { "authors": "Yuval Alaluf; Elad Richardson; Gal Metzer; Daniel Cohen-Or", "journal": "", "ref_id": "b0", "title": "A neural space-time representation for text-toimage personalization", "year": "2023" }, { "authors": "Omri Avrahami; Kfir Aberman; Ohad Fried; Daniel Cohen-Or; Dani Lischinski", "journal": "", "ref_id": "b1", "title": "Break-a-scene: Extracting multiple concepts from a single image", "year": "2023" }, { "authors": "Omri Avrahami; Ohad Fried; Dani Lischinski", "journal": "ACM TOG", "ref_id": "b2", "title": "Blended latent diffusion", "year": "2023" }, { "authors": "Yogesh Balaji; Seungjun Nah; Xun Huang; Arash Vahdat; Jiaming Song; Karsten Kreis; Miika Aittala; Timo Aila; Samuli Laine; Bryan Catanzaro", "journal": "", "ref_id": "b3", "title": "ediffi: Text-to-image diffusion models with an ensemble of expert denoisers", "year": "2022" }, { "authors": "Tim Brooks; Aleksander Holynski; Alexei A Efros", "journal": "", "ref_id": "b4", "title": "Instructpix2pix: Learning to follow image editing instructions", "year": "2023" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b5", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Eva Cetinic; James She", "journal": "ACM Transactions on Multimedia Computing, Communications, and Applications", "ref_id": "b6", "title": "Understanding and creating art with ai: Review and outlook", "year": "2022" }, { "authors": "Celia Cintas; Payel Das; Brian Quanz; Girmaw Abebe Tadesse; Skyler Speakman; Pin-Yu Chen", "journal": "", "ref_id": "b7", "title": "Towards creativity characterization of generative models via group-based subset scanning", "year": "2022" }, { "authors": "Guillaume Couairon; Jakob Verbeek; Holger Schwenk; Matthieu Cord", "journal": "ICLR", "ref_id": "b8", "title": "Diffedit: Diffusion-based semantic image editing with mask guidance", "year": "2023" }, { "authors": "Payel Das; Brian Quanz; Pin-Yu Chen; Jae-Wook Ahn; Dhruv Shah", "journal": "", "ref_id": "b9", "title": "Toward a neuro-inspired creative decoder", "year": "2020" }, { "authors": "Ming Ding; Zhuoyi Yang; Wenyi Hong; Wendi Zheng; Chang Zhou; Da Yin; Junyang Lin; Xu Zou; Zhou Shao; Hongxia Yang", "journal": "NeurIPS", "ref_id": "b10", "title": "Cogview: Mastering text-to-image generation via transformers", "year": "2021" }, { "authors": "Ming Ding; Wendi Zheng; Wenyi Hong; Jie Tang", "journal": "NeurIPS", "ref_id": "b11", "title": "Cogview2: Faster and better text-to-image generation via hierarchical transformers", "year": "2022" }, { "authors": "Ahmed Elgammal; Bingchen Liu; Mohamed Elhoseiny; Marian Mazzone", "journal": "", "ref_id": "b12", "title": "Can: Creative adversarial networks generating \"art\" by learning about styles and deviating from style norms", "year": "2017" }, { "authors": "Rinon Gal; Yuval Alaluf; Yuval Atzmon; Or Patashnik; H Amit; Gal Bermano; Daniel Chechik; Cohen-Or", "journal": "ICLR", "ref_id": "b13", "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion", "year": "2023" }, { "authors": " Songwei Ge; C Lawrence Vedanuj Goswami; Devi Zitnick; Parikh", "journal": "ICLR", "ref_id": "b14", "title": "Creative sketch generation", "year": "2021" }, { "authors": "D N Paul; Mark Y Hebert; Tyler S Stoeckle; Charles M Zemlak; Francis", "journal": "PLoS biology", "ref_id": "b15", "title": "Identification of birds through dna barcodes", "year": "2004" }, { 
"authors": "Amir Hertz; Ron Mokady; Jay Tenenbaum; Kfir Aberman; Yael Pritch; Daniel Cohen-Or", "journal": "", "ref_id": "b16", "title": "Prompt-to-prompt image editing with cross attention control", "year": "2022" }, { "authors": "Martin Heusel; Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "NeurIPS", "ref_id": "b17", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "NeurIPS", "ref_id": "b18", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "ICLR", "ref_id": "b19", "title": "LoRA: Low-rank adaptation of large language models", "year": "2022" }, { "authors": "Bahjat Kawar; Shiran Zada; Oran Lang; Omer Tov; Huiwen Chang; Tali Dekel; Inbar Mosseri; Michal Irani", "journal": "", "ref_id": "b20", "title": "Imagic: Text-based real image editing with diffusion models", "year": "2023" }, { "authors": "Aditya Khosla; Nityananda Jayadevaprakash; Bangpeng Yao; Fei-Fei Li", "journal": "CVPRW", "ref_id": "b21", "title": "Novel dataset for fine-grained image categorization: Stanford dogs", "year": "2011" }, { "authors": "Nupur Kumari; Bingliang Zhang; Richard Zhang; Eli Shechtman; Jun-Yan Zhu", "journal": "", "ref_id": "b22", "title": "Multi-concept customization of text-to-image diffusion", "year": "2023" }, { "authors": "Bowen Li; Xiaojuan Qi; Thomas Lukasiewicz; Philip Torr", "journal": "NeurIPS", "ref_id": "b23", "title": "Controllable text-to-image generation", "year": "2019" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "ICLR", "ref_id": "b24", "title": "Decoupled weight decay regularization", "year": "2018" }, { "authors": "Ron Mokady; Amir Hertz; Kfir Aberman; Yael Pritch; Daniel Cohen-Or", "journal": "", "ref_id": "b25", "title": "Null-text inversion for editing real images using guided diffusion models", "year": "2023" }, { "authors": "Alex Nichol; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b26", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2021" }, { "authors": "Amin Heyrani; Nobari ; Muhammad Fathy; Rashad ; Faez Ahmed", "journal": "", "ref_id": "b27", "title": "Creativegan: Editing generative adversarial networks for creative design synthesis", "year": "2021" }, { "authors": "Tingting Qiao; Jing Zhang; Duanqing Xu; Dacheng Tao", "journal": "", "ref_id": "b28", "title": "Mirrorgan: Learning text-to-image generation by redescription", "year": "2019" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b29", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b30", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Elad Richardson; Kfir Goldberg; Yuval Alaluf; Daniel Cohen-Or", "journal": "", "ref_id": "b31", "title": "Conceptlab: Creative generation using diffusion prior constraints", "year": "2023" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik 
Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b32", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b33", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "NeurIPS", "ref_id": "b34", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Othman Sbai; Mohamed Elhoseiny; Antoine Bordes; Yann Lecun; Camille Couprie", "journal": "ECCVW", "ref_id": "b35", "title": "Design: Design inspiration from generative networks", "year": "2019" }, { "authors": "Ming Tao; Hao Tang; Fei Wu; Xiao-Yuan Jing; Bing-Kun Bao; Changsheng Xu", "journal": "", "ref_id": "b36", "title": "Df-gan: A simple and effective baseline for text-to-image synthesis", "year": "2022" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of Machine Learning Research", "ref_id": "b37", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "Carles Vilà; Jennifer A Leonard", "journal": "The behavioural biology of dogs", "ref_id": "b38", "title": "Origin of dog breed diversity", "year": "2007" }, { "authors": "Yael Vinker; Andrey Voynov; Daniel Cohen-Or; Ariel Shamir", "journal": "", "ref_id": "b39", "title": "Concept decomposition for visual exploration and inspiration", "year": "2023" }, { "authors": "Andrey Voynov; Qinghao Chu; Daniel Cohen-Or; Kfir Aberman", "journal": "", "ref_id": "b40", "title": "P+: Extended textual conditioning in text-toimage generation", "year": "2023" }, { "authors": "Catherine Wah; Steve Branson; Peter Welinder; Pietro Perona; Serge Belongie", "journal": "", "ref_id": "b41", "title": "The Caltech-UCSD Birds-200-2011 Dataset", "year": "2011" }, { "authors": "Yuxiang Wei; Yabo Zhang; Zhilong Ji; Jinfeng Bai; Lei Zhang; Wangmeng Zuo", "journal": "", "ref_id": "b42", "title": "Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation", "year": "2023" }, { "authors": "Tao Xu; Pengchuan Zhang; Qiuyuan Huang; Han Zhang; Zhe Gan; Xiaolei Huang; Xiaodong He", "journal": "", "ref_id": "b43", "title": "Attngan: Finegrained text to image generation with attentional generative adversarial networks", "year": "2018" }, { "authors": "Binxin Yang; Shuyang Gu; Bo Zhang; Ting Zhang; Xuejin Chen; Xiaoyan Sun; Dong Chen; Fang Wen", "journal": "", "ref_id": "b44", "title": "Paint by example: Exemplar-based image editing with diffusion models", "year": "2023" }, { "authors": "Guojun Yin; Bin Liu; Lu Sheng; Nenghai Yu; Xiaogang Wang; Jing Shao", "journal": "", "ref_id": "b45", "title": "Semantics disentangling for text-toimage generation", "year": "2019" }, { "authors": "Minfeng Zhu; Pingbo Pan; Wei Chen; Yi Yang", "journal": "", "ref_id": "b46", "title": "Dmgan: Dynamic memory generative adversarial networks for text-to-image synthesis", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 344.41, 568.21, 200.7, 12.69 ], "formula_id": "formula_0", "formula_text": "L ldm = E z,t,p,ϵ ||ϵ -ϵ θ (z t , t, τ θ (y p )|| 2 2 ,(1)" }, { "formula_coordinates": [ 3, 355.97, 583.16, 189.15, 18.14 ], "formula_id": "formula_1", "formula_text": "p * = argmin p L ldm ,(2)" }, { "formula_coordinates": [ 4, 101.9, 342.15, 184.47, 9.65 ], "formula_id": "formula_2", "formula_text": "p = (0, k 0 ), (1, k 1 ), ..., (M, k M ),(3)" }, { "formula_coordinates": [ 4, 139.92, 615.12, 146.45, 9.65 ], "formula_id": "formula_3", "formula_text": "y p = f (e(p)),(4)" }, { "formula_coordinates": [ 4, 335.17, 387.19, 209.94, 65.45 ], "formula_id": "formula_4", "formula_text": "L attn = E z,t,m -S m log Âm + (1 -S m ) log(1 -Âm ) , (5) Ām = 1 L L l A l,m , Âm,i,j = Ām,i,j k Āk,i,j ,(6)" }, { "formula_coordinates": [ 4, 368.09, 597, 177.02, 9.65 ], "formula_id": "formula_5", "formula_text": "L total = L ldm + λ attn L attn ,(7)" }, { "formula_coordinates": [ 5, 50.11, 243.66, 220.16, 60.54 ], "formula_id": "formula_6", "formula_text": "✓ ✗ ✗ ✗ DreamBooth [34] ✓ LoRA* ✗ ✗ CustomDiffusion [23] ✓ K/V ✗ ✗ Break-a-scene [2] ✓ LoRA* MSE ✗ Ours ✓ LoRA Eq. 5 ✓ Table 1." }, { "formula_coordinates": [ 5, 118.48, 680.29, 140.22, 9.65 ], "formula_id": "formula_7", "formula_text": "k A ) (1, k B ) (2, k C ) ... (M, k D )\"," } ]
2024-03-13
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b63", "b3", "b11", "b35", "b40", "b48", "b30", "b9", "b20", "b81", "b79", "b1", "b47", "b55", "b33", "b64", "b55", "b37", "b37", "b64", "b21", "b57", "b53", "b51" ], "table_ref": [], "text": "Widely available text-to-image models such as Stable Diffusion [64], trained on large-scale text and 2D image data, contain rich knowledge of the 3D world. They are capable of generating scenes from various viewpoints, including aerial views. However, due to limitations of the expressiveness of the text, we may not be able to completely describe the precise scene that we wish to generate. Moreover, the generative capabilities of the pretrained model is constrained by the aerial-view images in the dataset that it was trained on, which is typically limited. Consequentially, along with text, it is beneficial to use an easily available representative front-view image describing the aerial view of the scene we wish to generate. The task of generating aerial-view images from a given input image and its text description finds applications in the generation of realistic diverse aerial view synthetic data for improved aerial view perception tasks [4,12,36,37,41], and weak supervision for cross-view synthesis applications [49] such as localization and mapping [31], autonomous driving [10], augmented and virtual reality [21], 3D reconstruction [82], medical imaging [80], drone-enabled surveillance [2].\nAerial-view images corresponding to text and an input image can be sampled using text-to-3D and novel view synthesis (NVS) [48,56]. These methods sample different camera viewpoints by explicitly specifying the camera angle. However, they often need to trained on enormous, large-scale datasets with 3D details and scenes from multiple views. Is it possible for text-to-image(2D) diffusion models to generate aerial-view images without any multi-view or 3D information?\nAnother closely related task is image editing [34] and personalization [65], where the goal is to use an input image and a target text to generate an image consistent with both inputs. These methods are generally successful in performing a wide range of non-rigid transformations including text-controlled view synthesis. However, the large translation required for aerial view synthesis makes them sub-optimal [56]. This is due to bias-variance trade-off issues, even more amplified when only a single input image is provided. Aerial Diffusion [38] attempted to alleviate this bias-variance trade-off, but at the cost of per-sample hyperparameter tuning, residual diagonal artifacts in many of the generated images arising from direct finetuning on sub-optimal homography projections and severe performance drops on complex scenes with multiple objects.\nMain contributions. We propose HawkI for aerial view synthesis, guided by text and a single input image. Our method leverages text-to-image diffusion models for prior knowledge and does not require any 3D or multi-view data. Since explicitly specifying camera details in text descriptions isn't always possible, similar to prior work on text-based viewpoint generation [38,65], we consider any generated image with a significantly higher viewpoint and altitude compared to the original image to be an aerial view. HawkI fuses techniques from classical computer vision and information theory within a stable diffusion backbone model to guide the synthesis of the aerial-view image. The key novel components of our algorithm include:\n1. 
Test-time optimization: This step enables the model to acquire the characteristics of the input image, while maintaining sufficient variability in the embedding space for aerial-view synthesis. We condition the embedding space by sequentially optimizing the CLIP text-image embedding and the LoRA layers corresponding to the diffusion UNet on the input image and its Inverse Perspective Mapping (IPM) homography transformation, in close vicinity. In addition to creating variance, IPM provides implicit guidance towards the direction of transformation for aerial-view synthesis.
2. Mutual Information Guided Inference: This step generates a semantically consistent aerial-view image while accounting for viewpoint differences. Unlike conventional approaches [3,22] that rely on restrictive pixel-level constraints (often ineffective for vastly different viewpoints), we propose a mutual information guidance formulation. Mutual information guidance, rooted in information theory, ensures consistency between the contents of the generated image and the input image by maximizing the mutual information between the probability distributions of the input image and the generated aerial image.

Our method performs inference-time optimization on the given text-image inputs and does not require a dataset to train on; hence, it is easily applicable to any in-the-wild image. To test our method, we collect a diverse set of synthetic images (from Stable Diffusion XL) and real images (from Unsplash), spanning natural scenes, indoor scenes, human actions and animations. Qualitative and quantitative comparisons with prior work, on metrics such as CLIP [58] (measuring viewpoint and text consistency) and SSCD [54], DINOv2 [52] (measuring consistency w.r.t. the input image), demonstrate that HawkI generates aerial-view images with a significantly better viewpoint-fidelity (or bias-variance) trade-off. We also present extensive ablation experiments and comparisons with 3D-based novel-view synthesis methods, highlighting the benefits of our 3D-free classical guidance approaches. Our method can also be extended to generate more views that can be text-controlled (such as 'side view', 'bottom view', 'back view'), as evidenced by our results." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b52", "b78", "b82", "b25", "b32", "b74", "b91", "b15", "b46", "b47", "b56", "b66", "b69", "b70", "b72", "b10", "b41", "b55", "b58", "b85", "b47", "b69", "b63", "b90", "b62", "b82", "b17", "b24", "b45", "b87", "b64", "b29", "b65", "b23", "b38", "b33", "b86", "b88", "b12", "b37", "b16", "b28", "b50" ], "table_ref": [], "text": "3D and novel view synthesis. Novel view synthesis [53,79,83] from a single image is an active area of research in generative AI. Many methods [26,33,75,92] use NeRF based techniques. Nerdi [16] uses language guidance with NeRFs for view synthesis. Many recent methods use diffusion [6, 47,48,57,67,70,71,73] to sample different views. 3D generation methods [11,42,56,59,86] use text to guide the reconstruction. All of these methods use large amounts of multi-view and 3D data for supervised training. Methods like Zero-1-to-3 [48] and Zero-123++ [70] use a pretrained stable diffusion [64] model, along with large data for supervised training, to learn different camera viewpoints. 3D-free methods such as Free3D [91] still require multi-view and 3D information while training.

Warping, scene extrapolation and homography.
Scenescape [23], DiffDreamer [7] and similar methods [9, 63,83] estimate a depth map, reproject the pixels into the desired camera perspective and outpaint the scene. Again, these methods require 3D and multi-view information at the training stage. Using a homography to estimate the scene from an aerial perspective is highly inaccurate; hence, attempting to create realistic aerial view images by simply filling in missing information based on the homography (outpainting) leads to poor outcomes. Homography maps have also been used in various deep learning based computer vision solutions [14, 18,25,46,88].

Image editing/personalization. Diffusion models have emerged as successful tools for single image editing and personalization. Methods such as DreamBooth [65], DreamBooth LoRA [30], HyperDreamBooth [66], Textual Inversion [24], Custom Diffusion [39] are able to generate personalized images of subjects. Image editing methods such as Imagic [34], Paint-by-Example [87], ControlNet [89], DiffEdit [13] are able to edit images to perform non-rigid transformations and also use exemplar signals for guidance. However, these methods can either generate aerial images with low fidelity w.r.t. the input image or generate high-fidelity images with viewpoints very close to the input image.

Cross-view synthesis. Prior work on cross-view synthesis [1, 19, 43-45, 49, 60-62, 69, 72, 76, 78, 84, 90] is data intensive: they use paired data and modalities such as semantic maps, depth, multi-views, etc. within their architectures.

Aerial Diffusion [38] uses text and an exemplar image for the task by alternating sampling between viewpoint and homography projections. However, the generated images have diagonal artifacts, with poor quality results for complex scenes that typically contain more than one object, and the method requires manual per-sample hyperparameter tuning.

Guidance techniques in diffusion. Guidance methods [3,17,29,51] have been used to control and guide diffusion denoising towards semantic maps, image classes, etc. These guidance techniques cannot enforce view-invariant image-image similarity, critical for aligning the contents in two images with vastly different viewpoints." }, { "figure_ref": [ "fig_0" ], "heading": "Method", "publication_ref": [ "b39", "b63" ], "table_ref": [], "text": "We present HawkI to generate aerial view images using a single input image I_S and its text description t_S (e.g., 'a cosy living room', which can be obtained using the BLIP-2 model [40]). We do not use any training data or 3D/multi-view data. We leverage the pretrained text-to-2D-image stable diffusion [64] model to serve as a strong prior, and utilize classical computer vision and information theory principles to achieve the desired goal in a holistic manner. We present an overview of our method in Figure 2.
- Test-time optimization: We perform multi-step test-time optimization to incorporate the input image I_S within the pretrained model, at an appropriate bias-variance trade-off. Specifically, we optimize the CLIP text-image embedding and the LoRA layers in the diffusion UNet sequentially on the input image and its inverse perspective mapping, in close vicinity. This additionally conditions the embedding space for viewpoint transformations, along with acquiring the characteristics of the input image.
- Inference: To generate the aerial-view image, we use the target text description t_T, which takes the form 'aerial view, ' + t_S (e.g., 'aerial view, a cosy living room').
To ensure that the generated aerial image is semantically close to the input image, we use mutual information guidance.

Next, we describe our method in detail." }, { "figure_ref": [], "heading": "Test-time optimization", "publication_ref": [ "b67", "b64", "b29", "b33", "b26", "b27", "b73", "b37" ], "table_ref": [], "text": "The text-to-2D image stable diffusion model has knowledge of the 3D world as a consequence of the large amount of diverse data it has been trained on. It understands [68] different viewpoints, different styles, backgrounds, etc. Image editing and personalization methods such as DreamBooth [65], DreamBooth LoRA [30], Imagic [34], SVDiff [27] exploit this property to perform transformations such as making a standing dog sit and generating its image in front of the Eiffel tower. At a high level, the standard procedure adopted by these methods to generate edited or personalized images is to finetune the model on the input image, followed by inference. These methods are, however, not very successful in text-guided aerial view synthesis, which demands a large transformation. Specifically, directly finetuning the diffusion model on e_S to reconstruct I_S results in severe overfitting, where e_S is the CLIP text embedding for t_S. This makes it difficult for the model to generate the large variations to the scene required for aerial view synthesis.

We propose a four-step finetuning approach to enable the model to learn the characteristics of I_S, while ensuring sufficient variance for aerial view generation.

Optimization using I_S: In the first step, we start from e_S and compute the optimized CLIP text-image embedding e_opt to reconstruct I_S using a frozen diffusion model UNet with the denoising diffusion loss function L [28]:

\min_{e_{opt}} \sum_{t=T}^{0} \mathcal{L}(f(x_t, t, e_{opt}; \theta), I_S), \quad (1)

where t is the diffusion timestep and x_t is the latents at time t. This formulation allows us to find the text embedding that characterizes I_S better than the generic text embedding e_S.

Next, to enable e_opt to accurately reconstruct I_S, we optimize the diffusion UNet using the denoising diffusion objective function. Note that we insert LoRA layers within the attention modules in the diffusion UNet and finetune only the LoRA layers with parameters θ_LoRA; the rest of the UNet parameters are frozen:

\min_{\theta_{LoRA}} \sum_{t=T}^{0} \mathcal{L}(f(x_t, t, e_{opt}; \theta), I_S). \quad (2)

While optimizing e_opt instead of e_S to reconstruct I_S ensures lesser bias (or more variance), the embedding space is still not sufficiently conditioned to generate an aerial view of the image.

Optimization using inverse perspective mapping. Inverse perspective mapping (IPM) [74] is a homography transformation from classical computer vision that generates the aerial view of an image from its ground view. Despite not being accurate, it can provide pseudo weak supervision for the generation of the aerial image and also add more variance to the embedding space. We denote the inverse perspective mapping of the input image by I_H, computed following [38].

We perform the following optimization steps to condition the embedding space towards the desired viewpoint transformation. To find the text embedding e_H that best characterises I_H, we start from e_opt and optimize the text embedding with a frozen diffusion model, similar to Equation 1. Finding e_H in the vicinity of e_opt instead of e_S ensures that the text-image space corresponding to e_S doesn't get distorted to generate the poor quality IPM image.
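Putting the first two stages together (Eqs. 1 and 2), the optimization can be sketched roughly as below; the same two steps are then repeated, in close vicinity, for the IPM image I_H as described next. This is an illustrative sketch assuming a diffusers-style Stable Diffusion backbone; the helper names and the way latents and LoRA parameters are obtained are placeholders, not the authors' exact implementation. The step counts and learning rates follow the training details reported later.

```python
import torch
import torch.nn.functional as F

def denoising_loss(unet, latents, text_emb, scheduler):
    """Standard denoising objective L: predict the noise added at a random timestep."""
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=latents.device)
    noisy = scheduler.add_noise(latents, noise, t)
    pred = unet(noisy, t, encoder_hidden_states=text_emb).sample
    return F.mse_loss(pred, noise)

def optimize_embedding(e_init, latents_S, unet, scheduler, steps=1000, lr=1e-3):
    """Eq. 1: optimize the text embedding (UNet frozen) so that it reconstructs I_S.
    latents_S are the VAE-encoded latents of the input image I_S."""
    e_opt = e_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([e_opt], lr=lr)
    for _ in range(steps):
        loss = denoising_loss(unet, latents_S, e_opt, scheduler)
        opt.zero_grad(); loss.backward(); opt.step()
    return e_opt.detach()

def finetune_lora(lora_params, e_opt, latents_S, unet, scheduler, steps=500, lr=2e-4):
    """Eq. 2: finetune only the LoRA layers of the UNet at the optimized embedding e_opt."""
    opt = torch.optim.Adam(list(lora_params), lr=lr)
    for _ in range(steps):
        loss = denoising_loss(unet, latents_S, e_opt, scheduler)
        opt.zero_grad(); loss.backward(); opt.step()
```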
Next, we finetune the diffusion model using the denoising diffusion objective function to reconstruct I_H at e_H, similar to Equation 2. Again, only the LoRA layers are finetuned; the rest of the UNet is frozen. Note that we find e_opt and e_H by optimizing e_S and e_opt, respectively, for a small number of iterations. We need to ensure that e_S, e_opt and e_H are all in close vicinity. Our finetuning approach conditions the embedding space to encapsulate the details of I_S and the viewpoint, while having sufficient variance to generate the large transformations required for the generation of the aerial image." }, { "figure_ref": [], "heading": "Mutual Information Guided Inference", "publication_ref": [ "b34", "b49", "b80", "b84", "b76", "b31", "b28", "b50" ], "table_ref": [], "text": "Our next step is to use the finetuned diffusion model to generate the aerial view image for the text prompt t_T. The text embedding for t_T is e_T. Diffusion denoising, conditioned on e_T, is capable of generating aerial images corresponding to I_S. However, oftentimes, the contents of the generated aerial view image do not align well with the contents of I_S. Consequently, to ensure high fidelity generations, our goal is to guide the contents of the aerial view image towards the contents of I_S.

Similarity measures such as L1 distance and cosine similarity are capable of providing this guidance. However, they are not invariant to viewpoint/structure. Since we want the two images to be similar (while observed from different viewpoints), using metrics that impose matching at the pixel (or feature) level is not the best approach. Rather, it is judicious to use the probability distribution of the features.

In information theory, mutual information quantifies the 'amount of information' obtained about one random variable by observing the other random variable. Mutual information has been used [35,50,81,85] to measure the similarity between images in various computer vision tasks such as medical image registration, frame sampling, etc. It yields smooth cost functions for optimization [77]. The mutual information between two probability distribution functions (pdf) p(x), p(y) for two random variables X, Y is defined as

I(\mathcal{X}, \mathcal{Y}) = \mathcal{H}(\mathcal{X}) + \mathcal{H}(\mathcal{Y}) - \mathcal{H}(\mathcal{X}, \mathcal{Y}),

where H(X), H(Y) are the entropies of p(x), p(y) and H(X, Y) is the joint entropy. The entropy of a random variable X is a measure of its uncertainty:

\mathcal{H}(\mathcal{X}) = -\sum_{x \in \mathcal{X}} p_X(x) \log p_X(x), \qquad \mathcal{H}(\mathcal{X}, \mathcal{Y}) = -\sum_{(x,y) \in \mathcal{X} \times \mathcal{Y}} p_{XY}(x, y) \log p_{XY}(x, y).

Thus,

I(\mathcal{X}, \mathcal{Y}) = \sum_{(x,y) \in \mathcal{X} \times \mathcal{Y}} p_{XY}(x, y) \log \frac{p_{XY}(x, y)}{p_X(x)\, p_Y(y)}.

Hence, mutual information, in some sense, measures the distance between the actual joint distribution of the two variables and the product of their marginals, i.e., the joint distribution under the assumption that the two variables are completely independent. Thus, it is a measure of dependence [32] and can be used to measure the shared information between two images. In order to maximize the similarity in content between I_S and the generated aerial image, we maximize the mutual information between them. We define our mutual information guidance function as follows. Let z_t denote the predicted latents at timestep t. We denote z_{0,t} as the latents of the final predicted image extrapolated from z_t, i.e., if the denoising were to proceed in a vanilla fashion in the same direction that computed z_t, the latents of the final predicted image would be z_{0,t}.
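Before detailing how this quantity steers sampling, the sketch below illustrates one way the mutual information between two sets of latents could be estimated from (soft) histograms and applied as a guidance signal, in the spirit of the formulation above and the histogram-based estimate described next. The bin count, temperature and function names are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def soft_histograms(a, b, bins=64, temp=1e-3):
    """Differentiable joint/marginal histograms of two flattened latent tensors (illustrative)."""
    a = (a - a.min()) / (a.max() - a.min() + 1e-8)
    b = (b - b.min()) / (b.max() - b.min() + 1e-8)
    centers = torch.linspace(0, 1, bins, device=a.device)
    wa = torch.softmax(-((a.reshape(-1, 1) - centers) ** 2) / temp, dim=1)   # (N, bins)
    wb = torch.softmax(-((b.reshape(-1, 1) - centers) ** 2) / temp, dim=1)   # (N, bins)
    joint = (wa.t() @ wb) / a.numel()                                        # (bins, bins), sums to 1
    return joint, joint.sum(dim=1), joint.sum(dim=0)

def mutual_information(z_a, z_b, bins=64):
    """G_MI = I(z_a, z_b) estimated from the histograms above."""
    p_ab, p_a, p_b = soft_histograms(z_a, z_b, bins)
    eps = 1e-10
    return (p_ab * torch.log(p_ab / (p_a[:, None] * p_b[None, :] + eps) + eps)).sum()

def mi_guided_update(z_t, z0_pred, z_src, lam=1e-5):
    """One guidance step: z_t <- z_t - lam * grad_{z_t}( -I(z0_pred, z_src) ).
    z_t must require gradients and z0_pred must be computed from z_t inside the
    autograd graph; z_src are the (fixed) latents of the input image I_S."""
    g = -mutual_information(z0_pred, z_src)
    grad = torch.autograd.grad(g, z_t)[0]
    return (z_t - lam * grad).detach()
```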
At every step of sampling (except the final step), we wish to maximize the mutual information between z_{0,t} and z_S, where z_S are the latents corresponding to I_S. Hence, the guidance function we wish to maximize is G_MI = I(z_{0,t}, z_S).

The computation of mutual information requires us to compute the marginal and joint probability density functions (pdf) of z_{0,t} and z_S. We construct 2D histograms of the latents (by reshaping the latents of size C × H × W into C × HW) and compute their marginal pdfs; the joint pdf is obtained from the corresponding joint histogram. These pdfs can then be plugged into the formula for mutual information. Our next task is to use G_MI to guide the generation of the aerial image.

Guidance techniques such as classifier-free guidance [29], universal guidance [3] and steered diffusion [51] modify the sampling method to guide the image generation with feedback from the guidance function. The gradient of the guidance function w.r.t. the predicted noise at timestep t is an indicator of the additional noise that needs to be removed from the latents to steer the generated image towards the guidance signal. Equivalently, the gradient of the guidance function w.r.t. the predicted latents is an indicator of the direction in which the latents need to move in order to maximize their alignment with the guidance function. Specifically, at every step of sampling (except the final step), we modify the predicted latents z_t as

\hat{z}_t = z_t - \lambda_{MI} \nabla_{z_t}(-G_{MI}).

Note that we use the negative of the mutual information to compute the gradients since we want to maximize the mutual information between the generated latents and the input image." }, { "figure_ref": [], "heading": "Experiments and Results", "publication_ref": [ "b54", "b39", "b33", "b64", "b37", "b57", "b64", "b7", "b51", "b53", "b4" ], "table_ref": [], "text": "Data. We collect a synthetic dataset, HawkI-Syn, and a real dataset, HawkI-Real. Both datasets contain images across a wide variety of categories including indoor scenes, natural scenes, human actions, birds/animals, animations, traffic scenes and architectures. HawkI-Syn contains 500 images that were generated using Stable Diffusion XL [55]. To generate the text prompts for the generated images in HawkI-Syn, we used Large Language Models (LLMs) such as ChatGPT and Bard. HawkI-Real contains 139 images downloaded from the Unsplash website; the text descriptions for these images were obtained using the BLIP-2 [40] model.

Training details. We use the stable diffusion 2.1 model in all our experiments, ablations and comparisons. All our images (except for images in HawkI-Real) are at a resolution of 512 × 512. With respect to I_S, we train the text embedding and the diffusion model for 1,000 and 500 iterations respectively, at learning rates of 1e-3 and 2e-4 respectively. Using 1000 iterations to optimize the text embedding ensures that the text embedding e_opt at which I_S is reconstructed is not too close to e_S, which would otherwise make it biased towards I_S. Similarly, it is not too far from e_S either; hence the text embedding space learns the characteristics of I_S. With respect to I_H, we train the text embedding and the diffusion UNet for 500 and 250 iterations respectively. We want e_H to be in close vicinity of e_opt; we train the diffusion model for just 250 iterations so that the model does not completely overfit to I_H. The role of I_H is to create variance and provide pseudo supervision; it is not an accurate approximation of the aerial view.
We set the hyperparameter for mutual information guidance at 1e-5 or 1e-6; the inference is run for 50 steps.

Computational cost. For each input image, HawkI takes 3.5 minutes to perform test-time optimization on one NVIDIA A5000 GPU with 24 GB memory. The inference time is consistent with that of Stable Diffusion, about 7 seconds to generate each sample with 50 denoising steps. The computational cost is on par with text-based image personalization [34,65] and text-based aerial-view synthesis [38] methods. The number of network parameters is also consistent across all these models; we use the Stable Diffusion v2.1 + LoRA backbone across methods.

Quantitative evaluation metrics. No concrete evaluation metrics exist for this task. We follow prior work on text-based image editing and personalization to evaluate our method on the following 5 metrics:
- Viewpoint and text alignment: We use the text description 'aerial view, ' + t_S and 'aerial view', along with the generated image, to compute the CLIP-Score [58] and the A-CLIP Score respectively. The former indicates alignment of the generated image with the detailed textual description of the image describing the contents along with the viewpoint; the latter focuses more on the viewpoint.
- Image fidelity and 3D coherence: To evaluate the overall alignment of the contents of the generated aerial-view image with the input image, we compute the CLIP-I score [65], which measures the cosine similarity between the embeddings of the aerial-view image and the input image in the CLIP space. For a better indicator of the fidelity and 3D coherence between the two images, we also use the self-supervised similarity detection metrics DINOv2 [8,52] and SSCD [54].

Higher values are desired for each of these metrics. Viewpoint faithfulness and fidelity w.r.t. the input image are a direct result of the bias-variance trade-off of the model, and high values for both are desired. However, as noted by Blau et al. [5], maximizing both is not straightforward; inevitably one of the factors will degrade in response to the improvement in the other. For each input image, we generate 5 aerial images, with random noise initializations, and choose the image with the highest CLIP + SSCD score (since CLIP is an indicator of viewpoint + content alignment and the SSCD score measures the fidelity w.r.t. the input image). " }, { "figure_ref": [ "fig_1" ], "heading": "Comparisons against text + exemplar image based methods", "publication_ref": [ "b64", "b33", "b37" ], "table_ref": [], "text": "We compare our method with DreamBooth LoRA [65], a text-based image personalization method; Imagic LoRA [34], a text-based image editing method; and Aerial Diffusion LoRA [38], a method for text-based aerial image generation from a single image. We keep the backbone stable diffusion model, image prompts, training details, and evaluation method consistent across all comparisons. We show qualitative results in Figure 3. Our method is able to generate aerial views as per input image guidance across a diverse set of scenes. Our method generates results that are more aerial in viewpoint than DreamBooth and Imagic, while being largely consistent with the contents of the input image. Aerial Diffusion is unable to generate good quality images of scenes that have many objects. Our method is able to deal with complex scenes as well as modify the viewpoint.

We show the quantitative results in Figure 7.
Our method achieves a higher CLIP score than all prior work, indicating that it is able to generate an aerial view of the scene with contents dictated by the text better than prior work. The A-CLIP score achieved by our method is higher than that of DreamBooth and Imagic, indicating better conformance to the aerial viewpoint. Even though the A-CLIP score of HawkI is lower than that of Aerial Diffusion, Aerial Diffusion generates poor quality images for scenes with more than one object and also has diagonal artifacts in its generated images, as we observe from the other metrics and qualitative results, thus offsetting its high A-CLIP score.

CLIP-I and self-supervised metrics such as SSCD and DINO are not viewpoint invariant. In many cases, since Imagic and DreamBooth generate views close to the input view, rather than aerial views, it is natural for them to have higher CLIP-I, SSCD and DINO scores. Our method has a much higher CLIP-I, SSCD and DINO score than Aerial Diffusion, showing considerable improvement over prior work in retaining the fidelity and 3D consistency w.r.t. the input image, while modifying the viewpoint. In summary, our specialized aerial-view synthesis method achieves the best viewpoint-fidelity trade-off amongst all related prior work." }, { "figure_ref": [ "fig_3" ], "heading": "Comparisons against 3D based novel view synthesis (NVS) methods", "publication_ref": [ "b47", "b69", "b14", "b47" ], "table_ref": [], "text": "We compare with state-of-the-art benchmark methods on stable diffusion based novel view synthesis from a single image in Figure 5. Zero-1-to-3 [48] and Zero123++ [70] both train on large amounts of multi-view and 3D data from Objaverse [15], which contains 800K+ 3D models, in addition to leveraging a pretrained text-to-2D-image stable diffusion model. In contrast, our method does not use any multi-view or 3D information and is capable of generating better results in multiple cases.

Another task-level difference between our method and prior work on NVS is that the latter aims to explicitly control the camera angle (and, in Zero-1-to-3 [48], to generate 3D objects), whereas the camera angle generated by our method is arbitrary within the realms of text control. The CLIP scores on HawkI-Syn for Zero123++ and HawkI are 0.3071 and 0.3108 respectively; the DINO scores are 0.4341 and 0.4532 respectively. On HawkI-Real, the CLIP scores for Zero123++ and HawkI are 0.2908 and 0.3045 respectively; the DINO scores are 0.3916 and 0.3915 respectively. Our aerial-view synthesis method, even without any 3D/multi-view information and large dataset training, is better than or comparable to 3D-based NVS methods." }, { "figure_ref": [ "fig_3" ], "heading": "Ablation studies", "publication_ref": [], "table_ref": [], "text": "We show ablation experiments in Figure 5. In the second column, we show results of our model where it is neither finetuned on the homography image nor uses mutual information guidance for sampling. Thus, the text embedding for the input image and then the diffusion UNet are finetuned, and the diffusion model generates the aerial image by diffusion denoising, without any mutual information guidance. Many of the generated images either have low fidelity or have low correspondence to the aerial viewpoint. In the experiment in the third column, we add mutual information guidance to the model in column 2. We see higher fidelity (than column 2) of the generated images w.r.t.
In the fourth column, we add the homography image finetuning step to the model in column 2, but do not use mutual information guidance at inference. The generated images, in many cases, are aerial, but have lower fidelity w.r.t. the input image. In the final column, we show results with our full model. The generated images achieve the best trade-off between the viewpoint being aerial and fidelity w.r.t. the input image, in comparison to all ablation experiments. We report a table with all quantitative metrics for the ablation experiments on HawkI-Syn and HawkI-Real in the supplementary pdf, along with a summary of the analysis (the effect of the I_H finetuning step and the effect of mutual information guidance G_MI). " }, { "figure_ref": [], "heading": "Comparison with other metrics for guidance.", "publication_ref": [], "table_ref": [], "text": "We compare with two other metrics for diffusion guidance at inference: (i) the L2 distance between the features of the generated image and the input image, and (ii) a metric inspired by the Wasserstein distance or Earth Mover's distance, for which we compute the distance between the histograms of the probability distributions of the two images. Our mutual information guidance method is better at preserving the fidelity w.r.t. the input image, as evidenced by higher SSCD scores. The SSCD scores on HawkI-Real for Wasserstein guidance, L2 guidance, and mutual information guidance are 0.3181, 0.3224 and 0.3345 respectively. " }, { "figure_ref": [ "fig_0" ], "heading": "Conclusions, Limitations and Future Work", "publication_ref": [ "b69", "b19", "b88" ], "table_ref": [], "text": "We present a novel method for aerial view synthesis. Our goal is to leverage pretrained text-to-image models to advance the frontiers of text + exemplar image based view synthesis without any additional 3D or multi-view data at train/test time. Our method has a few limitations, which form an avenue for future work - (i) The absence of 3D information makes it difficult to explicitly control the camera angle of the generated scene. Methods like Zero123++ [70], after being trained on large-scale 3D data, have a reasonable amount of generalization capabilities, which can be used as prior 3D information to generate camera-controlled views. (ii) Another consequence of the absence of 3D and multi-view information is that the details of the generated image are not fully consistent with the input image. Further improving the fidelity w.r.t. the input image will enable the utilization of our method in cross-view synthesis applications. Other directions for future work are as follows - (i) IPM serves as an important tool for aerial view generation; working on similar homography projections to synthesize other views can be explored as well, (ii) our mutual information guidance formulation can be used in other problems pertaining to image editing and personalization to maximize the shared information between the probability distributions of two images, (iii) using generated data for various downstream UAV and aerial-view applications such as cross-view mapping, 3D reconstruction and synthetic data augmentation. Table 1: We report the quantitative metrics for the ablation experiments corresponding to removing the I_H finetuning step, removing mutual information guidance, or removing both. We use two additional quantitative metrics - CLIP-D and A-CLIP-D - which analyze directional similarity. CLIP directional similarity [20] measures the consistency of the change between two images, I_S and I_T, in the CLIP space with the change between the two image captions (dictating the transformation from I_S to I_T).
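A minimal sketch of this directional score, again assuming the openai/clip-vit-base-patch32 checkpoint as an illustrative stand-in for whichever CLIP backbone is actually used:

```python
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_directional_similarity(img_src, img_gen, caption_src, caption_tgt) -> float:
    """Cosine similarity between the image-edit direction (input image -> generated
    image) and the caption-edit direction in CLIP space. For CLIP-D the target
    caption is 'aerial view, ' + text; for A-CLIP-D it is just 'aerial view'."""
    images = [Image.open(p).convert("RGB") for p in (img_src, img_gen)]
    img_in = processor(images=images, return_tensors="pt")
    txt_in = processor(text=[caption_src, caption_tgt], return_tensors="pt", padding=True)
    with torch.no_grad():
        img_emb = model.get_image_features(**img_in)
        txt_emb = model.get_text_features(**txt_in)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return float(F.cosine_similarity(img_emb[1] - img_emb[0],
                                     txt_emb[1] - txt_emb[0], dim=0))
```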
We compute two versions of this score, CLIPD score and the A-CLIPD score. CLIPD score uses 'aerial view, ' + text as the target text, and 'A-CLIPD score' uses 'aerial view' as the target text. Without finetuning on IH , while the generated images have high fidelity w.r.t the input image, the generated images score low on the aspect of the viewpoint being aerial, adding the IH finetuning step enables the generation of 'aerial' images. Without mutual information guidance, the generated images have low fidelity w.r.t. the input images, adding mutual information guidance steers the content in the generated image towards the content in the input image. In summary, our full model, with the inverse perspective mapping finetuning step as well as mutual information guidance, achieves the best viewpoint-fidelity trade-off amongst all ablation experiments. We compare with latest related work on novel view synthesis: Zero-1-to-3 and Zero123++ on images from HawkI-Syn. Both of these methods use the pretrained stable diffusion model and the 3D objects dataset, Objaverse with 800k+ 3D objects, for training. Our method uses just the pretrained stable diffusion model for the task of aerial view synthesis from a single image. 3D generation methods like Zero123++ are capable of generating different views with high fidelity by using pretrained stable diffusion models to finetune on large-scale 3D objects datasets. However, their generalization capabilities are limited. Our method is able to generate high quality aerial images for the given input images without any 3D data and using just the pretrained text-to-2D image stable diffusion model, however, there is scope for improving the fidelity of the generated aerial image w.r.t the input image. Moreover, our method controls the viewpoint via text and does not provide the provision to quantitatively control the camera angle. Both of these limitations of our method can be alleviated by exploring the combination of pretrained Zero123++ models (or other 3D models) and our method, as a part of future work. Mapping is related to whether it actually provides pseudo weak guidance, in addition to increasing variance (or reducing bias) in the representation space that is being conditioned for aerial view generation. The latter can be achieved with any random data augmentation. To understand this, we use a 45 degrees rotated image in place of the image corresponding to the Inverse Perspective Mapping in the second stage of finetuning the text embedding and the diffusion UNet. Our finding is that results with models that use Inverse Perspective Mapping are generally better in terms of the viewpoint being aerial, while preserving the fidelity with respect to the input image, than models that use the 45 degree rotated image. Thus, we conclude that rather than using any random data augmentation technique, it is beneficial to use IPM as it is capable of providing pseudo weak guidance to the model for aerial view synthesis. This finding also paves direction for future work on using carefully crafted homography priors for view synthesis corresponding to different camera angles and viewpoints. Fig. 22: Comparisons with (i) warping + scene extrapolation, (ii) Control-Net [89]. In the second column, we present results on warping + scene extrapolation. Specifically, we warp the image to its pseudo aerial-view using the IPM, and use Stable Diffusion to extrapolate. 
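The exact homography used to obtain the pseudo aerial view is not spelled out in this excerpt; the sketch below shows a generic OpenCV inverse-perspective-mapping warp, with the ground-plane trapezoid corners as illustrative tuning knobs rather than HawkI's actual parameters.

```python
import cv2
import numpy as np

def pseudo_aerial_warp(image_bgr: np.ndarray) -> np.ndarray:
    """Generic IPM-style warp: map a ground-plane trapezoid (narrow near the
    horizon, wide at the bottom) onto the full image rectangle, approximating a
    top-down re-projection of the scene."""
    h, w = image_bgr.shape[:2]
    # Illustrative trapezoid corners (top-left, top-right, bottom-right, bottom-left).
    src = np.float32([[0.40 * w, 0.55 * h], [0.60 * w, 0.55 * h], [w, h], [0, h]])
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, dst)   # 3x3 homography
    return cv2.warpPerspective(image_bgr, H, (w, h))
```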
For this baseline, we finetune the Diffusion UNet using the warped image and the text prompt corresponding to 'aerial view' + image description, and run inference using the finetuned diffusion model. Warping + scene extrapolation is highly ineffective, due to the poor quality of the pseudo aerial-view images. Our method, HawkI, is able to generate far higher quality images. In the third and fourth columns, we show results with ControlNet Img2Img (https://stablediffusionweb.com/ControlNet). We provide the input image and the text prompt corresponding to 'aerial view' + image description, and we show results corresponding to two runs of the model. Typically, ControlNet is highly successful in text-based image-to-image synthesis in cases dictating small-scale, pixel-level changes. However, it is unable to perform view synthesis, i.e., it is unable to generate high-fidelity aerial-view images for a given input image." }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "Fig. 7: We show detailed quantitative comparisons of HawkI against Zero123++, a state-of-the-art 3D-based novel-view synthesis method. Zero123++ uses 800k+ 3D objects in its finetuning of a stable diffusion model. In contrast, our method, HawkI, uses absolutely no 3D information at test-time finetuning of the stable diffusion model or at inference, and is able to achieve comparable or better performance on various metrics indicative of viewpoint or fidelity w.r.t. the input image. Moreover, since HawkI performs 3D-free test-time optimization + inference on a pre-trained stable diffusion model, it is easily applicable to any in-the-wild image without any additional generalization issues or constraints, beyond the pretrained stable diffusion model itself.
Fig. 1: HawkI generates aerial-view images from a text description and an exemplar input image. It builds on a text to 2D image stable diffusion model and does not require any additional 3D or multi-view information at fine-tuning or inference.
HawkI: Homography & Mutual Information Guidance for 3D-free Single Image to Aerial View
[ { "figure_caption": "Fig. 2 :2Fig. 2: Overview. HawkI generates aerial-view images, using a text description and a single image IS as supervisory signals. It builds on a pretrained text-to-image diffusion model, and does not use any 3D or multi-view information. It performs test-time finetuning to optimize the text embedding and the diffusion model to reconstruct the input image and its inverse perspective mapping in close vicinity. Such a mechanism enables the incorporation of image specific knowledge within the model, while retaining its imaginative capabilities (or variance). At inference, HawkI uses mutual information guidance to maximize the information between the probability distributions of the generated image and IS, to generate a high-fidelity aerial-view image.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig. 3: Compared to state-of-the-art text + exemplar image based methods, HawkI is able to generate images that are \"more aerial\", while being consistent with the input image. The top three images are from the HawkI-Syn dataset, the bottom three images are from the HawkI-Real dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: HawkI achieves the best viewpoint-fidelity trade-off amongst prior work on text + exemplar image based aerial-view synthesis, on various quantitative metrics indicate of text-alignment (for viewpoint and a broad description of the scene) and image alignment (for fidelity w.r.t. input image).", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: (Left figure.) Ablation experiments show that Inverse Perspective Mapping helps in the generation of images that are aerial, mutual information guidance helps in preserving the contents w.r.t. input image. (Right figure.) We compare with latest related work on novel view synthesis: Zero-1-to-3 [48] and Zero123++ [70]. Both methods use the pretrained text-to-2D-image stable diffusion model along with the 800k 3D objects dataset, Objaverse [15], for training. Our method uses just the pretrained text-to-2D-image stable diffusion model to generate better results for the task of aerial view synthesis, guided by text and a single input image.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Effect of I H : We use two additional quantitative metrics -CLIP-D and A-CLIP-D which analyze directional similarity. CLIP directional similarity [20] measures the consistency of the change between two images, I S and I T , in the CLIP space with the change between the two image captions (dictating the transformation from I S to I T ). We compute two versions of this score, CLIPD score and the A-CLIPD score. CLIPD score uses 'aerial view, ' + text as the target text, and 'A-CLIPD score' uses 'aerial view' as the target text. To study the effect of I H , we compare the A-CLIP, CLIPD, A-CLIPD scores for the full model vs model w/o I H and model w/o I H w/o G M I vs model w/o G M I . The scores in all cases where I H is not present are lower. For instance, on HawkI-Real, the CLIP-D scores for the full model and model w/o I H are 0.0564 and 0.0444 respectively. Similarly, the CLIP-D scores for model w/o I H w/o G M I vs model w/o G M I are 0.0550 and 0.0445 respectively. 
-Effect of G M I : To study the effect of I H , we compare the SSCD, DINO, CLIPD, A-CLIPD scores for the full model vs model w/o G M I and model w/o I H w/o G M I vs model w/o I H . The scores in all cases where G M I is not present are lower. For instance, on HawkI-Real, the SSCD scores for the full model and model w/o G M I are 0.3345 and 0.3204 respectively. Similarly, the CLIP-D scores for model w/o I H w/o G M I vs model w/o I H are 0.4013 and 0.4066 respectively.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig. 6: HawkI can be extended to generate other text-controlled views as well.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "4. 55Text controlled view synthesis: Other views HawkI can be extended to generate other text-controlled views such as side view, bottom view and back view. For the results shown in Figure 20, we modify the target text t T to indicate different viewpoints, and retain the other finetuning/ inference details. More results. Please refer to the supplementary material for more analysis including (i) more qualitative results and comparisons with text + exemplar image based methods, 3D based NVS methods, (ii) qualitative examples on comparison with other guidance metrics, (iii) detailed quantitative results for ablations, (iv) comparison of IPM with data augmentation, (v) more results on extension of our method to other text-controlled views, (vi) qualitative comparisons with warping + outpainting (scene extrapolation) and ControlNet variations, (vii) interesting qualitative results of our method on various types of images, etc.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 8 :8Fig. 8: Compared to state-of-the-art text + exemplar image based methods, HawkI is able to generate images that are \"more aerial\", while being consistent with the input image. The images are from the HawkI-Syn dataset.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 :9Fig. 9: Compared to state-of-the-art text + exemplar image based methods, HawkI is able to generate images that are \"more aerial\", while being consistent with the input image. The images are from the HawkI-Syn dataset.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 :10Fig. 10: Compared to state-of-the-art text + exemplar image based methods, HawkI is able to generate images that are \"more aerial\", while being consistent with the input image. The images are from the HawkI-Syn dataset.", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Fig. 11 :11Fig. 11: Compared to state-of-the-art text + exemplar image based methods, HawkI is able to generate images that are \"more aerial\", while being consistent with the input image. The images are from the HawkI-Syn dataset.", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 12 :12Fig. 12: Compared to state-of-the-art text + exemplar image based methods, HawkI is able to generate images that are \"more aerial\", while being consistent with the input image. 
The images are from the HawkI-Real dataset.", "figure_data": "", "figure_id": "fig_11", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Fig. 13 :13Fig. 13: Compared to state-of-the-art text + exemplar image based methods, HawkI is able to generate images that are \"more aerial\", while being consistent with the input image. The images are from the HawkI-Real dataset.", "figure_data": "", "figure_id": "fig_12", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Fig. 14 :14Fig. 14: Compared to state-of-the-art text + exemplar image based methods, HawkI is able to generate images that are \"more aerial\", while being consistent with the input image. The images are from the HawkI-Real dataset.", "figure_data": "", "figure_id": "fig_13", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Fig. 15 :15Fig. 15: Compared to state-of-the-art text + exemplar image based methods, HawkI is able to generate images that are \"more aerial\", while being consistent with the input image. The top two images are from the HawkI-Syn dataset, the bottom four images are from the HawkI-Real dataset.", "figure_data": "", "figure_id": "fig_14", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Fig. 16 :16Fig.16: We show a few examples for comparisons with other metrics for diffusion guidance such as L2 distance and Wasserstein distance. Our mutual information guidance method is better at preserving the fidelity w.r.t the input image, as also evidenced by higher SSCD scores. The SSCD score for Wasserstein guidance, L2 guidance, and mutual information guidance are 0.3181, 0.3224 and 0.3345 respectively, averaged over all images in the HawkI-Real dataset.", "figure_data": "", "figure_id": "fig_15", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Fig. 17 :17Fig. 17: More qualitative results for ablation experiments.", "figure_data": "", "figure_id": "fig_16", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Fig. 18 :18Fig.18: We compare with latest related work on novel view synthesis: Zero-1-to-3 and Zero123++ on images from HawkI-Syn. Both of these methods use the pretrained stable diffusion model and the 3D objects dataset, Objaverse with 800k+ 3D objects, for training. Our method uses just the pretrained stable diffusion model for the task of aerial view synthesis from a single image. 3D generation methods like Zero123++ are capable of generating different views with high fidelity by using pretrained stable diffusion models to finetune on large-scale 3D objects datasets. However, their generalization capabilities are limited. Our method is able to generate high quality aerial images for the given input images without any 3D data and using just the pretrained text-to-2D image stable diffusion model, however, there is scope for improving the fidelity of the generated aerial image w.r.t the input image. Moreover, our method controls the viewpoint via text and does not provide the provision to quantitatively control the camera angle. Both of these limitations of our method can be alleviated by exploring the combination of pretrained Zero123++ models (or other 3D models) and our method, as a part of future work.", "figure_data": "", "figure_id": "fig_17", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Fig. 19 :19Fig. 19: We compare with latest related work on novel view synthesis: Zero-1-to-3 and Zero123++ on images from HawkI-Real. 
Both of these methods use the pretrained stable diffusion model and the 3D objects dataset, Objaverse with 800k+ 3D objects, for training. Our method uses just the pretrained stable diffusion model for the task of aerial view synthesis from a single image.", "figure_data": "", "figure_id": "fig_18", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Fig. 20 :20Fig. 20: Additional results on extending HawkI to generate other text-controlled views.", "figure_data": "", "figure_id": "fig_19", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "Fig. 21 :21Fig.21: Can any data augmentation be used in place of Inverse Perspective Mapping (IPM)? One question that arises from the usage of Inverse Perspective Mapping is related to whether it actually provides pseudo weak guidance, in addition to increasing variance (or reducing bias) in the representation space that is being conditioned for aerial view generation. The latter can be achieved with any random data augmentation. To understand this, we use a 45 degrees rotated image in place of the image corresponding to the Inverse Perspective Mapping in the second stage of finetuning the text embedding and the diffusion UNet. Our finding is that results with models that use Inverse Perspective Mapping are generally better in terms of the viewpoint being aerial, while preserving the fidelity with respect to the input image, than models that use the 45 degree rotated image. Thus, we conclude that rather than using any random data augmentation technique, it is beneficial to use IPM as it is capable of providing pseudo weak guidance to the model for aerial view synthesis. This finding also paves direction for future work on using carefully crafted homography priors for view synthesis corresponding to different camera angles and viewpoints.", "figure_data": "", "figure_id": "fig_20", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" } ]
Divya Kothandaraman; Tianyi Zhou; Ming Lin; Dinesh Manocha
[ { "authors": "S Ammar Abbas; A Zisserman", "journal": "", "ref_id": "b0", "title": "A geometric approach to obtain a bird's eye view from an image", "year": "2019" }, { "authors": "S Ardeshir; A Borji", "journal": "", "ref_id": "b1", "title": "Integrating egocentric videos in top-view surveillance videos: Joint identification and temporal alignment", "year": "2018" }, { "authors": "A Bansal; H M Chu; A Schwarzschild; S Sengupta; M Goldblum; J Geiping; T Goldstein", "journal": "", "ref_id": "b2", "title": "Universal guidance for diffusion models", "year": "2023" }, { "authors": "M Barekatain; M Martí; H F Shih; S Murray; K Nakayama; Y Matsuo; H Prendinger", "journal": "", "ref_id": "b3", "title": "Okutama-action: An aerial view video dataset for concurrent human action detection", "year": "2017" }, { "authors": "Y Blau; T Michaeli", "journal": "", "ref_id": "b4", "title": "The perception-distortion tradeoff", "year": "2018" }, { "authors": "J Burgess; K C Wang; S Yeung", "journal": "", "ref_id": "b5", "title": "Viewpoint textual inversion: Unleashing novel view synthesis with pretrained 2d diffusion models", "year": "2023" }, { "authors": "S Cai; E R Chan; S Peng; M Shahbazi; A Obukhov; L Van Gool; G Wetzstein", "journal": "", "ref_id": "b6", "title": "Diffdreamer: Towards consistent unsupervised single-view scene extrapolation with conditional diffusion models", "year": "2023" }, { "authors": "M Caron; H Touvron; I Misra; H Jégou; J Mairal; P Bojanowski; A Joulin", "journal": "", "ref_id": "b7", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Q Chen; V Koltun", "journal": "", "ref_id": "b8", "title": "Photographic image synthesis with cascaded refinement networks", "year": "2017" }, { "authors": "X Chen; H Ma; J Wan; B Li; T Xia", "journal": "", "ref_id": "b9", "title": "Multi-view 3d object detection network for autonomous driving", "year": "2017" }, { "authors": "Y Chen; C Zhang; X Yang; Z Cai; G Yu; L Yang; G Lin", "journal": "", "ref_id": "b10", "title": "It3d: Improved text-to-3d generation with explicit view synthesis", "year": "2023" }, { "authors": "J Choi; G Sharma; M Chandraker; J B Huang", "journal": "", "ref_id": "b11", "title": "Unsupervised and semisupervised domain adaptation for action recognition from drones", "year": "2020" }, { "authors": "G Couairon; J Verbeek; H Schwenk; M Cord", "journal": "", "ref_id": "b12", "title": "Diffedit: Diffusion-based semantic image editing with mask guidance", "year": "2022" }, { "authors": "G D'amicantonio; E Bondarev", "journal": "", "ref_id": "b13", "title": "Automated camera calibration via homography estimation with gnns", "year": "2024" }, { "authors": "M Deitke; D Schwenk; J Salvador; L Weihs; O Michel; E Vanderbilt; L Schmidt; K Ehsani; A Kembhavi; A Farhadi", "journal": "", "ref_id": "b14", "title": "Objaverse: A universe of annotated 3d objects", "year": "2023" }, { "authors": "C Deng; C Jiang; C R Qi; X Yan; Y Zhou; L Guibas; D Anguelov", "journal": "", "ref_id": "b15", "title": "Nerdi: Single-view nerf synthesis with language-guided diffusion as general image priors", "year": "2023" }, { "authors": "P Dhariwal; A Nichol", "journal": "Advances in neural information processing systems", "ref_id": "b16", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "C Ding; D Tao", "journal": "Pattern Recognition", "ref_id": "b17", "title": "Pose-invariant face recognition with homography-based normalization", "year": "2017" }, { "authors": "H 
Ding; S Wu; H Tang; F Wu; G Gao; X Y Jing", "journal": "Springer", "ref_id": "b18", "title": "Cross-view image synthesis with deformable convolution and attention mechanism", "year": "2020" }, { "authors": "H Docs", "journal": "", "ref_id": "b19", "title": "Clip directional similarity", "year": "" }, { "authors": "R Emmaneel; M R Oswald; S De Haan; D Datcu", "journal": "Applied Sciences", "ref_id": "b20", "title": "Cross-view outdoor localization in augmented reality by fusing map and satellite data", "year": "2023" }, { "authors": "D Epstein; A Jabri; B Poole; A Efros; A Holynski", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Diffusion self-guidance for controllable image generation", "year": "2024" }, { "authors": "R Fridman; A Abecasis; Y Kasten; T Dekel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Scenescape: Text-driven consistent scene generation", "year": "2024" }, { "authors": "R Gal; Y Alaluf; Y Atzmon; O Patashnik; A H Bermano; G Chechik; D Cohen-Or", "journal": "", "ref_id": "b23", "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion", "year": "2022" }, { "authors": "J Gu; B Wu; L Fan; J Huang; S Cao; Z Xiang; X S Hua", "journal": "", "ref_id": "b24", "title": "Homography loss for monocular 3d object detection", "year": "2022" }, { "authors": "J Gu; A Trevithick; K E Lin; J M Susskind; C Theobalt; L Liu; R Ramamoorthi", "journal": "PMLR", "ref_id": "b25", "title": "Nerfdiff: Single-image view synthesis with nerf-guided distillation from 3d-aware diffusion", "year": "2023" }, { "authors": "L Han; Y Li; H Zhang; P Milanfar; D Metaxas; F Yang", "journal": "", "ref_id": "b26", "title": "Svdiff: Compact parameter space for diffusion fine-tuning", "year": "2023" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "Advances in neural information processing systems", "ref_id": "b27", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "J Ho; T Salimans", "journal": "", "ref_id": "b28", "title": "Classifier-free diffusion guidance", "year": "2022" }, { "authors": "E J Hu; Y Shen; P Wallis; Z Allen-Zhu; Y Li; S Wang; L Wang; W Chen", "journal": "", "ref_id": "b29", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "S Hu; M Feng; R M Nguyen; G H Lee", "journal": "", "ref_id": "b30", "title": "Cvm-net: Cross-view matching network for image-based ground-to-aerial geo-localization", "year": "2018" }, { "authors": "A Hyvärinen; E Oja", "journal": "Neural networks", "ref_id": "b31", "title": "Independent component analysis: algorithms and applications", "year": "2000" }, { "authors": "A Jain; M Tancik; P Abbeel", "journal": "", "ref_id": "b32", "title": "Putting nerf on a diet: Semantically consistent few-shot view synthesis", "year": "2021" }, { "authors": "B Kawar; S Zada; O Lang; O Tov; H Chang; T Dekel; I Mosseri; M Irani", "journal": "", "ref_id": "b33", "title": "Imagic: Text-based real image editing with diffusion models", "year": "2023" }, { "authors": "S Klein; M Staring; J P Pluim", "journal": "IEEE transactions on image processing", "ref_id": "b34", "title": "Evaluation of optimization methods for nonrigid medical image registration using mutual information and b-splines", "year": "2007" }, { "authors": "D Kothandaraman; T Guan; X Wang; S Hu; M Lin; D Manocha", "journal": "Springer", "ref_id": "b35", "title": "Far: Fourier aerial video recognition", "year": "2022" }, { 
"authors": "D Kothandaraman; M Lin; D Manocha", "journal": "IEEE", "ref_id": "b36", "title": "Diffar: Differentiable frequency-based disentanglement for aerial video action recognition", "year": "2023" }, { "authors": "D Kothandaraman; T Zhou; M Lin; D Manocha", "journal": "", "ref_id": "b37", "title": "Aerial diffusion: Text guided ground-to-aerial view translation from a single image using diffusion models", "year": "2023" }, { "authors": "N Kumari; B Zhang; R Zhang; E Shechtman; J Y Zhu", "journal": "", "ref_id": "b38", "title": "Multi-concept customization of text-to-image diffusion", "year": "2023" }, { "authors": "J Li; D Li; S Savarese; S Hoi", "journal": "", "ref_id": "b39", "title": "Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models", "year": "2023" }, { "authors": "T Li; J Liu; W Zhang; Y Ni; W Wang; Z Li", "journal": "", "ref_id": "b40", "title": "Uav-human: A large benchmark for human behavior understanding with unmanned aerial vehicles", "year": "2021" }, { "authors": "C H Lin; J Gao; L Tang; T Takikawa; X Zeng; X Huang; K Kreis; S Fidler; M Y Liu; T Y Lin", "journal": "", "ref_id": "b41", "title": "Magic3d: High-resolution text-to-3d content creation", "year": "2023" }, { "authors": "G Liu; H Latapie; O Kilic; A Lawrence", "journal": "", "ref_id": "b42", "title": "Parallel generative adversarial network for third-person to first-person image generation", "year": "2022" }, { "authors": "G Liu; H Tang; H Latapie; Y Yan", "journal": "IEEE", "ref_id": "b43", "title": "Exocentric to egocentric image generation via parallel generative adversarial network", "year": "2020" }, { "authors": "G Liu; H Tang; H M Latapie; J J Corso; Y Yan", "journal": "", "ref_id": "b44", "title": "Cross-view exocentric to egocentric video synthesis", "year": "2021" }, { "authors": "J Liu; X Li", "journal": "", "ref_id": "b45", "title": "Geometrized transformer for self-supervised homography estimation", "year": "2023" }, { "authors": "M Liu; C Xu; H Jin; L Chen; Z Xu; H Su", "journal": "", "ref_id": "b46", "title": "One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization", "year": "2023" }, { "authors": "R Liu; R Wu; B Van Hoorick; P Tokmakov; S Zakharov; C Vondrick", "journal": "", "ref_id": "b47", "title": "Zero-1-to-3: Zero-shot one image to 3d object", "year": "2023" }, { "authors": "Y Ma; T Wang; X Bai; H Yang; Y Hou; Y Wang; Y Qiao; R Yang; D Manocha; X Zhu", "journal": "", "ref_id": "b48", "title": "Vision-centric bev perception: A survey", "year": "2022" }, { "authors": "F Maes; A Collignon; D Vandermeulen; G Marchal; P Suetens", "journal": "IEEE transactions on Medical Imaging", "ref_id": "b49", "title": "Multimodality image registration by maximization of mutual information", "year": "1997" }, { "authors": "N G Nair; A Cherian; S Lohit; Y Wang; T Koike-Akino; V M Patel; T K Marks", "journal": "", "ref_id": "b50", "title": "Steered diffusion: A generalized framework for plug-and-play conditional image synthesis", "year": "2023" }, { "authors": "M Oquab; T Darcet; T Moutakanni; H Vo; M Szafraniec; V Khalidov; P Fernandez; D Haziza; F Massa; A El-Nouby", "journal": "", "ref_id": "b51", "title": "Dinov2: Learning robust visual features without supervision", "year": "2023" }, { "authors": "E Park; J Yang; E Yumer; D Ceylan; A C Berg", "journal": "", "ref_id": "b52", "title": "Transformation-grounded image generation network for novel 3d view synthesis", "year": "2017" }, { "authors": "E Pizzi; S D Roy; S N Ravindra; P 
Goyal; M Douze", "journal": "", "ref_id": "b53", "title": "A self-supervised descriptor for image copy detection", "year": "2022" }, { "authors": "D Podell; Z English; K Lacey; A Blattmann; T Dockhorn; J Müller; J Penna; R Rombach", "journal": "", "ref_id": "b54", "title": "Sdxl: Improving latent diffusion models for high-resolution image synthesis", "year": "2023" }, { "authors": "B Poole; A Jain; J T Barron; B Mildenhall", "journal": "", "ref_id": "b55", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2022" }, { "authors": "G Qian; J Mai; A Hamdi; J Ren; A Siarohin; B Li; H Y Lee; I Skorokhodov; P Wonka; S Tulyakov", "journal": "", "ref_id": "b56", "title": "Magic123: One image to high-quality 3d object generation using both 2d and 3d diffusion priors", "year": "2023" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "PMLR", "ref_id": "b57", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "A Raj; S Kaza; B Poole; M Niemeyer; N Ruiz; B Mildenhall; S Zada; K Aberman; M Rubinstein; J Barron", "journal": "", "ref_id": "b58", "title": "Dreambooth3d: Subject-driven text-to-3d generation", "year": "2023" }, { "authors": "K Regmi; A Borji", "journal": "Computer Vision and Image Understanding", "ref_id": "b59", "title": "Cross-view image synthesis using geometry-guided conditional gans", "year": "2019" }, { "authors": "B Ren; H Tang; N Sebe", "journal": "", "ref_id": "b60", "title": "Cascaded cross mlp-mixer gans for cross-view image translation", "year": "2021" }, { "authors": "B Ren; H Tang; Y Wang; X Li; W Wang; N Sebe", "journal": "", "ref_id": "b61", "title": "Pi-trans: Parallelconvmlp and implicit-transformation based gan for cross-view image translation", "year": "2022" }, { "authors": "C Rockwell; D F Fouhey; J Johnson", "journal": "", "ref_id": "b62", "title": "Pixelsynth: Generating a 3d-consistent experience from a single image", "year": "2021" }, { "authors": "R Rombach; A Blattmann; D Lorenz; P Esser; B Ommer", "journal": "", "ref_id": "b63", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "N Ruiz; Y Li; V Jampani; Y Pritch; M Rubinstein; K Aberman", "journal": "", "ref_id": "b64", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "N Ruiz; Y Li; V Jampani; W Wei; T Hou; Y Pritch; N Wadhwa; M Rubinstein; K Aberman", "journal": "", "ref_id": "b65", "title": "Hyperdreambooth: Hypernetworks for fast personalization of text-to-image models", "year": "2023" }, { "authors": "K Sargent; Z Li; T Shah; C Herrmann; H X Yu; Y Zhang; E R Chan; D Lagun; L Fei-Fei; D Sun", "journal": "", "ref_id": "b66", "title": "Zeronvs: Zero-shot 360-degree view synthesis from a single real image", "year": "2023" }, { "authors": "C Schuhmann; R Beaumont; R Vencu; C Gordon; R Wightman; M Cherti; T Coombes; A Katta; C Mullis; M Wortsman", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b67", "title": "Laion-5b: An open largescale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Y Shen; M Luo; Y Chen; X Shao; Z Wang; X Hao; Y L Hou", "journal": "IEEE Access", "ref_id": "b68", "title": "Cross-view image translation based on local and global information guidance", "year": "2021" }, { "authors": "R Shi; H Chen; Z Zhang; M Liu; C Xu; X Wei; L Chen; C Zeng; H Su", 
"journal": "", "ref_id": "b69", "title": "Zero123++: a single image to consistent multi-view diffusion base model", "year": "2023" }, { "authors": "Y Shi; P Wang; J Ye; M Long; K Li; X Yang", "journal": "", "ref_id": "b70", "title": "Mvdream: Multi-view diffusion for 3d generation", "year": "2023" }, { "authors": "Y Shi; D Campbell; X Yu; H Li", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b71", "title": "Geometry-guided street-view panorama synthesis from satellite imagery", "year": "2022" }, { "authors": "Y Shi; J Wang; H Cao; B Tang; X Qi; T Yang; Y Huang; S Liu; L Zhang; H Y Shum", "journal": "", "ref_id": "b72", "title": "Toss: High-quality text-guided novel view synthesis from a single image", "year": "2023" }, { "authors": "R Szeliski", "journal": "Springer Nature", "ref_id": "b73", "title": "Computer vision: algorithms and applications", "year": "2022" }, { "authors": "M Tancik; V Casser; X Yan; S Pradhan; B Mildenhall; P P Srinivasan; J T Barron; H Kretzschmar", "journal": "", "ref_id": "b74", "title": "Block-nerf: Scalable large scene neural view synthesis", "year": "2022" }, { "authors": "H Tang; D Xu; N Sebe; Y Wang; J J Corso; Y Yan", "journal": "", "ref_id": "b75", "title": "Multi-channel attention selection gan with cascaded semantic guidance for cross-view image translation", "year": "2019" }, { "authors": "J A Thomas", "journal": "", "ref_id": "b76", "title": "Elements of information theory", "year": "1991" }, { "authors": "A Toker; Q Zhou; M Maximov; L Leal-Taixé", "journal": "", "ref_id": "b77", "title": "Coming down to earth: Satelliteto-street view synthesis for geo-localization", "year": "2021" }, { "authors": "R Tucker; N Snavely", "journal": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition", "ref_id": "b78", "title": "Single-view view synthesis with multiplane images", "year": "2020" }, { "authors": "G Van Tulder; Y Tong; E Marchiori", "journal": "Springer", "ref_id": "b79", "title": "Multi-view analysis of unregistered medical images using cross-view transformers", "year": "2021-10-01" }, { "authors": "P Viola; Iii Wells; W M ", "journal": "International journal of computer vision", "ref_id": "b80", "title": "Alignment by maximization of mutual information", "year": "1997" }, { "authors": "D Wang; X Cui; X Chen; Z Zou; T Shi; S Salcudean; Z J Wang; R Ward", "journal": "", "ref_id": "b81", "title": "Multi-view 3d reconstruction with transformers", "year": "2021" }, { "authors": "O Wiles; G Gkioxari; R Szeliski; J Johnson", "journal": "", "ref_id": "b82", "title": "Synsin: End-to-end view synthesis from a single image", "year": "2020" }, { "authors": "S Wu; H Tang; X Y Jing; H Zhao; J Qian; N Sebe; Y Yan", "journal": "IEEE Transactions on Multimedia", "ref_id": "b83", "title": "Cross-view panorama image synthesis", "year": "2022" }, { "authors": "R Xian; X Wang; D Kothandaraman; D Manocha", "journal": "", "ref_id": "b84", "title": "Pmi sampler: Patch similarity guided frame selection for aerial action recognition", "year": "2023" }, { "authors": "J Xu; X Wang; W Cheng; Y P Cao; Y Shan; X Qie; S Gao", "journal": "", "ref_id": "b85", "title": "Dream3d: Zero-shot text-to-3d synthesis using 3d shape prior and text-to-image diffusion models", "year": "2023" }, { "authors": "B Yang; S Gu; B Zhang; T Zhang; X Chen; X Sun; D Chen; F Wen", "journal": "", "ref_id": "b86", "title": "Paint by example: Exemplar-based image editing with diffusion models", "year": "2023" }, { "authors": "J Zhang; C Wang; S 
Liu; L Jia; N Ye; J Wang; J Zhou; J Sun", "journal": "Springer", "ref_id": "b87", "title": "Contentaware unsupervised deep homography estimation", "year": "2020" }, { "authors": "L Zhang; A Rao; M Agrawala", "journal": "", "ref_id": "b88", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Y Zhao; Y Zhang; Z Gong; H Zhu", "journal": "", "ref_id": "b89", "title": "Scene representation in bird's-eye view from surrounding cameras with transformers", "year": "2022" }, { "authors": "C Zheng; A Vedaldi", "journal": "", "ref_id": "b90", "title": "Free3d: Consistent novel view synthesis without 3d representation", "year": "2023" }, { "authors": "Z Zhou; S Tulsiani", "journal": "", "ref_id": "b91", "title": "Sparsefusion: Distilling view-conditioned diffusion for 3d reconstruction", "year": "2023" } ]
[ { "formula_coordinates": [ 6, 243.87, 160.68, 232.48, 30.47 ], "formula_id": "formula_0", "formula_text": "min eopt 0 t=T L(f (x t , t, e opt ; θ), I S ), (1" }, { "formula_coordinates": [ 6, 476.35, 171.03, 4.24, 8.8 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 6, 240.74, 286.55, 239.85, 30.48 ], "formula_id": "formula_2", "formula_text": "min θ LoRA 0 t=T L(f (x t , t, e opt ; θ), I S ).(2)" }, { "formula_coordinates": [ 7, 134.77, 320.24, 345.83, 79.33 ], "formula_id": "formula_3", "formula_text": "I(X , Y) = H(X ) + H(Y) -H(X , Y) where H(X ), H(Y) are the entropies of p(x), p(y) and H(X , Y) is the joint entropy. Entropy of a random variable X is a measure of its uncertainity, H(X ) = -x∈X p X (x)log(p X (x)); H(X , Y) = - (x,y)∈X ,Y p XY (x, y)log(p XY (x, y))." }, { "formula_coordinates": [ 7, 204.82, 420.38, 205.73, 27.27 ], "formula_id": "formula_4", "formula_text": "I(X , Y) = - (x,y)∈X ,Y p XY (x, y) log(p XY (x, y)) p X (x)p Y (y) ." } ]
2023-11-27
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b3", "b4", "b4", "b7", "b8", "b9", "b10", "b11" ], "table_ref": [], "text": "Click-Through Rate (CTR) prediction, which aims to predict the probability of a user clicking on an item, is an important task for online advertising and recommender systems. Various approaches have been proposed for effective CTR prediction [1]- [4]. These methods mainly focus on accurately modeling the complicated feature interactions to capture the underlying collaborative patterns. Most of the existing approaches concentrate on single-domain prediction, where each model is solely trained to serve the CTR prediction of a single scenario. However, in large-scale corporate enterprises, numerous business domains frequently necessitate CTR prediction to augment user contentment and enhance commercial revenue. For instance, in the case of e-commerce enterprises, the advertising scenarios encompass a wide array of options and manifest notable disparities, encompassing domains such as motion pictures, literary works, electronic devices, and culinary delights, among others. Merely mixing all the data and training a single shared CTR model cannot yield satisfactory results across all domains owing to the substantial distribution variance among diverse scenarios (domain seesaw phenomenon [5]). The domain-specific modeling paradigm severely restricts the efficient utilization of extensive user behavior data in business scenarios. Some recent studies [5]- [8] propose conducting multidomain CTR predictions. The core idea of these approaches is to introduce a shared neural network for learning the common knowledge across diverse domains, while simultaneously integrating multiple domain-specific sub-networks to capture the distinct characteristics of each domain. Although somewhat efficacious, the majority of these methods rely on modeling the ID features (e.g., item id) to develop the CTR prediction. A major obstacle of this paradigm is the limited transferability of the learned model to new recommendation scenarios, even when the underlying data structures remain unchanged.\nInspired by recent advancements in natural language recommendations [9], our objective is to devise a novel approach to learn universally applicable collaborative patterns by surpassing the constraints of ID features. Our fundamental concept entails transforming raw features, such as the location of an item, into textual data and employing Large Language Models (LLMs) to acquire transferable representations. While previous attempts have demonstrated the promise of this approach for certain recommendation tasks [10], [11], there remain several critical challenges to address in the context of multi-domain CTR predictions. First, the textual semantic space is not directly conducive to the task of CTR prediction [12]. In comparison to traditional feature interaction based methods, LLMs encounter difficulties in capturing collaborative patterns, thereby resulting in suboptimal model performance. Second, due to substantial distribution variance across different domains, effectively leveraging the collaborative knowledge from the source domain to enhance the target domain proves to be a formidable task. 
For instance, the interaction between features user and title proves most valuable in movie recommendations, yet its efficacy diminishes in the context of beauty recommendations.\nTo address these issues, in this paper, we propose the Universal Feature Interaction Network (UFIN) for multidomain CTR prediction. UFIN exploits textual data to acquire knowledge of universal feature interactions that can be effectively transferred across diverse domains. To learn universal feature representations, we devise a prompt to convert the raw features into text and subsequently generate a set of universal features to capture the general attributes of interactions. Notably, we regard the text and feature representations as two modalities and devise an encoder-decoder network founded on a Large Language Model (LLM) to enforce the conversion of data from the text modality to the feature modality. This scheme can be denoted as \"raw features ⇒ text ⇒ universal features\". For learning universal feature interactions, we develop an MoEenhanced adaptive feature interaction model, which can learn the generalized collaborative patterns from diverse domains. To further enhance the acquisition of collaborative knowledge, we propose a multi-domain knowledge distillation framework to supervise the training of our approach. Through these aforementioned mechanisms, UFIN can effectively bridge the semantic gap to learn common knowledge across various recommendation domains, surpassing the limitations of IDbased models. The paper's main contributions are summarized as follows:\n• We propose a novel Universal Feature Interaction Network (UFIN) for CTR prediction, intelligently acquiring the collaborative patterns across diverse domains.\n• To the best of our knowledge, UFIN is the first deep CTR model to harness Large Language Models (LLMs) to adaptively learn the feature interactions for recommendations, thereby obtaining universal feature representations from textual data. This empowers UFIN to proficiently bridge the semantic gap across various domains.\n• We propose a multi-domain knowledge distillation framework for enhancing the feature interaction learning. This motivates UFIN to proficiently acquire the collaborative knowledge from diverse domains, thereby improving the model performance.\n• We conduct extensive experiments on eight widely used datasets. UFIN outperforms a number of competitive baselines in both multi-domain and cross-platform settings, demonstrating the effectiveness of our approach." }, { "figure_ref": [], "heading": "II. METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "In this section, we present a universal feature interaction network for multi-domain CTR predictions, named UFIN. Unlike previous works, our goal is to learn the universal feature interactions that are able to effectively transferred to new recommendation domains." }, { "figure_ref": [], "heading": "A. Problem Formulation", "publication_ref": [], "table_ref": [], "text": "The CTR prediction task is to estimate the probability of a user clicking on an item. An instance of the CTR task can be denoted as a tuple (x, y), where the vector x includes the user, item and context features, and the label y ∈ {0, 1} represents the whether the item is clicked or not. The feature vector X contains multiple fields {x 1 , x 2 , ..., x m }, where m is the number of feature fields, and x j is a one-hot identifier (ID) vector of the j-th feature field. 
In this way, the CTR dataset of a single domain D can be formulated as a set of instances {(X_i, y_i)}_{i=1}^{N}. Real-world recommender applications often have to deal with multiple business domains. Specifically, multi-domain CTR models need to make predictions for L domains, {D_1, D_2, ..., D_L}.\nMost multi-domain CTR prediction approaches adopt the embedding-based feature interaction learning paradigm. Since the input feature is usually sparse and high-dimensional, most CTR models employ an embedding layer to map these one-hot features into low-dimensional vectors, i.e., for the feature vector x_j, the corresponding embedding e_j is obtained by an embedding look-up operation. As such, the features can be represented as a list of embeddings E = {e_1, e_2, ..., e_m}.\nBased on E, most approaches conduct feature interactions F to capture the collaborative patterns between users and items. The main goal of the CTR prediction is to learn the prediction function ŷ = F(E)." }, { "figure_ref": [], "heading": "B. Overview of UFIN", "publication_ref": [], "table_ref": [], "text": "As shown in Figure 1, UFIN is designed with an encoder-decoder architecture followed by an interaction network. The core idea of UFIN is to consider the text and features as two modalities for learning universal feature representations and universal feature interactions. Based on this idea, we formulate the general feature form as natural language text and employ an MoE-enhanced large language model (LLM) as the encoder to transform the text modality into a latent space. Afterwards, the decoder performs a mapping to the feature modality, generating the universal features. As such, UFIN can capture the generalized attributes of interactions, bridging the semantic gap between different domains. For learning universal feature interactions, we derive adaptive feature interactions based on the universal features to acquire transferable collaborative knowledge. To learn the common collaborative patterns of different domains, we incorporate them into an MoE model to enhance the knowledge sharing." }, { "figure_ref": [], "heading": "Universal Feature Interaction Learning", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Universal Feature Representation Learning", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "𝓐 𝟏 :", "publication_ref": [], "table_ref": [], "text": "There is a user, whose gender is male, and occupation is student. The product is a jacket and its name is \"HOUONE\". The system time is 09:21. Fig. 1: The overall framework of UFIN is designed as an encoder-decoder architecture, followed by a feature interaction network. The encoder-decoder accomplishes the transformation from text to feature modality, yielding a collection of universal features. The feature interaction network consists of multiple adaptive learning experts, and each expert automatically learns the underlying true interaction orders within each given domain. Further, a semantic gating router is incorporated to adaptively integrate all experts for learning the universal feature interactions across diverse domains.
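As a reference point for the embedding-look-up paradigm of Section II-A (E = {e_1, ..., e_m}, ŷ = F(E)) that UFIN moves beyond, a minimal sketch is given below; the per-field vocabulary sizes and the MLP standing in for F are illustrative placeholders, not part of UFIN itself.

```python
import torch
import torch.nn as nn

class IDEmbeddingCTR(nn.Module):
    """Conventional ID-embedding CTR baseline: one embedding table per categorical
    field, followed by an interaction function F over the embedding list E."""
    def __init__(self, field_sizes, d: int = 16):
        super().__init__()
        self.tables = nn.ModuleList(nn.Embedding(v, d) for v in field_sizes)
        # A plain MLP stands in for F here; real models use explicit interactions.
        self.f = nn.Sequential(nn.Linear(len(field_sizes) * d, 64),
                               nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: [B, m] feature ids
        e = [emb(x[:, j]) for j, emb in enumerate(self.tables)]  # E = {e_1, ..., e_m}
        return torch.sigmoid(self.f(torch.cat(e, dim=-1))).squeeze(-1)  # y_hat = F(E)
```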
}, { "figure_ref": [], "heading": "Input Text", "publication_ref": [], "table_ref": [], "text": "Furthermore, we propose a multi-domain knowledge distillation framework to enhance feature interaction learning.\nIn what follows, we introduce the details of universal feature representation learning (Section II-C) and universal feature interaction learning (Section II-D)." }, { "figure_ref": [], "heading": "C. Universal Feature Representation Learning", "publication_ref": [ "b11", "b12", "b11", "b13", "b14", "b15", "b9", "b10", "b16", "b9", "b9", "b9", "b17", "b18", "b19", "b20" ], "table_ref": [], "text": "To deliver transferable recommendations, we first map the features from various domains to a common semantic space. Prior studies rely on the embedding look-up operation to acquire the representations for the given feature IDs. These approaches have two significant limitations. Firstly, it relinquishes the innate semantic information of features, thereby greatly impairing the model's transferability. Secondly, these models cannot be transferred to alternative platforms that possess distinct feature fields (for instance, transitioning from Amazon to MovieLens).\nTo learn transferable feature representations, we adopt natural language text as the universal data form, which is derived from raw features through a prompt. As increasingly more evidence shows [12], text and feature representations can be regarded as two modalities that can be mutually transformed. Based on this idea, we employ an MoE-enhanced LLM as the encoder to enforce the text modality into a latent space and develop a decoder that performs a mapping to the feature modality, thereby generating the universal feature representations (universal features). This approach can be expressed as \"raw features ⇒ text ⇒ universal features\", providing a means to bridge the semantic gap across different domains.\n1) Feature Textualization: The first step in learning universal feature representations is to transform the raw features into textual data, as described by a prompt. As previous work shows [13], an effective prompt should consist of personalized fields for different users and items. For this purpose, we design a prompt, that includes the user profile, item description and contextual information to conduct such transformation. As shown in Figure 1, given the user features (i.e., male, student), item features (i.e., jacket, HOUONE) and context features (i.e., 09:21), the transformed textual data is shown as:\nThere is a user, whose gender is male, and occupation is student. The product is a jacket and its name is \"HOUONE\". The system time is 09:21. In our prompt, different types (i.e., user-side, item-side and context-side) of raw features (i.e., {x 1 , x 2 , ..., x m }) are sequentially summarized into a natural language sentence, where the descriptions of different sides are separated by the period \".\", and the features are separated by the comma \",\". As such, a CTR instance can be denoted as {w 1 , w 2 , ..., w n }, where n is the number of words in a sentence. In this way, we can obtain a universal data form (i.e., natural language text) to represent the CTR instances across various domains or platforms. Note that we can also design other types of prompts to conduct such transformation. 
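A minimal helper illustrating this feature textualization step; the wording only approximates the template above, and the field names are illustrative rather than UFIN's exact implementation.

```python
def textualize(user_feats: dict, item_feats: dict, context_feats: dict) -> str:
    # User-side, item-side and context-side descriptions are separated by periods;
    # features within each side are separated by commas, as described above.
    user = ", and ".join(f"{k} is {v}" for k, v in user_feats.items())
    item = ", and ".join(f"its {k} is {v}" for k, v in item_feats.items())
    ctx = ", and the ".join(f"{k} is {v}" for k, v in context_feats.items())
    return f"There is a user, whose {user}. The product is an item and {item}. The {ctx}."

print(textualize({"gender": "male", "occupation": "student"},
                 {"category": "a jacket", "name": '"HOUONE"'},
                 {"system time": "09:21"}))
# There is a user, whose gender is male, and occupation is student. The product is
# an item and its category is a jacket, and its name is "HOUONE". The system time is 09:21.
```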
Unlike prior work [12], the format of the prompt does not have an obvious effect in our framework, while the semantic information of the prompt, specifically the features it contains, has a large influence on the performance, which will be explored in the Section III-C7.\n2) Textual Feature Encoding: Given the universal data form in the text modality, our goal is to transform it into feature modality to obtain the universal features. Most multimodal works [14] (e.g., text-to-image) employ an encoder-decoder architecture to align the representations of different modalities. Following them, we first employ an MoE-enhanced LLM as the encoder to project the textual data into a common latent space.\nLLM based Textual Encoding. Motivated by the recent advances in large language models (LLMs [15]), which show excellent language modeling capacity, we adopt FLAN-T5 [16] to learn the latent representations of the text. Given the words of textual data {w 1 , w 2 , ..., w n }, we feed them into the LLM, and we have:\n{v 1 , v 2 , ..., v n } = LLM({w 1 , w 2 , ..., w n }),(1)\ns = LayerNorm( n j=1 v j ),(2)\nwhere {v 1 , ..., v n } ∈ R d V ×n is the last hidden state of LLM, d V is the state dimension and s is the latent representation. Unlike existing studies [10], [11], we employ sum pooling to preserve token-level semantics of features and apply Lay-erNorm [17] to adjust the semantic distributions. Note that the LLM is solely for text encoding, which is not tuned during training. Therefore, we can cache the last hidden state {v 1 , ..., v n } to ensure the efficiency of our approach.\nMulti-Domain Semantic Fusion. In the above, we obtain semantic representations from LLMs. However, recent study [10] found that the original semantic space of PLMs is not suitable for the recommendation task. To address this issue, a commonly used approach is to employ a neural network to learn the appropriate semantic space for enhancing the representations. Since different domains usually correspond to varying semantic contexts, merely learning a shared semantic space for all domains will suffer from the domain seesaw phenomenon that degrades the model capacity. Inspired by the recent study [10], our idea is to learn an independent semantic space for each domain and adaptively combine them based on the semantic context.\nTo achieve this purpose, we employ a single-layer perception (SLP) to learn a suitable semantic space for each domain, and incorporate them into an MoE model to enhance the domain fusion. Specifically, given L domains, we introduce L experts, each learning in a different subspaces, and combine them through a gating router:\nz = L j=1 σ(W j s + b j ) • g j ,(3)\ng = Softmax(W g s),(4)\nwhere W j ∈ R d V ×d V and b j ∈ R d V are the weight and bias of the j-th expert, σ is the activation function, g j is the j-th combination weight of the gating router calculated by Eq. ( 4), W g ∈ R L×d V is the router weight, and z is the enhanced representations.\nFusing Anonymous Features. As there exists numerous anonymous features (such as the identifiers of the user/item) that lack semantic information, we refrain from including them in our prompt template. Nevertheless, these features may play a significant role, particularly in situations where semantic features are limited or nonexistent. For instance, in the case of the Amazon dataset, only user id features are accessible on the user side. To generate more comprehensive predictions, we expand our methodology to encompass these features. 
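Before turning to anonymous features, a minimal PyTorch sketch of the semantic-fusion MoE in Eq. (3)-(4); the activation σ is not specified above, so ReLU is assumed here.

```python
import torch
import torch.nn as nn

class SemanticFusionMoE(nn.Module):
    """One single-layer expert per domain, combined by a softmax gate that is
    conditioned on the pooled LLM representation s (Eq. 3-4)."""
    def __init__(self, d_v: int, num_domains: int):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(d_v, d_v) for _ in range(num_domains))
        self.gate = nn.Linear(d_v, num_domains, bias=False)   # W_g
        self.act = nn.ReLU()                                  # sigma (assumed)

    def forward(self, s: torch.Tensor) -> torch.Tensor:       # s: [B, d_v]
        g = torch.softmax(self.gate(s), dim=-1)               # Eq. (4)
        expert_out = torch.stack([self.act(e(s)) for e in self.experts], dim=1)
        return (expert_out * g.unsqueeze(-1)).sum(dim=1)      # Eq. (3): fused z
```

Anonymous ID features, whose handling is described next, are then added on top of the fused representation z.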
There are many ways to achieve this purpose, and we follow the existing ID-Text fusion work [10], which employs a distinct embedding for each anonymous feature and merges them with the textual representations:\nz = z + c k=1 U k h k ,(5)\nwhere {h 1 , ..., h c } ∈ R d A ×c represents the anonymous embeddings, c is the number of anonymous fields, and\nU k ∈ R d V ×d A\nis the projection matrix. Note that the anonymous features are only auxiliary representations and are not used unless specified. For efficiency considerations, we do not employ other complex mechanisms (e.g., self-attention [18]), which will be studied in our future work.\n3) Universal Feature Generation: With the above textual encoding procedure, we can obtain the universal representation of the instances. Previous works [19] directly feed the textual representations into a feedforward network (e.g., MLP) to make predictions, resulting in suboptimal performance. As the recent study shows [20], it is challenging for an MLP to capture effective collaborative patterns compared with the feature-wise interactions (e.g., FM [21]). Our proposed solution, by contrast, entails harnessing textual data to generate universal features that transcend various domains, thereby capturing the collective patterns that are commonly observed. To illustrate, we anticipate generating the universal attribute \"amusing\" from the textual expressions \"This movie is hilarious\" and \"This book is whimsical\" within the realms of movie and book recommendations. Based on this idea, we develop a decoder that conducts a transformation to map the latent representation of textual data to the feature modality, shown as:\nẽj = LayerNorm(V j z),(6)\nwhere j ∈ {1, 2, ..., n u } and n u is the field number of universal features. Here we incorporate a set of projection matrices {V j ∈ R d×d V } nu j=1 to generate a set of universal features Ẽ = {ẽ 1 , ẽ2 , ..., ẽnu }, each measuring different aspects from different representation subspaces. As such, we use the generated universal representations for subsequent feature interaction modeling." }, { "figure_ref": [], "heading": "D. Universal Feature Interaction Learning", "publication_ref": [ "b20", "b21", "b21", "b20", "b22", "b23", "b21", "b21", "b21", "b21", "b5", "b5", "b24", "b11", "b21", "b25", "b26" ], "table_ref": [], "text": "The core of our proposed UFIN is to learn the universal feature interactions for intelligently acquiring the generalized collaborative knowledge across diverse domains. To this end, our approach is to model the adaptive feature interactions based on the generated universal features to capture the common collaborative patterns. Furthermore, to promote feature interaction learning, we introduce a framework for distilling knowledge to guide the model's learning process and subsequently enhance its performance.\n1) Adaptive Feature Interaction Learning at a Single Domain: For learning universal feature interactions, an important issue is how to accurately model the interactions orders/forms within each domain, as different domains typically correspond to varying feature relationships. Traditional methods manually design a maximal order and further remove the useless interactions from them, e.g., FM [21] empirically enumerates all second-order feature interaction terms. These approaches not only results in inaccurate modeling of the underlying true feature interactions in real-world scenarios but also limits model transferability. 
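For reference, the universal features Ẽ that serve as input to the interaction module discussed next are produced by the decoder of Eqs. (5)-(6); a minimal sketch under assumed dimensions (n_u = 7 and d = 16 as in the experiments) follows, with the anonymous-feature path kept optional. The class and attribute names are illustrative.

```python
import torch
import torch.nn as nn


class UniversalFeatureDecoder(nn.Module):
    """Map the fused semantic vector z to n_u universal feature fields (Eqs. 5-6)."""

    def __init__(self, d_v: int, d: int, n_u: int, anon_vocab_sizes=(), d_a: int = 16):
        super().__init__()
        # Optional anonymous-feature path of Eq. (5): an embedding table and a projection per field.
        self.anon_embs = nn.ModuleList(nn.Embedding(v, d_a) for v in anon_vocab_sizes)
        self.anon_projs = nn.ModuleList(nn.Linear(d_a, d_v, bias=False) for _ in anon_vocab_sizes)
        # One projection matrix V_j per universal feature field (Eq. 6).
        self.field_projs = nn.ModuleList(nn.Linear(d_v, d, bias=False) for _ in range(n_u))
        self.norm = nn.LayerNorm(d)

    def forward(self, z: torch.Tensor, anon_ids=None) -> torch.Tensor:
        if anon_ids is not None:  # anon_ids: (batch, num_anonymous_fields) integer IDs
            for k, (emb, proj) in enumerate(zip(self.anon_embs, self.anon_projs)):
                z = z + proj(emb(anon_ids[:, k]))                                # Eq. (5)
        # Stack the generated fields into a (batch, n_u, d) tensor for interaction learning.
        return torch.stack([self.norm(p(z)) for p in self.field_projs], dim=1)   # Eq. (6)


fields = UniversalFeatureDecoder(d_v=768, d=16, n_u=7)(torch.randn(4, 768))  # -> (4, 7, 16)
```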
As a promising approach, recent study EulerNet [22] proposes to model the adaptive feature interactions, i.e., the interaction forms are automatically learned from data, allowing for arbitrary orders and a flexible number of terms. Given the input features Ẽ = {ẽ 1 , ..., ẽnu } (See Eq. ( 6)), the adaptive feature interaction learning function of EulerNet [22] is formulated as:\nF( Ẽ; A) = w ⊤ α∈A ẽα1 1 ⊙ ẽα2 2 ⊙ ... ⊙ ẽαn u nu ,(7)\nwhere α = [α 1 , α 2 , ..., α nu ] denotes the learnable order parameter of each feature, A is the parameter set of all the learnable orders, and w is a transition vector for generating a scalar result. For a given domain, the underlying true feature interaction forms are automatically learned from the parameter A, e.g., the interactions of FMs [21] can be learned by A = {α| m j=1 α j = 2, ∀α j ∈ {0, 1}}. Previous works [23], [24] face challenges in achieving this, because when the embedding e j contains negative values, the order α j must be set to an integer value to avoid invalid operation (e.g., (-1) 0.5 ). As a solution, EulerNet [22] leverages Euler's formula to learn the feature interactions in a complex vector space that enables the efficient learning of arbitrary-order feature interactions, without additional restrictions (e.g., non-negative embedding or integer order).\nIn our case, we utilize EulerNet [22] to adaptively learn the underlying true feature interaction orders/forms within each given domain. We mainly transfer the knowledge of interaction orders (i.e., A) to enhance predictions in a target domain.\n2) Multi-Domain Feature Interaction Learning: In the above, we have discussed the feature interaction learning within a single domain. In this section, we aim to generalize the methodology for adapting the modeling of feature interactions across multiple domains. Intuitively, we can train distinct models independently to adapt to the distribution of each domain. However, this approach fails to grasp the interconnectedness between diverse domains, leading to challenges in the cold-start scenario. Our objective is to acquire knowledge of the universal feature interactions that encompass the shared collaborative patterns between different domains. Our proposed solution entails the introduction of multiple sets of interaction orders (i.e., A), each of which learns the underlying true feature interaction orders for a single domain, and shares some of them across different domains to acquire knowledge of the common collaborative patterns.\nIn practice, we employ an MoE model to implement our idea. Given L domains, we introduce L experts, each expert j is implemented by EulerNet [22] with the learnable order parameters A j . All experts share the input embedding Ẽ = {ẽ 1 , ..., ẽnu } (See Eq. ( 6)), combined by a gating router based on the semantic representation z (See Eq. ( 5)):\nζ = L j=1 F( Ẽ; A j ) • gj ,(8)\ng = TopK Softmax( Wg z) ,(9)\nwhere TopK(•) retains only the top-K elements of the input tensor and sets all other elements as zero, A j (learnable parameter) is the interaction order of j-th expert, F(•) is the feature interaction function (See Eq. ( 7)) of EulerNet [22], gj is the j-th combination weight, Wg ∈ R L×d V is the router weight, and ζ is the output logits. For an instance in a given domain u, we use its corresponding semantic representations z to select K experts with the order sets S u = {A u j } K j=1 for collaboratively learning the feature interactions. 
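The gated mixture of Eqs. (8)-(9) can be sketched as follows; each expert is treated here as a black box that maps the universal feature fields to a scalar logit (a stand-in for an EulerNet instance with its own order parameters A_j), and the toy experts used in the usage lines are purely illustrative.

```python
import torch
import torch.nn as nn


class InteractionMoE(nn.Module):
    """Top-K gated mixture of feature interaction experts (Eqs. 8-9)."""

    def __init__(self, experts: nn.ModuleList, d_v: int, top_k: int):
        super().__init__()
        self.experts = experts            # each expert: (batch, n_u, d) -> (batch, 1) logit
        self.router = nn.Linear(d_v, len(experts), bias=False)
        self.top_k = top_k

    def forward(self, fields: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        gate = torch.softmax(self.router(z), dim=-1)                   # (batch, L)
        _, topk_idx = gate.topk(self.top_k, dim=-1)
        mask = torch.zeros_like(gate).scatter_(-1, topk_idx, 1.0)
        gate = gate * mask                                             # Eq. (9): zero all but top-K weights
        logits = torch.cat([e(fields) for e in self.experts], dim=-1)  # (batch, L), one logit per expert
        return (logits * gate).sum(dim=-1)                             # Eq. (8): aggregated logit zeta


# Toy stand-ins for the EulerNet experts, sharing the (batch, n_u=7, d=16) input fields.
experts = nn.ModuleList(nn.Sequential(nn.Flatten(), nn.Linear(7 * 16, 1)) for _ in range(7))
moe = InteractionMoE(experts, d_v=768, top_k=5)
zeta = moe(torch.randn(4, 7, 16), torch.randn(4, 768))   # -> (4,)
```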
Formally we set K > ⌈L/2⌉, and we have the following finding:\nTheorem 1. Given L domains D = {D 1 , D 2 , ..., D L } and L experts A = {A 1 , A 2 , ..., A L }, each domain D u select K experts S u = {A u 1 , A u 2 , ..., A u K } from A, i.e., S u ⊆ A. If K > ⌈L/2⌉\n, then for any given domain D u and D v , the set of the selected experts S u and S v must have a same element, i.e., S u ∩ S v ̸ = ∅, ∀u ̸ = v. See proofs in Section VI.\nIt demonstrates that for any given domains u and v, there exist at least one shared expert that learns the common feature interactions. Therefore, our approach can capture the common feature interactions between arbitrary domain pairs, thereby capable of learning the generalized feature relationship across all domains. Finally, we apply the sigmoid function on the logits (See Eq. ( 8)) to obtain the prediction:\nŷ = Sigmoid(ζ). (10\n)\nAs mentioned in the work [6], a good multi-domain CTR model should contain the features that depict the domainspecific information. For this purpose, we can further incorporate a feature adaptor, which takes as input the domain-specific features, to precisely capture the distinct characteristics of each domain. Following existing multi-domain methods [6], we add the output logits of the feature adaptor (i.e., ζ f ) to our approach (i.e., ζ) for prediction:\nŷ = Sigmoid(ζ + ζ f ).(11)\nIt allows the flexibility to choose any method. Here we employ the simplest regression model (e.g., LR [25]) as the feature adaptor to ensure the efficiency of the model.\n3) Knowledge Distillation Enhanced Training: With the above approaches, UFIN is able to learn the universal feature interactions based on the representations generated by LLMs. However, as existing study shows [12], it is challenging for the representations of LLMs to capture the feature co-occurrence correlation that results in the poor performance compared with traditional collaborative models. To facilitate our approach in capturing the underlying collaborative patterns, we propose a multi-domain knowledge distillation framework that promotes feature interaction learning.\nMulti-Domain Guided Distillation. In the knowledge distillation framework, for each domain, we pre-train a feature interaction model as the guided network to learn the domainspecific collaborative patterns. As such, multiple guided networks from different domains are incorporated as teacher model to supervise the training of our approach. It allows flexibility to choose any teacher model for each domain, and we specifically employ the EulerNet [22] as the teacher with consideration for the consistency. Following existing studies [26], [27], we use MSE loss to align the output logits between the guided network and our approach:\nL KD = M p=1 Np i=1 ||ζ G p,i -ζ p,i || 2 ,(12)\nwhere ζ G p,i and ζ p,i (see Eq. ( 8)) are the logits of the guided network and our proposed UFIN for the i-th instance in the p-th domain. Note that the guided networks are only used for auxiliary training, which are discard during inference, thus ensuring the transferability of our approach.\nMulti-Task Learning. To further promote the utilization of LLMs in the CTR prediction task, we incorporate the commonly adopted binary cross-entropy loss:\nL CT R = M p=1 Np i=1 y p i log(ŷ p i ) + (1 -y p i ) log(1 -ŷp i ) ,(13)\nwhere y p i and ŷp i are the ground-truth label and predicted result of i-th instance in the p-th domain. 
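For a single mini-batch drawn from one domain, the objectives in Eqs. (12)-(13) reduce to the sketch below; the teacher logits are assumed to be produced by the frozen per-domain guided network, and the sum reduction simply mirrors the summations in the equations.

```python
import torch
import torch.nn.functional as F


def distillation_and_ctr_loss(student_logits: torch.Tensor,
                              teacher_logits: torch.Tensor,
                              labels: torch.Tensor):
    """Per-batch L_KD (Eq. 12) and L_CTR (Eq. 13) for one domain.

    student_logits / teacher_logits: raw logits zeta and zeta^G of shape (batch,);
    labels: click labels in {0, 1} of shape (batch,).
    """
    l_kd = F.mse_loss(student_logits, teacher_logits.detach(), reduction="sum")
    l_ctr = F.binary_cross_entropy_with_logits(student_logits, labels.float(), reduction="sum")
    return l_kd, l_ctr
```

The two terms are then combined into the joint training objective described next.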
As for the model training, we adopt a multi-task training strategy to jointly optimize the knowledge distillation loss and the binary classification loss:\nL = L KD + L CT R .(14)" }, { "figure_ref": [], "heading": "E. Discussion", "publication_ref": [ "b5", "b27", "b11" ], "table_ref": [ "tab_3" ], "text": "In the literature, a number of CTR models have been proposed. To better highlight the novelty and difference of our approach, we make a brief comparison of different CTR methods. For ID-based methods, such as STAR [6], they rely on the ID features to develop the prediction, which impairs the inherent semantic of features that makes them incapable of being applied in the new platforms. While for semantic methods, such as P5 [28], attempt to leverage the world knowledge of LLMs to uplift the prediction. The primary challenge of these approaches lies in their inability to capture collaborative signals, resulting in poor performance. Although CTRL [12] proposes a contrastive learning framework to transfer the knowledge of LLMs to a domain-specific collaborative model. Due to the limited capacity of the collaborative model, it cannot adequately learn the common knowledge across different domains and suffers the seesaw phenomenon (See Section III-B). In contrast, our model is more universal in the application of multidomain or cross-platform settings, naturally integrating the semantic knowledge of LLMs and collaborative knowledge of interactions. The overall comparison is presented in Table I. " }, { "figure_ref": [], "heading": "III. EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we first introduce the experimental settings and then give the results and analysis." }, { "figure_ref": [], "heading": "A. Experimental Settings", "publication_ref": [ "b28", "b29", "b30", "b31", "b21", "b32", "b33", "b34", "b35", "b5", "b4", "b36", "b11", "b30", "b4", "b37", "b38", "b39" ], "table_ref": [ "tab_4" ], "text": "We present the experimental settings, including the datasets, baseline approaches, the details of hyper-parameters, and evaluation metrics.\n1) Datasets: It is worth mentioning that the majority of alternative public datasets cannot be utilized to evaluate our model due to their inclusion of solely anonymous features and the absence of semantic features. To evaluate the performance of our model, we conduct experiments on the Amazon1 and MovieLens-1M2 datasets in both multi-domain setting and cross-platform setting. The statistics of datasets are summerized in Table II.\n• Amazon [29] is a widely-used dataset for recommender systems research. We keep the five-core datasets and select seven subsets for the multi-domain setting (i.e., \"Movies and TV\", \"Books\", \"Electronics\", \"Office Products\", \"Musical Instruments\", \"Toys and Games\" and \"Grocery and Gourmet Food\"). • MovieLens-1M is a movie recommendation dataset, which does not contain overlapped users or items with Amazon. We use this dataset to evaluate the performance in a cross-platform setting. 2) Compared Models: We compare the UFIN with twelve state-of-the-art methods, including single-domain methods, multi-domain methods and semantic methods. Single-Domain Methods:\n• DeepFM [30] combines traditional second-order factorization machines with a feed-forward neural network (MLP), utilizing a shared embedding layer. 
• DCNV2 [31] is the state-of-the-art model which captures the high-order feature interactions by taking kernel product operation of concatenated feature vectors.\n• xDeepFM [32] introduces the Compressed Interaction Network (CIN), which forms the cornerstone of the xDeepFM model. It employs the outer product of stacked feature matrices at the vector-wise level. Additionally, a feed-forward neural network is integrated to enhance the feature representations.\n• EulerNet [22] utilize the Euler's formula to map the feature representations from the real vector space in a complex vector space, which provides a way to cast the feature interaction orders to the linear coefficients. As such, it can automatically learn the arbitrary-order feature interactions from data.\nMulti-Domain Methods:\n• SharedBottom [33] is a multi-task model that parameterizes the bottom layers. In our implementation, we apply SharedBottom to share the embedding layer, while each domain retains its distinct fully-connected network that remains unshared. • MMoE [34] implicitly captures task relationships in multitask learning, accommodating diverse label spaces for different tasks. We extend MMoE to the context of multidomain CTR prediction, configuring each expert as a fully-connected network. The count of experts matches the domain count, and MMoE additionally incorporates gating networks for each domain. These networks process input features and yield softmax gates, dynamically combining experts with varying weights.\n• HMOE [35] extends MMoE to scenario-aware experts using a gradient cutting trick to explicitly encode scenario correlations. • AESM 2 [36] introduces an innovative expert selection algorithm that automatically identifies scenario-/taskspecific and shared experts for each input.\n• STAR [6] proposes a star topology architecture, which consists of a shared center network and multiple domainspecific networks for the adaptation of multi-domain distributions.\n• PEPNet [5] introduces an embedding personalized network to align multi-domain feature representations. The common knowledge from different domains can be learned in a shared Gated-Net.\nSemantic Methods:\n• P5 [28] is a semantic-based recommendation model that transforms diverse recommendation tasks into text generation tasks through prompt learning, leveraging T5 [37] as the underlying model.\n• CTRL [12] proposes to combine the collaborative signals and semantic knowledge in a contrastive learning framework. Specifically, it utilizes the knowledge of LLMs to enhance the learning of a collaborative. Only the collaborative model will online.\nFor our approach, since there are no effective user-side textual features (e.g., only user id available on the Amazon dataset), we incorporate the user id features into our approach (See Section II-C2). We introduce two versions of our approach:\n(1) UFIN t denotes the model using only item text; (2) UFIN t+f denotes the model using item text and integrating a feature adaptor (See Section II-D2).\n3) Implementation Details: The dataset is randomly split into 80% for training, 10% for validation and 10% for testing. Following the work [31], we convert the ratings of 4-5 as label 1; ratings of 1-2 as label 0; and remove the ratings of 3. Following the work [5], we separately train each single-domain model in each domain to report their best results. 
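The label construction and random split just described can be sketched as follows; the column name, seed, and helper are illustrative and not tied to any particular dataset release.

```python
import pandas as pd


def prepare_split(df: pd.DataFrame, seed: int = 42):
    """Binarize 1-5 ratings and make the 80%/10%/10% random split described above."""
    df = df[df["rating"] != 3].copy()                  # remove neutral ratings of 3
    df["label"] = (df["rating"] >= 4).astype(int)      # ratings 4-5 -> 1, ratings 1-2 -> 0
    df = df.sample(frac=1.0, random_state=seed)        # shuffle
    n_train, n_valid = int(0.8 * len(df)), int(0.9 * len(df))
    return df.iloc[:n_train], df.iloc[n_train:n_valid], df.iloc[n_valid:]
```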
For multidomain evaluation, we mix all train data of the Amazon dataset to train the multi-domain models and semantic models, and evaluate their performance in each given domain. For crossplatform evaluation, we first pre-train each semantic model on the mixed Amazon dataset, and then fine-tune them on the MovieLens-1M dataset.\nThe baseline implementation follows FuxiCTR [38]. For each method, the grid search is applied to find the optimal settings. The embedding size is 16, learning rate is tuned in {1e-3, 1e-4, 1e-5}, and batch size is 1024. The optimizer is Adam [39]. The L 2 penalty weight is in {1e-3, 1e-5, 1e-7}. The default settings for MLP components are: (1) the architecture of hidden layer is 400 × 400 × 400; (2) the dropout rate is 0.2. For DeepFM, the hidden layer of MLP is in {128 × 128 × 128, 256 × 256 × 256, 400×400×400}. For xDeepFM, the depth of CIN is in {1, 2, 3, 4, 5} and the hidden size is in {100, 200, 400}. For DCNV2, the depth of CrossNet is in {1, 2, 3, 4, 5}. For EulerNet, the number of Euler interaction layer is in {1, 2, 3, 4, 5}, and the number of order vectors is in {5, 10, 20, 30, 40}. For SharedBottom, the hidden layer of MLP for each domain is in {128 × 128 × 128, 256 × 256 × 256, 400 × 400 × 400}. For MMoE, the numer of experts is in {2, 4, 8, 10}, the hidden layer of gating network is in {64 × 64 × 64, 128 × 128 × 128, 400 × 400 × 400}. For HMoE, the numer of experts is in {2, 4, 8, 10}, the hidden layer of scenario network is in {64 × 64 × 64, 128 × 128 × 128, 400 × 400 × 400}. For AMSE 2 , the numer of experts is in {2, 4, 8, 10}, the layer number is in {1, 2, 3, 4, 5}. For STAR, the hidden layer of the auxiliary networks and topology network is in {128 × 128 × 128, 256 × 256 × 256, 400 × 400 × 400}. For PEPNet, the gate hideen dim is in {16, 32, 48, 64, 80}, and hidden units of PPNet is in {128 × 128 × 128, 256 × 256 × 256, 400 × 400 × 400}. For P5, we use T5 as the base model. For CTRL, we try to use RoBERTa and T5 as the semantic model. The collaborative model is AutoInt, the depth, number of head and attention size is 2, 2, 40 respectively. For our approach, the number of experts in the semantic fusion MoE and feature interaction MoE is equal to the number of domains (i.e., L = 7). The universal feature fields n u is set to 7 and dimension d is set to 16. The K value of T opK function is 5. The SLP hidden size is 128. For the EulerNet expert, the layer number is 1, and the number of order vectors is 7.\n4) Evaluation Metrics: We use two popular metrics to evaluate the model performance: AUC and LogLoss.\n• AUC [40] (Area Under the Curve) is a metric used to evaluate the performance of binary classification or ranking models, particularly in tasks where the prediction involves ranking items. It quantifies the ability of a model to correctly rank higher the positive instances compared to negative instances. A higher AUC value indicates better model performance.\nIn the context of ranking tasks, AUC is calculated using the following formula:\nAUC = (i,j)∈S 1(s i > s j )" }, { "figure_ref": [], "heading": "|S|", "publication_ref": [ "b40" ], "table_ref": [], "text": "Where S is the set of all pairs of instances (i, j) where i is a positive instance and j is a negative instance, s i and s j represent the predicted scores or ranks of instances i and j, respectively. 1(s i > s j ) is the indicator function that equals 1 if the predicted score of i is greater than that of j, and 0 otherwise. |S| denotes the total number of pairs in the set S. 
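The pairwise definition can be evaluated literally on small data, as in the sketch below; in practice a library routine such as sklearn.metrics.roc_auc_score is used instead, and the example scores are made up for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def pairwise_auc(scores, labels):
    """Literal implementation of the pairwise AUC formula above (O(|P| * |N|) pairs)."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # strict indicator, as in the formula
    return wins / (len(pos) * len(neg))


y = np.array([1, 0, 1, 0, 1])
p = np.array([0.9, 0.2, 0.6, 0.5, 0.4])
assert np.isclose(pairwise_auc(p, y), roc_auc_score(y, p))   # both give 5/6
```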
In this context, AUC measures the probability that a randomly chosen positive instance will be ranked higher than a randomly chosen negative instance according to the model's predictions.\n• LogLoss [41], also known as logarithmic loss or crossentropy loss, is a common evaluation metric used in binary classification and ranking tasks. It measures the difference between predicted probabilities and actual binary labels. In the context of ranking tasks, LogLoss is often used to assess the quality of predicted relevance scores for different items. In a ranking task, the LogLoss formula can be expressed as:\nLogLoss = - 1 N N i=1 y i • log(p i ) + (1 -y i ) • log(1 -p i )\nwhere N is the number of instances (samples), y i is the true binary label (0 or 1) for the i-th instance, and p i is the predicted probability of the positive class for the i-th instance. LogLoss penalizes larger differences between predicted probabilities and true labels. It encourages the model to make confident predictions that are close to the true labels. A lower LogLoss value indicates better alignment between predicted probabilities and actual labels, reflecting better model performance." }, { "figure_ref": [], "heading": "B. Overall Performance", "publication_ref": [ "b29", "b31", "b30", "b21", "b4", "b32", "b33", "b34", "b35", "b5", "b4", "b27", "b11", "b29", "b31", "b30", "b21", "b32", "b33", "b5", "b4", "b35", "b34", "b27", "b11" ], "table_ref": [ "tab_4" ], "text": "We compare the proposed approach with twelve baseline methods on seven multi-domain datasets and one cross-platform dataset. The overall performance is presented in Table III, and we have the following observations:\n(1) Single-domain methods (i.e., DeepFM [30], xDeepFM [32], DCNV2 [31] and EulerNet [22]) perform well on the Movies and Books datasets, but exhibit subpar performance on sparse domains with less interactions (i.e., Instruments and Office). This is consistent with the observations reported in PEPNet [5], where single-domain methods exhibit varying performance across different domains.\n(2) Multi-domain methods (i.e., SharedBottom [33], MMoE [34], HMoE [35], AESM 2 [36], STAR [6] and PEP-Net [5]) achieve comparable performance on the Instruments and Office datasets, showing the effectiveness of multi-domain information sharing.\n(3) For semantic methods, P5 [28] performs poorly across all datasets due to the limitation of LLMs in capturing collaborative signals. Besides, CTRL [12] achieves better performance by aligning the representations between LLMs and collaborative models, but it becomes less effective on the Instruments dataset, showing its limitation of capturing the relatedness between different domains.\n(4) Our proposed approach achieves the best performance in almost all the cases. It signifies that our suggested universal feature interactions are better suited for the adaptation of multidomain distributions. Further, the cross-platform evaluation results show that our approach can be effectively transferred to a new platform. Besides, UFIN t+f has a great improvement over UFIN t , showing the effectiveness of incorporating feature adaptor to learn domain-specific collaborative patterns.\nRegarding the efficiency of the model, it can be observed that the latency of single-domain models (i.e., DeepFM [30], xDeepFM [32], DCNV2 [31] and EulerNet [22]) is relatively small due to their simplistic architecture. 
In the case of SharedBottom [33], MMoE [34], STAR [6], and PEPNet [5], they are more efficient as their learning strategies are simpler. Conversely, the latency of AESM 2 [36] and HMoE [35] is relatively large due to their intricate architecture and learning algorithm. For semantic models, the latency of P5 [28] is much larger since the training process of PLMs is extremely time-consuming. In contrast, CTRL [12] and our approach are much more efficient. This is because we can cache the textual representations of the LLMs, and only the lightweight feature interaction backbone needs to be deployed in the inference stage, thereby preserving the efficient online inference akin to traditional recommendation models." }, { "figure_ref": [ "fig_0", "fig_1", "fig_1" ], "heading": "C. Further Analysis 1) Zero-shot Learning Analysis:", "publication_ref": [ "b0", "b29", "b41", "b21", "b27", "b11", "b5", "b42", "b36", "b15", "b36", "b15", "b41", "b31", "b30", "b21", "b43" ], "table_ref": [ "tab_6" ], "text": "To show the transfer learning ability of our approach, we evaluate the zero-shot TABLE III: Performance comparison of different CTR models. \"Improv.\" indicates the relative improvement ratios of our proposed approach over the best baseline. \"*\" denotes that statistical significance for p < 0.01 compare to the best baseline. Note that a higher AUC or lower Logloss at 0.001-level is regarded significant, as stated in previous studies [1], [30], [42]. [22], P5 [28], CTRL [12], STAR [6] and our proposed UFIN), and compare the results to the best performance of fully-trained singledomain methods. In this setting, we train the model on three pre-trained datasets (i.e., Movies, Books and Electronics) and directly test them on two downstream datasets (i.e., Instruments and Toys) without further training. The downstream datasets retain only the interactions involving overlapped users from the pre-trained datasets. As shown in Figure 2, UFIN achieves the best zero-shot performance on both datasets. On the Instruments dataset, UFIN performs even better than the fullytrained single-doamin methods. These results demonstrate the strong transferability and inductive capability of our approach in learning general knowledge across different domains. 2) Cross-Platform Learning Analysis: In this part, we investigate the effectiveness of the proposed LLM based representation approach. Specially, we examine whether the model pre-trained on the Amazon datasets performs better than those without pre-training. We compare the performance of the following representation methods: (1) Embedding Look-up: use an embedding vector to represent each feature, (2) PLM based encoding: encode the textual data with PLMs (i.e., BERT [43], T5 [37] and FLAN-T5 [16]).\nExperimental results are shown in Table IV. We observe that PLM-based methods perform better when pre-training is applied, whereas the Embedding Look-up method is negatively affected by pre-training. It indicates that natural language is more promising as the general representations across different scenarios. Furthermore, the approach with T5 [37] and FLAN-T5 [16] largely outperforms other methods, showing the excellent language modeling capacity of LLMs. 3) Ablation Study: In this part, we analyze the impact of each proposed technique or component on the model performance. We propose four variants as: (1) w/o KD removing the knowledge distillation procedure, (2) w/o MoE-E removing the MoE model of the encoder (z = s in Eq. 
( 3)), (3) w/o UniF without generating universal features (e j in Eq.( 6)), i.e., directly feeding the latent vector z (Eq. ( 6)) into an MLP, and (4) w/o Uid without fusing user id features (See Section II-C2). We show the results of ablation study in Figure 3(a). We observe that all the proposed components are effective to improve the model performance.\nBesides, we explore the effect of the feature interaction experts (Section II-D2). We show the results of five methods (i.e., MLP, AutoInt [42], CIN [32], CrossNet [31], EulerNet [22]) in Figure 3(b). We can observe that choice of EulerNet achieves the best performance, indicating that adaptive feature interaction learning is the key component to improve the model capacity for domain adaptation.\n4) Visualizing the Universal Features: In the literature of CTR predictions, there are few works that can transfer knowledge across different platforms. To gain a deeper comprehension of the actual knowledge being transmuted, we visually represent the ubiquitous feature representations (See Eq. ( 6)) on the datasets of Amazon Movies, Amazon Books, and MovieLens-1M by projecting them onto a two-dimensional plane using t-SNE [44]. As shown in Figure 4, we can observe that the distribution of MovieLens-1M is more closely aligned with Amazon Movies than Amazon Books. This is reasonable since the contexts of MovieLens-1M and Amazon Movies are similar. Figure 4 further presents a case of two closely aligned interactions (highlight in red circle) in the Amazon Movies and MovieLens-1M dataset. Notably, both sentences encompass the keyword thriller, thereby demonstrating our approach's ability to capture the collective attributes across disparate scenarios through general textual semantics." }, { "figure_ref": [], "heading": "There is a movie & tv, its title is \"Valkyrie\", its descrpiton is \"Based on the incredible true story of Colonel Claus von Stauffenberg (Cruise) and his ingenious assassination plot targeting Adolph Hitler, this engrossing thriller reenacts the daring operation to eliminate one of the ...\"", "publication_ref": [], "table_ref": [], "text": "There is a movie, its title is \"Leon: The Professional\", its genre is Crime Drama Romance Thriller, and it is released at 1994." }, { "figure_ref": [], "heading": "Amazon_Movies MovieLens-1M", "publication_ref": [], "table_ref": [], "text": "Fig. 4: Visualization of the universal features." }, { "figure_ref": [ "fig_3" ], "heading": "5)", "publication_ref": [], "table_ref": [], "text": "Visualizing the Universal Feature Interactions: As discussed in Section II-D, UFIN captures the common feature interactions to learn the relatedness between different domains. To verify the feature interactions learned in UFIN, we visualize the interaction orders (i.e., A j in Eq.( 8)) of the Top-1 expert for the Amazon movies, MovieLens in Figure 5(a) and (b) respectively. We can observe that their learned feature interactions exhibit substantial differences. We further visualize one of their shared experts in Figure 5(c). Notably, the orders learned in the shared expert exhibit some intersections with those in the domain-specific expert of Amazon movies (i.e., α 2 , α 6 , α 7 ) and MovieLens (i.e., α 3 , α 5 , α 6 ). These shared feature interactions enhance the transferability of our model and enable it to capture more generalized collaborative knowledge for CTR predictions. 9)) can balance the number of domain shared experts. 
To analyze the influence of K, we vary K in the range of 1 to 7 and report the results in Figure 6. We can observe that the performance on the MovieLens dataset increases as K increases from 1 to 4, indicating that the common feature interactions learned in the shared experts are valuable. Whereas on the Amazon Movies dataset, UFIN achieves the best performance as K increases to 5. However, the model performance decreases when K exceeds 5. This indicates that adding more shared experts may hinder the learning of domain-specific knowledge that hurt the model performance. 7) The analysis of the Prompt Design: In this section, we explore the impact of different prompts construction methods on training UFIN. Specifically, we consider several rules for constructing prompts: 1) Omit auxiliary text descriptions and directly combine feature fields and values using \":\"; 2) Mask the feature fields with a meaningless unified word \"Field\"; 3) Remove certain feature field;\nBase: There is a user, whose gender is male, and occupation is student. There is a movie, its title is \"Leon: The Professional\", its genre is Crime Drama Romance Thriller, and it is released at 1994. Prompt-1: gender: male; occupation: student; title: \"Leon: The Professional\"; genre: Crime Drama Romance Thriller; release year: 1994. Prompt-2: There is a user, whose Field is male, and Field is student. There is a movie, its Field is \"Leon:\nThe Professional\", its Field is Crime Drama Romance Thriller, and it is Field at 1994.\nPrompt-3: There is a user, whose gender is male, and occupation is student. There is a movie, its title is \"Leon: The Professional\", its genre is Crime Drama Romance Thriller. We utilize three types of prompts to train UFIN, and the results on both the Amazon and MovieLens-1M datasets are depicted in Figures 7 and8 correspondingly. We can observe that on both the Amazon and MovieLens-1M datasets, the performance disparity of the model between Prompt-1, Prompt-2, and the base on both the Amazon and MovieLens-1M datasets is negligible, indicating that our model is not sensitive to the design or format of the template content. Regarding Prompt-3 on the Amazon dataset, we eliminated the title (w.o. title) and description (w.o. desc) fields. The analysis from Figure 7 reveals that eliminating either the title or description results in decreased model performance, implying the importance of the semantic information contained within these text features. Besides, the removal of the description field leads to a greater decline in model performance. This disparity is due to the description containing more substantial information compared to the title. On the MovieLens-1M dataset, we remove the fields release year (i.e., w.o. Year) and genre (i.e., w.o. genre). As shown in Figure 8, we can observe that the model's performance changes little after removing release year, but there is a sharp decline in the model's performance after removing the genre field. This is because compared to the movie genre, release year contains fewer personalized item information. These findings suggest that our approach demonstrates minimal correlation with the design of the prompt text but exhibits a strong relationship with the quantity of information (personalized features) it encompasses. " }, { "figure_ref": [], "heading": "IV. RELATED WORK", "publication_ref": [ "b44", "b45", "b48", "b20", "b47", "b49", "b30", "b31", "b31", "b30", "b21", "b23" ], "table_ref": [], "text": "A. 
Feature Interaction Learning.\nPredicting the probability of users clicking ads or items is critical in online advertising [45] and recommender systems.\nThe key to achieve a great CTR performance is to learning effective feature interactions. Typically, most of existing CTR models [46]- [49] adopt the embedding look-up and feature interaction learning framework to capture the underlying correlation between users and items. For example, FM [21] introduces low-dimensional vectors to represent the features and uses inner product of them to capture second-order feature interactions. Besides, FwFM [48] and FmFM [50] propose incorporating field information to improve the pair-wise feature interactions. However, these approaches only focus on learning the low-order feature interactions, which largely degrades the model capacity and results in the suboptimal performance.\nRecently, a number of approaches [31], [32] are proposed to model the high-order feature interactions. Among them, xDeepFM [32] conducts multi-layer convolution operations on the concatenated vectors to model high-order interactions; DCNV2 [31] performs the kernel product on the concatenated feature vectors to construct higher-order feature interactions. These approaches have largely raised the performance bar of CTR predictions. Typically, these approaches follow a common paradigm to model the feature interactions: they first predefine a maximal order and only consider conducting feature interactions within the pre-defined orders. Though effective to some extend, the orders of feature interactions are empirically designed, which cannot accurately capture the underlying feature relationship in the real-world scenarios. Further, due to the exponential growth of high-order feature combinations, such approaches often set a small order, which cannot scale to the high-order cases in the industrial scenarios.\nConsidering the above limitations, some approaches [22]- [24] are proposed to automatically learn the feature interaction orders from the data. The core idea of these approaches is to encode the features into a special vector space (e.g., logarithm vector space). As such, the complicated feature interactions are converted to the simple linear combinations and the orders are cast into the learnable coefficients, enabling the adaptive learning of the interaction orders. However, previous studies mainly focus on learning the orders within a single-domain, and seldom investigate the transferability of the order information across different domains or tasks." }, { "figure_ref": [], "heading": "B. Multi-Domain CTR predictions.", "publication_ref": [ "b4", "b7", "b32", "b33", "b34", "b4", "b5", "b34", "b35", "b5", "b4" ], "table_ref": [], "text": "Traditional CTR prediction models are developed for modeling the user's interest within a single domain, i.e., they are training using examples collected from a single domain and serving the prediction of a single task. However, in largescale real-world industrial platforms, the user behavior data are often collected from multi-domains. Merely mixing all the data and training a single shared CTR model cannot yield satisfactory results across all domains owing to the substantial distribution variance among diverse scenarios, which is called domain seesaw phenomenon in recommender systems. 
Such paradigm severely restricts the efficient utilization of extensive user behavior data in business scenarios.\nTo address these issues, a number of approaches [5]- [8] are proposed to enhance the information sharing and improve the multi-domain performance. Among them, SharedBottom [33] introduces a unique embedding layer to learn a commonly shared feature representations for different domains. Besides, MMoE [34] extends the multi-task learning (MTL) methods into multi-domain scenarios, by regarding each domain as a specific task; HMoE [35] extends MMoE to scenario-aware experts using a gradient cutting trick to explicitly encode scenario correlations. Despite the progress, these approaches cannot effectively exploit the domain relationship and suffer from the degradation of model capability.\nAs a promising research direction, some recent studies [5], [6], [35], [36] propose to explicitly introduce the domain-shared parameters to compactly learn the common knowledge across different domains. The core idea of these approaches is to introduce a shared neural network for learning the common knowledge across diverse domains, while simultaneously integrating multiple domain-specific sub-networks to capture the distinct characteristics of each domain. For example, STAR [6] proposes a star topology architecture, which consists of a shared center network and multiple domain-specific networks for the adaptation of multi-domain distributions; PEPNet [5] introduces an embedding personalized network to align multidomain feature representations. Despite the progress, these approaches still rely on explicit embedding look-up operation for the given feature IDs, which impairs the inherent semantic of features and makes it difficult to be applied in different platforms." }, { "figure_ref": [], "heading": "C. Semantic CTR Prediction Models.", "publication_ref": [ "b14", "b27", "b36", "b12", "b18", "b11" ], "table_ref": [], "text": "With the development of natural language processing (NLP), the large language models (LLMs [15]) have shown excellent language modeling capacity in various downstream tasks, which promises researchers to apply LLMs into the recommendation tasks. In the context of CTR predictions, the large language models (LLMs) have two common application methods in CTR prediction models: scoring/ranking function and feature engineering encoder.\nFor the first kind of application, the LLMs mainly serve as the learning backbone to generate predictive results. For example, P5 [28] converts different recommendation task into text generation and employs T5 [37] model to generate the result. Besides, M6-Rec [13] uses the M6 [19] model to deliver the recommendation in a prompting learning framework. These approaches exploit the textual data as a general input form, which can effectively bridge the semantic gap between different domains, enabling them to be effectively transferred to a new domain or platform.\nDespite the progress, compared to the traditional CTR prediction models, LLMs cannot effectively capture the collaborative patterns that severely limits the model capacity. As a promising approach, a recent study [12] proposes a contrastive learning framework to align the knowledge between LLMs and collaborative models. Due to the limited capacity of collaborative model, it cannot adequately learn the common knowledge across different domains. 
In contrast, our solution is to utilize the LLMs to generate universal features from textual data, learning the collective attributes of different domains. As such, we can learn the universal feature interactions to capture the generalized collaborative patterns across diverse domains, naturally integrating the world knowledge of LLMs and the collaborative knowledge between users and items." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose the Universal Feature Interaction Network (UFIN) for multi-domain CTR prediction. Unlike previous approaches that heavily rely on modeling ID features for developing the CTR predictions, our approach leverage the textual data to learn the universal feature interactions. Specifically, we regard the text and features as two modalities that can be mutually converted. As such, we employ a LLMbased encoder-decoder architecture to transform the data from text modality to feature modality, obtaining universal feature representations. Building upon this foundation, we design an adaptive feature interaction model enhanced by a mixture-ofexperts (MoE) architecture for capturing the generlized feature interactions across different domains. To effectively learn the collaborative patterns across different domains, we propose a multi-domain knowledge distillation framework to improve the training of our approach.\nAs future work, we will explore how to better integrate anonymous features with generated universal feature representations. In addition, we will also consider incorporating the Conversion Rate (CVR) prediction task into our approach to capture more effective correlation between different tasks." }, { "figure_ref": [], "heading": "VI. PROOFS OF THEOREM 1", "publication_ref": [], "table_ref": [], "text": "Proof. By contradiction, we assume there exists two domains D u and D v (u ̸ = v) that satisfies S u ∩ S v = ∅.\nAccording to the principle of inclusion-exclusion, we have:\n|S u ∪ S v | = |S u | + |S v | -|S u ∩ S v | = K + K -0 = 2K > 2 × ⌈L/2⌉ ≥ L\nOn the other hand, since S u ⊆ A and S v ⊆ A, we have S u ∪ S v ⊆ A.\nTherefore, we have:\n|S u ∪ S v | ≤ |A| = L.\nSince the assumption leads to a contradiction, our initial assumption that ∃u ̸ = v, S u ∩ S v = ∅ must be False.\nTherefore, ∀u ̸ = v, S u ∩ S v ̸ = ∅." } ]
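As a quick numerical sanity check, for the experimental setting L = 7 and K = 5 the bound guarantees that any two selections overlap (in fact, by inclusion-exclusion, in at least 2K − L = 3 experts), which can be verified exhaustively:

```python
from itertools import combinations

# Brute-force check of Theorem 1 for L = 7 experts and K = 5 selected per domain:
# every pair of K-sized selections intersects, and the overlap is at least 2K - L = 3.
L, K = 7, 5
selections = [set(c) for c in combinations(range(L), K)]
assert all(len(a & b) >= 2 * K - L for a, b in combinations(selections, 2))
```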
Click-Through Rate (CTR) prediction, which aims to estimate the probability of a user clicking on an item, is a key task in online advertising. Numerous existing CTR models concentrate on modeling the feature interactions within a solitary domain, thereby rendering them inadequate for fulfilling the requisites of multi-domain recommendations in real industrial scenarios. Some recent approaches propose intricate architectures to enhance knowledge sharing and augment model training across multiple domains. However, these approaches encounter difficulties when being transferred to new recommendation domains, owing to their reliance on the modeling of ID features (e.g., item id). To address the above issue, we propose the Universal Feature Interaction Network (UFIN) approach for CTR prediction. UFIN exploits textual data to learn universal feature interactions that can be effectively transferred across diverse domains. For learning universal feature representations, we regard the text and features as two different modalities and propose an encoder-decoder network founded on a Large Language Model (LLM) to enforce the transfer of data from the text modality to the feature modality. Building upon the above foundation, we further develop a mixture-of-experts (MoE) enhanced adaptive feature interaction model to learn transferable collaborative patterns across multiple domains. Furthermore, we propose a multi-domain knowledge distillation framework to enhance feature interaction learning. Based on the above methods, UFIN can effectively bridge the semantic gap to learn common knowledge across various domains, surpassing the constraints of ID-based models. Extensive experiments conducted on eight datasets show the effectiveness of UFIN in both multi-domain and cross-platform settings. Our code is available at https://github.com/RUCAIBox/UFIN.
UFIN: Universal Feature Interaction Network for Multi-Domain Click-Through Rate Prediction
[ { "figure_caption": "Fig. 2 :2Fig. 2: Performance comparison under the zero-shot setting.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig. 3: Ablation study of UFIN on the Amazon dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "e 1 e 2 e 3 e 4 e 5 e 6 e 7 e 1 e 2 e 3 e 4 e 5 e 6 e 7 e 1 e 2 e 3 e 4 e 5 e 6 e 7 Fig. 5 : 6 )77756Fig. 5: Visualization of the feature interactions.", "figure_data": "", "figure_id": "fig_2", "figure_label": "77756", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig. 6: Impact of the selected experts number.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 :Fig. 8 :78Fig. 7: Performance on Amazon w.r.t. different prompts.", "figure_data": "", "figure_id": "fig_4", "figure_label": "78", "figure_type": "figure" }, { "figure_caption": "2 ", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": ", • • • , D L }. Compared to single-domain CTR prediction, multi-domain CTR prediction often poses greater challenges, as different domains may have different feature fields and data distributions.", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of different CTR methods.", "figure_data": "MethodsTransfer LearningKnowledge PatternsMulti-Domain Cross-Platform Collaborative SemanticSTAR [6]CTRL [12]P5 [28]UFIN (ours)", "figure_id": "tab_3", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "The statistics of datasets.", "figure_data": "Dataset# Users# Items# InteractionsAmazon1,002,827 2,530,8747,427,505-Movies295,90840,7922,141,592-Books585,167191,8264,077,731-Electronics184,87626,336780,698-Food14,5525,47486,518-Instruments1,4297367,835-Office4,8901,80536,572-Toys19,2319,142116,559MovieLens-1M6,0413,669739,012", "figure_id": "tab_4", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "Cross-platform performance on the MovieLens-1M dataset.", "figure_data": "Representation Methodsw/o. Pre-train AUC Log Lossw. Pre-train AUC Log LossEffect.Embedding Look-up0.89970.31020.89900.3109-0.08%BERT [43]0.90010.30890.90090.3084+0.09%T5 [37]0.90080.31050.90270.3054+0.21%FLAN-T5 [16])0.90130.30740.90300.3050+0.19%", "figure_id": "tab_6", "figure_label": "IV", "figure_type": "table" } ]
Zhen Tian; Changwang Zhang; Wayne Xin Zhao; Xin Zhao; Ji-Rong Wen; Zhao Cao
[ { "authors": "H.-T Cheng; L Koc; J Harmsen; T Shaked; T Chandra; H Aradhye; G Anderson; G Corrado; W Chai; M Ispir", "journal": "", "ref_id": "b0", "title": "Wide & deep learning for recommender systems", "year": "2016" }, { "authors": "X He; T.-S Chua", "journal": "", "ref_id": "b1", "title": "Neural factorization machines for sparse predictive analytics", "year": "2017" }, { "authors": "Z Li; W Cheng; Y Chen; H Chen; W Wang", "journal": "", "ref_id": "b2", "title": "Interpretable clickthrough rate prediction through hierarchical attention", "year": "2020" }, { "authors": "Y Shan; T R Hoens; J Jiao; H Wang; D Yu; J Mao", "journal": "", "ref_id": "b3", "title": "Deep crossing: Web-scale modeling without manually crafted combinatorial features", "year": "2016" }, { "authors": "J Chang; C Zhang; Y Hui; D Leng; Y Niu; Y Song", "journal": "", "ref_id": "b4", "title": "Pepnet: Parameter and embedding personalized network for infusing with personalized prior information", "year": "2023" }, { "authors": "X.-R Sheng; L Zhao; G Zhou; X Ding; B Dai; Q Luo; S Yang; J Lv; C Zhang; H Deng", "journal": "", "ref_id": "b5", "title": "One model to serve all: Star topology adaptive recommender for multi-domain ctr prediction", "year": "2021" }, { "authors": "Y Zhang; X Wang; J Hu; K Gao; C Lei; F Fang", "journal": "", "ref_id": "b6", "title": "Scenario-adaptive and self-supervised model for multi-scenario personalized recommendation", "year": "2022" }, { "authors": "J Zhou; X Cao; W Li; L Bo; K Zhang; C Luo; Q Yu", "journal": "", "ref_id": "b7", "title": "Hinet: Novel multi-scenario & multi-task learning with hierarchical information extraction", "year": "2023" }, { "authors": "J Lin; X Dai; Y Xi; W Liu; B Chen; X Li; C Zhu; H Guo; Y Yu; R Tang; W Zhang", "journal": "", "ref_id": "b8", "title": "How can recommender systems benefit from large language models: A survey", "year": "2023" }, { "authors": "Y Hou; S Mu; W X Zhao; Y Li; B Ding; J.-R Wen", "journal": "", "ref_id": "b9", "title": "Towards universal sequence representation learning for recommender systems", "year": "2022" }, { "authors": "J Li; M Wang; J Li; J Fu; X Shen; J Shang; J Mcauley", "journal": "", "ref_id": "b10", "title": "Text is all you need: Learning language representations for sequential recommendation", "year": "2023" }, { "authors": "X Li; B Chen; L Hou; R Tang", "journal": "", "ref_id": "b11", "title": "Ctrl: Connect tabular and language model for ctr prediction", "year": "2023" }, { "authors": "Z Cui; J Ma; C Zhou; J Zhou; H Yang", "journal": "", "ref_id": "b12", "title": "M6-rec: Generative pretrained language models are open-ended recommender systems", "year": "2022" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "PMLR", "ref_id": "b13", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "W X Zhao; K Zhou; J Li; T Tang; X Wang; Y Hou; Y Min; B Zhang; J Zhang; Z Dong", "journal": "", "ref_id": "b14", "title": "A survey of large language models", "year": "2023" }, { "authors": "H W Chung; L Hou; S Longpre; B Zoph; Y Tay; W Fedus; E Li; X Wang; M Dehghani; S Brahma", "journal": "", "ref_id": "b15", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "J L Ba; J R Kiros; G E Hinton", "journal": "", "ref_id": "b16", "title": "Layer normalization", "year": "2016" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; L U Kaiser; I 
Polosukhin", "journal": "", "ref_id": "b17", "title": "Attention is all you need", "year": "2017" }, { "authors": "J Lin; R Men; A Yang; C Zhou; M Ding; Y Zhang; P Wang; A Wang; L Jiang; X Jia", "journal": "", "ref_id": "b18", "title": "M6: A chinese multimodal pretrainer", "year": "2021" }, { "authors": "S Rendle; W Krichene; L Zhang; J Anderson", "journal": "", "ref_id": "b19", "title": "Neural collaborative filtering vs. matrix factorization revisited", "year": "2020" }, { "authors": "S Rendle", "journal": "IEEE", "ref_id": "b20", "title": "Factorization machines", "year": "2010" }, { "authors": "Z Tian; T Bai; W X Zhao; J.-R Wen; Z Cao", "journal": "", "ref_id": "b21", "title": "Eulernet: Adaptive feature interaction learning via euler's formula for ctr prediction", "year": "2023" }, { "authors": "W Cheng; Y Shen; L Huang", "journal": "", "ref_id": "b22", "title": "Adaptive factorization network: Learning adaptive-order feature interactions", "year": "2020" }, { "authors": "S Cai; K Zheng; G Chen; H Jagadish; B C Ooi; M Zhang", "journal": "", "ref_id": "b23", "title": "Arm-net: Adaptive relation modeling network for structured data", "year": "2021" }, { "authors": "M Richardson; E Dominowska; R Ragno", "journal": "", "ref_id": "b24", "title": "Predicting clicks: estimating the click-through rate for new ads", "year": "2007" }, { "authors": "J Zhu; J Liu; W Li; J Lai; X He; L Chen; Z Zheng", "journal": "", "ref_id": "b25", "title": "Ensembled ctr prediction via knowledge distillation", "year": "2020" }, { "authors": "Z Tian; T Bai; Z Zhang; Z Xu; K Lin; J.-R Wen; W X Zhao", "journal": "", "ref_id": "b26", "title": "Directed acyclic graph factorization machines for ctr prediction via knowledge distillation", "year": "2023" }, { "authors": "S Geng; S Liu; Z Fu; Y Ge; Y Zhang", "journal": "", "ref_id": "b27", "title": "Recommendation as language processing (rlp): A unified pretrain, personalized prompt & predict paradigm (p5)", "year": "2022" }, { "authors": "J Ni; J Li; J Mcauley", "journal": "", "ref_id": "b28", "title": "Justifying recommendations using distantlylabeled reviews and fine-grained aspects", "year": "2019" }, { "authors": "H Guo; R Tang; Y Ye; Z Li; X He", "journal": "", "ref_id": "b29", "title": "Deepfm: a factorizationmachine based neural network for ctr prediction", "year": "2017" }, { "authors": "R Wang; R Shivanna; D Cheng; S Jain; D Lin; L Hong; E Chi", "journal": "", "ref_id": "b30", "title": "Dcn v2: Improved deep & cross network and practical lessons for webscale learning to rank systems", "year": "2021" }, { "authors": "J Lian; X Zhou; F Zhang; Z Chen; X Xie; G Sun", "journal": "", "ref_id": "b31", "title": "xdeepfm: Combining explicit and implicit feature interactions for recommender systems", "year": "2018" }, { "authors": "S Ruder", "journal": "", "ref_id": "b32", "title": "An overview of multi-task learning in deep neural networks", "year": "2017" }, { "authors": "J Ma; Z Zhao; X Yi; J Chen; L Hong; E H Chi", "journal": "", "ref_id": "b33", "title": "Modeling task relationships in multi-task learning with multi-gate mixture-of-experts", "year": "2018" }, { "authors": "H Tang; J Liu; M Zhao; X Gong", "journal": "", "ref_id": "b34", "title": "Progressive layered extraction (ple): A novel multi-task learning (mtl) model for personalized recommendations", "year": "2020" }, { "authors": "X Zou; Z Hu; Y Zhao; X Ding; Z Liu; C Li; A Sun", "journal": "", "ref_id": "b35", "title": "Automatic expert selection for multi-scenario and multi-task search", "year": "2022" }, 
{ "authors": "C Raffel; N Shazeer; A Roberts; K Lee; S Narang; M Matena; Y Zhou; W Li; P J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b36", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "J Zhu; J Liu; S Yang; Q Zhang; X He", "journal": "", "ref_id": "b37", "title": "Open benchmarking for click-through rate prediction", "year": "2021" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b38", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "J M Lobo; A Jiménez-Valverde; R ", "journal": "Global ecology and Biogeography", "ref_id": "b39", "title": "Auc: a misleading measure of the performance of predictive distribution models", "year": "2008" }, { "authors": "A Buja; W Stuetzle; Y Shen", "journal": "", "ref_id": "b40", "title": "Loss functions for binary class probability estimation and classification: Structure and applications", "year": "2005-11" }, { "authors": "W Song; C Shi; Z Xiao; Z Duan; Y Xu; M Zhang; J Tang", "journal": "", "ref_id": "b41", "title": "Autoint: Automatic feature interaction learning via self-attentive neural networks", "year": "2019" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b42", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "L Van Der Maaten; G Hinton", "journal": "Journal of machine learning research", "ref_id": "b43", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "D Agarwal; B Long; J Traupman; D Xin; L Zhang", "journal": "", "ref_id": "b44", "title": "Laser: A scalable response prediction platform for online advertising", "year": "2014" }, { "authors": "C Xu; M Wu", "journal": "", "ref_id": "b45", "title": "Learning feature interactions with lorentzian factorization machine", "year": "2020" }, { "authors": "W Lu; Y Yu; Y Chang; Z Wang; C Li; B Yuan", "journal": "", "ref_id": "b46", "title": "A dual input-aware factorization machine for ctr prediction", "year": "2021" }, { "authors": "J Pan; J Xu; A L Ruiz; W Zhao; S Pan; Y Sun; Q Lu", "journal": "", "ref_id": "b47", "title": "Field-weighted factorization machines for click-through rate prediction in display advertising", "year": "2018" }, { "authors": "M Blondel; A Fujino; N Ueda; M Ishihata", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b48", "title": "Higher-order factorization machines", "year": "2016" }, { "authors": "Y Sun; J Pan; A Zhang; A Flores", "journal": "", "ref_id": "b49", "title": "Fm2: Field-matrixed factorization machines for recommender systems", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 85.27, 281.84, 215.42, 9.68 ], "formula_id": "formula_0", "formula_text": "{v 1 , v 2 , ..., v n } = LLM({w 1 , w 2 , ..., w n }),(1)" }, { "formula_coordinates": [ 4, 142.83, 294.48, 157.86, 30.32 ], "formula_id": "formula_1", "formula_text": "s = LayerNorm( n j=1 v j ),(2)" }, { "formula_coordinates": [ 4, 119.52, 670.18, 181.17, 30.32 ], "formula_id": "formula_2", "formula_text": "z = L j=1 σ(W j s + b j ) • g j ,(3)" }, { "formula_coordinates": [ 4, 119.68, 706.02, 181.01, 9.68 ], "formula_id": "formula_3", "formula_text": "g = Softmax(W g s),(4)" }, { "formula_coordinates": [ 4, 398.28, 279.31, 165.43, 30.55 ], "formula_id": "formula_4", "formula_text": "z = z + c k=1 U k h k ,(5)" }, { "formula_coordinates": [ 4, 504.11, 330.16, 57.93, 11.23 ], "formula_id": "formula_5", "formula_text": "U k ∈ R d V ×d A" }, { "formula_coordinates": [ 4, 389.97, 615.82, 173.74, 9.79 ], "formula_id": "formula_6", "formula_text": "ẽj = LayerNorm(V j z),(6)" }, { "formula_coordinates": [ 5, 81.93, 381.43, 218.76, 22.61 ], "formula_id": "formula_7", "formula_text": "F( Ẽ; A) = w ⊤ α∈A ẽα1 1 ⊙ ẽα2 2 ⊙ ... ⊙ ẽαn u nu ,(7)" }, { "formula_coordinates": [ 5, 376.42, 235.24, 187.28, 30.32 ], "formula_id": "formula_8", "formula_text": "ζ = L j=1 F( Ẽ; A j ) • gj ,(8)" }, { "formula_coordinates": [ 5, 376.44, 271.66, 187.26, 11.5 ], "formula_id": "formula_9", "formula_text": "g = TopK Softmax( Wg z) ,(9)" }, { "formula_coordinates": [ 5, 311.65, 419.42, 251.39, 44.67 ], "formula_id": "formula_10", "formula_text": "Theorem 1. Given L domains D = {D 1 , D 2 , ..., D L } and L experts A = {A 1 , A 2 , ..., A L }, each domain D u select K experts S u = {A u 1 , A u 2 , ..., A u K } from A, i.e., S u ⊆ A. If K > ⌈L/2⌉" }, { "formula_coordinates": [ 5, 404.24, 586.68, 155.32, 8.96 ], "formula_id": "formula_11", "formula_text": "ŷ = Sigmoid(ζ). (10" }, { "formula_coordinates": [ 5, 559.55, 587, 4.15, 8.64 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 5, 393.39, 706.05, 170.32, 9.65 ], "formula_id": "formula_13", "formula_text": "ŷ = Sigmoid(ζ + ζ f ).(11)" }, { "formula_coordinates": [ 6, 111.77, 350.37, 188.92, 31.4 ], "formula_id": "formula_14", "formula_text": "L KD = M p=1 Np i=1 ||ζ G p,i -ζ p,i || 2 ,(12)" }, { "formula_coordinates": [ 6, 61.3, 497.89, 239.39, 42.8 ], "formula_id": "formula_15", "formula_text": "L CT R = M p=1 Np i=1 y p i log(ŷ p i ) + (1 -y p i ) log(1 -ŷp i ) ,(13)" }, { "formula_coordinates": [ 6, 133.61, 604.24, 167.08, 9.65 ], "formula_id": "formula_16", "formula_text": "L = L KD + L CT R .(14)" }, { "formula_coordinates": [ 8, 115.25, 381.8, 117.28, 19.78 ], "formula_id": "formula_17", "formula_text": "AUC = (i,j)∈S 1(s i > s j )" }, { "formula_coordinates": [ 8, 54.53, 624.13, 233.97, 30.32 ], "formula_id": "formula_18", "formula_text": "LogLoss = - 1 N N i=1 y i • log(p i ) + (1 -y i ) • log(1 -p i )" }, { "formula_coordinates": [ 12, 363.54, 558.17, 147.94, 38.63 ], "formula_id": "formula_19", "formula_text": "|S u ∪ S v | = |S u | + |S v | -|S u ∩ S v | = K + K -0 = 2K > 2 × ⌈L/2⌉ ≥ L" }, { "formula_coordinates": [ 12, 393.4, 660.46, 88.22, 9.65 ], "formula_id": "formula_20", "formula_text": "|S u ∪ S v | ≤ |A| = L." } ]
2024-03-07
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b23", "b14", "b64", "b18", "b39", "b9", "b26", "b68", "b56", "b46" ], "table_ref": [], "text": "Complementary-label learning is a weakly supervised learning problem that has received a lot of attention recently [Ishida et al., 2017, Feng et al., 2020a, Gao and Zhang, 2021, Liu et al., 2023]. In complementary-label learning, we are given training data associated with complementary labels that specify the classes to which the examples do not belong. The task is to learn a multiclass classifier that assigns correct labels to test data as in the standard supervised learning.\nCollecting training data with complementary labels is much easier and cheaper than collecting ordinary-label data. For example, when asking workers on crowdsourcing platforms to annotate training data, we only need to randomly select a candidate label and then ask them whether the example belongs to that class or not. Such \"yes\" or \"no\" questions are much easier to answer than asking workers to determine the ground-truth label from a large set of candidate labels. The benefits and effectiveness of complementary-label learning have also been demonstrated in several machine learning problems and applications, such as domain adaptation [Zhang et al., 2021, Han et al., 2023], semi-supervised learning [Chen et al., 2020b, Ma et al., 2023, Deng et al., 2024], noisy-label learning [Kim et al., 2019], adversarial robustness [Zhou et al., 2022], few-shot learning [Wei et al., 2022], and medical image analysis [Rezaei et al., 2020].\nTable 1: Comparison between SCARCE and previous risk-consistent or classifier-consistent complementary-label learning methods." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b23", "b61", "b24", "b34", "b23", "b24", "b5", "b14", "b34", "b61", "b35", "b61", "b14" ], "table_ref": [], "text": "Uniform distribution assumption-free Ordinary-label training set-free Classifierconsistent Riskconsistent PC [Ishida et al., 2017] ✗ ✓ ✓ ✓ Forward [Yu et al., 2018] ✓ ✗ ✓ ✗ NN [Ishida et al., 2019] ✗ ✓ ✓ ✓ LMCL [Feng et al., 2020a] ✗ ✓ ✓ ✓ OP [Liu et al., 2023]\n✗ ✓ ✓ ✗ SCARCE (Ours) ✓ ✓ ✓ ✓\nExisting research works with consistency guarantees have attempted to solve complementarylabel learning problems by making assumptions about the distribution of complementary labels. The remedy started with Ishida et al. [2017], which proposed the uniform distribution assumption that a label other than the ground-truth label is sampled from the uniform distribution to be the complementary label. A subsequent work extended it to arbitrary loss functions and models [Ishida et al., 2019] based on the same distribution assumption. Then, Feng et al. [2020a] extended the problem setting to the existence of multiple complementary labels. Recent works have proposed discriminative methods that work by modeling the posterior probabilities of complementary labels instead of the generation process [Chou et al., 2020, Gao and Zhang, 2021, Liu et al., 2023, Lin and Lin, 2023]. However, the uniform distribution assumption is still necessary to ensure the classifier consistency property [Liu et al., 2023]. Yu et al. 
[2018] proposed the biased distribution assumption, elaborating that the generation of complementary labels follows a transition matrix, i.e., the complementary-label distribution is determined by the true label.\nIn summary, previous complementary-label learning approaches all require either the uniform distribution assumption or the biased distribution assumption to guarantee the consistency property, to the best of our knowledge. However, such assumptions may not be satisfied in realworld scenarios. On the one hand, the uniform distribution assumption is too strong, since the transition probability for different complementary labels is undifferentiated, i.e., the transition probability from the true label to a complementary label is constant for all labels. Such an assumption is not realistic since the annotations may be imbalanced and biased [Wei et al., 2023b, Wang et al., 2023a]. On the other hand, although the biased distribution assumption is more practical, an ordinary-label training set with deterministic labels, also known as anchor points [Liu and Tao, 2015], is essential for estimating transition probabilities during the training phase [Yu et al., 2018]. However, the collection of ordinary-label data with deterministic labels is often unrealistic in complementary-label learning problems [Gao and Zhang, 2021].\nTo this end, we propose a novel risk-consistent approach named SCARCE, i.e., Selected-Completely-At-Random ComplEmentary-label learning, without relying on the uniform distribution assumption or an additional ordinary-label training set. Inspired by the PU learning literature, we propose the Selected Completely At Random (SCAR) assumption for complementarylabel learning and propose an unbiased risk estimator accordingly. We then introduce a riskcorrection approach to mitigate overfitting issues with risk consistency maintained. Furthermore, we show that complementary-label learning can be expressed as a set of negative-unlabeled binary classification problems when using the one-versus-rest (OVR) strategy. Table 1 shows the comparison between SCARCE and previous methods. The main contributions of this work are summarized as follows:\n• Methodologically, we propose the first consistent complementary-label learning approach without relying on the uniform distribution assumption or an additional ordinary-label dataset in non-uniform cases. • Theoretically, we uncover the relation between complementary-label learning and negativeunlabeled learning, which provides a new perspective for understanding complementary-label learning. We also prove the convergence rate of the proposed risk estimator by providing an estimation error bound. • Empirically, the proposed approach is shown to achieve superior performance over state-ofthe-art methods on both synthetic and real-world benchmark datasets." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "In this section, we review the background of learning with ordinary labels, complementary labels, and PU learning. Then, we introduce a new data distribution assumption for generating complementary labels." }, { "figure_ref": [], "heading": "Learning with Ordinary Labels", "publication_ref": [ "b0" ], "table_ref": [], "text": "Let X = R d denote the d-dimensional feature space and Y = {1, 2, . . . , q} denote the label space with q class labels. 
Let p(x, y) be the joint probability density over the random variables (x, y) ∈ X × Y, then the classification risk is\nR(f ) = E p(x,y) [L(f (x), y)] ,(1)\nwhere f (x) is the model prediction and L can be any classification-calibrated loss function, such as the cross-entropy loss [Bartlett et al., 2006]. Let p(x) denote the marginal density of unlabeled data. Besides, let π k = p(y = k) be the class-prior probability of the k-th class and p(x|y = k) denote the class-conditional density. Then, the classification risk in Eq. ( 1) can be written as\nR(f ) = q k=1 π k E p(x|y=k) [L(f (x), k)] .\n(2)" }, { "figure_ref": [], "heading": "Learning with Complementary Labels", "publication_ref": [ "b23", "b14" ], "table_ref": [], "text": "In complementary-label learning, each training example is associated with one or multiple complementary labels specifying the classes to which the example does not belong.\nLet D = x i , Ȳi n i=1\ndenote the complementary-label training set sampled i.i.d. from an unknown density p(x, Ȳ ). Here, x ∈ X is a feature vector, and Ȳ ⊆ Y is a complementary-label set associated with x. In the literature, complementary-label learning can be categorized into single complementary-label learning [Ishida et al., 2017, Gao and Zhang, 2021, Liu et al., 2023] and multiple complementary-label learning [Feng et al., 2020a].\nIn this paper, we consider a more general case where Ȳ may contain any number of complementary labels, ranging from zero to q -1. For ease of notation, we use a q-dimensional label vector ȳ = [ȳ 1 , ȳ2 , . . . , ȳq ] ∈ {0, 1} q to denote the vector version of Ȳ , where ȳk = 1 when k ∈ Ȳ and ȳk = 0 otherwise. Let πk = p (ȳ k = 1) denote the fraction of training data where the k-th class is considered as a complementary label. Let p (x|ȳ k = 1) and p (x|ȳ k = 0) denote the marginal densities where the k-th class is considered as a complementary label or not. The task of complementary-label learning is to learn a multi-class classifier f : X → Y from D." }, { "figure_ref": [], "heading": "Learning from Positive and Unlabeled Data", "publication_ref": [ "b11", "b10", "b28", "b10", "b42", "b11" ], "table_ref": [], "text": "In PU learning [Elkan and Noto, 2008, du Plessis et al., 2014, Kiryo et al., 2017], the goal is to learn a binary classifier only from a positive dataset D P = {(x i , +1)} n P i=1 and an unlabeled dataset\nD U = {x i } n U i=1 .\nThere are mainly two problem settings for PU learning, i.e., the twosample setting [du Plessis et al., 2014, Niu et al., 2016] and the one-sample setting [Elkan and Noto, 2008]. In the two-sample setting, we assume that D P is sampled from the positive-class density p(x|y = +1) and D U is sampled from the marginal density p(x). In contrast, in the onesample setting, we assume that an unlabeled dataset is first sampled from the marginal density p(x). Then, if a training example is positive, its label is observed with a constant probability c, and the example remains unlabeled with probability 1 -c. If a training example is negative, its label is never observed and the example remains unlabeled with probability 1. In this paper, we make use of the one-sample setting for complementary-label learning." 
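To make the one-sample labeling mechanism concrete, the following is a minimal simulation sketch; the function name, the NumPy-based interface, and the toy parameters are illustrative assumptions rather than part of any referenced implementation.

```python
import numpy as np

def one_sample_pu_labeling(y_true, c, rng=None):
    """Simulate the one-sample PU setting described above.

    y_true : array of {+1, -1} ground-truth labels for data drawn from p(x).
    c      : labeling propensity; a positive example has its label observed
             with probability c and stays unlabeled otherwise.
    Returns +1 for observed positives and 0 for examples that remain unlabeled.
    """
    rng = np.random.default_rng(rng)
    y_true = np.asarray(y_true)
    observed = (y_true == 1) & (rng.random(y_true.shape[0]) < c)
    return np.where(observed, 1, 0)

# toy usage: 10 examples with roughly 40% positives, labels observed with c = 0.5
y = np.where(np.random.default_rng(0).random(10) < 0.4, 1, -1)
s = one_sample_pu_labeling(y, c=0.5, rng=1)
```

Applied class-wise with the roles of the positive and negative classes swapped, the same mechanism underlies the complementary-label generation process introduced in the next section.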
}, { "figure_ref": [], "heading": "Generation Process of Complementary Labels", "publication_ref": [ "b11", "b20" ], "table_ref": [], "text": "Inspired by the Selected Completely At Random assumption in PU learning [Elkan andNoto, 2008, Coudray et al., 2023], we introduce the Selected Completely At Random assumption for generating complementary labels, which can be summarized as follows.\nAssumption 1 (Selected Completely At Random (SCAR)). The complementary-label data with the k-th class as a complementary label are sampled completely at random from the marginal density of the data not belonging to the k-th class, i.e.,\np k ∈ Ȳ |x, k ∈ Y\\{y} = p k ∈ Ȳ |k ∈ Y\\{y} = c k , (3\n)\nwhere c k = πk /(1 -π k ) is a constant specifying the fraction of data with the k-th class as a complementary label and (x, y) is sampled from the density p(x, y).\nOur motivation is that complementary labels are often generated in a class-wise manner. They can be collected by answering \"yes\" or \"no\" questions given a pair of an example and a candidate label [Hu et al., 2019, Wang et al., 2021]. During an annotation round, we randomly select a candidate label and ask the annotators whether the example belongs to that class or not. The process is repeated iteratively, so that each example may be annotated with multiple complementary labels. The SCAR assumption differs from the biased distribution assumption, where only one single complementary label is generated by sampling only once from a multinomial distribution. Moreover, the SCAR assumption can be generalized to non-uniform cases by setting c k to different values for different labels. Therefore, our assumption is more practical in real-world scenarios.\nWe generate the complementary-label training set D as follows. First, an unlabeled dataset is sampled from p(x). Then, if the latent ground-truth label of an example is not the k-th class, we assign it a complementary label k with probability c k and still consider it to be an unlabeled example with probability 1 -c k . We generate complementary labels for all the examples by following the procedure w.r.t. each of the q labels. The data generation process is summarized in Appendix C." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we first introduce a risk rewrite formulation for complementary-label learning. Then, we propose an unbiased risk estimator, followed by its theoretical analysis. Finally, we present a risk-correction approach to improve the generalization performance." }, { "figure_ref": [], "heading": "Risk Rewrite", "publication_ref": [ "b16", "b66" ], "table_ref": [], "text": "Under the SCAR assumption, the ordinary multi-class classification risk in Eq. ( 1) can be rewritten as follows (the proof is given in Appendix D).\nTheorem 2. Under Assumption 1, the classification risk in Eq. ( 1) can be equivalently expressed as\nR(f ) = q k=1 E p(x|ȳ k =1) [(π k + π k -1) L(f (x), k)] + E p(x|ȳ k =0) [(1 -πk ) L(f (x), k)] . (4)\nTheorem 2 shows that the ordinary classification risk in Eq. ( 1) can be equivalently expressed using densities p (x|ȳ k = 1) and p (x|ȳ k = 0). Therefore, we can perform empirical risk minimization by minimizing an unbiased estimation of Eq. ( 4) with training data sampled from p (x|ȳ k = 1) and p (x|ȳ k = 0).\nIdeally, any multi-class loss function can be used to instantiate L, such as cross-entropy loss. 
In addition, any model and optimizer can be used, which reveals the universality of our proposed approach. According to our experimental results, we find that the cross-entropy loss is not robust and often leads to inferior performance, possibly due to its unboundedness [Ghosh et al., 2017, Zhang and Sabuncu, 2018, Feng et al., 2020a, Wei et al., 2023a], which will be discussed in Section 4.3. In Section 3.2, we provide an instantiation based on the OVR strategy." }, { "figure_ref": [], "heading": "OVR Strategy", "publication_ref": [ "b47", "b36", "b48", "b45", "b15", "b60", "b1", "b49", "b61" ], "table_ref": [], "text": "The OVR strategy decomposes multi-class classification into a series of binary classification problems, which is a common strategy with extensive theoretical guarantees and sound performance [Rifkin andKlautau, 2004, Zhang, 2004]. It instantiates the loss function L in Eq. ( 1) with the OVR loss, i.e.\nR (f 1 , f 2 , . . . , f q ) = E p(x,y)   ℓ (f y (x)) + k∈Y\\{y} ℓ (-f k (x))   .\n(5)\nHere, f k is a binary classifier w.r.t. the k-th class, E denotes the expectation, and ℓ : R → R + is a non-negative binary-class loss function. Then, the predicted label for a test instance x is determined as f (x) = arg max k∈Y f k (x). The goal is to find optimal classifiers f * 1 , f * 2 , . . . , f * q in a function class F which achieve the minimum classification risk in Eq. ( 5), i.e., f * 1 , f * 2 , . . . , f * q = arg min f 1 ,f 2 ,...,fq∈F R (f 1 , f 2 , . . . , f q ). However, since the joint probability distribution is unknown in practice, the classification risk in Eq. ( 5) is often approximated by the empirical risk\nR (f 1 , f 2 , . . . , f q ) = n i=1 ℓ (f y i (x i )) + k∈Y\\{y i } ℓ (-f k (x i )) /n given an ordinary-label dataset D O = {(x i , y i )} n i=1 consists of n training examples.\nAccordingly, the optimal classifier w.r.t. the empirical risk is f 1 , f 2 , . . . , f q = arg min f 1 ,f 2 ,...,fq∈F R (f 1 , f 2 , . . . , f q ). We may add regularization terms to R (f 1 , f 2 , . . . , f q ) when necessary [Loshchilov and Hutter, 2019].\nWe show that the OVR risk can be rewritten using densities p (x|ȳ k = 1) and p (x|ȳ k = 0) as well.\nTheorem 3. When the OVR loss is used, the classification risk in Eq. ( 5) can be equivalently expressed as\nR(f 1 , f 2 , . . . , f q ) = q k=1 R k (f k ), where R k (f k ) = E p(x|ȳ k =1) [(1 -π k )ℓ (-f k (x)) + (π k + π k -1) ℓ (f k (x))] + E p(x|ȳ k =0) [(1 -πk ) ℓ (f k (x))] . (6\n)\nThe proof is given in Appendix E. Since the true densities p (x|ȳ k = 1) and p (x|ȳ k = 0) are not directly accessible, we approximate the risk empirically. Suppose we have binary-class datasets D N k and D U k sampled i.i.d. from p (x|ȳ k = 1) and p (x|ȳ k = 0), respectively. Then, an unbiased risk estimator can be derived from these binary-class datasets to approximate the classification risk in Theorem 3 as\nR(f 1 , f 2 , . . . , f q ) = q k=1 R k (f k ), where R k (f k ) = 1 n N k n N k i=1 (1 -π k ) ℓ -f k (x N k,i ) + (π k + π k -1) ℓ f k (x N k,i ) + (1 -πk ) n U k n U k i=1 ℓ f k (x U k,i ) . (7\n)\nThis paper considers generating the binary-class datasets D N k and D U k by duplicating instances of D. Specifically, if the k-th class is a complementary label of a training example, we regard its duplicated instance as a negative example sampled from p (x|ȳ k = 1) and put the duplicated instance in D N k . 
If the k-th class is not a complementary label of a training example, we regard its duplicated instance as an unlabeled example sampled from p (x|ȳ k = 0) and put the duplicated instance in D U k . In this way, we can obtain q negative binary-class datasets and q unlabeled binary-class datasets (k ∈ Y):\nD N k = (x N k,i , -1) n N k i=1 = (x j , -1)|(x j , Ȳj ) ∈ D, k ∈ Ȳj ; (8) D U k = x U k,i n U k i=1 = x j |(x j , Ȳj ) ∈ D, k / ∈ Ȳj . (9\n)\nWhen the class priors π k are not accessible to the learning algorithm, they can be estimated by off-the-shelf mixture proportion estimation approaches [Scott, 2015, Ramaswamy et al., 2016, Zhang et al., 2020, Garg et al., 2021, Yao et al., 2022] with D N k and D U k . Notably, the irreducibility [Blanchard et al., 2010, Scott et al., 2013] assumption is necessary for class-prior estimation. However, it is still less demanding than the biased distribution assumption, which requires additional ordinary-label training data with deterministic labels, a.k.a. anchor points, to estimate the transition matrix [Yu et al., 2018]. We present the details of a class-prior estimation algorithm in Appendix A." }, { "figure_ref": [], "heading": "Relation to Negative-Unlabeled Learning", "publication_ref": [ "b11", "b15", "b32", "b8" ], "table_ref": [], "text": "We observe that the multi-class classification risk in Theorem 3 is the sum of the classification risk in negative-unlabeled learning [Elkan and Noto, 2008] by considering each class as a positive class in turn. This shows that besides minimizing the negative-unlabeled classification risk R k (f k ), we can adopt any other PU learning approach [Chen et al., 2020a, Garg et al., 2021, Li et al., 2022, Wang et al., 2023, Jiang et al., 2023, Dai et al., 2023] to derive the binary classifier f k by swapping the positive class and the negative class. Finally, we can predict the label for a test instance as the class of the minimum model output, since the positive and negative classes are swapped. Therefore, the proposal can be considered as a general framework for solving complementary-label learning problems. Based on this finding, we propose a meta complementary-label learning algorithm in Appendix C, and the proposed method SCARCE can be considered as an instantiation. In particular, when employing deep neural networks as the model architecture for PU learning algorithms, we can share the representation learning layers and use specific classification layers for different labels, which may allow training different classifiers simultaneously." }, { "figure_ref": [], "heading": "Theoretical Analysis", "publication_ref": [ "b17" ], "table_ref": [], "text": "Calibration. We show that the proposed risk can be calibrated to the 0-1 loss.\nLet R 0-1 (f ) = E p(x,y) I(f (x) ̸ = y) denote the expected 0-1 loss where f (x) = arg max k∈Y f k (x) and R * 0-1 = min f R 0-1 (f ) denote the Bayes error. Besides, let R * = min f 1 ,f 2 ,...,fq R(f 1 , f 2 , . . . , f q ) denote\nthe minimum risk of the proposed risk. Then we have the following theorem (its proof is given in Appendix F).\nTheorem 4. Suppose the binary-class loss function ℓ is convex, bounded below, differential, and satisfies ℓ(z) ≤ ℓ(-z) when z > 0. Then we have that for any ϵ 1 > 0, there exists an\nϵ 2 > 0 such that R (f 1 , f 2 , . . . , f q ) ≤ R * + ϵ 2 ⇒ R 0-1 (f ) ≤ R * 0-1 + ϵ 1 . (10\n)\nRemark 5. The infinite-sample consistency elucidates that the proposed risk can be calibrated to the 0-1 loss. 
Therefore, if we minimize the proposed risk and obtain the optimal classifier, the classifier also achieves the Bayes error.\nEstimation error bound. We further elaborate the convergence property of the empirical risk estimator R(f 1 , f 2 , . . . , f q ) by providing its estimation error bound. In this paper, we assume that there exists some constant\nC f such that sup f ∈F ∥f ∥ ∞ ≤ C f and some constant C ℓ such that sup |z|≤C f ℓ(z) ≤ C ℓ .\nWe also assume that the binary-class loss function ℓ(z) is Lipschitz continuous w.r.t. z with a Lipschitz constant L ℓ . Then we have the following theorem (its proof is given in Appendix G).\nTheorem 6. Based on the above assumptions, for any δ > 0, the following inequality holds with probability at least 1 -δ:\nR f 1 , f 2 , . . . , f q -R f * 1 , f * 2 , . . . , f * q ≤ q k=1 (4 -4π k )L ℓ R n U k ,p U k (F) + (1 -πk )C ℓ 2 ln (2/δ) n U k +(8 -8π k -4π k )L ℓ R n N k ,p N k (F) + (2 -2π k -πk )C ℓ 2 ln (2/δ) n N k , (11\n)\nwhere\nR n U k ,p U k (F) and R n N k ,p N k (F) denote the Rademacher complexity of F given n U k unlabeled data sampled from p (x|ȳ k = 0) and n N k negative data sampled from p (x|ȳ k = 1) respectively.\nRemark 7. Theorem 6 elucidates an estimation error bound of our proposed risk estimator.\nWhen\nn U k and n N k → ∞, R f 1 , f 2 , . . . , f q → R f * 1 , f * 2 , . . . , f * q because R n U k ,p U k (F) → 0 and R n N k ,p N k (F) → 0\nfor all parametric models with a bounded norm such as deep neural networks with weight decay [Golowich et al., 2018]. Furthermore, the estimation error bound converges\nin O p q k=1 1/ n N k + 1/ n U k\n, where O p denotes the order in probability. " }, { "figure_ref": [ "fig_0" ], "heading": "Risk-correction Approach", "publication_ref": [ "b28", "b37", "b2", "b37", "b17", "b37" ], "table_ref": [], "text": "Although the URE has sound theoretical properties, we have found that it can encounter several overfitting problems when using complex models such as deep neural networks. The training curves and test curves of the method that works by minimizing the URE in Eq. ( 7) are shown in Figure 1. 1 We can observe that the overfitting phenomena often occur almost simultaneously when the training loss becomes negative. We conjecture the overfitting problems are related with the negative terms in Eq. ( 7) [Kiryo et al., 2017, Lu et al., 2020, Cao et al., 2021]. Therefore, following Lu et al. [2020], Wang et al. [2023b], we wrap each potentially negative term with a non-negative risk-correction function g(z), such as the absolute value function g(z) = |z|. For ease of notation, we introduce\nR P k (f k ) = πk + π k -1 n N k n N k i=1 ℓ f k (x N k,i ) + 1 -πk n U k n U k i=1 ℓ f k (x U k,i ) . (12\n)\nThen, the corrected risk estimator can be written as\nR (f 1 , f 2 , . . . , f q ) = q k=1 R k (f k ), where R k (f k ) = g R P k (f k ) + 1 -π k n N k n N k i=1 ℓ -f k (x N k,i ) . (13\n)\nIt is obvious that Eq. ( 13) is an upper bound of Eq. ( 7), so the bias is always present. Next, we perform a theoretical analysis to clarify that the corrected risk estimator is biased but consistent.\nSince E R P k (f k ) = π k E p(x|y=k) ℓ (f k (x)\n) is non-negative, we assume that there exists a non-negative constant β such that for ∀k ∈ Y, E R P k (f k ) ≥ β. We also assume that the risk-correction function g(z) is Lipschitz continuous with a Lipschitz constant L g . 
Besides, we assume that there exists some constant C R such that the Rademacher complexity R n,p (F) for unlabeled (with n = n U k , p = p U k ) and negative data (with\nn = n N k , p = p N k ) sat- isfies R n,p (F) ≤ C R / √ n.\nThis assumption holds for many models, such as fully connected neural networks and linear-in-parameter models with a bounded norm [Golowich et al., 2018, Lu et al., 2020]. We introduce\nf 1 , f 2 , . . . , f q = arg min f 1 ,f 2 ,...,fq∈F R (f 1 , f 2 , . . . , f q ) and ∆ k = exp -2β 2 / (1 -π k -πk ) 2 C 2 ℓ /n N k + (1 -πk ) 2 C 2 ℓ /n U k .\nThen we have the following theorems (the proofs are given in Appendix H and I respectively).\nTheorem 8. Based on the above assumptions, the bias of the expectation of the corrected risk estimator has the following lower and upper bounds:\n0 ≤ E[ R(f 1 , f 2 , . . . , f q )] -R(f 1 , f 2 , . . . , f q ) q k=1 (2 -2π k -π k ) (L g + 1) C ℓ ∆ k . (14\n)\nFurthermore, for any δ > 0, the following inequality holds with probability at least 1 -δ:\n| R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q )| ≤ O p q k=1 1/ n N k + 1/ n U k . (15\n)\nTheorem 9. Based on the above assumptions, for any δ > 0, the following inequality holds with probability at least 1 -δ:\nR( f 1 , f 2 , . . . , f q ) -R(f * 1 , f * 2 , . . . , f * q ) ≤ O p q k=1 1/ n N k + 1/ n U k . (16\n) Remark 10. Theorem 8 shows that R(f 1 , f 2 , . . . , f q ) → R(f 1 , f 2 , . . . , f q ) as n U k and n N k → ∞,\nindicating that the corrected risk estimator is biased but consistent. An estimation error bound is also shown in Theorem 9. The convergence rate of the estimation error bound is still the same after employing the risk-correction function." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we validate the effectiveness of SCARCE through extensive experiments." }, { "figure_ref": [], "heading": "Experiments on Synthetic Benchmark Datasets", "publication_ref": [ "b6", "b58", "b29", "b24", "b24", "b14", "b14", "b34" ], "table_ref": [ "tab_0", "tab_2" ], "text": "We conducted experiments on synthetic benchmark datasets, including MNIST [LeCun et al., 1998], Kuzushiji-MNIST [Clanuwat et al., 2018], Fashion-MNIST [Xiao et al., 2017], and CIFAR-10 [ Krizhevsky and Hinton, 2009]. We considered various generation processes of complementary labels by following the uniform, biased, and SCAR assumptions. Details of the datasets, models, and hyperparameters can be found in Appendix J. We evaluated the classification performance of SCARCE against six single complementary-label learning methods, including PC [Ishida et al., 2017], NN [Ishida et al., 2019], GA [Ishida et al., 2019], L-UW [Gao and Zhang, 2021], L-W [Gao and Zhang, 2021], and OP [Liu et al., 2023]. We assumed that the class priors were accessible to the learning algorithm. We randomly generated complementary labels five times with different seeds and recorded the mean accuracy and standard deviations. In addition, a pairwise t-test at the 0.05 significance level is further performed to show whether the performance advantages are significant. Tables 2 and4 show the classification performance of each method with different models and generation settings of complementary labels on MNIST and Kuzushiji-MNIST respectively. The experimental results on Fashion-MNIST and CIFAR-10 are shown in Appendix K. 
We can observe that: a) Out of 40 cases of different distributions and datasets, SCARCE achieves the best performance in 39 cases, which clearly validates its effectiveness. b) Some consistent approaches based on the uniform distribution assumption can achieve comparable or better performance than SCARCE for the \"uniform\" setting. For example, GA outperforms SCARCE on CIFAR-10. However, its performance drops significantly in the other cases based on non-uniform generation settings." }, { "figure_ref": [], "heading": "Experiments on Real-world Benchmark Datasets", "publication_ref": [ "b38", "b57", "b62", "b44", "b59" ], "table_ref": [ "tab_1" ], "text": "We also verified the effectiveness of SCARCE on two real-world complementary-label datasets, CLCIFAR-10 and CLCIFAR-20 [Wang et al., 2023a]. The datasets were annotated by human annotators from Amazon Mechanical Turk (MTurk). The distribution of complementary labels is too complex to be captured by any of the above assumptions. Moreover, the complementary labels may be noisy, which means that the complementary labels may be annotated as ground-truth labels by mistake. There are three human-annotated complementary labels for each example, so they can be considered as multiple complementary-label datasets. We evaluated the classification performance of SCARCE against nine multiple complementary-label learning or partial-label learning methods, including CC [Feng et al., 2020b], PRODEN [Lv et al., 2020], EXP [Feng et al., 2020a], MAE [Feng et al., 2020a], Phuber-CE [Feng et al., 2020a], LWS [Wen et al., 2021], CAVL [Zhang et al., 2022], IDGP [Qiao et al., 2023], and POP [Xu et al., 2023]. We found that the performance of some approaches was unstable with different network initializations, so we randomly initialized the network five times with different seeds and recorded the mean accuracy and standard deviations. Table 3 shows the experimental results on CLCIFAR-10 and CLCIFAR-20 with different models. We can observe that: a) SCARCE achieves the best performance in all cases, further confirming its effectiveness. b) The superiority is even more evident on CLCIFAR-20, a more complex dataset with extremely limited supervision. It demonstrates the advantages of SCARCE in dealing with real-world datasets." }, { "figure_ref": [ "fig_2" ], "heading": "Further Analysis", "publication_ref": [ "b16", "b66" ], "table_ref": [], "text": "Comparison between different instantiations of SCARCE. In Theorem 2, any multi-class loss function can be used to instantiate L. Therefore, we also investigated the classification performance of the cross-entropy loss (CCE). Furthermore, we adopted the risk-correction approach to mitigate overfitting problems. We also included another instantiation of the meta-algorithm in Section 3.3. We used VPU [Chen et al., 2020a] as the PU learning approach. For a fair comparison, we did not use the mixup loss. We compared them with the default instantiation of SCARCE, i.e., the OVR loss, and Figure 2 (a) shows the experimental results. We generated complementary labels with the uniform distribution assumption and used LeNet as the model architecture. We can observe that the OVR loss outperforms CCE and VPU. We conjecture that the inferior performance of CCE may be related to its unboundedness [Ghosh et al., 2017, Zhang and Sabuncu, 2018, Wei et al., 2023a]."
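For concreteness, the sketch below spells out the default instantiation compared above, i.e., the OVR risk with the logistic loss and the absolute-value correction of Eq. (13); the tensor shapes and function names are illustrative assumptions and do not reproduce the authors' released code.

```python
import torch
import torch.nn.functional as F

def logistic_loss(z):
    # binary logistic loss: ℓ(z) = log(1 + exp(-z))
    return F.softplus(-z)

def corrected_ovr_risk(outputs, comp_matrix, class_priors, comp_priors):
    """Sketch of the corrected risk estimator of Eq. (13) with ℓ set to the
    logistic loss and g(z) = |z|.  For simplicity, every class is assumed to
    appear as a complementary label at least once in the batch.

    outputs      : (n, q) tensor; column k holds the binary score f_k(x).
    comp_matrix  : (n, q) binary tensor of complementary labels ȳ.
    class_priors : (q,) tensor of π_k.   comp_priors : (q,) tensor of π̄_k.
    """
    _, q = outputs.shape
    risk = outputs.new_zeros(())
    for k in range(q):
        f_k = outputs[:, k]
        neg = comp_matrix[:, k] == 1          # D_k^N: class k is a complementary label
        unl = ~neg                            # D_k^U: the remaining (unlabeled) examples
        pi_k, pibar_k = class_priors[k], comp_priors[k]
        # Eq. (12): the part of the per-class risk that can become negative
        r_p = (pibar_k + pi_k - 1) * logistic_loss(f_k[neg]).mean() \
              + (1 - pibar_k) * logistic_loss(f_k[unl]).mean()
        # Eq. (13): wrap it with the non-negative correction g and add the remaining term
        risk = risk + torch.abs(r_p) + (1 - pi_k) * logistic_loss(-f_k[neg]).mean()
    return risk
```

Minimizing the returned scalar with any stochastic optimizer corresponds to the risk-correction approach described earlier; other choices of the non-negative correction function g are possible, as noted in the text.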
}, { "figure_ref": [ "fig_2" ], "heading": "Sensitivity analysis.", "publication_ref": [], "table_ref": [], "text": "We investigated the influence of inaccurate class priors on the classification performance of SCARCE. Specifically, let πk = ϵ k π k denote the corrupted class prior probability for the k-th class where ϵ k is sampled from a normal distribution N (1, σ 2 ). We further normalized the obtained corrupted class priors to ensure that they sum up to one. Figure 2 (b) shows the classification performance given inaccurate class priors using the uniform generation process and LeNet as the model architecture. From Figure 2 (b), we can see that the performance is still satisfactory with small perturbations of the class priors. However, the performance will degenerate if the class priors deviate too much from the ground-truth values." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we proposed the first attempt towards consistent complementary-label learning without relying on the uniform distribution assumption or an ordinary-label training set to estimate the transition matrix in non-uniform cases. Based on a more practical distribution assumption, a consistent approach was proposed with theoretical guarantees. We also observed that complementary-label learning could be expressed as a set of negative-unlabeled classification problems when using the OVR strategy. Extensive experimental results on benchmark datasets validated the effectiveness of our proposed approach." }, { "figure_ref": [], "heading": "A Class-prior Estimation", "publication_ref": [ "b45", "b48", "b15", "b60" ], "table_ref": [], "text": "When the class priors π k are not accessible to the learning algorithm, they can be estimated by off-the-shelf mixture proportion estimation approaches [Ramaswamy et al., 2016, Scott, 2015, Garg et al., 2021, Yao et al., 2022]. In this section, we discuss the problem formulation and how to adapt a state-of-the-art class-prior estimation method to our problem as an example." }, { "figure_ref": [], "heading": "Mixture proportion estimation.", "publication_ref": [ "b49", "b48", "b49", "b35", "b45", "b15", "b15", "b8", "b15", "b28", "b67" ], "table_ref": [ "tab_3" ], "text": "Let F be a mixture distribution of two component distributions G and H with a proportion θ * , i.e.,\nF = (1 -θ * )G + θ * H.\nThe task of mixture proportion estimation problems is to estimate θ * given training examples sampled from F and H. For PU learning, we consider F = p(x), G = p(x|y = -1), and H = p(x|y = +1). Then, the estimation of θ * corresponds to the estimation of the class prior p(y = +1). It is shown that θ * cannot be identified without any additional assumptions [Scott et al., 2013, Scott, 2015]. Hence, various assumptions have been proposed to ensure the identifiability, including the irreducibility assumption [Scott et al., 2013], the anchor point assumption [Scott, 2015, Liu andTao, 2015], the separability assumption [Ramaswamy et al., 2016], etc.\nBest Bin Estimation. We use Best Bin Estimation (BBE) [Garg et al., 2021] where\nz P i = f PvU x PVal i and z U i = f PvU x UVal i\n. Besides, they introduce q(z) = Az p(x) dx where\nA z = x ∈ X |f PvU (x) ≥ z .\nThen, q(z) can be regarded as the proportion of data with the model output no less than z. For p (x|y = +1) and p (x), they define q P (z) and q U (z) respectively. 
they estimate them empirically as\nq P (z) = n PVal i=1 I f PvU x PVal i ≥ z n PVal and q U (z) = n UVal i=1 I f PvU x UVal i ≥ z n UVal . (17\n)\nThen, they obtain z as\nz = arg max z∈[0,1]   q U (z) q P (z) + 1 + γ q P (z)   ln (4/δ) 2n PVal + ln (4/δ) 2n UVal     (18\n)\nwhere γ and δ are hyperparameters respectively. Finally, they calculate the estimation value of the mixture proportion as\nθ = q U ( z) q P ( z) (19)\nand they prove that θ is an unbiased estimator of θ * when satisfying the pure positive bin assumption, a variant of the irreducibility assumption. More detailed descriptions of the approach can be found in Garg et al. [2021].\nClass-prior estimation for SCARCE. Our class-prior estimation approach is based on BBE. First, we split complementary-label data into training and validation data. Then, we generate q negative binary-class datasets D NTr k and q unlabeled binary-class datasets D UTr k by Eq. ( 8) and Eq. ( 9) with training data (k ∈ Y). We also generate q negative binary-class datasets D NVal k and q unlabeled binary-class datasets D UVal k by Eq. ( 8) and Eq. ( 9) with validation data (k ∈ Y). Then, we estimate the class priors (1 -π k ) for each label k ∈ Y by BBE adapted by interchanging the positive and negative classes. Finally, we normalize π k to ensure that they sum up to one. The algorithm detail is summarized in Algorithm 1. 8) and Eq. ( 9); Estimate the value of (1 -π k ) by employing the BBE algorithm and interchanging the positive and negative classes; end for Normalize π k to ensure they sum up to one;\nOutput: Class priors π k (k ∈ Y).\nExperimental results. We assumed that the ground-truth class priors for all labels and datasets are 0.1, which means that the test set was balanced. We generated complementary labels using the SCAR assumption with πk = 0.5. We repeated the generation process with different random seeds for 5 times. Table 5 shows the experimental results of the proposed class-prior estimation approach. We can observe that the class priors are accurately estimated in general with the proposed class-prior estimation method. et al., 2023, Dai et al., 2023, Garg et al., 2021]. Cost-sensitive methods are based on an unbiased risk estimator, which rewrites the classification risk as that only on positive and unlabeled data [Kiryo et al., 2017, Jiang et al., 2023, Zhao et al., 2022]." }, { "figure_ref": [], "heading": "C Algorithm Details", "publication_ref": [], "table_ref": [], "text": "The algorithm details of SCARCE is shown in Algorithm 2 and the algorithm details of the meta-algorithm is shown in Algorithm 3. The generation process of complementary labels is summarized in Algorithm 4. " }, { "figure_ref": [], "heading": "D Proof of Theorem 2", "publication_ref": [], "table_ref": [], "text": "First, we introduce the following lemma." }, { "figure_ref": [], "heading": "Lemma 11. Under Assumption 1, we have p (x|ȳ", "publication_ref": [], "table_ref": [], "text": "k = 1) = p(x|y ̸ = k).\nProof. On one hand, we have\np(x|ȳ k = 1, y ̸ = k) = p (x|ȳ k = 1) p(y ̸ = k|x, ȳk = 1) p(y ̸ = k|ȳ k = 1) .\nAccording to the definition of complementary labels, we have p(y ̸ = k|x, ȳk = 1) = p(y ̸ = k|ȳ k = 1) = 1. Therefore, we have p(x|ȳ k = 1, y ̸ = k) = p (x|ȳ k = 1). On the other hand, we have\np(x|ȳ k = 1, y ̸ = k) = p(x|y ̸ = k)p(ȳ k = 1|x, y ̸ = k) p(ȳ k = 1|y ̸ = k) = p(x|y ̸ = k),\nwhere the first equation is derived from Assumption 1. 
The proof is completed.\nThen, the proof of Theorem 2 is given.\nProof of Theorem 2.\nR(f ) =E p(x,y) [L(f (x), y)] = q k=1 π k E p(x|y=k) [L(f (x), k)] = q k=1 E p(x) [L(f (x), k)] -(1 -π k )E p(x|y̸ =k) [L(f (x), k)] = q k=1 E p(x) [L(f (x), k)] -(1 -π k )E p(x|ȳ k =1) [L(f (x), k)] = q k=1 E p(x|ȳ k =1) [(π k + π k -1) L(f (x), k)] + E p(x|ȳ k =0) [(1 -πk ) L(f (x), k)] ,\nwhich concludes the proof." }, { "figure_ref": [], "heading": "E Proof of Theorem 3", "publication_ref": [], "table_ref": [], "text": "Proof.\nR(f 1 , f 2 , . . . , f q ) =E p(x,y)   ℓ (f y (x)) + q k=1,k̸ =y ℓ (-f k (x))   =E p(x,y) q k=1 (I(k = y)ℓ(f k (x)) + I(k ̸ = y)ℓ(-f k (x))) = q k=1 E p(x,y) [I(k = y)ℓ (f k (x)) + I(k ̸ = y)ℓ (-f k (x))] = q k=1 π k E p(x|y=k) [ℓ (f k (x))] + (1 -π k ) E p(x|y̸ =k) [ℓ (-f k (x))] = q k=1 E p(x) [ℓ(f k (x))] -(1 -π k )E p(x|y̸ =k) [ℓ(f k (x))] +(1 -π k )E p(x|y̸ =k) [ℓ(-f k (x))] = q k=1 E p(x) [ℓ(f k (x))] -(1 -π k )E p(x|ȳ k =1) [ℓ(f k (x))] +(1 -π k )E p(x|ȳ k =1) [ℓ(-f k (x))] = q k=1 πk E p(x|ȳ k =1) [ℓ(f k (x))] + (1 -πk )E p(x|ȳ k =0) [ℓ(f k (x))] -(1 -π k )E p(x|ȳ k =1) [ℓ(f k (x))] + (1 -π k )E p(x|ȳ k =1) [ℓ(-f k (x))] = q k=1 E p(x|ȳ k =1) [(1 -π k )ℓ (-f k (x)) + (π k + π k -1) ℓ (f k (x))] +E p(x|ȳ k =0) [(1 -πk ) ℓ (f k (x))] .\nHere, I(•) returns 1 if predicate holds. Otherwise, I(•) returns 0. The proof is completed." }, { "figure_ref": [], "heading": "F Proof of Theorem 4", "publication_ref": [ "b63", "b63", "b63" ], "table_ref": [], "text": "To begin with, we show the following theoretical results about infinite-sample consistency from Zhang [2004]. For ease of notations, let f (x) = [f 1 (x), f 2 (x), . . . , f q (x)] denote the vector form of modeling outputs of all the binary classifiers. First, we elaborate the infinite-sample consistency property of the OVR strategy.\nTheorem 12 (Theorem 10 of Zhang [2004]). Consider the OVR strategy, whose surrogate loss function is defined as\nΨ y (f (x)) = ψ(f y (x))+ k∈Y\\{y} ψ(-f k (x)\n). Assume ψ is convex, bounded below, differentiable, and ψ(z) < ψ(-z) when z > 0. Then, the OVR strategy is infinite-sample consistent on Ω = R K with respect to the 0-1 classification risk.\nThen, we elaborate the relationship between the minimum classification risk of an infinitesample consistent method and the Bayes error.\nTheorem 13 (Theorem 3 of Zhang [2004]). Let B be the set of all vector Borel measurable functions, which take values in R q . For Ω\n⊂ R q , let B Ω = {f ∈ B : ∀x, f (x) ∈ Ω}. If [Ψ y (•)\n] is infinite-sample consistent on Ω with respect to the 0-1 classification risk, then for any ϵ 1 > 0, there exists an ϵ 2 > 0 such that for all underlying Borel probability measurable p, and f (•) ∈ B Ω ,\nE (x,y)∼p [Ψ y (f (x))] ≤ inf f ′ ∈B Ω E (x,y)∼p [Ψ y (f ′ (x))] + ϵ 2 (20) implies R 0-1 (T (f (•))) ≤ R * 0-1 + ϵ 1 , (21\n)\nwhere T (•) is defined as T (f (x)) := arg max k=1,...,q f k (x).\nThen, we give the proof of Theorem 4.\nProof of Theorem 4. According to Theorem 3, the proposed classification risk R(f 1 , f 2 , . . . , f q ) is equivalent to the OVR risk. Therefore, it is sufficient to elaborate the theoretical properties of the OVR risk to prove Theorem 4." }, { "figure_ref": [], "heading": "G Proof of Theorem 6", "publication_ref": [ "b40", "b31" ], "table_ref": [], "text": "First, we give the definition of Rademacher complexity.\nDefinition 14 (Rademacher complexity). Let X n = {x 1 , . . . x n } denote n i.i.d. 
random variables drawn from a probability distribution with density p(x), F = {f : X → R} denote a class of measurable functions, and σ = (σ 1 , σ 2 , . . . , σ n ) denote Rademacher variables taking values from {+1, -1} uniformly. Then, the (expected) Rademacher complexity of F is defined as\nR n,p (F) = E Xn E σ sup f ∈F 1 n n i=1 σ i f (x i ) . (22\n)\nFor ease of notation, we define D\n= D U 1 D U 2 . . . D U q D N 1 D N 2 . . . D N\nq denote the set of all the binary-class training data. Then, we have the following lemma.\nLemma 15. For any δ > 0, the inequalities below hold with probability at least 1 -δ:\nsup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk )C ℓ ln (2/δ) 2n U k +(2 -2π k )L ℓ R n U k ,p U k (F) + (4 -4π k -2π k )L ℓ R n N k ,p N k (F) + (2 -2π k -πk )C ℓ ln (2/δ) 2n N k . (23\n)\nProof. In the following proofs, we consider a general case where all the datasets D N k and D U k are mutually independent. We can observe that when an unlabeled example\nx U k,i ∈ D U k is substituted by another unlabeled example x U k,j , the value of sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) changes at most (1 -πk )C ℓ /n U k . Besides, when a negative example x N k,i ∈ D N k is substituted by another negative example x N k,j , the value of sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) changes at most (2 -2π k -πk )C ℓ /n N k .\nAccording to the McDiarmid's inequality, for any δ > 0, the following inequality holds with probability at least 1 -δ/2:\nsup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤E D sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) + q k=1 (1 -πk )C ℓ ln (2/δ) 2n U k + (2 -2π k -πk )C ℓ ln (2/δ) 2n N k , (24\n)\nwhere the inequality is deduced since\n√ a + b ≤ √ a + √ b.\nIt is a routine work to show by symmetrization [Mohri et al., 2012] that\nE D sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (2 -2π k )R n U k ,p U k (ℓ • F) + (4 -4π k -2π k )R n N k ,p N k (ℓ • F) , (25\n)\nwhere R n,p (ℓ•F) is the Rademacher complexity of the composite function class (ℓ•F). According to Talagrand's contraction lemma [Ledoux and Talagrand, 1991], we have\nR n U k ,p U k (ℓ • F) ≤ L ℓ R n U k ,p U k (F),(26)\nR n N k ,p N k (ℓ • F) ≤ L ℓ R n N k ,p N k (F).(27)\nBy combining Inequality (24), Inequality (25), Inequality (26), and Inequality (27), the following inequality holds with probability at least 1 -δ/2:\nsup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (2 -2π k )L ℓ R n U k ,p U k (F) +(1 -πk )C ℓ ln (2/δ) 2n U k + (4 -4π k -2π k )L ℓ R n N k ,p N k (F) + (2 -2π k -πk )C ℓ ln (2/δ) 2n N k . (28\n)\nIn the same way, we have the following inequality with probability at least 1 -δ/2:\nsup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk )C ℓ ln (2/δ) 2n U k +(2 -2π k )L ℓ R n U k ,p U k (F) + (4 -4π k -2π k )L ℓ R n N k ,p N k (F) + (2 -2π k -πk )C ℓ ln (2/δ) 2n N k . (29\n)\nBy combining Inequality (28) and Inequality (29), we have the following inequality with probability at least 1 -δ:\nsup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk )C ℓ ln (2/δ) 2n U k +(2 -2π k )L ℓ R n U k ,p U k (F) + (4 -4π k -2π k )L ℓ R n N k ,p N k (F) + (2 -2π k -πk )C ℓ ln (2/δ) 2n N k ,(30)\nwhich concludes the proof.\nProof of Theorem 6.\nR( f 1 , f 2 , . . . 
, f q ) -R(f * 1 , f * 2 , . . . , f * q ) =R( f 1 , f 2 , . . . , f q ) -R( f 1 , f 2 , . . . , f q ) + R( f 1 , f 2 , . . . , f q ) -R(f * 1 , f * 2 , . . . , f * q ) + R(f * 1 , f * 2 , . . . , f * q ) -R(f * 1 , f * 2 , . . . , f * q ) ≤R( f 1 , f 2 , . . . , f q ) -R( f 1 , f 2 , . . . , f q ) + R(f * 1 , f * 2 , . . . , f * q ) -R(f * 1 , f * 2 , . . . , f * q ) ≤2 sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) (31)\nThe first inequality is deduced because ( f 1 , f 2 , . . . , f q ) is the minimizer of R(f 1 , f 2 , . . . , f q ). Combining Inequality (31) and Lemma 15, the proof is completed." }, { "figure_ref": [], "heading": "H Proof of Theorem 8", "publication_ref": [], "table_ref": [], "text": "Let D + k (f k ) = D N k , D U k | R P k (f k ) ≥ 0 and D - k (f k ) = D N k , D U k | R P k (f k ) < 0 denote\nthe sets of NU data pairs having positive and negative empirical risk respectively. Then we have the following lemma.\nLemma 16. The probability measure of D - k (f k ) can be bounded as follows:\nP D - k (f k ) ≤ exp -2β 2 (1 -π k -πk ) 2 C 2 ℓ /n N k + (1 -πk ) 2 C 2 ℓ /n U k . (32\n) Proof. Let p D N k = p x N k,1 |ȳ k = 1 p x N k,2 |ȳ k = 1 . . . p x N k,n N k |ȳ k = 1 and p D U k = p x U k,1 |ȳ k = 0 p x U k,2 |ȳ k = 0 . . . p x U k,n U k |ȳ k = 0\ndenote the probability density of D N k and D U k respectively. The joint probability density of\nD N k and D U k is p D N k , D U k = p D N k p D U k .\nThen, the probability measure P D - k (f k ) is defined as\nP D - k (f k ) = (D N k ,D U k )∈D - k (f k ) p D N k , D U k d D N k , D U k = (D N k ,D U k )∈D - k (f k ) p D N k , D U k dx N k,1 . . . dx N k,n N k dx U k,1 . . . dx U k,n U k .\nWhen a negative example in D N k is substituted by another different negative example, the change of the value of R P k (f k ) is no more than (1 -π k -πk )C ℓ /n N k ; when an unlabeled example in D U k is substituted by another different unlabeled example, the change of the value of R P k (f k ) is no more than (1 -πk )C ℓ /n U k . Therefore, by applying the McDiarmid's inequality, we can obtain the following inequality:\nP E R P k (f k ) -R P k (f k ) ≥ β ≤ exp -2β 2 (1 -π k -πk ) 2 C 2 ℓ /n N k + (1 -πk ) 2 C 2 ℓ /n U k . (33\n)\nTherefore, we have\nP D - k (f k ) =P R P k (f k ) ≤ 0 ≤P R P k (f k ) ≤ E R P k (f k ) -β =P E R P k (f k ) -R P k (f k ) ≥ β ≤ exp -2β 2 (1 -π k -πk ) 2 C 2 ℓ /n N k + (1 -πk ) 2 C 2 ℓ /n U k , (34\n)\nwhich concludes the proof.\nWe present a more complete version of Theorem 8 and its proof.\nTheorem 17. Based on the above assumptions, the bias of the expectation of the corrected risk estimator has the following lower and upper bounds:\n0 ≤ E[ R(f 1 , f 2 , . . . , f q )] -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (2 -2π k -π k ) (L g + 1) C ℓ ∆ k , (35\n)\nwhere\n∆ k = exp -2β 2 / (1 -π k -πk ) 2 C 2 ℓ /n N k + (1 -πk ) 2 C 2 ℓ /n U k\n. Furthermore, for any δ > 0, the following inequality holds with probability at least 1 -δ:\n| R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q )| ≤ q k=1 (1 -πk ) C ℓ L g ln (2/δ) 2n U k + (2 -2π k -π k ) (L g + 1) C ℓ ∆ k + ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (2/δ) 2n N k .\nProof. First, we have\nE R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) = E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) . Since R(f 1 , f 2 , . . . , f q ) is an upper bound of R(f 1 , f 2 , . . . , f q ), we have E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≥ 0.\nBesides, we have\nE R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . 
, f q ) = q k=1 (D N k ,D U k )∈D - k (f k ) g R P k (f k ) -R P k (f k ) p D N k , D U k d D N k , D U k ≤ q k=1 sup (D N k ,D U k )∈D - k (f k ) g R P k (f k ) -R P k (f k ) (D N k ,D U )∈D - k (f k ) p D N k , D U k d D N k , D U k = q k=1 sup (D N k ,D U k )∈D - k (f k ) g R P k (f k ) -R P k (f k ) P D - k (f k ) ≤ q k=1 sup (D N k ,D U k )∈D - k (f k ) (L g R P k (f k ) + R P k (f k ) )P D - k (f k ) .\nBesides,\nR P k (f k ) = πk + π k -1 n N k n N k i=1 ℓ f k (x N k,i ) + 1 -πk n U k n U k i=1 ℓ f k (x U k,i ) ≤ πk + π k -1 n N k n N k i=1 ℓ f k (x N k,i ) + 1 -πk n U k n U k i=1 ℓ f k (x U k,i ) ≤(1 -π k -πk )C ℓ + (1 -πk )C ℓ = (2 -2π k -π k ) C ℓ .\nTherefore, we have\nE R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 sup (D N k ,D U k )∈D - k (f k ) (L g R P k (f k ) + R P k (f k ) )P(D - k (f k )). ≤ q k=1 (2 -2π k -π k ) (L g + 1) C ℓ exp -2β 2 (1 -π k -πk ) 2 C 2 ℓ /n N k + (1 -πk ) 2 C 2 ℓ /n U k = q k=1 (2 -2π k -π k ) (L g + 1) C ℓ ∆ k ,\nwhich concludes the first part of the proof of Theorem 3. Before giving the proof for the second part, we give the upper bound of E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) . When an unlabeled example from D U k is substituted by another unlabeled example, the value of R(f 1 , f 2 , . . . , f q ) changes at most (1 -πk ) C ℓ L g /n U k . When a negative example from D N k is substituted by a different example, the value of R(f 1 , f 2 , . . . , f q ) changes at most ((1 -π k -πk ) L g + 1 -π k ) C ℓ /n N k . By applying McDiarmid's inequality, we have the following inequalities with probability at least 1 -δ/2:\nR(f 1 , f 2 , . . . , f q ) -E R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk ) C ℓ L g ln (2/δ) 2n U k + q k=1 ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (2/δ) 2n N k , E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk ) C ℓ L g ln (2/δ) 2n U k + q k=1 ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (2/δ) 2n N k .\nThen, with probability at least 1 -δ, we have\nE R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk ) C ℓ L g ln (2/δ) 2n U k + q k=1 ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (2/δ) 2n N k .\nTherefore, with probability at least 1 -δ we have\nR(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) = R(f 1 , f 2 , . . . , f q ) -E[ R(f 1 , f 2 , . . . , f q )] + E[ R(f 1 , f 2 , . . . , f q )] -R(f 1 , f 2 , . . . , f q ) ≤ R(f 1 , f 2 , . . . , f q ) -E[ R(f 1 , f 2 , . . . , f q )] + E[ R(f 1 , f 2 , . . . , f q )] -R(f 1 , f 2 , . . . , f q ) = R(f 1 , f 2 , . . . , f q ) -E[ R(f 1 , f 2 , . . . , f q )] + E[ R(f 1 , f 2 , . . . , f q )] -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk ) C ℓ L g ln (2/δ) 2n U k + q k=1 ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (2/δ) 2n N k + q k=1 (2 -2π k -π k ) (L g + 1) C ℓ ∆ k ,\nwhich concludes the proof." }, { "figure_ref": [], "heading": "I Proof of Theorem 9", "publication_ref": [ "b31", "b40" ], "table_ref": [], "text": "In this section, we adopt an alternative definition of Rademacher complexity:\nR ′ n,p (F) = E Xn E σ sup f ∈F 1 n n i=1 σ i f (x i ) .(36)\nThen, we introduce the following lemmas.\nLemma 18. Without any composition, for any F, we have R\n′ n,p (F) ≥ R n,p (F). If F is closed under negation, we have R ′ n,p (F) = R n,p (F).\nLemma 19 (Theorem 4.12 in [Ledoux and Talagrand, 1991]). 
If ψ : R → R is a Lipschitz continuous function with a Lipschitz constant L ψ and satisfies ψ(0) = 0, we have\nR ′ n,p (ψ • F) ≤ 2L ψ R ′ n,p (F), where ψ • F = {ψ • f |f ∈ F}.\nBefore giving the proof of Theorem 9, we give the following lemma.\nLemma 20. For any δ > 0, the inequalities below hold with probability at least 1 -δ:\nsup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk ) C ℓ L g ln (1/δ) 2n U k + q k=1 ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (1/δ) 2n N k + q k=1 (4 -4π k ) L g L ℓ R n U k ,p U k (F) + ((4 -4π k -4π k ) L g + 4 -4π k ) L ℓ R n N k ,p N k (F) + q k=1 (2 -2π k -π k ) (L g + 1) C ℓ ∆ k .\nProof. Similar to previous proofs, we can observe that when an unlabeled example from D U k is substituted by another unlabeled example, the value of sup f\n1 ,f 2 ,...,fq∈F E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) changes at most (1 -πk ) C ℓ L g /n U k . When a negative example from D N k is substituted by a dif- ferent example, the value of sup f 1 ,f 2 ,...,fq∈F E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) changes at most ((1 -π k -πk ) L g + 1 -π k ) C ℓ /n N k .\nBy applying McDiarmid's inequality, we have the following inequality with probability at least 1 -δ:\nsup f 1 ,f 2 ,...,fq∈F E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) -E sup f 1 ,f 2 ,...,fq∈F E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk ) C ℓ L g ln (1/δ) 2n U k + q k=1 ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (1/δ) 2n N k . (37\n)\nBesides, we have\nE sup f 1 ,f 2 ,...,fq∈F E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) =E D sup f 1 ,f 2 ,...,fq∈F E D′ R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤E D, D′ sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ; D) -R(f 1 , f 2 , . . . , f q ; D′ ) ,(38)\nwhere the last inequality is deduced by applying Jensen's inequality twice since the absolute value function and the supremum function are both convex. Here, R(f 1 , f 2 , . . . , f q ; D) denotes the value of R(f 1 , f 2 , . . . , f q ) calculated on D. To ensure that the conditions in Lemma 19 hold, we introduce l(z) = ℓ(z)-ℓ(0). It is obvious that l(0) = 0 and l(z) is also a Lipschitz continuous function with a Lipschitz constant L ℓ . Then, we have\nR(f 1 , f 2 , . . . , f q ; D) -R(f 1 , f 2 , . . . , f q ; D′ ) ≤ q k=1 g    πk + π k -1 n N k n N k i=1 ℓ f k (x N k,i ) + 1 -πk n U k n U k i=1 ℓ f k (x U k,i )    -g    πk + π k -1 n N k n N k i=1 ℓ f k (x N ′ k,i ) + 1 -πk n U k n U k i=1 ℓ f k (x U ′ k,i )    + q k=1 1 -π k n N k n N k i=1 ℓ -f k (x N k,i ) - 1 -π k n N k n N k i=1 ℓ -f k (x N ′ k,i ) ≤ q k=1 L g πk + π k -1 n N k n N k i=1 ℓ f k (x N k,i ) -ℓ f k (x N ′ k,i ) + 1 -πk n N k n U k i=1 ℓ f k (x U k,i ) -ℓ f k (x U ′ k,i ) + q k=1 1 -π k n N k n N k i=1 ℓ -f k (x N k,i ) -ℓ -f k (x N ′ k,i ) . (39\n)\nBesides, we can observe ℓ(z 1 ) -ℓ(z 2 ) = l(z 1 ) -l(z 2 ). Therefore, the RHS of Inequality (39) can be expressed as\nq k=1 L g πk + π k -1 n N k n N k i=1 l f k (x N k,i ) -l f k (x N ′ k,i ) + 1 -πk n N k n U k i=1 l f k (x U k,i ) -l f k (x U ′ k,i ) + q k=1 1 -π k n N k n N k i=1 l -f k (x N k,i ) -l -f k (x N ′ k,i ) .\nThen, it is a routine work to show by symmetrization [Mohri et al., 2012] that E D, D′ sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ; D) -R(f 1 , f 2 , . . . 
, f q ; D′ )\n≤ q k=1 (2 -2π k ) L g R ′ n U k ,p U k ( l • F) + ((2 -2π k -2π k ) L g + 2 -2π k ) R ′ n N k ,p N k ( l • F) ≤ q k=1 (4 -4π k ) L g L ℓ R ′ n U k ,p U k (F) + ((4 -4π k -4π k ) L g + 4 -4π k ) L ℓ R ′ n N k ,p N k (F) = q k=1 (4 -4π k ) L g L ℓ R n U k ,p U k (F) + ((4 -4π k -4π k ) L g + 4 -4π k ) L ℓ R n N k ,p N k (F) , (40\n)\nwhere the second inequality is deduced according to Lemma 19 and the last equality is based on the assumption that F is closed under negation. By combing Inequality (37), Inequality (38), and Inequality (40), we have the following inequality with probability at least 1 -δ: sup f 1 ,f 2 ,...,fq∈F E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1\n(1 -πk ) C ℓ L g ln (1/δ) 2n\nU k + q k=1 ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (1/δ) 2n N k + q k=1 (4 -4π k ) L g L ℓ R n U k ,p U k (F) + ((4 -4π k -4π k ) L g + 4 -4π k ) L ℓ R n N k ,p N k (F) . (41\n)\nThen, we have sup\nf 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) = sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -E R(f 1 , f 2 , . . . , f q ) +E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -E R(f 1 , f 2 , . . . , f q ) + sup f 1 ,f 2 ,...,fq∈F E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) . (42\n)\nCombining Inequality (42) with Inequality (41) and Inequality ( 14), the proof is completed.\nWe present a more complete version of Theorem 9 and its proof. \n                 \n.\nFor each example, we sample a complementary label from a multinomial distribution parameterized by the row vector of the transition matrix indexed by the ground-truth label.\nFor the \"SCAR-a\" and \"SCAR-b\" settings, we followed the generation process in Section 3.1 with the following class priors of complementary labels: We repeated the sampling procedure to ensure that each example had a single complementary label." }, { "figure_ref": [], "heading": "J.2 Descriptions of Compared Approaches", "publication_ref": [ "b23", "b24", "b24", "b14", "b14", "b34" ], "table_ref": [], "text": "The compared methods in the experiments of synthetic benchmark datasets:\n• PC [Ishida et al., 2017]: A risk-consistent complementary-label learning approach using the pairwise comparison loss. • NN [Ishida et al., 2019]: A risk-consistent complementary-label learning approach using the non-negative risk estimator. • GA [Ishida et al., 2019]: A variant of the non-negative risk estimator of complementary-label learning by using the gradient ascent technique. • L-UW [Gao and Zhang, 2021]: A discriminative approach by minimizing the outputs corresponding to complementary labels. • L-W Gao and Zhang [2021]: A weighted loss based on L-UW by considering the prediction uncertainty.\n• OP [Liu et al., 2023]: A classifier-consistent complementary-label learning approach by minimizing the outputs of complementary labels." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Theorem 21. Based on the above assumptions, for any δ > 0, the following inequality holds with probability at least 1 -δ:\nProof.\nThe first inequality is deduced because ( f 1 , f 2 , . . . , f q ) is the minimizer of R(f 1 , f 2 , . . . , f q ). Combining Inequality ( 43) and Lemma 20, the proof is completed." 
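Editorial note: the displayed statement of Theorem 21 and the display labeled Inequality (43) did not survive extraction above. As an aid to the reader only, and not the authors' exact wording, the proof's appeal to the minimizer property is consistent with the standard decomposition sketched below; the bars on R and f mark the corrected risk estimator and its empirical minimizer, decorations that were lost in the flattened text.

```latex
% Assumed reconstruction of the decomposition that Inequality (43) presumably
% states; \bar{R} is the corrected risk estimator and (\bar{f}_1,\dots,\bar{f}_q)
% its minimizer over \mathcal{F}, while (f_1^*,\dots,f_q^*) minimizes R.
\begin{align*}
 & R(\bar f_1,\dots,\bar f_q)-R(f_1^*,\dots,f_q^*) \\
 &\quad= \bigl(R(\bar f_1,\dots,\bar f_q)-\bar R(\bar f_1,\dots,\bar f_q)\bigr)
        +\bigl(\bar R(\bar f_1,\dots,\bar f_q)-\bar R(f_1^*,\dots,f_q^*)\bigr)
        +\bigl(\bar R(f_1^*,\dots,f_q^*)-R(f_1^*,\dots,f_q^*)\bigr) \\
 &\quad\le \bigl(R(\bar f_1,\dots,\bar f_q)-\bar R(\bar f_1,\dots,\bar f_q)\bigr)
        +\bigl(\bar R(f_1^*,\dots,f_q^*)-R(f_1^*,\dots,f_q^*)\bigr) \\
 &\quad\le 2\sup_{f_1,\dots,f_q\in\mathcal F}
        \bigl|\bar R(f_1,\dots,f_q)-R(f_1,\dots,f_q)\bigr|.
\end{align*}
```

The middle term is dropped because the empirical minimizer makes it non-positive, which is the "first inequality" referred to in the proof, and Lemma 20 then bounds the remaining supremum.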
}, { "figure_ref": [], "heading": "J Details of Experimental Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "J.1 Details of synthetic benchmark datasets", "publication_ref": [ "b38", "b57", "b62", "b44", "b59" ], "table_ref": [], "text": "We considered the single complementary-label setting and similar results could be observed with multiple complementary labels.\nFor the \"uniform\" setting, a label other than the ground-truth label was sampled randomly following the uniform distribution to be the complementary label.\nFor the \"biased-a\" and \"biased-b\" settings, we adopted the following row-normalized transition matrices of p(ȳ|y) to generate complementary labels:\nThe compared methods in the experiments of real-world benchmark datasets:\n• CC [Feng et al., 2020b]: A classifier-consistent partial-label learning approach based on the uniform distribution assumption of partial labels. • PRODEN [Lv et al., 2020]: A risk-consistent partial-label learning approach using the selftraining strategy to identify the ground-truth labels. • EXP [Feng et al., 2020a]: A classifier-consistent multiple complementary-label learning approach by using the exponential loss function.\n• MAE [Feng et al., 2020a]: A classifier-consistent multiple complementary-label learning approach by using the Mean Absolute Error loss function. • Phuber-CE [Feng et al., 2020a]: A classifier-consistent multiple complementary-label learning approach by using the Partially Huberised Cross Entropy loss function.\n• LWS [Wen et al., 2021]: A partial-label learning approach by leveraging a weight to account for the tradeoff between losses on partial and non-partial labels. • CAVL [Zhang et al., 2022]: A partial-label learning approach by using the class activation value to identify the true labels. • IDGP [Qiao et al., 2023]: A instance-dependent partial-label learning approach by modeling the generation process of partial labels. • POP [Xu et al., 2023]: A partial-label learning approach by purifying candidate label sets progressively." }, { "figure_ref": [], "heading": "J.3 Details of Models and Hyperparameters", "publication_ref": [ "b19", "b21", "b41", "b22", "b19", "b21", "b43", "b27" ], "table_ref": [], "text": "The logistic loss was adopted to instantiate the binary-class loss function l, and the absolute value function was used as the risk-correction function g for SCARCE.\nFor CIFAR-10, we used 34-layer ResNet [He et al., 2016] and 22-layer DenseNet [Huang et al., 2017] as the model architectures. For the other three datasets, we used a multilayer perceptron (MLP) with three hidden layers of width 300 equipped with the ReLU [Nair and Hinton, 2010] activation function and batch normalization [Ioffe and Szegedy, 2015] and 5-layer LeNet [LeCun et al., 1998] as the model architectures.\nFor CLCIFAR-10 and CLCIFAR-20, we adopted the same data augmentation techniques for all the methods, including random horizontal flipping and random cropping. We used 34-layer ResNet [He et al., 2016] and 22-layer DenseNet [Huang et al., 2017] as the model architectures.\nAll the methods were implemented in PyTorch [Paszke et al., 2019]. We used the Adam optimizer [Kingma and Ba, 2015]. The learning rate and batch size were fixed to 1e-3 and 256 for all the datasets, respectively. The weight decay was 1e-3 for CIFAR-10, CLCIFAR-10, and CLCIFAR-20 and 1e-5 for the other three datasets. The number of epochs was set to 200, and we recorded the mean accuracy in the last ten epochs." 
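To make the synthetic generation settings above concrete, a minimal sketch of the three corruption schemes is given below. This is an illustrative reconstruction rather than the authors' released code: the transition-matrix argument, the resample-until-single-label loop for the SCAR settings, and the NumPy RNG handling are editorial assumptions based on the descriptions in J.1, Algorithm 4, and the SCAR class priors reported earlier.

```python
# Illustrative sketch (not the authors' code) of the "uniform", "biased", and
# "SCAR" complementary-label generation settings described in Appendix J.1.
import numpy as np

rng = np.random.default_rng(0)

def uniform_complementary(y, num_classes):
    """'uniform': pick a label other than the ground truth uniformly at random."""
    candidates = [k for k in range(num_classes) if k != y]
    return int(rng.choice(candidates))

def biased_complementary(y, transition):
    """'biased-a/b': sample from the row of the row-normalized transition
    matrix p(ybar | y) indexed by the ground-truth label (the diagonal entry
    is assumed to be zero after normalization)."""
    return int(rng.choice(len(transition[y]), p=transition[y]))

def scar_complementary(y, class_priors, num_classes):
    """'SCAR-a/b': include each k != y with probability c_k (Algorithm 4),
    resampling until exactly one complementary label is produced."""
    while True:
        picked = [k for k in range(num_classes)
                  if k != y and rng.random() < class_priors[k]]
        if len(picked) == 1:
            return picked[0]

# Example with the SCAR-a class priors reported in the appendix.
scar_a = [0.05, 0.05, 0.2, 0.2, 0.1, 0.1, 0.05, 0.05, 0.1, 0.1]
print(scar_complementary(y=3, class_priors=scar_a, num_classes=10))
```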
}, { "figure_ref": [], "heading": "K More Experimental Results", "publication_ref": [], "table_ref": [], "text": "" } ]
Complementary-label learning is a weakly supervised learning problem in which each training example is associated with one or multiple complementary labels indicating the classes to which it does not belong. Existing consistent approaches have relied on the uniform distribution assumption to model the generation of complementary labels, or on an ordinary-label training set to estimate the transition matrix in non-uniform cases. However, either condition may not be satisfied in real-world scenarios. In this paper, we propose a novel consistent approach that does not rely on these conditions. Inspired by the positive-unlabeled (PU) learning literature, we propose an unbiased risk estimator based on the Selected Completely At Random assumption for complementary-label learning. We then introduce a risk-correction approach to address overfitting problems. Furthermore, we find that complementary-label learning can be expressed as a set of negative-unlabeled binary classification problems when using the one-versus-rest strategy. Extensive experimental results on both synthetic and real-world benchmark datasets validate the superiority of our proposed approach over state-of-the-art methods.
The Selected-completely-at-random Complementary Label is a Practical Weak Supervision for Multi-class Classification
[ { "figure_caption": "Figure 1 :1Figure1: Training curves and test curves of the method that minimizes the URE and test curves of our proposed risk-correction approach SCARCE. The green dashed lines indicate when the URE becomes negative while the yellow dashed lines indicate when the overfitting phenomena occur. The complementary labels are generated by following the uniform distribution assumption. ResNet is used as the model architecture for CIFAR-10 and MLP is used for other datasets.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: (a) Classification accuracy of different instantiations of SCARCE on different datasets. (b) Classification accuracy given inaccurate class priors.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "as the base algorithm for class-prior estimation since it can achieve nice performance with easy implementations. First, they split the PU data into PU training data D PTr = x PTr i train a positive-versus-unlabeled (PvU) classifier f PvU with D PTr and D UTr . They collect the model outputs of PU validation data Z P = z P", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Algorithm 11Class-prior Estimation Input: Complementary-label training set D. for k ∈ Y do Generate training datasets D NTr k , D UTr k , validation data D NVal k , and D UVal k by Eq. (", "figure_data": "", "figure_id": "fig_4", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "SCAR-a: [0.05, 0.05, 0.2, 0.2, 0.1, 0.1, 0.05, 0.05, 0.1, 0.1] , SCAR-b: [0.1, 0.1, 0.2, 0.05, 0.05, 0.1, 0.1, 0.2, 0.05, 0.05] .", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Classification accuracy (mean±std) of each method on MNIST. The best performance is shown in bold (pairwise t-test at the 0.05 significance level).", "figure_data": "SettingUniformBiased-aBiased-bSCAR-aSCAR-bModelMLP LeNet MLP LeNet MLP LeNet MLP LeNet MLP LeNetPC71.11 ±0.8382.69 ±1.1569.29 ±0.9787.82 ±0.6971.59 ±0.8587.66 ±0.6666.97 ±1.0311.00 ±0.7957.67 ±0.9849.17 ±35.9NN67.75 ±0.9686.16 ±0.6930.59 ±2.3146.27 ±2.6138.50 ±3.9363.67 ±3.7567.39 ±0.6886.58 ±0.9563.95 ±0.5679.94 ±0.48GA88.00 ±0.8596.02 ±0.1565.97 ±7.8794.55 ±0.4375.77 ±1.4894.87 ±0.2862.62 ±2.2990.23 ±0.9256.91 ±2.0878.66 ±0.61L-UW73.49 ±0.8877.74 ±0.9739.63 ±0.5732.21 ±1.2042.77 ±1.4234.57 ±1.9035.08 ±1.5933.82 ±2.4430.24 ±1.8124.28 ±2.74L-W62.24 ±0.5063.04 ±1.5836.90 ±0.3429.25 ±0.9441.55 ±0.6332.98 ±2.2533.53 ±2.0826.02 ±1.3128.99 ±2.3823.69 ±2.94OP78.87 ±0.4688.76 ±1.6873.46 ±0.7185.96 ±1.0274.16 ±0.5287.23 ±1.3176.29 ±0.2386.94 ±1.9468.12 ±0.5171.67 ±2.30SCARCE91.27 ±0.2097.00 ±0.3088.14 ±0.7096.14 ±0.3289.51 ±0.4496.62 ±0.1090.98 ±0.2796.72 ±0.1681.85 ±0.2587.05 ±0.28", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Classification accuracy (mean±std) of each method on CLCIFAR-10 and CLCIFAR-20. 
The best performance is shown in bold (pairwise t-test at the 0.05 significance level).", "figure_data": "DatasetModelCC PRODEN EXP MAE Phuber-CE LWS CAVL IDGP POP SCARCECLCIFAR-10ResNet31.56 ±2.1726.37 ±0.9834.84 ±4.1919.48 ±2.8841.13 ±0.7413.05 ±4.1824.12 ±3.3210.00 ±0.0026.75 ±1.2842.04 ±0.96DenseNet37.03 ±1.7731.31 ±1.0643.27 ±1.3322.77 ±0.2239.92 ±0.9110.00 ±0.0025.31 ±4.0610.00 ±0.0031.45 ±1.1644.41 ±0.43CLCIFAR-20ResNet5.00 ±0.006.69 ±0.317.21 ±0.175.00 ±0.008.10 ±0.185.20 ±0.455.00 ±0.004.96 ±0.096.40 ±0.3320.08 ±0.62DenseNet5.00 ±0.005.00 ±0.007.51 ±0.915.67 ±1.497.22 ±0.395.00 ±0.005.09 ±0.135.00 ±0.005.00 ±0.0019.91 ±0.68", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Classification accuracy (mean±std) of each method on Kuzushiji-MNIST. The best performance is shown in bold (pairwise t-test at the 0.05 significance level).", "figure_data": "SettingUniformBiased-aBiased-bSCAR-aSCAR-bModelMLP LeNet MLP LeNet MLP LeNet MLP LeNet MLP LeNetPC42.93 ±0.3356.79 ±1.5441.60 ±0.9767.39 ±1.0442.53 ±0.8066.81 ±1.3339.58 ±1.3542.59 ±29.833.95 ±1.1437.67 ±25.3NN39.42 ±0.6858.57 ±1.1523.97 ±2.5331.10 ±2.9529.93 ±1.8048.72 ±2.8939.31 ±1.1856.84 ±2.1038.68 ±0.5856.70 ±1.08GA60.83 ±1.3776.17 ±0.4443.22 ±3.0375.04 ±0.9248.03 ±2.9377.05 ±1.6736.56 ±2.9659.16 ±3.3033.02 ±2.3152.92 ±2.39L-UW43.00 ±1.2049.31 ±1.9527.89 ±0.5125.82 ±0.7831.53 ±0.4230.05 ±1.6321.49 ±0.5719.71 ±1.4418.36 ±1.2316.67 ±1.86L-W37.21 ±0.5942.69 ±2.5426.75 ±0.6125.86 ±0.6430.10 ±0.5727.94 ±1.6821.22 ±0.7718.28 ±2.1118.41 ±1.6616.25 ±1.51OP51.78 ±0.4165.94 ±1.3845.66 ±0.9065.59 ±1.7147.47 ±1.2664.65 ±1.6849.95 ±0.7959.93 ±1.3842.72 ±0.9556.36 ±2.15SCARCE67.95 ±1.2979.81 ±1.1962.43 ±1.0275.99 ±0.9164.98 ±0.7278.53 ±0.5766.72 ±0.6978.27 ±1.0961.78 ±0.3672.03 ±0.45different distribution assumptions.", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Estimated values (mean±std) of class priors.", "figure_data": "Label Index12345MNIST0.104±0.011 0.119±0.012 0.110±0.009 0.099±0.008 0.101±0.010Kuzushiji-MNIST 0.108±0.026 0.098±0.011 0.087±0.012 0.104±0.004 0.101±0.021Fashion-MNIST 0.091±0.016 0.118±0.005 0.090±0.024 0.090±0.009 0.077±0.020CIFAR-100.085±0.016 0.102±0.039 0.073±0.019 0.109±0.047 0.100±0.031Label Index678910MNIST0.087±0.007 0.089±0.005 0.106±0.019 0.091±0.008 0.096±0.016Kuzushiji-MNIST 0.095±0.010 0.105±0.025 0.095±0.007 0.094±0.016 0.113±0.035Fashion-MNIST 0.117±0.007 0.070±0.010 0.114±0.023 0.117±0.016 0.117±0.016CIFAR-100.098±0.013 0.115±0.023 0.120±0.033 0.097±0.041 0.100±0.013", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Complementary-label training set D, class priors π k (k ∈ Y), unseen instance x * , epoch T max , iteration I max . Complementary-label training set D, PU learning algorithm A, unseen instance x * , epoch T max , iteration I max , number of labels q. = arg min k∈Y f k (x * ); Output: Predicted label y * .", "figure_data": "Algorithm 2 SCARCE Input: end for end for Return y Algorithm 3 SCARCE-Meta Input: , q do Construct a negative dataset D N k and an unlabeled dataset D U k according to Eq. (8) and (9); Train a binary classifier f k ← A(D N k , D U k ) by regarding negative data as positive. end for end for end for Input: Density p(x, y), label space Y, dataset size n, probability c k (k ∈ Y). Initialize a complementary-label dataset D = ∅; for i = 1, 2, . . . 
, n do Sample an example (x i , y i ) from p(x, y); Initialize Ȳi = ∅; for k ∈ Y do if y i ̸ = k then Assign Ȳi = Ȳi ∪ {k} with c k ; end if end for Assign D = D ∪ {(x i , Ȳi )}; end for Return y Algorithm 4 Data Generation Process Output: Complementary-label dataset D.", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" } ]
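For readability, the per-class objective minimized inside Algorithm 2 (SCARCE) above can be written out in a few lines of PyTorch, using the instantiation reported in Appendix J.3 (logistic loss for the binary loss and the absolute-value function for the risk correction g). This is a minimal editorial sketch, not the authors' implementation; the function and variable names are invented, and pi_k / pi_bar_k are read from Eq. (6) as the class prior p(y = k) and the complementary-label prior p(ybar_k = 1), respectively.

```python
# Minimal sketch of the corrected per-class risk of Eq. (12)-(13); assumptions
# are noted above, and tensor shapes/names are editorial choices.
import torch
import torch.nn.functional as F

def logistic_loss(z):
    # logistic loss: l(z) = ln(1 + exp(-z)) = softplus(-z)
    return F.softplus(-z)

def scarce_risk_k(f_neg, f_unl, pi_k, pi_bar_k):
    """Corrected risk of the k-th negative-unlabeled sub-problem.

    f_neg: outputs f_k(x) on examples whose complementary-label set contains k
    f_unl: outputs f_k(x) on the remaining (unlabeled) examples
    pi_k:  class prior p(y = k); pi_bar_k: complementary-label prior p(ybar_k = 1)
    """
    # Partial risk of Eq. (12); this term can become negative.
    r_partial = ((pi_bar_k + pi_k - 1.0) * logistic_loss(f_neg).mean()
                 + (1.0 - pi_bar_k) * logistic_loss(f_unl).mean())
    # Eq. (13) with g = |.|: g(partial risk) + (1 - pi_k) * mean l(-f_k(x^N)).
    return torch.abs(r_partial) + (1.0 - pi_k) * logistic_loss(-f_neg).mean()

# The full objective sums scarce_risk_k over the q one-versus-rest heads,
# with the negative/unlabeled splits built as in Eq. (8) and (9).
```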
Wei Wang; Takashi Ishida; Yu-Jie Zhang; Gang Niu; Masashi Sugiyama
[ { "authors": "Michael I Peter L Bartlett; Jon D Jordan; Mcauliffe", "journal": "Journal of the American Statistical Association", "ref_id": "b0", "title": "Convexity, classification, and risk bounds", "year": "2006" }, { "authors": "Gilles Blanchard; Gyemin Lee; Clayton Scott", "journal": "Journal of Machine Learning Research", "ref_id": "b1", "title": "Semi-supervised novelty detection", "year": "2010" }, { "authors": "Yuzhou Cao; Lei Feng; Yitian Xu; Bo An; Gang Niu; Masashi Sugiyama", "journal": "", "ref_id": "b2", "title": "Learning from similarity-confidence data", "year": "2021" }, { "authors": "Hui Chen; Fangqing Liu; Yin Wang; Liyue Zhao; Hao Wu", "journal": "", "ref_id": "b3", "title": "A variational approach for learning from positive and unlabeled data", "year": "2020" }, { "authors": "John Chen; Vatsal Shah; Anastasios Kyrillidis", "journal": "", "ref_id": "b4", "title": "Negative sampling in semi-supervised learning", "year": "2020" }, { "authors": "Yu-Ting Chou; Gang Niu; Hsuan-Tien Lin; Masashi Sugiyama", "journal": "", "ref_id": "b5", "title": "Unbiased risk estimators can mislead: A case study of learning with complementary labels", "year": "2020" }, { "authors": "Tarin Clanuwat; Mikel Bober-Irizar; Asanobu Kitamoto; Alex Lamb; Kazuaki Yamamoto; David Ha", "journal": "", "ref_id": "b6", "title": "Deep learning for classical Japanese literature", "year": "2018" }, { "authors": "Olivier Coudray; Christine Keribin; Pascal Massart; Patrick Pamphile", "journal": "Journal of Machine Learning Research", "ref_id": "b7", "title": "Risk bounds for positive-unlabeled learning under the selected at random assumption", "year": "2023" }, { "authors": "Songmin Dai; Xiaoqiang Li; Yue Zhou; Xichen Ye; Tong Liu", "journal": "", "ref_id": "b8", "title": "GradPU: Positive-unlabeled learning via gradient penalty and positive upweighting", "year": "2023" }, { "authors": "Qinyi Deng; Yong Guo; Zhibang Yang; Haolin Pan; Jian Chen", "journal": "Neural Networks", "ref_id": "b9", "title": "Boosting semi-supervised learning with contrastive complementary labeling", "year": "2024" }, { "authors": "C Marthinus; Gang Du Plessis; Masashi Niu; Sugiyama", "journal": "", "ref_id": "b10", "title": "Analysis of learning from positive and unlabeled data", "year": "2014" }, { "authors": "Charles Elkan; Keith Noto", "journal": "", "ref_id": "b11", "title": "Learning classifiers from only positive and unlabeled data", "year": "2008" }, { "authors": "Lei Feng; Takuo Kaneko; Bo Han; Gang Niu; Bo An; Masashi Sugiyama", "journal": "", "ref_id": "b12", "title": "Learning with multiple complementary labels", "year": "2020" }, { "authors": "Lei Feng; Jiaqi Lv; Bo Han; Miao Xu; Gang Niu; Xin Geng; Bo An; Masashi Sugiyama", "journal": "", "ref_id": "b13", "title": "Provably consistent partial-label learning", "year": "2020" }, { "authors": "Yi Gao; Min-Ling Zhang", "journal": "", "ref_id": "b14", "title": "Discriminative complementary-label learning with weighted loss", "year": "2021" }, { "authors": "Saurabh Garg; Yifan Wu; Alexander J Smola; Sivaraman Balakrishnan; Zachary C Lipton", "journal": "", "ref_id": "b15", "title": "Mixture proportion estimation and PU learning: A modern approach", "year": "2021" }, { "authors": "Aritra Ghosh; Himanshu Kumar; Shanti Sastry", "journal": "", "ref_id": "b16", "title": "Robust loss functions under label noise for deep neural networks", "year": "2017" }, { "authors": "Noah Golowich; Alexander Rakhlin; Ohad Shamir", "journal": "", "ref_id": "b17", "title": "Size-independent sample 
complexity of neural networks", "year": "2018" }, { "authors": "Jiayi Han; Longbin Zeng; Liang Du; Weiyang Ding; Jianfeng Feng", "journal": "", "ref_id": "b18", "title": "Rethinking precision of pseudo label: Test-time adaptation via complementary learning", "year": "2023" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b19", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Peiyun Hu; Zachary C Lipton; Anima Anandkumar; Deva Ramanan", "journal": "", "ref_id": "b20", "title": "Active learning with partial feedback", "year": "2019" }, { "authors": "Gao Huang; Zhuang Liu; Laurens Van Der Maaten; Kilian Q Weinberger", "journal": "", "ref_id": "b21", "title": "Densely connected convolutional networks", "year": "2017" }, { "authors": "Sergey Ioffe; Christian Szegedy", "journal": "", "ref_id": "b22", "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "year": "2015" }, { "authors": "Takashi Ishida; Gang Niu; Weihua Hu; Masashi Sugiyama", "journal": "", "ref_id": "b23", "title": "Learning from complementary labels", "year": "2017" }, { "authors": "Takashi Ishida; Gang Niu; Aditya K Menon; Masashi Sugiyama", "journal": "", "ref_id": "b24", "title": "Complementary-label learning for arbitrary losses and models", "year": "2019" }, { "authors": "Yangbangyan Jiang; Qianqian Xu; Yunrui Zhao; Zhiyong Yang; Peisong Wen; Xiaochun Cao; Qingming Huang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b25", "title": "Positive-unlabeled learning with label distribution alignment", "year": "2023" }, { "authors": "Youngdong Kim; Junho Yim; Juseung Yun; Junmo Kim", "journal": "", "ref_id": "b26", "title": "NLNL: Negative learning for noisy labels", "year": "2019" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b27", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Ryuichi Kiryo; Gang Niu; C Marthinus; Masashi Du Plessis; Sugiyama", "journal": "", "ref_id": "b28", "title": "Positive-unlabeled learning with non-negative risk estimator", "year": "2017" }, { "authors": "Alex Krizhevsky; Geoffrey E Hinton", "journal": "", "ref_id": "b29", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Yann Lecun; Léon Bottou; Yoshua Bengio; Patrick Haffner", "journal": "", "ref_id": "b30", "title": "Gradient-based learning applied to document recognition", "year": "1998" }, { "authors": "Michel Ledoux; Michel Talagrand", "journal": "Springer Science & Business Media", "ref_id": "b31", "title": "Probability in Banach Spaces: Isoperimetry and Processes", "year": "1991" }, { "authors": "Changchun Li; Ximing Li; Lei Feng; Jihong Ouyang", "journal": "", "ref_id": "b32", "title": "Who is your right mixup partner in positive and unlabeled learning", "year": "2022" }, { "authors": "Wei-I Lin; Hsuan-Tien Lin", "journal": "", "ref_id": "b33", "title": "Reduction from complementary-label learning to probability estimates", "year": "2023" }, { "authors": "Shuqi Liu; Yuzhou Cao; Qiaozhen Zhang; Lei Feng; Bo An", "journal": "", "ref_id": "b34", "title": "Consistent complementary-label learning via order-preserving losses", "year": "2023" }, { "authors": "Tongliang Liu; Dacheng Tao", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b35", "title": "Classification with noisy labels by importance reweighting", "year": "2015" 
}, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b36", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "Nan Lu; Tianyi Zhang; Gang Niu; Masashi Sugiyama", "journal": "", "ref_id": "b37", "title": "Mitigating overfitting in supervised classification from two unlabeled datasets: A consistent risk correction approach", "year": "2020" }, { "authors": "Jiaqi Lv; Miao Xu; Lei Feng; Gang Niu; Xin Geng; Masashi Sugiyama", "journal": "", "ref_id": "b38", "title": "Progressive identification of true labels for partial-label learning", "year": "2020" }, { "authors": "Qiankun Ma; Jiyao Gao; Bo Zhan; Yunpeng Guo; Jiliu Zhou; Yan Wang", "journal": "", "ref_id": "b39", "title": "Rethinking safe semi-supervised learning: Transferring the open-set problem to a close-set one", "year": "2023" }, { "authors": "Mehryar Mohri; Afshin Rostamizadeh; Ameet Talwalkar", "journal": "The MIT Press", "ref_id": "b40", "title": "Foundations of Machine Learning", "year": "2012" }, { "authors": "Vinod Nair; Geoffrey E Hinton", "journal": "", "ref_id": "b41", "title": "Rectified linear units improve restricted boltzmann machines", "year": "2010" }, { "authors": "Gang Niu; C Marthinus; Tomoya Du Plessis; Yao Sakai; Masashi Ma; Sugiyama", "journal": "", "ref_id": "b42", "title": "Theoretical comparisons of positive-unlabeled learning against positive-negative learning", "year": "2016" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga", "journal": "", "ref_id": "b43", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "Congyu Qiao; Ning Xu; Xin Geng", "journal": "", "ref_id": "b44", "title": "Decompositional generation process for instancedependent partial label learning", "year": "2023" }, { "authors": "G Harish; Clayton Ramaswamy; Ambuj Scott; Tewari", "journal": "", "ref_id": "b45", "title": "Mixture proportion estimation via kernel embeddings of distributions", "year": "2016" }, { "authors": "Mina Rezaei; Haojin Yang; Christoph Meinel", "journal": "Multimedia Tools and Applications", "ref_id": "b46", "title": "Recurrent generative adversarial network for learning imbalanced medical image semantic segmentation", "year": "2020" }, { "authors": "Ryan Rifkin; Aldebaro Klautau", "journal": "Journal of Machine Learning Research", "ref_id": "b47", "title": "In defense of one-vs-all classification", "year": "2004" }, { "authors": "Clayton Scott", "journal": "", "ref_id": "b48", "title": "A rate of convergence for mixture proportion estimation, with application to learning from noisy labels", "year": "2015" }, { "authors": "Clayton Scott; Gilles Blanchard; Gregory Handy", "journal": "", "ref_id": "b49", "title": "Classification with asymmetric label noise: Consistency and maximal denoising", "year": "2013" }, { "authors": "Deng-Bao Wang; Lei Feng; Min-Ling Zhang", "journal": "", "ref_id": "b50", "title": "Learning from complementary labels via partial-output consistency regularization", "year": "2021" }, { "authors": "Hsiu-Hsuan Wang; Wei-I Lin; Hsuan-Tien Lin", "journal": "", "ref_id": "b51", "title": "CLCIFAR: CIFAR-derived benchmark datasets with human annotated complementary labels", "year": "2023" }, { "authors": "Wei Wang; Lei Feng; Yuchen Jiang; Gang Niu; Min-Ling Zhang; Masashi Sugiyama", "journal": "", "ref_id": "b52", "title": "Binary classification with confidence difference", "year": "2023" 
}, { "authors": "Xinrui Wang; Wenhai Wan; Chuanxing Geng; Shao-Yuan; Songcan Li; Chen", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b53", "title": "Beyond myopia: Learning from positive and unlabeled data through holistic predictive trends", "year": "2023" }, { "authors": "Hongxin Wei; Huiping Zhuang; Renchunzi Xie; Lei Feng; Gang Niu; Bo An; Yixuan Li", "journal": "", "ref_id": "b54", "title": "Mitigating memorization of noisy labels by clipping the model prediction", "year": "2023" }, { "authors": "Meng Wei; Yong Zhou; Zhongnian Li; Xinzheng Xu", "journal": "Neural Networks", "ref_id": "b55", "title": "Class-imbalanced complementary-label learning via weighted loss", "year": "2023" }, { "authors": "Xiu-Shen Wei; He-Yang Xu; Faen Zhang; Yuxin Peng; Wei Zhou", "journal": "", "ref_id": "b56", "title": "An embarrassingly simple approach to semi-supervised few-shot learning", "year": "2022" }, { "authors": "Hongwei Wen; Jingyi Cui; Hanyuan Hang; Jiabin Liu; Yisen Wang; Zhouchen Lin", "journal": "", "ref_id": "b57", "title": "Leveraged weighted loss for partial label learning", "year": "2021" }, { "authors": "Han Xiao; Kashif Rasul; Roland Vollgraf", "journal": "", "ref_id": "b58", "title": "Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms", "year": "2017" }, { "authors": "Ning Xu; Biao Liu; Jiaqi Lv; Congyu Qiao; Xin Geng", "journal": "", "ref_id": "b59", "title": "Progressive purification for instancedependent partial label learning", "year": "2023" }, { "authors": "Yu Yao; Tongliang Liu; Bo Han; Mingming Gong; Gang Niu; Masashi Sugiyama; Dacheng Tao", "journal": "", "ref_id": "b60", "title": "Rethinking class-prior estimation for positive-unlabeled learning", "year": "2022" }, { "authors": "Xiyu Yu; Tongliang Liu; Mingming Gong; Dacheng Tao", "journal": "", "ref_id": "b61", "title": "Learning with biased complementary labels", "year": "2018" }, { "authors": "Fei Zhang; Lei Feng; Bo Han; Tongliang Liu; Gang Niu; Tao Qin; Masashi Sugiyama", "journal": "", "ref_id": "b62", "title": "Exploiting class activation value for partial-label learning", "year": "2022" }, { "authors": "Tong Zhang", "journal": "Journal of Machine Learning Research", "ref_id": "b63", "title": "Statistical analysis of some multi-category large margin classification methods", "year": "2004" }, { "authors": "Yiyang Zhang; Feng Liu; Zhen Fang; Bo Yuan; Guangquan Zhang; Jie Lu", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b64", "title": "Learning from a complementary-label source domain: Theory and algorithms", "year": "2021" }, { "authors": "Yu-Jie Zhang; Peng Zhao; Lanjihong Ma; Zhi-Hua Zhou", "journal": "", "ref_id": "b65", "title": "An unbiased risk estimator for learning with augmented classes", "year": "2020" }, { "authors": "Zhilu Zhang; Mert Sabuncu", "journal": "", "ref_id": "b66", "title": "Generalized cross entropy loss for training deep neural networks with noisy labels", "year": "2018" }, { "authors": "Yunrui Zhao; Qianqian Xu; Yangbangyan Jiang; Peisong Wen; Qingming Huang", "journal": "", "ref_id": "b67", "title": "Dist-PU: Positive-unlabeled learning from a label distribution perspective", "year": "2022" }, { "authors": "Jianan Zhou; Jianing Zhu; Jingfeng Zhang; Tongliang Liu; Gang Niu; Bo Han; Masashi Sugiyama", "journal": "", "ref_id": "b68", "title": "Adversarial training with complementary labels: On the benefit of gradually informative attacks", "year": "2022" }, { "authors": "", "journal": "", "ref_id": 
"b69", "title": "Classification accuracy (mean±std) of each method on Fashion-MNIST. The best performance is", "year": "" }, { "authors": "", "journal": "L-W", "ref_id": "b70", "title": "", "year": "" }, { "authors": "", "journal": "", "ref_id": "b71", "title": "Table 7: Classification accuracy (mean±std) of each method on CIFAR-10 with a single complementary label. The best performance is shown in bold (pairwise t-test at the 0.05 significance level). Setting Uniform Biased-a Biased-b SCAR-a SCAR", "year": "" }, { "authors": "", "journal": "L-W", "ref_id": "b72", "title": "", "year": "" } ]
[ { "formula_coordinates": [ 2, 85.35, 209.02, 426.3, 29.48 ], "formula_id": "formula_0", "formula_text": "✗ ✓ ✓ ✗ SCARCE (Ours) ✓ ✓ ✓ ✓" }, { "formula_coordinates": [ 3, 240.91, 396.37, 291.72, 11.95 ], "formula_id": "formula_1", "formula_text": "R(f ) = E p(x,y) [L(f (x), y)] ,(1)" }, { "formula_coordinates": [ 3, 214.92, 483.24, 182.17, 32.78 ], "formula_id": "formula_2", "formula_text": "R(f ) = q k=1 π k E p(x|y=k) [L(f (x), k)] ." }, { "formula_coordinates": [ 3, 91.96, 566.11, 440.67, 31.55 ], "formula_id": "formula_3", "formula_text": "Let D = x i , Ȳi n i=1" }, { "formula_coordinates": [ 4, 118.23, 165.08, 70.86, 15.34 ], "formula_id": "formula_4", "formula_text": "D U = {x i } n U i=1 ." }, { "formula_coordinates": [ 4, 179, 425.72, 348.98, 14.26 ], "formula_id": "formula_5", "formula_text": "p k ∈ Ȳ |x, k ∈ Y\\{y} = p k ∈ Ȳ |k ∈ Y\\{y} = c k , (3" }, { "formula_coordinates": [ 4, 527.98, 428.48, 4.65, 10.91 ], "formula_id": "formula_6", "formula_text": ")" }, { "formula_coordinates": [ 5, 107.8, 234.82, 424.83, 32.78 ], "formula_id": "formula_7", "formula_text": "R(f ) = q k=1 E p(x|ȳ k =1) [(π k + π k -1) L(f (x), k)] + E p(x|ȳ k =0) [(1 -πk ) L(f (x), k)] . (4)" }, { "formula_coordinates": [ 5, 163.09, 504.11, 285.83, 35.09 ], "formula_id": "formula_8", "formula_text": "R (f 1 , f 2 , . . . , f q ) = E p(x,y)   ℓ (f y (x)) + k∈Y\\{y} ℓ (-f k (x))   ." }, { "formula_coordinates": [ 5, 79.37, 634.16, 453.27, 31.13 ], "formula_id": "formula_9", "formula_text": "R (f 1 , f 2 , . . . , f q ) = n i=1 ℓ (f y i (x i )) + k∈Y\\{y i } ℓ (-f k (x i )) /n given an ordinary-label dataset D O = {(x i , y i )} n i=1 consists of n training examples." }, { "formula_coordinates": [ 6, 79.37, 84.62, 461.46, 52.68 ], "formula_id": "formula_10", "formula_text": "R(f 1 , f 2 , . . . , f q ) = q k=1 R k (f k ), where R k (f k ) = E p(x|ȳ k =1) [(1 -π k )ℓ (-f k (x)) + (π k + π k -1) ℓ (f k (x))] + E p(x|ȳ k =0) [(1 -πk ) ℓ (f k (x))] . (6" }, { "formula_coordinates": [ 6, 527.98, 126.39, 4.65, 10.91 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 6, 79.37, 202.22, 461.42, 75.12 ], "formula_id": "formula_12", "formula_text": "R(f 1 , f 2 , . . . , f q ) = q k=1 R k (f k ), where R k (f k ) = 1 n N k n N k i=1 (1 -π k ) ℓ -f k (x N k,i ) + (π k + π k -1) ℓ f k (x N k,i ) + (1 -πk ) n U k n U k i=1 ℓ f k (x U k,i ) . (7" }, { "formula_coordinates": [ 6, 527.98, 266.43, 4.65, 10.91 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 6, 171.22, 396.06, 361.4, 49.95 ], "formula_id": "formula_14", "formula_text": "D N k = (x N k,i , -1) n N k i=1 = (x j , -1)|(x j , Ȳj ) ∈ D, k ∈ Ȳj ; (8) D U k = x U k,i n U k i=1 = x j |(x j , Ȳj ) ∈ D, k / ∈ Ȳj . (9" }, { "formula_coordinates": [ 6, 527.98, 429.54, 4.65, 10.91 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 7, 79.37, 177.65, 453.26, 39.32 ], "formula_id": "formula_16", "formula_text": "Let R 0-1 (f ) = E p(x,y) I(f (x) ̸ = y) denote the expected 0-1 loss where f (x) = arg max k∈Y f k (x) and R * 0-1 = min f R 0-1 (f ) denote the Bayes error. Besides, let R * = min f 1 ,f 2 ,...,fq R(f 1 , f 2 , . . . , f q ) denote" }, { "formula_coordinates": [ 7, 79.37, 267.91, 453.26, 39.52 ], "formula_id": "formula_17", "formula_text": "ϵ 2 > 0 such that R (f 1 , f 2 , . . . , f q ) ≤ R * + ϵ 2 ⇒ R 0-1 (f ) ≤ R * 0-1 + ϵ 1 . 
(10" }, { "formula_coordinates": [ 7, 527.78, 295.01, 4.85, 10.91 ], "formula_id": "formula_18", "formula_text": ")" }, { "formula_coordinates": [ 7, 79.37, 400.53, 453.26, 26.92 ], "formula_id": "formula_19", "formula_text": "C f such that sup f ∈F ∥f ∥ ∞ ≤ C f and some constant C ℓ such that sup |z|≤C f ℓ(z) ≤ C ℓ ." }, { "formula_coordinates": [ 7, 83.41, 499.17, 444.38, 67.52 ], "formula_id": "formula_20", "formula_text": "R f 1 , f 2 , . . . , f q -R f * 1 , f * 2 , . . . , f * q ≤ q k=1 (4 -4π k )L ℓ R n U k ,p U k (F) + (1 -πk )C ℓ 2 ln (2/δ) n U k +(8 -8π k -4π k )L ℓ R n N k ,p N k (F) + (2 -2π k -πk )C ℓ 2 ln (2/δ) n N k , (11" }, { "formula_coordinates": [ 7, 527.78, 545.67, 4.85, 10.91 ], "formula_id": "formula_21", "formula_text": ")" }, { "formula_coordinates": [ 7, 79.37, 578.03, 453.26, 30.72 ], "formula_id": "formula_22", "formula_text": "R n U k ,p U k (F) and R n N k ,p N k (F) denote the Rademacher complexity of F given n U k unlabeled data sampled from p (x|ȳ k = 0) and n N k negative data sampled from p (x|ȳ k = 1) respectively." }, { "formula_coordinates": [ 7, 79.37, 631.9, 453.26, 32.13 ], "formula_id": "formula_23", "formula_text": "n U k and n N k → ∞, R f 1 , f 2 , . . . , f q → R f * 1 , f * 2 , . . . , f * q because R n U k ,p U k (F) → 0 and R n N k ,p N k (F) → 0" }, { "formula_coordinates": [ 7, 79.37, 679.71, 152.29, 15.67 ], "formula_id": "formula_24", "formula_text": "in O p q k=1 1/ n N k + 1/ n U k" }, { "formula_coordinates": [ 8, 156.73, 576.73, 371.05, 35.39 ], "formula_id": "formula_25", "formula_text": "R P k (f k ) = πk + π k -1 n N k n N k i=1 ℓ f k (x N k,i ) + 1 -πk n U k n U k i=1 ℓ f k (x U k,i ) . (12" }, { "formula_coordinates": [ 8, 527.78, 590.22, 4.85, 10.91 ], "formula_id": "formula_26", "formula_text": ")" }, { "formula_coordinates": [ 8, 190.13, 623.13, 337.65, 62.46 ], "formula_id": "formula_27", "formula_text": "R (f 1 , f 2 , . . . , f q ) = q k=1 R k (f k ), where R k (f k ) = g R P k (f k ) + 1 -π k n N k n N k i=1 ℓ -f k (x N k,i ) . (13" }, { "formula_coordinates": [ 8, 527.78, 663.69, 4.85, 10.91 ], "formula_id": "formula_28", "formula_text": ")" }, { "formula_coordinates": [ 9, 122.84, 100.74, 191.47, 14.17 ], "formula_id": "formula_29", "formula_text": "Since E R P k (f k ) = π k E p(x|y=k) ℓ (f k (x)" }, { "formula_coordinates": [ 9, 79.37, 162.18, 453.26, 26.26 ], "formula_id": "formula_30", "formula_text": "n = n N k , p = p N k ) sat- isfies R n,p (F) ≤ C R / √ n." }, { "formula_coordinates": [ 9, 79.37, 205.96, 453.26, 31.83 ], "formula_id": "formula_31", "formula_text": "f 1 , f 2 , . . . , f q = arg min f 1 ,f 2 ,...,fq∈F R (f 1 , f 2 , . . . , f q ) and ∆ k = exp -2β 2 / (1 -π k -πk ) 2 C 2 ℓ /n N k + (1 -πk ) 2 C 2 ℓ /n U k ." }, { "formula_coordinates": [ 9, 126.67, 297.62, 401.12, 32.78 ], "formula_id": "formula_32", "formula_text": "0 ≤ E[ R(f 1 , f 2 , . . . , f q )] -R(f 1 , f 2 , . . . , f q ) q k=1 (2 -2π k -π k ) (L g + 1) C ℓ ∆ k . (14" }, { "formula_coordinates": [ 9, 527.78, 308.23, 4.85, 10.91 ], "formula_id": "formula_33", "formula_text": ")" }, { "formula_coordinates": [ 9, 138.2, 362.44, 389.58, 32.78 ], "formula_id": "formula_34", "formula_text": "| R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q )| ≤ O p q k=1 1/ n N k + 1/ n U k . (15" }, { "formula_coordinates": [ 9, 527.78, 373.06, 4.85, 10.91 ], "formula_id": "formula_35", "formula_text": ")" }, { "formula_coordinates": [ 9, 139.39, 440.81, 388.4, 32.78 ], "formula_id": "formula_36", "formula_text": "R( f 1 , f 2 , . . . 
, f q ) -R(f * 1 , f * 2 , . . . , f * q ) ≤ O p q k=1 1/ n N k + 1/ n U k . (16" }, { "formula_coordinates": [ 9, 79.37, 451.43, 453.26, 48.65 ], "formula_id": "formula_37", "formula_text": ") Remark 10. Theorem 8 shows that R(f 1 , f 2 , . . . , f q ) → R(f 1 , f 2 , . . . , f q ) as n U k and n N k → ∞," }, { "formula_coordinates": [ 18, 253.52, 202.24, 104.96, 12.68 ], "formula_id": "formula_38", "formula_text": "F = (1 -θ * )G + θ * H." }, { "formula_coordinates": [ 18, 79.37, 446.99, 201.18, 13.9 ], "formula_id": "formula_39", "formula_text": "z P i = f PvU x PVal i and z U i = f PvU x UVal i" }, { "formula_coordinates": [ 18, 79.37, 465.92, 138.82, 12.63 ], "formula_id": "formula_40", "formula_text": "A z = x ∈ X |f PvU (x) ≥ z ." }, { "formula_coordinates": [ 18, 98.67, 518.58, 429.11, 31.45 ], "formula_id": "formula_41", "formula_text": "q P (z) = n PVal i=1 I f PvU x PVal i ≥ z n PVal and q U (z) = n UVal i=1 I f PvU x UVal i ≥ z n UVal . (17" }, { "formula_coordinates": [ 18, 527.78, 532.25, 4.85, 10.91 ], "formula_id": "formula_42", "formula_text": ")" }, { "formula_coordinates": [ 18, 168.32, 578.94, 359.46, 31.97 ], "formula_id": "formula_43", "formula_text": "z = arg max z∈[0,1]   q U (z) q P (z) + 1 + γ q P (z)   ln (4/δ) 2n PVal + ln (4/δ) 2n UVal     (18" }, { "formula_coordinates": [ 18, 527.78, 591.41, 4.85, 10.91 ], "formula_id": "formula_44", "formula_text": ")" }, { "formula_coordinates": [ 18, 280.84, 649.69, 251.79, 27.04 ], "formula_id": "formula_45", "formula_text": "θ = q U ( z) q P ( z) (19)" }, { "formula_coordinates": [ 19, 84.45, 318.53, 162.09, 11.51 ], "formula_id": "formula_46", "formula_text": "Output: Class priors π k (k ∈ Y)." }, { "formula_coordinates": [ 21, 319.05, 345.68, 97.93, 11.5 ], "formula_id": "formula_47", "formula_text": "k = 1) = p(x|y ̸ = k)." }, { "formula_coordinates": [ 21, 181.05, 384.64, 249.9, 26.37 ], "formula_id": "formula_48", "formula_text": "p(x|ȳ k = 1, y ̸ = k) = p (x|ȳ k = 1) p(y ̸ = k|x, ȳk = 1) p(y ̸ = k|ȳ k = 1) ." }, { "formula_coordinates": [ 21, 152.42, 452.44, 307.64, 26.37 ], "formula_id": "formula_49", "formula_text": "p(x|ȳ k = 1, y ̸ = k) = p(x|y ̸ = k)p(ȳ k = 1|x, y ̸ = k) p(ȳ k = 1|y ̸ = k) = p(x|y ̸ = k)," }, { "formula_coordinates": [ 21, 108.41, 551.64, 395.19, 153.75 ], "formula_id": "formula_50", "formula_text": "R(f ) =E p(x,y) [L(f (x), y)] = q k=1 π k E p(x|y=k) [L(f (x), k)] = q k=1 E p(x) [L(f (x), k)] -(1 -π k )E p(x|y̸ =k) [L(f (x), k)] = q k=1 E p(x) [L(f (x), k)] -(1 -π k )E p(x|ȳ k =1) [L(f (x), k)] = q k=1 E p(x|ȳ k =1) [(π k + π k -1) L(f (x), k)] + E p(x|ȳ k =0) [(1 -πk ) L(f (x), k)] ," }, { "formula_coordinates": [ 22, 111.1, 120.25, 383.86, 368.49 ], "formula_id": "formula_51", "formula_text": "R(f 1 , f 2 , . . . , f q ) =E p(x,y)   ℓ (f y (x)) + q k=1,k̸ =y ℓ (-f k (x))   =E p(x,y) q k=1 (I(k = y)ℓ(f k (x)) + I(k ̸ = y)ℓ(-f k (x))) = q k=1 E p(x,y) [I(k = y)ℓ (f k (x)) + I(k ̸ = y)ℓ (-f k (x))] = q k=1 π k E p(x|y=k) [ℓ (f k (x))] + (1 -π k ) E p(x|y̸ =k) [ℓ (-f k (x))] = q k=1 E p(x) [ℓ(f k (x))] -(1 -π k )E p(x|y̸ =k) [ℓ(f k (x))] +(1 -π k )E p(x|y̸ =k) [ℓ(-f k (x))] = q k=1 E p(x) [ℓ(f k (x))] -(1 -π k )E p(x|ȳ k =1) [ℓ(f k (x))] +(1 -π k )E p(x|ȳ k =1) [ℓ(-f k (x))] = q k=1 πk E p(x|ȳ k =1) [ℓ(f k (x))] + (1 -πk )E p(x|ȳ k =0) [ℓ(f k (x))] -(1 -π k )E p(x|ȳ k =1) [ℓ(f k (x))] + (1 -π k )E p(x|ȳ k =1) [ℓ(-f k (x))] = q k=1 E p(x|ȳ k =1) [(1 -π k )ℓ (-f k (x)) + (π k + π k -1) ℓ (f k (x))] +E p(x|ȳ k =0) [(1 -πk ) ℓ (f k (x))] ." 
}, { "formula_coordinates": [ 22, 180.92, 637.59, 201.91, 12.48 ], "formula_id": "formula_52", "formula_text": "Ψ y (f (x)) = ψ(f y (x))+ k∈Y\\{y} ψ(-f k (x)" }, { "formula_coordinates": [ 23, 283.11, 85.97, 234.07, 12.68 ], "formula_id": "formula_53", "formula_text": "⊂ R q , let B Ω = {f ∈ B : ∀x, f (x) ∈ Ω}. If [Ψ y (•)" }, { "formula_coordinates": [ 23, 79.37, 137.03, 453.26, 57.01 ], "formula_id": "formula_54", "formula_text": "E (x,y)∼p [Ψ y (f (x))] ≤ inf f ′ ∈B Ω E (x,y)∼p [Ψ y (f ′ (x))] + ϵ 2 (20) implies R 0-1 (T (f (•))) ≤ R * 0-1 + ϵ 1 , (21" }, { "formula_coordinates": [ 23, 527.78, 181.62, 4.85, 10.91 ], "formula_id": "formula_55", "formula_text": ")" }, { "formula_coordinates": [ 23, 213.32, 418.06, 314.46, 31.85 ], "formula_id": "formula_56", "formula_text": "R n,p (F) = E Xn E σ sup f ∈F 1 n n i=1 σ i f (x i ) . (22" }, { "formula_coordinates": [ 23, 527.78, 428.01, 4.85, 10.91 ], "formula_id": "formula_57", "formula_text": ")" }, { "formula_coordinates": [ 23, 262.45, 461.49, 214.46, 13.77 ], "formula_id": "formula_58", "formula_text": "= D U 1 D U 2 . . . D U q D N 1 D N 2 . . . D N" }, { "formula_coordinates": [ 23, 87.51, 520.75, 440.27, 67.52 ], "formula_id": "formula_59", "formula_text": "sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk )C ℓ ln (2/δ) 2n U k +(2 -2π k )L ℓ R n U k ,p U k (F) + (4 -4π k -2π k )L ℓ R n N k ,p N k (F) + (2 -2π k -πk )C ℓ ln (2/δ) 2n N k . (23" }, { "formula_coordinates": [ 23, 527.78, 567.25, 4.85, 10.91 ], "formula_id": "formula_60", "formula_text": ")" }, { "formula_coordinates": [ 23, 79.37, 613.16, 461.07, 65.84 ], "formula_id": "formula_61", "formula_text": "x U k,i ∈ D U k is substituted by another unlabeled example x U k,j , the value of sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) changes at most (1 -πk )C ℓ /n U k . Besides, when a negative example x N k,i ∈ D N k is substituted by another negative example x N k,j , the value of sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) changes at most (2 -2π k -πk )C ℓ /n N k ." }, { "formula_coordinates": [ 24, 156.06, 110.89, 371.72, 91.22 ], "formula_id": "formula_62", "formula_text": "sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤E D sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) + q k=1 (1 -πk )C ℓ ln (2/δ) 2n U k + (2 -2π k -πk )C ℓ ln (2/δ) 2n N k , (24" }, { "formula_coordinates": [ 24, 527.78, 179.94, 4.85, 10.91 ], "formula_id": "formula_63", "formula_text": ")" }, { "formula_coordinates": [ 24, 268.23, 204.26, 98.48, 19.45 ], "formula_id": "formula_64", "formula_text": "√ a + b ≤ √ a + √ b." }, { "formula_coordinates": [ 24, 152.8, 256.03, 374.99, 56.4 ], "formula_id": "formula_65", "formula_text": "E D sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . 
, f q ) ≤ q k=1 (2 -2π k )R n U k ,p U k (ℓ • F) + (4 -4π k -2π k )R n N k ,p N k (ℓ • F) , (25" }, { "formula_coordinates": [ 24, 527.78, 290.27, 4.85, 10.91 ], "formula_id": "formula_66", "formula_text": ")" }, { "formula_coordinates": [ 24, 233.69, 356.55, 298.94, 15.45 ], "formula_id": "formula_67", "formula_text": "R n U k ,p U k (ℓ • F) ≤ L ℓ R n U k ,p U k (F),(26)" }, { "formula_coordinates": [ 24, 233.69, 376.13, 298.94, 15.45 ], "formula_id": "formula_68", "formula_text": "R n N k ,p N k (ℓ • F) ≤ L ℓ R n N k ,p N k (F).(27)" }, { "formula_coordinates": [ 24, 91.26, 432.42, 436.53, 67.52 ], "formula_id": "formula_69", "formula_text": "sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (2 -2π k )L ℓ R n U k ,p U k (F) +(1 -πk )C ℓ ln (2/δ) 2n U k + (4 -4π k -2π k )L ℓ R n N k ,p N k (F) + (2 -2π k -πk )C ℓ ln (2/δ) 2n N k . (28" }, { "formula_coordinates": [ 24, 527.78, 478.92, 4.85, 10.91 ], "formula_id": "formula_70", "formula_text": ")" }, { "formula_coordinates": [ 24, 87.51, 528.46, 440.27, 67.52 ], "formula_id": "formula_71", "formula_text": "sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk )C ℓ ln (2/δ) 2n U k +(2 -2π k )L ℓ R n U k ,p U k (F) + (4 -4π k -2π k )L ℓ R n N k ,p N k (F) + (2 -2π k -πk )C ℓ ln (2/δ) 2n N k . (29" }, { "formula_coordinates": [ 24, 527.78, 574.96, 4.85, 10.91 ], "formula_id": "formula_72", "formula_text": ")" }, { "formula_coordinates": [ 24, 87.51, 637.44, 445.12, 67.52 ], "formula_id": "formula_73", "formula_text": "sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk )C ℓ ln (2/δ) 2n U k +(2 -2π k )L ℓ R n U k ,p U k (F) + (4 -4π k -2π k )L ℓ R n N k ,p N k (F) + (2 -2π k -πk )C ℓ ln (2/δ) 2n N k ,(30)" }, { "formula_coordinates": [ 25, 126.08, 96.25, 406.55, 97.47 ], "formula_id": "formula_74", "formula_text": "R( f 1 , f 2 , . . . , f q ) -R(f * 1 , f * 2 , . . . , f * q ) =R( f 1 , f 2 , . . . , f q ) -R( f 1 , f 2 , . . . , f q ) + R( f 1 , f 2 , . . . , f q ) -R(f * 1 , f * 2 , . . . , f * q ) + R(f * 1 , f * 2 , . . . , f * q ) -R(f * 1 , f * 2 , . . . , f * q ) ≤R( f 1 , f 2 , . . . , f q ) -R( f 1 , f 2 , . . . , f q ) + R(f * 1 , f * 2 , . . . , f * q ) -R(f * 1 , f * 2 , . . . , f * q ) ≤2 sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) (31)" }, { "formula_coordinates": [ 25, 79.37, 279.09, 414.75, 14.99 ], "formula_id": "formula_75", "formula_text": "Let D + k (f k ) = D N k , D U k | R P k (f k ) ≥ 0 and D - k (f k ) = D N k , D U k | R P k (f k ) < 0 denote" }, { "formula_coordinates": [ 25, 156.64, 355.9, 371.14, 29.66 ], "formula_id": "formula_76", "formula_text": "P D - k (f k ) ≤ exp -2β 2 (1 -π k -πk ) 2 C 2 ℓ /n N k + (1 -πk ) 2 C 2 ℓ /n U k . (32" }, { "formula_coordinates": [ 25, 79.37, 364.55, 453.26, 98.08 ], "formula_id": "formula_77", "formula_text": ") Proof. Let p D N k = p x N k,1 |ȳ k = 1 p x N k,2 |ȳ k = 1 . . . p x N k,n N k |ȳ k = 1 and p D U k = p x U k,1 |ȳ k = 0 p x U k,2 |ȳ k = 0 . . . p x U k,n U k |ȳ k = 0" }, { "formula_coordinates": [ 25, 79.37, 468.88, 452.76, 43.1 ], "formula_id": "formula_78", "formula_text": "D N k and D U k is p D N k , D U k = p D N k p D U k ." }, { "formula_coordinates": [ 25, 128.77, 554.9, 354.46, 53.12 ], "formula_id": "formula_79", "formula_text": "P D - k (f k ) = (D N k ,D U k )∈D - k (f k ) p D N k , D U k d D N k , D U k = (D N k ,D U k )∈D - k (f k ) p D N k , D U k dx N k,1 . . . dx N k,n N k dx U k,1 . . . 
dx U k,n U k ." }, { "formula_coordinates": [ 25, 104.37, 697.57, 423.42, 29.66 ], "formula_id": "formula_80", "formula_text": "P E R P k (f k ) -R P k (f k ) ≥ β ≤ exp -2β 2 (1 -π k -πk ) 2 C 2 ℓ /n N k + (1 -πk ) 2 C 2 ℓ /n U k . (33" }, { "formula_coordinates": [ 25, 527.78, 706.21, 4.85, 10.91 ], "formula_id": "formula_81", "formula_text": ")" }, { "formula_coordinates": [ 26, 157.25, 96, 370.53, 94.6 ], "formula_id": "formula_82", "formula_text": "P D - k (f k ) =P R P k (f k ) ≤ 0 ≤P R P k (f k ) ≤ E R P k (f k ) -β =P E R P k (f k ) -R P k (f k ) ≥ β ≤ exp -2β 2 (1 -π k -πk ) 2 C 2 ℓ /n N k + (1 -πk ) 2 C 2 ℓ /n U k , (34" }, { "formula_coordinates": [ 26, 527.78, 169.58, 4.85, 10.91 ], "formula_id": "formula_83", "formula_text": ")" }, { "formula_coordinates": [ 26, 120.3, 276.4, 407.48, 32.78 ], "formula_id": "formula_84", "formula_text": "0 ≤ E[ R(f 1 , f 2 , . . . , f q )] -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (2 -2π k -π k ) (L g + 1) C ℓ ∆ k , (35" }, { "formula_coordinates": [ 26, 527.78, 287.01, 4.85, 10.91 ], "formula_id": "formula_85", "formula_text": ")" }, { "formula_coordinates": [ 26, 110.15, 321.26, 280.33, 14.17 ], "formula_id": "formula_86", "formula_text": "∆ k = exp -2β 2 / (1 -π k -πk ) 2 C 2 ℓ /n N k + (1 -πk ) 2 C 2 ℓ /n U k" }, { "formula_coordinates": [ 26, 121.96, 357.71, 368.09, 67.52 ], "formula_id": "formula_87", "formula_text": "| R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q )| ≤ q k=1 (1 -πk ) C ℓ L g ln (2/δ) 2n U k + (2 -2π k -π k ) (L g + 1) C ℓ ∆ k + ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (2/δ) 2n N k ." }, { "formula_coordinates": [ 26, 79.37, 458.54, 420.15, 63.62 ], "formula_id": "formula_88", "formula_text": "E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) = E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) . Since R(f 1 , f 2 , . . . , f q ) is an upper bound of R(f 1 , f 2 , . . . , f q ), we have E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≥ 0." }, { "formula_coordinates": [ 26, 105.34, 559.15, 394.88, 161.69 ], "formula_id": "formula_89", "formula_text": "E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) = q k=1 (D N k ,D U k )∈D - k (f k ) g R P k (f k ) -R P k (f k ) p D N k , D U k d D N k , D U k ≤ q k=1 sup (D N k ,D U k )∈D - k (f k ) g R P k (f k ) -R P k (f k ) (D N k ,D U )∈D - k (f k ) p D N k , D U k d D N k , D U k = q k=1 sup (D N k ,D U k )∈D - k (f k ) g R P k (f k ) -R P k (f k ) P D - k (f k ) ≤ q k=1 sup (D N k ,D U k )∈D - k (f k ) (L g R P k (f k ) + R P k (f k ) )P D - k (f k ) ." }, { "formula_coordinates": [ 27, 151.61, 97.6, 302.84, 102.48 ], "formula_id": "formula_90", "formula_text": "R P k (f k ) = πk + π k -1 n N k n N k i=1 ℓ f k (x N k,i ) + 1 -πk n U k n U k i=1 ℓ f k (x U k,i ) ≤ πk + π k -1 n N k n N k i=1 ℓ f k (x N k,i ) + 1 -πk n U k n U k i=1 ℓ f k (x U k,i ) ≤(1 -π k -πk )C ℓ + (1 -πk )C ℓ = (2 -2π k -π k ) C ℓ ." }, { "formula_coordinates": [ 27, 116.67, 238.29, 369.09, 122.83 ], "formula_id": "formula_91", "formula_text": "E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 sup (D N k ,D U k )∈D - k (f k ) (L g R P k (f k ) + R P k (f k ) )P(D - k (f k )). ≤ q k=1 (2 -2π k -π k ) (L g + 1) C ℓ exp -2β 2 (1 -π k -πk ) 2 C 2 ℓ /n N k + (1 -πk ) 2 C 2 ℓ /n U k = q k=1 (2 -2π k -π k ) (L g + 1) C ℓ ∆ k ," }, { "formula_coordinates": [ 27, 96.09, 480.42, 419.82, 140.44 ], "formula_id": "formula_92", "formula_text": "R(f 1 , f 2 , . . . , f q ) -E R(f 1 , f 2 , . . . 
, f q ) ≤ q k=1 (1 -πk ) C ℓ L g ln (2/δ) 2n U k + q k=1 ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (2/δ) 2n N k , E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk ) C ℓ L g ln (2/δ) 2n U k + q k=1 ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (2/δ) 2n N k ." }, { "formula_coordinates": [ 27, 96.09, 653.02, 423.14, 68.67 ], "formula_id": "formula_93", "formula_text": "E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk ) C ℓ L g ln (2/δ) 2n U k + q k=1 ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (2/δ) 2n N k ." }, { "formula_coordinates": [ 28, 108.83, 99.41, 391.02, 151.81 ], "formula_id": "formula_94", "formula_text": "R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) = R(f 1 , f 2 , . . . , f q ) -E[ R(f 1 , f 2 , . . . , f q )] + E[ R(f 1 , f 2 , . . . , f q )] -R(f 1 , f 2 , . . . , f q ) ≤ R(f 1 , f 2 , . . . , f q ) -E[ R(f 1 , f 2 , . . . , f q )] + E[ R(f 1 , f 2 , . . . , f q )] -R(f 1 , f 2 , . . . , f q ) = R(f 1 , f 2 , . . . , f q ) -E[ R(f 1 , f 2 , . . . , f q )] + E[ R(f 1 , f 2 , . . . , f q )] -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk ) C ℓ L g ln (2/δ) 2n U k + q k=1 ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (2/δ) 2n N k + q k=1 (2 -2π k -π k ) (L g + 1) C ℓ ∆ k ," }, { "formula_coordinates": [ 28, 210, 342.48, 322.63, 31.85 ], "formula_id": "formula_95", "formula_text": "R ′ n,p (F) = E Xn E σ sup f ∈F 1 n n i=1 σ i f (x i ) .(36)" }, { "formula_coordinates": [ 28, 79.37, 406.37, 453.26, 27.19 ], "formula_id": "formula_96", "formula_text": "′ n,p (F) ≥ R n,p (F). If F is closed under negation, we have R ′ n,p (F) = R n,p (F)." }, { "formula_coordinates": [ 28, 79.37, 479.95, 292.61, 37.18 ], "formula_id": "formula_97", "formula_text": "R ′ n,p (ψ • F) ≤ 2L ψ R ′ n,p (F), where ψ • F = {ψ • f |f ∈ F}." }, { "formula_coordinates": [ 28, 109.26, 577.02, 387.52, 127.49 ], "formula_id": "formula_98", "formula_text": "sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk ) C ℓ L g ln (1/δ) 2n U k + q k=1 ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (1/δ) 2n N k + q k=1 (4 -4π k ) L g L ℓ R n U k ,p U k (F) + ((4 -4π k -4π k ) L g + 4 -4π k ) L ℓ R n N k ,p N k (F) + q k=1 (2 -2π k -π k ) (L g + 1) C ℓ ∆ k ." }, { "formula_coordinates": [ 29, 79.37, 89.5, 514.48, 62.14 ], "formula_id": "formula_99", "formula_text": "1 ,f 2 ,...,fq∈F E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) changes at most (1 -πk ) C ℓ L g /n U k . When a negative example from D N k is substituted by a dif- ferent example, the value of sup f 1 ,f 2 ,...,fq∈F E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) changes at most ((1 -π k -πk ) L g + 1 -π k ) C ℓ /n N k ." }, { "formula_coordinates": [ 29, 123.77, 176.01, 404.01, 91.22 ], "formula_id": "formula_100", "formula_text": "sup f 1 ,f 2 ,...,fq∈F E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) -E sup f 1 ,f 2 ,...,fq∈F E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ q k=1 (1 -πk ) C ℓ L g ln (1/δ) 2n U k + q k=1 ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (1/δ) 2n N k . (37" }, { "formula_coordinates": [ 29, 527.78, 245.06, 4.85, 10.91 ], "formula_id": "formula_101", "formula_text": ")" }, { "formula_coordinates": [ 29, 151.95, 305.59, 380.68, 88.02 ], "formula_id": "formula_102", "formula_text": "E sup f 1 ,f 2 ,...,fq∈F E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) =E D sup f 1 ,f 2 ,...,fq∈F E D′ R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤E D, D′ sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ; D) -R(f 1 , f 2 , . . . 
, f q ; D′ ) ,(38)" }, { "formula_coordinates": [ 29, 79.37, 478.88, 448.41, 242.11 ], "formula_id": "formula_103", "formula_text": "R(f 1 , f 2 , . . . , f q ; D) -R(f 1 , f 2 , . . . , f q ; D′ ) ≤ q k=1 g    πk + π k -1 n N k n N k i=1 ℓ f k (x N k,i ) + 1 -πk n U k n U k i=1 ℓ f k (x U k,i )    -g    πk + π k -1 n N k n N k i=1 ℓ f k (x N ′ k,i ) + 1 -πk n U k n U k i=1 ℓ f k (x U ′ k,i )    + q k=1 1 -π k n N k n N k i=1 ℓ -f k (x N k,i ) - 1 -π k n N k n N k i=1 ℓ -f k (x N ′ k,i ) ≤ q k=1 L g πk + π k -1 n N k n N k i=1 ℓ f k (x N k,i ) -ℓ f k (x N ′ k,i ) + 1 -πk n N k n U k i=1 ℓ f k (x U k,i ) -ℓ f k (x U ′ k,i ) + q k=1 1 -π k n N k n N k i=1 ℓ -f k (x N k,i ) -ℓ -f k (x N ′ k,i ) . (39" }, { "formula_coordinates": [ 29, 527.78, 698.82, 4.85, 10.91 ], "formula_id": "formula_104", "formula_text": ")" }, { "formula_coordinates": [ 30, 83.64, 110.23, 431.31, 81.48 ], "formula_id": "formula_105", "formula_text": "q k=1 L g πk + π k -1 n N k n N k i=1 l f k (x N k,i ) -l f k (x N ′ k,i ) + 1 -πk n N k n U k i=1 l f k (x U k,i ) -l f k (x U ′ k,i ) + q k=1 1 -π k n N k n N k i=1 l -f k (x N k,i ) -l -f k (x N ′ k,i ) ." }, { "formula_coordinates": [ 30, 102.9, 261.46, 424.88, 103.18 ], "formula_id": "formula_106", "formula_text": "≤ q k=1 (2 -2π k ) L g R ′ n U k ,p U k ( l • F) + ((2 -2π k -2π k ) L g + 2 -2π k ) R ′ n N k ,p N k ( l • F) ≤ q k=1 (4 -4π k ) L g L ℓ R ′ n U k ,p U k (F) + ((4 -4π k -4π k ) L g + 4 -4π k ) L ℓ R ′ n N k ,p N k (F) = q k=1 (4 -4π k ) L g L ℓ R n U k ,p U k (F) + ((4 -4π k -4π k ) L g + 4 -4π k ) L ℓ R n N k ,p N k (F) , (40" }, { "formula_coordinates": [ 30, 527.78, 342.48, 4.85, 10.91 ], "formula_id": "formula_107", "formula_text": ")" }, { "formula_coordinates": [ 30, 108.05, 450.76, 419.73, 67.98 ], "formula_id": "formula_108", "formula_text": "U k + q k=1 ((1 -π k -πk ) L g + 1 -π k ) C ℓ ln (1/δ) 2n N k + q k=1 (4 -4π k ) L g L ℓ R n U k ,p U k (F) + ((4 -4π k -4π k ) L g + 4 -4π k ) L ℓ R n N k ,p N k (F) . (41" }, { "formula_coordinates": [ 30, 527.78, 496.58, 4.85, 10.91 ], "formula_id": "formula_109", "formula_text": ")" }, { "formula_coordinates": [ 30, 170.91, 552.86, 356.87, 126.13 ], "formula_id": "formula_110", "formula_text": "f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) = sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -E R(f 1 , f 2 , . . . , f q ) +E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) ≤ sup f 1 ,f 2 ,...,fq∈F R(f 1 , f 2 , . . . , f q ) -E R(f 1 , f 2 , . . . , f q ) + sup f 1 ,f 2 ,...,fq∈F E R(f 1 , f 2 , . . . , f q ) -R(f 1 , f 2 , . . . , f q ) . (42" }, { "formula_coordinates": [ 30, 527.78, 659.23, 4.85, 10.91 ], "formula_id": "formula_111", "formula_text": ")" }, { "formula_coordinates": [ 32, 485.44, 235.25, 6.64, 119.95 ], "formula_id": "formula_112", "formula_text": "                 " } ]
10.1162/coli.07-034-R2
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b13", "b10" ], "table_ref": [], "text": "Named Entity Recognition (NER) (Li et al., 2020) is a crucial task in natural language processing with various applications including information retrieval, text summarization, question answering, machine translation, and knowledge graph. Its objective is to identify specific entities such as person, location, and organization from text. Although great progress has been made in news domain and some vertical domains, NER research in literary domain has been limited due to the lack of annotated data (Jockers, 2013).\nTo promote the research of literary NER, we build the first NER corpus of online Chinese novels with multi-genres, which contains 260 novels from 13 genres, totaling 105,851 sentences, 5,379,749 Chinese characters, 263,135 entities and 24,458 unique entities of three types person, location and organization. Based on the corpus, we analyze characteristics of entities from different genres. For literary NER, we compare different baseline models and conduct cross-genre and cross-domain experiments. We find that genre difference significantly impact NER performance though not as much as domain difference like literary domain and news domain.\nThe main contributions of this paper are as follows:\n• We build the first large-scale corpus of online Chinese novels with multi-genres for literary NER and we will release it to the public later.\n• We analyze characteristics of entities from different genres and carry out cross-genre and cross-domain experiments for literary NER." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b11", "b1", "b14", "b3", "b2", "b6", "b15", "b9" ], "table_ref": [ "tab_0" ], "text": "Currently, there is relatively little research on NER in the literary field due to the diverse types of entities and significant differences in naming styles and background knowledge (Labatut and Bost, 2019).\nEstablishing a general NER model for the literary field is challenging, and the lack of large-scale NER datasets limits the development of NER research in this domain (Augenstein et al., 2017).\nSeveral previous studies have proposed different approaches and built different corpora for named entity recognition in literary works. Vala et al. (2015) introduce a graph-based pipeline model specifically for character recognition. Brooke et al. (2016) propose the LitNER model, which utilizes the bootstrap method. Bamman et al. (2019) build LitBank corpus, annotating named entities in 100 English novels and trained an NER model tailored to the literary field. Dekker et al. (2019) conduct an evaluation of natural language processing tools to extract characters and build social networks from novels. For Chinese literary NER, Xu et al. (2017) construct a dataset for NER and relationship extraction from essays. In addition, we (Jia et al., 2021) create a named entity dataset using Jin Yong's novels and develop a named entity recognition model that incorporates document-level information. The overview statistics of above datasets are shown in Table 1.\nHowever, for Chinese novels, the existing NER corpus is limited in scale and genre. To build a larger-scale multi-genre NER corpus is necessary to enhance further research of literacy NER in Chinese novels." 
}, { "figure_ref": [], "heading": "Corpus Construction", "publication_ref": [], "table_ref": [], "text": "From Qidian Chinese website1 , we collect novels of 13 different genres, including Xianxia(仙 侠), Sport(体育), Military(军事), History(历史), Fantasy(奇 幻), Suspense(悬 疑), Wuxia(武 侠), Game(游戏), Xuanhuan(玄幻), Reality(现实), Sci-Fi(科幻), Urban(都市), and Light Novel(轻小说). For each genre, we crawl the top 20 works from the genre's collection list (as of 2021) and annotate the first 10 chapters of each selected work. All annotated chapters are publicly accessible." }, { "figure_ref": [], "heading": "Entity Annotation Guidelines", "publication_ref": [], "table_ref": [], "text": "Considering the characteristics of online Chinese novels, we focus on three entity types, person (PER), location (LOC), and organization (ORG). We follow the entity annotation guidelines of ACE (Consortium et al., 2005). In addition, (1) We omit single-character entities due to their high ambiguity. (2) We do not annotate nested entities and only annotate the longest one. (3) An entity is composed of head nouns without quantifiers, pronouns and adjective modifiers, etc. (4) An entity must refer to a specific entity in the novel." }, { "figure_ref": [], "heading": "Person", "publication_ref": [], "table_ref": [], "text": "Person entities in texts can be represented by various features. For instance, real names of characters such as \"高远\"(Gao Yuan) can serve as entities. Additionally, a character's occupation, like \"医 生\"(doctor), or family relationship, such as \"父 亲\"(father), can also be indication of entity. Furthermore, a general term like \"小男孩\"(little boy) can be used to represent a person entity. Relationships between characters can be denoted by a set of characters, like \"父子\"(father and son). Nicknames, such as \"菜鸟\"(novice), can also indicate person entities. In the case of deceased individuals or human remains, they could be recorded as person entities, like \"丧尸\"(zombie). Even nouns referring to animals or non-human entities, such as \"兽人\"(beastman) or \"冰蚕\"(ice silkworm), can be used to describe person entities in some genres." }, { "figure_ref": [], "heading": "Location", "publication_ref": [], "table_ref": [], "text": "Location entities typically refer to entities that denote a specific location, such as countries (e.g. 西 域,Western Regions) that do not necessarily have a political status, cities (e.g. 羊城,Sheep City), and natural features such as mountains and rivers (e.g. 泰山,Mount Tai). In Chinese novels locations mostly refer to where the story takes place (e.g. 餐馆,restaurants,训练场,training grounds,小 镇, small town)." }, { "figure_ref": [], "heading": "Organization", "publication_ref": [], "table_ref": [], "text": "The named entities in the corpus include a range of organizations, such as government agencies (e.g. 组织部,organizational departments), political parties (e.g. 共产党,Communist Party), corporations, universities, high schools, and religious organizations (e.g. 光明圣教,Bright Holy Church). Notably, a substantial portion of the organizational entities in the Chinese novel corpus are fictional, created based on the authors' imagination and settings (e.g. 皇家魔法学院,the Royal School of Magic)." }, { "figure_ref": [], "heading": "Inter-annotator Agreement", "publication_ref": [ "b4", "b0" ], "table_ref": [ "tab_1" ], "text": "To ensure consistent and high-quality annotation, we adopt a multi-round iterative approach. 
Two annotators simultaneously annotate each novel, cross-checking and reviewing each other's work to guarantee reliable results. The annotation process consists of two stages: experimental and formal annotation. In the experimental stage, we use the LTP (Che et al., 2020) named entity recognition tool to pre-annotate the novels' text, gain familiarity with the corpus and improve the annotation guidelines. In the formal annotation stage, one annotator initially annotates the text, which is then verified by a second annotator to resolve any inconsistencies. The final results are confirmed by the first annotator. This process involves seven annotators and is completed in 70 days.\nWe assess annotation consistency using the F1 score as the evaluation metric. Results show a micro-averaged F1 score (MicroF1) of 92.15% and a macro-averaged F1 score (MacroF1) of 88.62%, indicating high reliability of the dataset (Artstein and Poesio, 2008). The consistency varies across entity types, with person entities demonstrating higher consistency compared to organization and location entities. The complex structures of organization and location entities pose challenges in identifying their boundaries. Detailed values are given in Table 2. " }, { "figure_ref": [], "heading": "Corpus Analysis", "publication_ref": [], "table_ref": [ "tab_2", "tab_3", "tab_4" ], "text": "Table 3 presents the statistical information of the dataset, which consists of 260 novels covering 13 genres. The dataset includes a total of 105,851 sentences and 263,135 named entities. In general, the dominant type of named entity in novels is person, highlighting the focus on protagonists. Locations constitute the second largest category, serving as the backdrop for storylines and descriptive environments. On the other hand, named entities pertaining to organizations are relatively rare. Additionally, person and location entities tend to be shorter, with an average of 3.64 and 3.60 Chinese characters, respectively, while organization entities tend to be longer, averaging 4.87 Chinese characters. The specific statistics are shown in Table 4, where the largest proportion and average length of each entity type are in bold.\nFurthermore, we perform genre-specific statistics and identify distinct characteristics in high-frequency person, location, and organization entities among different literary genres. In Table 5, we highlight several genres that exemplify these distinctive characteristics.\nFor the sport genre, high-frequency location entities are typically real-world places such as continents, countries, and cities, while high-frequency organization entities include universities, teams, and leagues. For the history genre, high-frequency location entities refer to ancient countries or cities, and high-frequency organization entities are ancient government institutions such as the \"锦衣卫\"(Jinyiwei) and \"中书省\"(Zhongshu Province). For the fantasy and science fiction genres, high-frequency location entities are fictional places like castles, towns, and laboratories, while high-frequency organization entities include fictional organizations like \"神盾局\"(S.H.I.E.L.D.), \"学院\"(academies), and \"联邦\"(federations). For the urban genre, high-frequency location entities are everyday places, and high-frequency organization entities are companies, hospitals, and universities."
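To make the agreement computation above concrete, the following is a minimal Python sketch of span-level inter-annotator F1, using exact matches over (sentence, start, end, type) spans, with micro-averaging over all mentions and macro-averaging over the three entity types. The data layout and function names are illustrative assumptions, not the annotation toolchain actually used for the corpus.

```python
def span_f1(spans_a, spans_b):
    """Exact-match F1 between two sets of entity spans.
    Each span is a (sentence_id, start, end, entity_type) tuple."""
    a, b = set(spans_a), set(spans_b)
    tp = len(a & b)
    precision = tp / len(a) if a else 0.0
    recall = tp / len(b) if b else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def inter_annotator_agreement(spans_a, spans_b, types=("PER", "LOC", "ORG")):
    """MicroF1 pools all mentions; MacroF1 averages the per-type F1 scores."""
    per_type = {t: span_f1([s for s in spans_a if s[3] == t],
                           [s for s in spans_b if s[3] == t]) for t in types}
    micro = span_f1(spans_a, spans_b)
    macro = sum(per_type.values()) / len(per_type)
    return micro, macro, per_type
```

Treating either annotator as the reference yields the same F1, since swapping the two span sets only exchanges precision and recall.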
}, { "figure_ref": [], "heading": "Literary Named Entity Recognition", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Baseline Models", "publication_ref": [ "b7", "b8", "b12" ], "table_ref": [ "tab_5" ], "text": "The corpus is divided into training, validation, and test sets in an 8:1:1 ratio for this study, which aims to train multiple models for named entity recognition in the literary domain. The F1 scores of various models are compared for the three categories of person, location, and organization, as well as the MicroF1 and MacroF1 scores.\nTable 6 demonstrates that BERT-BiLSTM-CRF (Devlin et al., 2019;Huang et al., 2015) exhibits the highest values in terms of MicroF1 and MacroF1 metrics, indicating its superior overall performance. The best value on each entity or metric is in bold. The recognition performance is best \nLOC 商 场(Shopping malls),网 吧(internet cafes),中 云 市(Zhongyun City),办 公 室(offices),江南(Jiangnan) ORG 医 院(hospitals),学 府(academic institutions),大 学(universities),战 争 学 府(war academies)\nfor person, followed by location and organization.\nThis study shows that models using pre-trained model as feature extractor perform the best, while models based only on BiLSTM and CRF (Lafferty et al., 2001) perform relatively poorly. This highlights the significant enhancement in the overall performance of named entity recognition through the incorporation of pre-trained models.\nTable 7 provides a summary of the performance of the BERT-BiLSTM-CRF model, with a particular focus on its handling of Out of Vocabulary (OOV) entities. In the test set, the ratio of OOV to in-vocabulary (IV) entities is approximately 1:2, consisting of 1417 OOV entities and 3109 IV entities. The results reveal that the model exhibits declined performance in recognizing OOV entities, particularly struggling in identifying OOV LOC entities, achieving a F1 score of only 31.63%." }, { "figure_ref": [], "heading": "One-model-one-type vs. One-model-all-types", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "To investigate the impact of recognizing multiple entities simultaneously, we train a model with the same parameters as BERT-BiLSTM-CRF while separately training individual entities. As shown in the Table 8, the model that predicts multiple entities simultaneously can allow the model to learn more diverse knowledge, leading to an improvement in the model's recognition performance on single entities. This finding highlights the advantage of incorporating a multi-entity recognition approach, as it enables the model to leverage contextual information and inter-dependencies among entities to enhance its accuracy and effectiveness in named entity recognition tasks." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Cross-genre NER in Novels", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "In this study, we train a BERT-BiLSTM-CRF model for each genre. The corpus is divided into training, validation, and testing sets in an 8:1:1 ratio. We use these models to predict entities in novels from 13 different genres and compare the performance variations across genres using a confusion matrix, where each row represents the predictions made by a specific genre model. The MicroF1 values of the predictions are shown in Figure 1. It is noteworthy that when predicting entities in historical novels, models trained on the Xianxia and Wuxia genres perform well, benefiting from their historical backgrounds. 
Conversely, the model trained on urban novels set in modern times shows the poorest performance in the historical genre. Through cross-genre experiments, we observe the significant impact of different themes on named entity recognition, even within the same domain of novels. Furthermore, we discover some unexpected results when predicting organization (ORG) entities across genres. As shown in Figure 2, the model trained on Suspense novels performs even worse when predicting entities in the same genre novels. This could be attributed to the scarcity of organization entities in Suspense novels and the less distinctive thematic features, as evident in Table 4. Additionally, the predictions from the confusion matrix validate certain distributional differences among various genre novels, even for those with similar characteristics. For instance, when predicting organization entities in Reality novels, there is a significant disparity between the Xianxia and Wuxia genres, with a score of 39.13% for Xianxia and 31.44% for Wuxia. In summary, our study demonstrates the impact of corpus sources on model performance in named entity recognition, showcasing the variations across genres and highlighting the importance of considering genre-specific characteristics in the training and prediction processes." }, { "figure_ref": [], "heading": "Cross-domain NER", "publication_ref": [], "table_ref": [ "tab_7", "tab_0" ], "text": "To investigate the degree to which NER depends on domain-specific knowledge, we conduct crossdomain experiments to compare NER performance on different corpora. Specifically, we utilize news articles from People's Daily (Peopledaily) spanning from January to June 1998 as the general domain corpus and compare it with the Chinese novel corpus Qidian. The statistics of the two corpora are shown in Table 9. We train NER models on each corpus and compare their performance. As shown in Table 10, NER performance varies significantly across corpora from different domains, indicating its high sensitivity to domain-specific information.\nFurthermore, when we employ the Peopledaily dataset for training our model to predict Chinese novel data, we make an intriguing observation. The F1 score for recognizing ORG entities is remarkably low at 0.47%, with a recall rate of just 0.0023%. However, the precision is quite close to that of the other two entity types. We attribute this outcome to the fact that the Peopledaily dataset encompasses numerous political organization entities which are rarely used in online novels. " }, { "figure_ref": [], "heading": "Case Study", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Table 11 gives two examples for experiments of the section of baseline models. In the first example, even models incorporating pre-trained models wrongly recognize boundaries of person entities, like \"女伯爵\"(Countess). Due to the limitations of the training set, the models mislabel the type of entity \"匈牙利\"(Hungary). In the second example, the person entity \"古河\"(Gu He) contains \"河\"(river), the frequently occurring suffix of location entity, causing all the models based on pre-trained model to erroneously classify \"古 河\"(Gu He) as a location entity. These examples fully demonstrate the significant impact of domain and contextual information on named entity recognition." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we build the largest multi-genre corpus of Chinese novels for literary NER. 
We describe the annotation guidelines and analyze the characteristics and distributions of entities from different genres. We propose several baseline models for literary NER and find that pre-trained models can significantly improve performance. Our corpus provides a valuable dataset for cross-genre NER investigation, which shows that genre differences cause a clear decline in performance. The cross-domain experiments between the literary and news domains show that literary NER still needs improvement and that domain differences cause a much more severe performance drop, reaffirming the necessity of a domain-specific corpus for vertical-domain NER. The comparison between one-model-one-type and one-model-all-types NER shows that learning multiple entity types simultaneously enhances the recognition of each individual type.\nIn the future, we will further study cross-genre and cross-domain problems in literary NER. The OOV problem is more challenging in literary texts, which is another problem we plan to address." } ]
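As a companion to the Baseline Models section above, the sketch below shows one plausible PyTorch layout of the BERT-BiLSTM-CRF baseline: a BERT encoder over Chinese characters, a BiLSTM over the contextual embeddings, a linear layer producing tag emissions, and a CRF for sequence-level decoding. The checkpoint name, hidden size, and use of the third-party pytorch-crf package are assumptions; the paper does not specify these implementation details.

```python
import torch
import torch.nn as nn
from transformers import AutoModel          # Hugging Face Transformers
from torchcrf import CRF                    # pip install pytorch-crf

class BertBiLstmCrf(nn.Module):
    """BIO tagging over characters: BERT encoder -> BiLSTM -> linear emissions -> CRF."""

    def __init__(self, num_tags, bert_name="bert-base-chinese", lstm_hidden=256):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * lstm_hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        hidden, _ = self.lstm(hidden)
        scores = self.emissions(hidden)
        mask = attention_mask.bool()
        if tags is not None:                       # training: negative CRF log-likelihood
            return -self.crf(scores, tags, mask=mask, reduction="mean")
        return self.crf.decode(scores, mask=mask)  # inference: Viterbi-decoded tag sequences
```

During training the model returns the negative CRF log-likelihood as the loss; at inference time `decode` performs Viterbi decoding under the attention mask.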
Entities such as persons, locations, and organizations are important for literary text analysis. The lack of annotated data hinders the progress of named entity recognition (NER) in the literary domain. To promote research on literary NER, we build the largest multi-genre literary NER corpus, containing 263,135 entities in 105,851 sentences from 260 online Chinese novels spanning 13 different genres. Based on the corpus, we investigate the characteristics of entities from different genres. We propose several baseline NER models and conduct cross-genre and cross-domain experiments. Experimental results show that genre differences significantly impact NER performance, though not as much as the difference between domains, such as the literary and news domains. Compared with NER in the news domain, literary NER still needs much improvement, and the Out-of-Vocabulary (OOV) problem is more challenging due to the high variety of entities in literary works.
A Corpus for Named Entity Recognition in Chinese Novels with Multi-genres
[ { "figure_caption": "Figure 1 :1Figure 1: Confusion matrix of MicroF1 for different genres.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Confusion matrix of ORG-F1 for different genres.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Literary NER datasets.", "figure_data": "DatasetLanguage Tags Release-Year SizeLitBank (Bamman et al., 2019)English62019200,000 wordsChinese-Literature-NER (Xu et al., 2017) Chinese7201728,897 sentencesJinYong (Jia et al., 2021)Chinese4202121,927 sentencesindicating high reliability of the dataset (Artstein", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Inter-annotator agreement.", "figure_data": "EntityF1-score(%)PER93.65LOC90.66ORG81.56MicroF1 92.15MacroF1 88.62", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Statistics of corpus.", "figure_data": "Entity CountDistinct Avg.LengthPER197,597 17,0133.64LOC45,0944,6413.60ORG 20,4442,8044.87Total263,135 24,4583.73", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Statistics of entities in Chinese novels of various genres. DC represents distinct count, while DR refers to distinct ratio.", "figure_data": "GenreEntity Count Ratio(%) DC DR(%) Avg.LengthPER 1832974.391817 62.893.18XianxiaLOC447118.1581828.312.95ORG18397.462548.793.19PER 1664170.121964 64.044.32SportLOC343314.4756418.393.46ORG365815.4153917.574.82PER 1636574.031540 56.083.10MilitaryLOC358916.2368925.092.84ORG21539.7451718.833.99PER 1992580.612447 70.053.07HistoryLOC382215.4682023.482.54ORG9703.922266.472.94PER 1361773.361717 63.953.70FantasyLOC332717.9264924.173.06ORG16188.7231911.884.01PER 1289777.001479 61.653.18SuspenseLOC312718.6770829.512.94ORG7254.332128.843.37PER 1748275.641976 66.603.18WuxiaLOC404617.5174225.012.70ORG15856.862498.393.09PER 1475871.971805 64.443.32GameLOC406919.8466323.672.87ORG16798.1933311.893.70PER 1718976.511547 62.943.29XuanhuanLOC384617.1267327.382.91ORG14326.372389.683.46PER 1528076.751570 62.503.22RealityLOC316315.8964725.762.99ORG14677.3729511.743.69PER 1399373.551555 57.043.53Sci-FiLOC370219.4677028.253.35ORG13296.9940114.714.22PER603971.6783557.633.14UrbanLOC153718.2438626.642.83ORG85010.0922815.733.90PER 1508278.621480 64.943.36Light Novel LOC296215.4457825.362.98ORG11395.942219.703.64", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Common entities in Chinese novels of different genres.", "figure_data": "GenreType High-frequency entitiesPER 球迷(Fan),教练(Coach),于指导(Guidance Yu ),职业球员(Professional player)SportLOC 中国(China),美国(USA),英格兰(England),西雅图(Seattle)ORG NBA,青年队(youth teams),森林队(forest teams)PER 太宰(Taizai),崇祯(Chongzhen),大魏天子(Emperor of Wei),刘总管(General Man-ager Liu)History LOC 秦国(Qin State),京城(Capital),汴梁(Bianliang),宜城(Yicheng)ORG 锦 衣 卫(Jinyiwei),中 书 省(Zhongshu Province),东 宫 卫 队(the Eastern PalaceGuard),豫山书院(Yushan Academy)PER 老法师(Old Mage),女巫(Witch),黑衣武士(Black-clad Warrior),魔法师(Magician)Fantasy LOC 城 堡(Castle),鲜 花 镇(Flower Town),乌 山 镇(Wushan Town),荆 棘岭(Thornridge),圣域(Sanctum)ORG 神盾局(S.H.I.E.L.D.),魔法学院(Academy of Magic),死局帮(Deadlock Gang)PER 玄幽道人(Xuanyou Taoist),独孤败天(Dugu Baitian),刘三刀(Liu Sandao),司徒傲月(Situ Aoyue)WuxiaLOC 蜀山(Shu Mountain),华山(Hua Mountain),通州(Tongzhou),中原(Central Plains)ORG 飞鹰帮(Feiying Gang),画剑派(Huajian Sect),李家(Li Family)PER 青年导游(Youth Tour Guide),赢胖子(Fatty Ying),王秘书(Secretary 
Wang),副经理(Deputy Manager)Urban", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparison of baseline models.", "figure_data": "ModelPER(%) LOC(%) ORG(%) MicroF1(%) MacroF1(%)BiLSTM-CRF78.5964.3752.0974.4765.02BERT-CRF87.8486.2177.4486.5583.83BERT-BiLSTM-CRF 87.7285.4179.0986.7384.07Table 7: OOV vs. IVPER(%) LOC(%) ORG(%) MicroF1(%) MacroF1(%)OOV(1417) 49.7031.6335.2745.0738.67IV(3109)91.4391.4385.3691.0289.41", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "One-model-one-type vs. One-model-all-types.", "figure_data": "ModelPER(%)LOC(%)ORG(%)One-model-one-type85.9985.0476.51One-model-all-types 87.72(+1.73) 85.41(+0.37) 79.09(+2.58)", "figure_id": "tab_6", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Statistics comparison: Qidian vs. Peopledaily", "figure_data": "QidianPeopledailySentences105,851123,882Words5,379,749 11,978,551Entities263,135323,368Unique entities 24,45843,249", "figure_id": "tab_7", "figure_label": "9", "figure_type": "table" } ]
Hanjie Zhao; Jinge Xie; Yuchen Yan; Yuxiang Jia; Yawen Ye; Hongying Zan
[ { "authors": "Ron Artstein; Massimo Poesio", "journal": "Computational Linguistics", "ref_id": "b0", "title": "Survey article: Inter-coder agreement for computational linguistics", "year": "2008" }, { "authors": "Isabelle Augenstein; Leon Derczynski; Kalina Bontcheva", "journal": "Computer Speech & Language", "ref_id": "b1", "title": "Generalisation in named entity recognition: A quantitative analysis", "year": "2017" }, { "authors": "David Bamman; Sejal Popat; Sheng Shen", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "An annotated dataset of literary entities", "year": "2019" }, { "authors": "Julian Brooke; Adam Hammond; Timothy Baldwin", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Bootstrapped text-level named entity recognition for literature", "year": "2016" }, { "authors": "Wanxiang Che; Yunlong Feng; Libo Qin; Ting Liu", "journal": "", "ref_id": "b4", "title": "N-ltp: An open-source neural language technology platform for chinese", "year": "2020" }, { "authors": " ", "journal": "-CRF 古河 LOC 名列加玛帝国 ORG 十大强者之一", "ref_id": "b5", "title": "Ace (automatic content extraction) chinese annotation guidelines for events. Table 10: Cross-domain NER. Domain PER(%) LOC(%) ORG(%) MicroF1(%", "year": "0546" }, { "authors": "Niels Dekker; Tobias Kuhn; Marieke Van Erp", "journal": "PeerJ Computer Science", "ref_id": "b6", "title": "Evaluating named entity recognition tools for extracting social networks from novels", "year": "2019" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Zhiheng Huang; Wei Xu; Kai Yu", "journal": "", "ref_id": "b8", "title": "Bidirectional lstm-crf models for sequence tagging", "year": "2015" }, { "authors": "Yuxiang Jia; Rui Chao; Hongying Zan; Huayi Dou; Shuai Cao; Shuo Xu", "journal": "Chinese Information Processing Society of China", "ref_id": "b9", "title": "融入篇章信息 的文学作品命名实体识别(document-level literary named entity recognition)", "year": "2021" }, { "authors": "L Matthew; Jockers", "journal": "University of Illinois Press", "ref_id": "b10", "title": "Macroanalysis: Digital methods and literary history", "year": "2013" }, { "authors": "Vincent Labatut; Xavier Bost", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b11", "title": "Extraction and analysis of fictional character networks: A survey", "year": "2019" }, { "authors": "John Lafferty; Andrew Mccallum; Fernando Cn Pereira", "journal": "", "ref_id": "b12", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "year": "2001" }, { "authors": "Jing Li; Aixin Sun; Jianglei Han; Chenliang Li", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b13", "title": "A survey on deep learning for named entity recognition", "year": "2020" }, { "authors": "Hardik Vala; David Jurgens; Andrew Piper; Derek Ruths", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Mr. 
bennet, his coachman, and the archbishop walk into a bar but only one of them gets recognized: On the difficulty of detecting characters in literary texts", "year": "2015" }, { "authors": "Jingjing Xu; Ji Wen; Xu Sun; Qi Su", "journal": "", "ref_id": "b15", "title": "A discourse-level named entity recognition and relation extraction dataset for chinese literature text", "year": "2017" } ]
[ { "formula_coordinates": [ 5, 122.57, 361.83, 396.04, 52.23 ], "formula_id": "formula_0", "formula_text": "LOC 商 场(Shopping malls),网 吧(internet cafes),中 云 市(Zhongyun City),办 公 室(offices),江南(Jiangnan) ORG 医 院(hospitals),学 府(academic institutions),大 学(universities),战 争 学 府(war academies)" } ]
2023-11-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b10", "b53", "b37", "b14", "b5", "b37", "b51", "b66", "b50", "b66", "b67" ], "table_ref": [], "text": "Rendering a scene from a novel camera position is essential in view synthesis [5,11,54]. The recent advancement of Neural Radiance Field (NeRF) [38] has shown impressive results in creating photo-realistic images from novel viewpoints. However, conventional NeRF methods are either typically scene-specific, necessitating retraining for novel scenes [15,16,38,59, 65], or require a large number of reference views as input for generalizing to novel scenarios [6,50,52,67]. These constraints highlight the complexity of the few-shot generalizable neural rendering, which aims to render unseen scenes from novel viewpoints with a limited number of reference images.\nGeneralizing NeRF to novel scenes often involves using pixel-level feature embeddings encoded from input images, as seen in existing methods [51,67]. These methods adapt NeRF to novel scenes by separating the scene representation from the model through an image encoder. However, relying solely on pixel-level features has its drawbacks: it requires highly precise epipolar geometry and often overlooks occlusion in complex scenes. Moreover, employing pixellevel features ignores the inherent interconnections within objects in the scene, treating the prediction of each pixel independently. This becomes problematic with a limited number of input reference images, as the data scarcity amplifies prediction ambiguity, significantly influenced by the biases of the input camera views.\nWe present CaesarNeRF, a method that advances the generalizability of NeRF by incorporating calibrated semantic representation. This enables rendering from novel viewpoints using as few as one input reference view, as depicted in Figure 1. Our approach combines semantic scene-level representation with per-pixel features, enhancing consistency across different views of the same scene. The encoder-generated scene-level representations capture both semantic features and biases linked to specific camera poses. When reference views are limited, these biases can introduce uncertainty in the rendered images. To counter this, CaesarNeRF integrates camera pose transformations into the semantic representation, hence the term calibrated. By isolating pose-specific information from the scene-level representation, our model harmonizes features across input views, mitigating view-specific biases and, in turn, reducing ambiguity. In addition, CaesarNeRF introduces a sequential refinement process, which equips the model with varying levels of detail needed to enhance the semantic features. Extensive experiments on datasets such as LLFF [36], Shiny [59], mip-NeRF 360 [4], and the newly released MVImgNet [68] demonstrate that CaesarNeRF outperforms current state-of-the-art methods, proving effective in generalizable settings with as few as one reference view.\nIn summary, our contributions are as follows: • We introduce CaesarNeRF, which utilizes scene-level calibrated semantic representation to achieve few-shot, generalizable neural rendering. This innovation leads to coherent and high-quality renderings. • We integrate semantic scene context with pixel-level de-tails, in contrast to existing methods that rely solely on pixel-level features. 
We also address view-specific biases by modeling camera pose transformations and enhance the scene understanding through the sequential refinement of semantic features.\n• We demonstrate through extensive experiments that Cae-sarNeRF consistently outperforms state-of-the-art generalizable NeRF methods across a variety of datasets. Furthermore, integrating the Caesar pipeline into other baseline methods leads to consistent performance gains, highlighting its effectiveness and adaptability." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b36", "b37", "b59", "b70", "b7", "b55", "b56", "b65", "b66", "b66", "b50", "b65", "b55", "b52", "b9", "b51", "b1", "b68", "b16", "b12", "b11", "b48" ], "table_ref": [], "text": "Neural Radiance Field (NeRF) implicitly captures the density and appearance of points within a scene or object [37,38] and enable rendering from novel camera positions. In recent years, NeRF has witnessed improvements in a wide range of applications, such as photo-realistic novel view synthesis for large-scale scenes [34,60,71 Generalizable NeRF aims to adapt a single NeRF model to multiple scenes by separating the scene representation from the model. This field has seen notable advancements, with efforts focused on avoiding the need for retraining [8,18,33,56,57,66,67]. PixelNeRF [67] and GRF [51] pioneered the application of an image encoder to transform images into per-pixel features, with NeRF functioning as a decoder for predicting density and color from these features. MVSNeRF [6] introduces the use of a cost volume from MVSNet [66] to encode 3-D features from multiple views. Recognizing the intrinsic connection between points along a ray, IBRNet [56] employs selfattention to enhance point density predictions. Transformerbased [53] networks like GNT [10,52], GeoNeRF [22], and GPNR [50] are explored as alternatives to volume rendering, concentrating on pixel and patch-level representations. Additionally, InsertNeRF [2] utilizes hypernet modules to adapt parameters for novel scenes efficiently.\nThese methods primarily depend on image encoders to extract pixel-aligned features from reference views. As a result, many of them lack a comprehensive understanding of the entire scene. Furthermore, with few reference views, the features become intertwined with view-specific details, compromising the quality of the rendering results.\nFew-shot Neural Radiance Field aims to render novel views using a limited number of reference images. To this end, various methods have been developed, incorporating additional information such as normalization-flow [69], se- mantic constraints [17,19], depth cues [13,46], geometry consistency [3, 26, 39, 55, 61], and frequency content [65]. Others [6, 9] emphasize pretraining on large-scale datasets.\nWhile these methods offer reasonable reconstructions with few inputs, they typically still require training or finetuning for specific scenes. Moreover, these methods usually require at least three reference images. With fewer than three, view-specific biases lead to ambiguity, complicating the rendering. 
Diffusion-based [12,32,49,73] and other generative methods [24, 77] have been explored for singleview synthesis or generative rendering, yet they are mostly limited to single-object rendering and generally fall short for complex scenes, which often result in a style change as shown in the supplementary material.\nCaesarNeRF confronts the above challenges by leveraging calibrated semantic representations that exploit scene geometry and variations in camera viewpoints. As a result, CaesarNeRF overcomes the limitations of pixel-level features and reduces dependency on external data or extensive pretraining, delivering high-quality renderings in few-shot and generalizable settings." }, { "figure_ref": [ "fig_0" ], "heading": "The proposed method", "publication_ref": [], "table_ref": [], "text": "We first outline the general framework of existing generalizable NeRF in Section 3.1. Then, we present our proposed CaesarNeRF, as illustrated in Figure 2. This model integrates elements of semantic representation, calibration, and sequential refinement, detailed in Section 3.2, 3.3, and 3.4, respectively. The training objective is given in Section 3.5." }, { "figure_ref": [], "heading": "NeRF and generalizable NeRF", "publication_ref": [ "b36", "b37", "b66", "b51", "b52", "b51" ], "table_ref": [], "text": "Neural Radiance Field (NeRF) [37,38] aims to render 3D scenes by predicting both the density and RGB values at points where light rays intersect the radiance field. For a query point x ∈ R 3 and a viewing direction d on the unit sphere S 2 in 3D space, the NeRF model F is defined as:\nσ, c = F (x, d).(1)\nHere, σ ∈ R and c ∈ R 3 denote the density and the RGB values, respectively. After computing these values for a collection of discretized points along each ray, volume rendering techniques are employed to calculate the final RGB values for each pixel, thus reconstructing the image.\nHowever, traditional NeRF models F are limited by their requirement for scene-specific training, making it unsuitable for generalizing to novel scenes. To overcome this, generalizable NeRF models, denoted by F G , are designed to render images of novel scenes without per-scene training. Given N reference images {I n } N n=1 , an encoder-based generalizable NeRF model F G decouples the object representation from the original NeRF by using an encoder to extract per-pixel feature maps {F n } N n=1 from the input images. To synthesize a pixel associated with a point x along a ray in direction d, it projects {F n } N n=1 from nearby views and aggregates this multi-view pixel-level information using techniques such as average pooling [67] or cost volumes [6]. This results in a fused feature embedding F , allowing F G to predict density σ and RGB values c for each point along the ray, as expressed by:\nσ, c = F G (x, d, F ).\n(2)\nIn our method, we adopt the recently introduced fully attention-based generalizable NeRF method, GNT [52], as both the backbone and the baseline. GNT shares a similar paradigm with (2) but employs transformers [53] to aggregate pixel-level features into F . It uses a view transformer to fuse projected pixel-level features from reference views, and a ray transformer to combine features from different points along a ray, eliminating the need for volume rendering. Further details about GNT can be found in [52]. We also demonstrate that our approach can be extended to other generalizable NeRF models, as discusses in Section 4.2." 
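To illustrate the encoder-based formulation in (1) and (2), the sketch below shows the generic pattern of projecting a query point into each reference view, bilinearly sampling the per-view feature maps, and averaging the results into a fused feature before decoding density and color. Names such as `aggregate_pixel_features` and `decoder` are placeholders, occlusion and out-of-frustum handling are omitted, and GNT replaces the average and MLP decoder with view and ray transformers, so this is a sketch of the general paradigm rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def aggregate_pixel_features(x, feat_maps, intrinsics, extrinsics):
    """x: (P, 3) query points in world space; feat_maps: (N, C, H, W) per-view features.
    Projects every point into each of the N reference views, bilinearly samples the
    feature maps, and mean-pools over views to obtain the fused per-point feature."""
    N, C, H, W = feat_maps.shape
    ones = torch.ones_like(x[:, :1])
    x_h = torch.cat([x, ones], dim=-1)                       # homogeneous coords (P, 4)
    feats = []
    for n in range(N):
        cam = (extrinsics[n] @ x_h.T)[:3]                    # world -> camera (3, P)
        uv = intrinsics[n] @ cam                             # camera -> pixel
        uv = uv[:2] / uv[2:].clamp(min=1e-6)                 # perspective divide (2, P)
        grid = torch.stack([2 * uv[0] / (W - 1) - 1,         # normalize to [-1, 1]
                            2 * uv[1] / (H - 1) - 1], dim=-1).view(1, -1, 1, 2)
        sampled = F.grid_sample(feat_maps[n:n + 1], grid, align_corners=True)
        feats.append(sampled.view(C, -1).T)                  # (P, C)
    return torch.stack(feats).mean(dim=0)                    # fused feature, (P, C)

# A generalizable NeRF head then predicts density and color from (x, d, fused feature):
#   sigma, rgb = decoder(positional_encode(x), d, fused)     # cf. eq. (2)
```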
}, { "figure_ref": [], "heading": "Scene-level semantic representation", "publication_ref": [ "b50", "b66", "b51", "b51" ], "table_ref": [], "text": "Both encoder-based generalizable NeRF models [6, 51,67] and their attention-based counterparts [50, 52] mainly rely on pixel-level feature representations. While effective, this approach restricts their capability for a holistic scene understanding, especially when reference views are scarce. This limitation also exacerbates challenges in resolving depth ambiguities between points along the rays, a problem that becomes more pronounced with fewer reference views.\nTo address these challenges, we introduce semantic representations aimed at enriching the scene-level understanding. We utilize a shared CNN encoder and apply a Global Average Pooling (GAP) to its C-dimensional output feature map, generating N global feature vectors {S n } N n=1 corresponding to each input view. These feature vectors are then averaged to form a unified scene-level representation S, i.e.,\nS = 1 N N n=1 S n ∈ R C .(3)\nIn GNT [52], which uses a view transformer to aggregate pxiel-level features into an L-dimensional vector F , we extend this by concatenating F with S to construct a globallocal embedding E, as formulated by:\nE = Concat( F , S) ∈ R L+C . (4\n)\nThis combined embedding E is then subjected to the standard self-attention mechanism used in GNT [52]. This approach enables the scene-level semantic representation (S) to integrate with per-point features ( F ), offering a more nuanced understanding at both levels. It also allows each point to selectively draw from the scene-level information. To maintain dimensional consistency across the input and output layers of multiple transformer modules, we employ a two-layer MLP to project the enhanced features back to the original dimension L of the per-point embedding F . " }, { "figure_ref": [], "heading": "Calibration of semantic representation", "publication_ref": [ "b51" ], "table_ref": [], "text": "The integration of the scene-level semantic representation S, generated through simple averaging of global feature vectors as in (3), improves rendering quality. However, this approach has limitations when dealing with multiple views. As illustrated in Figure 3, viewing the same object from distinct angles may retain spatial attributes but can lead to conflicting semantic meanings. Merely averaging these global feature vectors without accounting for camera positions can result in a distorted scene-level understanding.\nTo mitigate this inconsistency, we propose a semantic calibration technique using feature rotation. This adjustment aligns the semantic representation across different camera poses. Our inspiration comes from the use of camera pose projection in computing the fused pixel-level feature F and is further motivated by [45], which demonstrates that explicit rotation operations in feature spaces are feasible. Unlike point clouds in [45] that inherently lack a defined canonical orientation, NeRF explicitly encodes differences between camera viewpoints, thereby enabling precise calibration between the reference and target images.\nBuilding on this observation, we calculate calibrated semantic representations { S n } N n=1 from the N original semantic representations {S n } N n=1 derived from the reference views. We accomplish this by leveraging their respective rotation matrices {T n } N n=1 to model the rotational variations between each input view and the target view. 
The alignment of the original semantic features is performed as follows:\nS n = P(T n • P -1 (S n )), where T n = T w2c out • T c2w n .(5)\nHere, T c2w n is the inverse of the extrinsic matrix used for I n , and T w2c out is the extrinsic matrix for the target view. P(•) and P -1 (•) are the flattening and inverse flattening operations, which reshape the feature to a 1D vector of shape 1-by-C and a 2D matrix of shape 3-by-C 3 , respectively. Note that for the extrinsic matrix, we consider only the top-left 3 × 3 submatrix that accounts for rotation. Using GAP to condense feature maps of various sizes into a 1-by- C feature vector eliminates the need for scaling parameters in the semantic representation. As a result, modeling the intrinsic matrix is unnecessary, assuming no skewing, making our approach adaptable to different camera configurations.\nWith the calibrated semantic features { S n } N n=1 for each reference view, we average these, similar to (3), to obtain the calibrated scene-level semantic representation S, i.e.,\nS = 1 N N n=1 S n ∈ R C .(6)\nFinally, akin to (4), we concatenate the pixel-level fused feature F with the calibrated scene-level semantic representation S to form the final global-local embedding E:\nE = Concat( F , S) ∈ R L+C .(7)\nThis unified embedding then feeds into ray transformers, passing through standard self-attention mechanisms. In the original GNT [52], multiple view transformers and ray transformers are stacked alternately for sequential feature processing. The last ray transformer integrates features from multiple points along a ray to yield the final RGB value. We denote the corresponding feature representations at stage k as F (k) and E (k) . Notably, the calibrated semantic representation S remains constant across these stages." }, { "figure_ref": [ "fig_2", "fig_0" ], "heading": "Sequential refinement", "publication_ref": [ "b11", "b6" ], "table_ref": [], "text": "While leveraging S improves consistency, a single, uniform S may not be adequate for deeper layers that demand more nuanced details. In fact, we find that deeper transformers capture finer details compared to shallower ones, as shown in Figure 4. To address this limitation, we introduce a sequential semantic feature refinement module that progressively enriches features at each stage. Specifically, we learn the residual ∆ (k) to update S at each stage k as follows:\nS (k+1) ← S (k) + ∆ (k) .(8)\nHere, ∆ (k) is calculated by first performing specialized cross-attentions between S (k) and the original, uncalibrated per-frame semantic features {S n } N n=1 (see Figure 2), followed by their summation. Our goal is to fuse information from different source views to enrich the scene-level semantic representation with features from each reference frame. With this sequential refinement, we combine S (k) with F (k) at each stage, yielding a stage-specific globallocal embedding E (k) , which completes our approach.\nDiscussion. In scenarios with few reference views, especially when limited to just one, the primary issue is inaccurate depth estimation, resulting in depth ambiguity [12]. This compromises the quality of images when rendered from novel viewpoints. Despite this, essential visual information generally remains accurate across different camera poses. Incorporating our proposed scene-level representation improves the understanding of the overall scene layout [7], distinguishing our approach from existing generalizable NeRF models that predict pixels individually. 
The advantage of our approach is its holistic view; the semantic representation enriches per-pixel predictions by providing broader context. This semantic constraint ensures that fewer abrupt changes between adjacent points. Consequently, it leads to more reliable depth estimations, making the images rendered from limited reference views more plausible." }, { "figure_ref": [], "heading": "Training objectives", "publication_ref": [ "b36" ], "table_ref": [], "text": "During training, we employ three different loss functions: MSE loss. The Mean Square Error (MSE) loss is the standard photometric loss used in NeRF [37]. It computes the MSE between the actual and predicted pixel values.\nCentral loss. To ensure frame-wise calibrated semantic features { S n } N n=1 are consistent when projected onto the same target view, we introduce a central loss, defined as:\nL central = 1 N N n=1 S n -S 1 .(9)\nPoint-wise perceptual loss. During the rendering of a bath of pixels in a target view, we inpaint the ground-truth image by replacing the corresponding pixels with the predicted ones. Then, a perceptual loss [23] is computed between the inpainted image and the target image to guide the training process at the whole-image level.\nThe final loss function is formulated as follows:\nL = L MSE + λ 1 L central + λ 2 L perc . (10\n)\nEmpirically, we set λ 1 = 1 and λ 2 = 0.001, following [29]." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental setups", "publication_ref": [ "b51", "b71", "b55", "b67", "b47", "b51", "b51", "b67", "b66", "b55", "b51", "b1", "b7" ], "table_ref": [], "text": "For the experimental setups, we begin by describing the datasets used in our experiments. This is followed by implementation details of our proposed method and the baseline methods we employed for comparison. Datasets. Firstly, following [52], we construct our training data from both synthetic and real data. This collection includes scanned models from Google Scanned Objects [14], RealEstate10K [72], and handheld phone captures [56]. For evaluation, we utilize real data encompassing complex scenes from sources such as LLFF [36], Shiny [59], and mip-NeRF 360 [4]. Additionally, we train and test our model using the recently released MVImgNet dataset [68]. We adhere to the official split, focusing on examples from the containers category, and select 2,500 scenes for training. During inference, we choose 100 scenes, using their first images as target views and the spatially nearest images as references. Since MVImgNet does not provide camera poses, we utilize COLMAP [47,48] to deduce the camera positions within these scenes.\nImplementation details. CaesarNeRF is built upon GNT [52], for which we maintain the same configuration, setting the ray and view transformers stack number (K) to 8 for generalizable setting and 4 for single-scene setting. The feature encoder extracts bottleneck features, applies GAP, and then uses a fully connected (FC) layer to reduce the input dimension C to 96. Training involves 500,000 iterations using the Adam optimizer [25], with learning rate set at 0.001 for the feature encoder and 0.0005 for CaesarN-eRF, halving them every 100,000 iterations. Each iteration samples 4,096 rays from a single scene. In line with [52] we randomly choose between 8 to 10 reference views for training, and 3 to 7 views when using the MVImgNet [68]. Baseline methods. 
We compare CaesarNeRF with several state-of-the-art methods suited for generalizable NeRF applications, including earlier works such as MVS-NeRF [6], PixelNeRF [67], and IBRNet [56], alongside more recent ones, including GPNR [50], NeuRay [33], GNT [52], GeoNeRF [22] and MatchNeRF [8]." }, { "figure_ref": [ "fig_4" ], "heading": "Results and analysis", "publication_ref": [ "b67", "b7", "b65", "b1", "b55", "b51", "b51", "b55", "b51", "b7", "b55", "b51" ], "table_ref": [ "tab_6", "tab_7" ], "text": "We compare results in two settings: a generalizable setting, where the model is trained on multiple scenes without finetuning during inference for both few and all reference view cases, and a single-scene setting where the model is trained and evaluated on just one scene. Following these comparisons, we conduct ablation studies and test the generalizability of our method with other state-of-the-art approaches.\nGeneralizable rendering. In the generalizable setting, we adopt two training strategies. First, we train the model on multiple datasets as described in Section 4. datasets. In addition, the model is trained and tested on the MVImgNet [68] for object-centric generalizability. (a) LLFF, Shiny, and mip-NeRF 360. The results for few-reference view scenarios on these datasets are shown in Tables 1, 2 and 3, respectively. Methods like Match-NeRF [8], MVSNeRF [66], and GeoNeRF [22] require at least two reference views. On the LLFF dataset, all methods experience a performance decline as the number of views decreases. CaesarNeRF, however, consistently outperforms others across varying reference view numbers, with the performance gap becoming more significant with fewer views. For example, with 3 views, while IBRNet [56] and GNT [52] have comparable PSNRs, CaesarNeRF demonstrates a more substantial lead in LPIPS and SSIM metrics.\nSimilar patterns are observed on the Shiny [59] and mip-NeRF 360 [4] datasets. We apply the highest-performing methods from the LLFF evaluations and report the results for those that produce satisfactory outcomes with few reference views. CaesarNeRF maintains superior performance throughout. Notably, for complex datasets like mip-NeRF 360 [4], which have sparse camera inputs, the quality of rendered images generally decreases with fewer available reference views. Nonetheless, CaesarNeRF shows the most robust performance compared to the other methods.\n(b) MVImgNet. We extend our comparison of CaesarN-eRF with GNT [52] and IBRNet [56] on the MVImgNet dataset, focusing on object-centric scenes, as shown in Table 4. We examine a variant of CaesarNeRF where semantic all three metrics, showing a significant improvement over our baseline method, GNT [52].\nAdaptability. To test the adaptability of our Caesar pipeline, we apply it to two other state-of-the-art methods that use view transformers, namely MatchNeRF [8] and IBRNet [56]. We demonstrate in Table 6 that our enhancements in scene-level semantic understanding significantly boost the performance of these methods across all metrics. This indicates that the Caesar framework is not only beneficial in our CaesarNeRF, which is based on GNT [52], but can also be a versatile addition to other NeRF pipelines.\nAblation analysis. We conduct ablation studies on the \"orchid\" scene from the LLFF dataset, with findings detailed in Table 7. Testing variations in representation and the impact of the sequential refinement and calibration modules, we find that increasing the latent size in GNT yields marginal benefits. 
However, incorporating even a modest semantic representation size distinctly improves results. The length of the semantic representation has a minimal impact on quality. Our ablation studies indicate that while sequential refinement and calibration each offer slight performance gains, their combined effect is most significant. In a single-scene context, semantic information is effectively embedded within the representation, making the benefits of the individual modules subtler. Together, however, they provide a framework where sequential refinement can leverage calibrated features for deeper insights.\nVisualizations. We present our visualization results in Figure 5, where we compare our method with others using one or two views from the LLFF dataset. Additional visual comparisons are provided in the supplementary materials. These visualizations highlight that in scenarios with few views, our method significantly surpasses the competitors, particularly excelling when only a single view is available. In such cases, CaesarNeRF demonstrates enhanced clarity, with sharper boundaries and more distinct objects." }, { "figure_ref": [], "heading": "Conclusion and limitation", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce CaesarNeRF, a few-shot and generalizable NeRF pipeline that combines scene-level semantic representations with per-pixel features, aiding rendering from novel camera positions with limited reference views. We calibrate the semantic representations across different input views and employ a sequential refinement network to offer distinct semantic representations at various levels. Our method has been extensively evaluated on a broad range of datasets, exhibiting state-of-the-art performance in both generalizable and single-scene settings.\nLimitations. CaesarNeRF could be further improved by integrating explicit depth information and generative capabilities, which could provide a richer basis for rendering from novel views with few reference images." } ]
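To make the calibration step of Section 3.3 and the central loss of Section 3.5 concrete, here is a minimal PyTorch sketch of equations (5), (6) and (9). It assumes the semantic dimension C is divisible by 3 and that only the 3x3 rotation blocks of the extrinsics are used, as stated in the text; the image encoder, the attention-based sequential refinement, and batching details are omitted, so this is an illustrative reading rather than the released implementation.

```python
import torch

def calibrate_semantic(S, ref_c2w, tgt_w2c):
    """S: (N, C) per-view semantic vectors from global average pooling.
    ref_c2w: (N, 4, 4) camera-to-world extrinsics of the reference views.
    tgt_w2c: (4, 4) world-to-camera extrinsics of the target view.
    Computes S_hat_n = P(T_n @ P^{-1}(S_n)) with T_n the relative rotation (eq. 5)."""
    N, C = S.shape
    assert C % 3 == 0, "semantic dimension must be divisible by 3"
    R_out = tgt_w2c[:3, :3]                           # rotation part of the target view
    S_hat = []
    for n in range(N):
        T_n = R_out @ ref_c2w[n, :3, :3]              # relative rotation, eq. (5)
        S_mat = S[n].view(3, C // 3)                  # P^{-1}: unflatten to 3 x (C/3)
        S_hat.append((T_n @ S_mat).reshape(-1))       # rotate, then flatten back (P)
    S_hat = torch.stack(S_hat)                        # (N, C)
    S_bar = S_hat.mean(dim=0)                         # calibrated scene vector, eq. (6)
    return S_hat, S_bar

def central_loss(S_hat, S_bar):
    """L_central = (1/N) * sum_n ||S_hat_n - S_bar||_1, eq. (9)."""
    return (S_hat - S_bar).abs().sum(dim=-1).mean()

# The calibrated scene vector is then concatenated with the fused per-point feature:
#   E = torch.cat([F_pix, S_bar.expand(F_pix.shape[0], -1)], dim=-1)   # cf. eq. (7)
```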
Figure 1. Novel view synthesis for novel scenes using ONE reference view on Shiny [59], LLFF [36], and MVImgNet [68] (top to bottom). Each pair of images corresponds to the results from GNT [52] (left) and CaesarNeRF (right).
CaesarNeRF: Calibrated Semantic Representation for Few-Shot Generalizable Neural Rendering
[ { "figure_caption": "Figure 2 .2Figure 2. Overview of CaesarNeRF. CaesarNeRF employs a shared encoder to capture two types of features from input views, including scene-level semantic representation {Sn} and pixel-level feature representation {Fn}. Following calibration and aggregation of {Sn} from various views, we concatenate it with the pixel-level fused feature, processed by the view transformer. Subsequent use of the raytransformer, coupled with sequential refinement, enables us to render the final RGB values for each pixel in the target view.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3. An illustration of conflicting semantic meanings from multiple viewpoints of the same object. When observing the cup from distinct angles, the features retain spatial information but are inconsistent in the scene-level semantic understanding.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Visualization of decoded feature maps for \"orchid\" in LLFF dataset, produced by ray transformers [52] at different stages. From left to right, the transformer stages increase in depth.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Using one image as reference view. (b) Using two images as reference view.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Comparative visualization of our proposed method against other state-of-the-art methods.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "InputMethodPSNR (↑) LPIPS (↓)SSIM (↑)PixelNeRF [67]9.320.8980.264GPNR [50]15.910.5270.4001-viewNeuRay [33] IBRNet [56]16.18 16.850.584 0.5420.393 0.507GNT [52]16.570.5000.424Ours18.310.4350.521PixelNeRF [67]11.230.7660.282GPNR [50]18.790.3800.575NeuRay [33]17.710.3360.646GeoNeRF [22]18.760.4730.5002-viewMatchNeRF [8]21.080.2720.689MVSNeRF [6]19.150.3360.704IBRNet [56]21.250.3330.685GNT [52]20.880.2510.691Ours21.940.2240.736PixelNeRF [67]11.240.6710.486GPNR [50]21.570.2880.695NeuRay [33]18.260.3100.672GeoNeRF [22]23.400.2460.7663-viewMatchNeRF [8]22.300.2340.731MVSNeRF [6]19.840.3140.729IBRNet [56]23.000.2620.752GNT [52]23.210.1780.782Ours23.450.1760.794", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": ", Results for generalizable scene rendering on Shiny with few reference views.", "figure_data": "InputMethodPSNR (↑) LPIPS (↓)SSIM (↑)IBRNet [56]14.930.6250.4011-viewGNT [52]15.990.5480.400Ours17.570.4670.472MatchNeRF [8]20.280.2780.636MVSNeRF [6]17.250.4160.5772-viewIBRNet [56]18.400.4000.595GNT [52]20.420.3270.617Ours21.470.2930.652MatchNeRF [8]20.770.2490.672MVSNeRF [6]18.550.3430.6453-viewIBRNet [56]21.960.2810.710GNT [52]22.470.2470.720Ours22.740.2410.723InputMethodPSNR (↑) LPIPS (↓)SSIM (↑)IBRNet [56]14.120.6820.2831-viewGNT [52]13.480.6300.314Ours15.200.5920.350MatchNeRF [8]17.000.5660.392MVSNeRF [6]14.230.6810.3662-viewIBRNet [56]16.240.6180.360GNT [52]15.210.5590.370Ours17.050.5380.403MatchNeRF [8]17.260.5510.407MVSNeRF [6]14.290.6740.4063-viewIBRNet [56]17.700.5550.420GNT [52]15.590.5380.395Ours17.550.5120.430", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { 
"figure_caption": "Results for generalizable scene rendering on mip-NeRF 360 with few reference views.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results on MVImgNet across varying numbers of reference views. 'C.' represents the use of calibration before averaging.", "figure_data": "1 and eval-", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results of per-scene optimization on LLFF, in comparison with state-of-the-art methods.", "figure_data": "MethodPSNR (↑) LPIPS (↓)SSIM (↑)LLFF [36]23.270.2120.798NeRF [38]26.500.2500.811NeX [59]27.260.1790.904GNT [52]27.240.0870.889Ours27.640.0810.904MethodInputPSNR (↑)SSIM (↑)LPIPS (↓)MatchNeRF [8]2-view 3-view20.59 22.430.775 0.8050.276 0.244Caesar-MatchNeRF2-view 3-view21.55 22.980.782 0.8240.268 0.2421-view16.850.5070.542IBRNet [56]2-view21.250.6850.3333-view23.000.7520.2621-view17.760.5430.500Caesar-IBRNet2-view22.390.7400.2753-view23.670.7720.242", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results on LLFF for few-shot generalization after adapting Caesar to other baseline methods.", "figure_data": "Model VariationsPSNR (↑) LPIPS (↓) SSIM (↑)R len.Seq.Cali.(Baseline GNT)20.930.1850.731Ext.20.850.1730.735+3221.430.1520.763+6421.490.1490.766+9621.460.1500.766+12821.490.1470.763+ 96✓21.530.1460.770+ 96✓21.510.1470.769+ 96✓✓21.670.1390.781", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Ablations on the semantic representation length R, sequential refinement (Seq.) and calibration (Cali.). 'Ext.' denotes the extension of per-pixel representation to a length of 64 in GNT.", "figure_data": "calibration is substituted with simple feature averaging frommultiple frames. While the performance of all methods im-proves with more views, CaesarNeRF consistently outper-forms GNT and IBRNet. Notably, CaesarNeRF with featureaveraging surpasses GNT in 1-view case but lags with ad-ditional views, implying that the absence of calibration leadto ambiguities when rendering from multiple views.Per-scene optimization. Beyond the multi-scene gen-eralizable setting, we demonstrate per-scene optimizationresults in", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Testing across 8 categories from the LLFF dataset [36], we calculate the average performance over these scenes. CaesarNeRF consistently outperforms nearly all state-of-the-art methods in the comparison, across", "figure_data": "", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" } ]
Haidong Zhu; Tianyu Ding; Tianyi Chen; Ilya Zharkov; Ram Nevatia; Luming Liang
[ { "authors": "Chong Bao; Yinda Zhang; Bangbang Yang; Tianxing Fan; Zesong Yang; Hujun Bao; Guofeng Zhang; Zhaopeng Cui", "journal": "", "ref_id": "b0", "title": "Sine: Semantic-driven image-based nerf editing with prior-guided editing field", "year": "2023" }, { "authors": "Yanqi Bao; Tianyu Ding; Jing Huo; Wenbin Li; Yuxin Li; Yang Gao", "journal": "", "ref_id": "b1", "title": "Insertnerf: Instilling generalizability into nerf with hypernet modules", "year": "2023" }, { "authors": "Yanqi Bao; Yuxin Li; Jing Huo; Tianyu Ding; Xinyue Liang; Wenbin Li; Yang Gao", "journal": "", "ref_id": "b2", "title": "Where and how: Mitigating confusion in neural radiance fields from sparse inputs", "year": "2023" }, { "authors": "Jonathan T Barron; Ben Mildenhall; Dor Verbin; Peter Pratul P Srinivasan; Hedman", "journal": "", "ref_id": "b3", "title": "Mip-nerf 360: Unbounded anti-aliased neural radiance fields", "year": "2022" }, { "authors": "Chris Buehler; Michael Bosse; Leonard Mcmillan; Steven Gortler; Michael Cohen", "journal": "", "ref_id": "b4", "title": "Unstructured lumigraph rendering", "year": "2001" }, { "authors": "Anpei Chen; Zexiang Xu; Fuqiang Zhao; Xiaoshuai Zhang; Fanbo Xiang; Jingyi Yu; Hao Su", "journal": "", "ref_id": "b5", "title": "Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo", "year": "2021" }, { "authors": "Weihua Chen; Xianzhe Xu; Jian Jia; Hao Luo; Yaohua Wang; Fan Wang; Rong Jin; Xiuyu Sun", "journal": "", "ref_id": "b6", "title": "Beyond appearance: a semantic controllable self-supervised learning framework for human-centric visual tasks", "year": "2023" }, { "authors": "Yuedong Chen; Haofei Xu; Qianyi Wu; Chuanxia Zheng; Tat-Jen Cham; Jianfei Cai", "journal": "", "ref_id": "b7", "title": "Explicit correspondence matching for generalizable neural radiance fields", "year": "2023" }, { "authors": "Julian Chibane; Aayush Bansal; Verica Lazova; Gerard Pons-Moll", "journal": "", "ref_id": "b8", "title": "Stereo radiance fields (srf): Learning view synthesis for sparse views of novel scenes", "year": "2021" }, { "authors": "Wenyan Cong; Hanxue Liang; Peihao Wang; Zhiwen Fan; Tianlong Chen; Mukund Varma; Yi Wang; Zhangyang Wang", "journal": "", "ref_id": "b9", "title": "Enhancing nerf akin to enhancing llms: Generalizable nerf transformer with mixture-of-view-experts", "year": "2023" }, { "authors": "Camillo J Paul E Debevec; Jitendra Taylor; Malik", "journal": "", "ref_id": "b10", "title": "Modeling and rendering architecture from photographs: A hybrid geometry-and image-based approach", "year": "1996" }, { "authors": "Congyue Deng; \" Chiyu; \" Max; Charles R Jiang; Xinchen Qi; Yin Yan; Leonidas Zhou; Dragomir Guibas; Anguelov", "journal": "", "ref_id": "b11", "title": "Nerdi: Single-view nerf synthesis with language-guided diffusion as general image priors", "year": "2023" }, { "authors": "Kangle Deng; Andrew Liu; Jun-Yan Zhu; Deva Ramanan", "journal": "", "ref_id": "b12", "title": "Depth-supervised nerf: Fewer views and faster training for free", "year": "2022" }, { "authors": "Laura Downs; Anthony Francis; Nate Koenig; Brandon Kinman; Ryan Hickman; Krista Reymann; Thomas B Mchugh; Vincent Vanhoucke", "journal": "", "ref_id": "b13", "title": "Google scanned objects: A highquality dataset of 3d scanned household items", "year": "2022" }, { "authors": "Sara Fridovich-Keil; Giacomo Meanti; Frederik Rahbaek Warburg; Benjamin Recht; Angjoo Kanazawa", "journal": "", "ref_id": "b14", "title": "K-planes: Explicit radiance fields in space, time, and 
appearance", "year": "2023" }, { "authors": "Yang Fu; Ishan Misra; Xiaolong Wang", "journal": "", "ref_id": "b15", "title": "Multiplane nerfsupervised disentanglement of depth and camera pose from videos", "year": "2022" }, { "authors": "Yiming Gao; Yan-Pei Cao; Ying Shan", "journal": "", "ref_id": "b16", "title": "Surfelnerf: Neural surfel radiance fields for online photorealistic reconstruction of indoor scenes", "year": "2023" }, { "authors": "Muhammad Zubair Irshad; Sergey Zakharov; Katherine Liu; Vitor Guizilini; Thomas Kollar; Adrien Gaidon; Zsolt Kira; Rares Ambrus", "journal": "", "ref_id": "b17", "title": "Neo 360: Neural fields for sparse view synthesis of outdoor scenes", "year": "2023" }, { "authors": "Ajay Jain; Matthew Tancik; Pieter Abbeel", "journal": "", "ref_id": "b18", "title": "Putting nerf on a diet: Semantically consistent few-shot view synthesis", "year": "2021" }, { "authors": "Ajay Jain; Ben Mildenhall; Jonathan T Barron; Pieter Abbeel; Ben Poole", "journal": "", "ref_id": "b19", "title": "Zero-shot text-guided object generation with dream fields", "year": "2022" }, { "authors": "Yifan Jiang; Peter Hedman; Ben Mildenhall; Dejia Xu; Jonathan T Barron; Zhangyang Wang; Tianfan Xue", "journal": "", "ref_id": "b20", "title": "Alignerf: High-fidelity neural radiance fields via alignmentaware training", "year": "2023" }, { "authors": "Mohammad Mahdi; Johari ; Yann Lepoittevin; François Fleuret", "journal": "", "ref_id": "b21", "title": "Geonerf: Generalizing nerf with geometry priors", "year": "2022" }, { "authors": "Justin Johnson; Alexandre Alahi; Li Fei-Fei", "journal": "Springer", "ref_id": "b22", "title": "Perceptual losses for real-time style transfer and super-resolution", "year": "2016" }, { "authors": "Adam Kania; Artur Kasymov; Maciej Zięba; Przemysław Spurek", "journal": "", "ref_id": "b23", "title": "Hypernerfgan: Hypernetwork approach to 3d nerf gan", "year": "2023" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b24", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Minseop Kwak; Jiuhn Song; Seungryong Kim", "journal": "", "ref_id": "b25", "title": "Geconerf: Few-shot neural radiance fields via geometric consistency", "year": "2023" }, { "authors": "Zhengqi Li; Qianqian Wang; Forrester Cole; Richard Tucker; Noah Snavely", "journal": "", "ref_id": "b26", "title": "Dynibar: Neural dynamic image-based rendering", "year": "2023" }, { "authors": "Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin", "journal": "", "ref_id": "b27", "title": "Magic3d: High-resolution text-to-3d content creation", "year": "2023" }, { "authors": "Haotong Lin; Sida Peng; Zhen Xu; Yunzhi Yan; Qing Shuai; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b28", "title": "Efficient neural radiance fields for interactive free-viewpoint video", "year": "2022" }, { "authors": "Yiqi Lin; Haotian Bai; Sijia Li; Haonan Lu; Xiaodong Lin; Hui Xiong; Lin Wang", "journal": "", "ref_id": "b29", "title": "Componerf: Text-guided multiobject compositional nerf with editable 3d scene layout", "year": "2023" }, { "authors": "Lingjie Liu; Marc Habermann; Viktor Rudnev; Kripasindhu Sarkar; Jiatao Gu; Christian Theobalt", "journal": "TOC", "ref_id": "b30", "title": "Neural actor: Neural free-view synthesis of human actors with pose control", "year": "2021" }, { "authors": "Ruoshi Liu; Rundi Wu; Basile Van Hoorick; Pavel Tokmakov; Sergey Zakharov; Carl 
Vondrick", "journal": "", "ref_id": "b31", "title": "Zero-1-to-3: Zero-shot one image to 3d object", "year": "2023" }, { "authors": "Yuan Liu; Sida Peng; Lingjie Liu; Qianqian Wang; Peng Wang; Christian Theobalt; Xiaowei Zhou; Wenping Wang", "journal": "", "ref_id": "b32", "title": "Neural rays for occlusion-aware image-based rendering", "year": "2022" }, { "authors": "Ricardo Martin-Brualla; Noha Radwan; S M Mehdi; Jonathan T Sajjadi; Alexey Barron; Daniel Dosovitskiy; Duckworth", "journal": "", "ref_id": "b33", "title": "Nerf in the wild: Neural radiance fields for unconstrained photo collections", "year": "2021" }, { "authors": "Gal Metzer; Elad Richardson; Or Patashnik; Raja Giryes; Daniel Cohen-Or", "journal": "", "ref_id": "b34", "title": "Latent-nerf for shape-guided generation of 3d shapes and textures", "year": "2023" }, { "authors": "Ben Mildenhall; P Pratul; Rodrigo Srinivasan; Nima Ortiz-Cayon; Ravi Khademi Kalantari; Ren Ramamoorthi; Abhishek Ng; Kar", "journal": "TOG", "ref_id": "b35", "title": "Local light field fusion: Practical view synthesis with prescriptive sampling guidelines", "year": "2007" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "", "ref_id": "b36", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Communications of the ACM", "ref_id": "b37", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2021" }, { "authors": "Michael Niemeyer; Jonathan T Barron; Ben Mildenhall; S M Mehdi; Andreas Sajjadi; Noha Geiger; Radwan", "journal": "", "ref_id": "b38", "title": "Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs", "year": "2022" }, { "authors": "Atsuhiro Noguchi; Xiao Sun; Stephen Lin; Tatsuya Harada", "journal": "", "ref_id": "b39", "title": "Neural articulated radiance field", "year": "2021" }, { "authors": "Keunhong Park; Utkarsh Sinha; Jonathan T Barron; Sofien Bouaziz; Dan B Goldman; Steven M Seitz; Ricardo Martin-Brualla", "journal": "", "ref_id": "b40", "title": "Nerfies: Deformable neural radiance fields", "year": "2021" }, { "authors": "Sida Peng; Junting Dong; Qianqian Wang; Shangzhan Zhang; Qing Shuai; Xiaowei Zhou; Hujun Bao", "journal": "", "ref_id": "b41", "title": "Animatable neural radiance fields for modeling dynamic human bodies", "year": "2021" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "", "ref_id": "b42", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2022" }, { "authors": "Albert Pumarola; Enric Corona; Gerard Pons-Moll; Francesc Moreno-Noguer", "journal": "", "ref_id": "b43", "title": "D-nerf: Neural radiance fields for dynamic scenes", "year": "2021" }, { "authors": "Hao Charles R Qi; Kaichun Su; Leonidas J Mo; Guibas", "journal": "", "ref_id": "b44", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2017" }, { "authors": "Barbara Roessle; Jonathan T Barron; Ben Mildenhall; Matthias Pratul P Srinivasan; Nießner", "journal": "", "ref_id": "b45", "title": "Dense depth priors for neural radiance fields from sparse input views", "year": "2022" }, { "authors": "Johannes Lutz; Schönberger ; Jan-Michael Frahm", "journal": "", "ref_id": "b46", "title": "Structure-from-motion revisited", "year": "2016" }, { "authors": 
"Johannes Lutz Schönberger; Enliang Zheng; Marc Pollefeys; Jan-Michael Frahm", "journal": "", "ref_id": "b47", "title": "Pixelwise view selection for unstructured multi-view stereo", "year": "2016" }, { "authors": "Ryan Shue; Eric Ryan Chan; Ryan Po; Zachary Ankner; Jiajun Wu; Gordon Wetzstein", "journal": "", "ref_id": "b48", "title": "3d neural field generation using triplane diffusion", "year": "2023" }, { "authors": "Mohammed Suhail; Carlos Esteves; Leonid Sigal; Ameesh Makadia", "journal": "", "ref_id": "b49", "title": "Generalizable patch-based neural rendering", "year": "2022" }, { "authors": "Alex Trevithick; Bo Yang", "journal": "", "ref_id": "b50", "title": "Grf: Learning a general radiance field for 3d representation and rendering", "year": "2021" }, { "authors": "Mukund Varma; Peihao Wang; Xuxi Chen; Tianlong Chen; Subhashini Venugopalan; Zhangyang Wang", "journal": "ICLR", "ref_id": "b51", "title": "Is attention all that nerf needs?", "year": "2023" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "NeurIPS", "ref_id": "b52", "title": "Attention is all you need", "year": "2017" }, { "authors": "Michael Waechter; Nils Moehrle; Michael Goesele", "journal": "", "ref_id": "b53", "title": "Let there be color! large-scale texturing of 3d reconstructions", "year": "2014" }, { "authors": "Guangcong Wang; Zhaoxi Chen; Chen Change Loy; Ziwei Liu", "journal": "", "ref_id": "b54", "title": "Sparsenerf: Distilling depth ranking for few-shot novel view synthesis", "year": "2023" }, { "authors": "Qianqian Wang; Zhicheng Wang; Kyle Genova; P Pratul; Howard Srinivasan; Jonathan T Zhou; Ricardo Barron; Noah Martin-Brualla; Thomas Snavely; Funkhouser", "journal": "", "ref_id": "b55", "title": "Ibrnet: Learning multi-view image-based rendering", "year": "2021" }, { "authors": "Tengfei Wang; Bo Zhang; Ting Zhang; Shuyang Gu; Jianmin Bao; Tadas Baltrusaitis; Jingjing Shen; Dong Chen; Fang Wen; Qifeng Chen", "journal": "", "ref_id": "b56", "title": "Rodin: A generative model for sculpting 3d digital avatars using diffusion", "year": "2023" }, { "authors": "Yi Wei; Shaohui Liu; Yongming Rao; Wang Zhao; Jiwen Lu; Jie Zhou", "journal": "", "ref_id": "b57", "title": "Nerfingmvs: Guided optimization of neural radiance fields for indoor multi-view stereo", "year": "2021" }, { "authors": "Suttisak Wizadwongsa; Pakkapon Phongthawee; Jiraphon Yenphraphai; Supasorn Suwajanakorn", "journal": "", "ref_id": "b58", "title": "Nex: Real-time view synthesis with neural basis expansion", "year": "2021" }, { "authors": "Yuanbo Xiangli; Linning Xu; Xingang Pan; Nanxuan Zhao; Anyi Rao; Christian Theobalt; Bo Dai; Dahua Lin", "journal": "", "ref_id": "b59", "title": "Bungeenerf: Progressive neural radiance field for extreme multi-scale scene rendering", "year": "2022" }, { "authors": "Dejia Xu; Yifan Jiang; Peihao Wang; Zhiwen Fan; Humphrey Shi; Zhangyang Wang", "journal": "", "ref_id": "b60", "title": "Sinnerf: Training neural radiance fields on complex scenes from a single image", "year": "2022" }, { "authors": "Qiangeng Xu; Zexiang Xu; Julien Philip; Sai Bi; Zhixin Shu; Kalyan Sunkavalli; Ulrich Neumann", "journal": "", "ref_id": "b61", "title": "Point-nerf: Pointbased neural radiance fields", "year": "2022" }, { "authors": "Bangbang Yang; Yinda Zhang; Yinghao Xu; Yijin Li; Han Zhou; Hujun Bao; Guofeng Zhang; Zhaopeng Cui", "journal": "", "ref_id": "b62", "title": "Learning object-compositional neural radiance field for 
editable scene rendering", "year": "2021" }, { "authors": "Hao Yang; Lanqing Hong; Aoxue Li; Tianyang Hu; Zhenguo Li; Gim ; Hee Lee; Liwei Wang", "journal": "", "ref_id": "b63", "title": "Contranerf: Generalizable neural radiance fields for synthetic-to-real novel view synthesis via contrastive learning", "year": "2023" }, { "authors": "Jiawei Yang; Marco Pavone; Yue Wang", "journal": "", "ref_id": "b64", "title": "Freenerf: Improving few-shot neural rendering with free frequency regularization", "year": "2023" }, { "authors": "Yao Yao; Zixin Luo; Shiwei Li; Tian Fang; Long Quan", "journal": "", "ref_id": "b65", "title": "Mvsnet: Depth inference for unstructured multi-view stereo", "year": "2018" }, { "authors": "Alex Yu; Vickie Ye; Matthew Tancik; Angjoo Kanazawa", "journal": "", "ref_id": "b66", "title": "pixelnerf: Neural radiance fields from one or few images", "year": "2021" }, { "authors": "Xianggang Yu; Mutian Xu; Yidan Zhang; Haolin Liu; Chongjie Ye; Yushuang Wu; Zizheng Yan; Chenming Zhu; Zhangyang Xiong; Tianyou Liang", "journal": "", "ref_id": "b67", "title": "Mvimgnet: A largescale dataset of multi-view images", "year": "2023" }, { "authors": "Jason Zhang; Gengshan Yang; Shubham Tulsiani; Deva Ramanan", "journal": "NeurIPS", "ref_id": "b68", "title": "Ners: Neural reflectance surfaces for sparse-view 3d reconstruction in the wild", "year": "2021" }, { "authors": "Jingbo Zhang; Xiaoyu Li; Ziyu Wan; Can Wang; Jing Liao", "journal": "", "ref_id": "b69", "title": "Text2nerf: Text-driven 3d scene generation with neural radiance fields", "year": "2023" }, { "authors": "M I Zhenxing; Dan Xu", "journal": "ICLR", "ref_id": "b70", "title": "Switch-nerf: Learning scene decomposition with mixture of experts for large-scale neural radiance fields", "year": "2023" }, { "authors": "Tinghui Zhou; Richard Tucker; John Flynn; Graham Fyffe; Noah Snavely", "journal": "", "ref_id": "b71", "title": "Stereo magnification: Learning view synthesis using multiplane images", "year": "2018" }, { "authors": "Zhizhuo Zhou; Shubham Tulsiani", "journal": "", "ref_id": "b72", "title": "Sparsefusion: Distilling view-conditioned diffusion for 3d reconstruction", "year": "2023" }, { "authors": "Haidong Zhu; Yuyin Sun; Chi Liu; Lu Xia; Jiajia Luo; Nan Qiao; Ram Nevatia; Cheng-Hao Kuo", "journal": "", "ref_id": "b73", "title": "Multimodal neural radiance field", "year": "2023" }, { "authors": "Haidong Zhu; Zhaoheng Zheng; Wanrong Zheng; Ram Nevatia", "journal": "", "ref_id": "b74", "title": "Cat-nerf: Constancy-aware tx2former for dynamic body modeling", "year": "2023" }, { "authors": "Yiyu Zhuang; Hao Zhu; Xusen Sun; Xun Cao", "journal": "", "ref_id": "b75", "title": "Mofanerf: Morphable facial neural radiance field", "year": "2022" }, { "authors": "Dominik Zimny; Przemyslaw Trzciński; Spurek", "journal": "", "ref_id": "b76", "title": "Points2nerf: Generating neural radiance fields from 3d point cloud", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 394.47, 444.76, 150.64, 8.99 ], "formula_id": "formula_0", "formula_text": "σ, c = F (x, d).(1)" }, { "formula_coordinates": [ 4, 125.59, 98.32, 85.3, 9.68 ], "formula_id": "formula_1", "formula_text": "σ, c = F G (x, d, F )." }, { "formula_coordinates": [ 4, 122.39, 472.33, 163.98, 30.2 ], "formula_id": "formula_2", "formula_text": "S = 1 N N n=1 S n ∈ R C .(3)" }, { "formula_coordinates": [ 4, 108.93, 571.34, 173.56, 11.37 ], "formula_id": "formula_3", "formula_text": "E = Concat( F , S) ∈ R L+C . (4" }, { "formula_coordinates": [ 4, 282.49, 573.73, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 4, 315.05, 595.48, 230.06, 13.11 ], "formula_id": "formula_5", "formula_text": "S n = P(T n • P -1 (S n )), where T n = T w2c out • T c2w n .(5)" }, { "formula_coordinates": [ 5, 122.39, 288.16, 163.98, 30.2 ], "formula_id": "formula_6", "formula_text": "S = 1 N N n=1 S n ∈ R C .(6)" }, { "formula_coordinates": [ 5, 108.93, 371.26, 177.43, 11.37 ], "formula_id": "formula_7", "formula_text": "E = Concat( F , S) ∈ R L+C .(7)" }, { "formula_coordinates": [ 5, 119.07, 631.62, 167.29, 11.03 ], "formula_id": "formula_8", "formula_text": "S (k+1) ← S (k) + ∆ (k) .(8)" }, { "formula_coordinates": [ 5, 366.31, 435.41, 178.8, 30.2 ], "formula_id": "formula_9", "formula_text": "L central = 1 N N n=1 S n -S 1 .(9)" }, { "formula_coordinates": [ 5, 359.36, 571.66, 181.61, 9.81 ], "formula_id": "formula_10", "formula_text": "L = L MSE + λ 1 L central + λ 2 L perc . (10" }, { "formula_coordinates": [ 5, 540.96, 571.98, 4.15, 8.64 ], "formula_id": "formula_11", "formula_text": ")" } ]