Dataset column schema (from the dataset preview):
Unnamed: 0 — int64, values 0 to 2.72k
title — string, length 14 to 153
Arxiv link — string, length 1 to 31
authors — string, length 5 to 1.5k
arxiv_id — float64, values 2k to 2.41k
abstract — string, length 435 to 2.86k
Model — string, 1 distinct value
GitHub — string, 1 distinct value
Space — string, 1 distinct value
Dataset — string, 1 distinct value
id — int64, values 0 to 2.72k
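The rows below are a flattened per-record dump of this table (index, title, Arxiv link when present, authors, arxiv_id, abstract, the four always-empty link columns, and id). As a minimal sketch of how such an export might be loaded and cleaned with pandas — assuming the records are available as a CSV named cvpr2024_papers.csv with exactly the columns above (the file name is hypothetical) — note that arxiv_id is stored as float64, which is why previews can show thousands separators and drop trailing zeros:

import pandas as pd

# Minimal sketch: load the export and recover clean arXiv ids.
# Assumption: the records are stored in "cvpr2024_papers.csv" (hypothetical
# name) with the columns listed in the schema above.
df = pd.read_csv("cvpr2024_papers.csv")

# arxiv_id is a float64 column, so previews may render "2,311.1732" for
# arXiv:2311.17320. Re-derive a canonical string id from "Arxiv link"
# whenever a link is present.
df["arxiv_id_str"] = df["Arxiv link"].str.extract(r"abs/(\d{4}\.\d{4,5})", expand=False)

# Drop rows without an abstract and look up one record by its id.
papers = df.dropna(subset=["abstract"])
print(papers.loc[papers["id"] == 400, ["title", "arxiv_id_str"]])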
400
Cinematic Behavior Transfer via NeRF-based Differentiable Filming
http://arxiv.org/abs/2311.17754
Xuekun Jiang, Anyi Rao, Jingbo Wang, Dahua Lin, Bo Dai
2311.17754
In the evolving landscape of digital media and video production, the precise manipulation and reproduction of visual elements like camera movements and character actions are highly desired. Existing SLAM methods face limitations in dynamic scenes, and human pose estimation often focuses on 2D projections, neglecting 3D statuses. To address these issues, we first introduce a reverse filming behavior estimation technique. It optimizes camera trajectories by leveraging NeRF as a differentiable renderer and refining SMPL tracks. We then introduce a cinematic transfer pipeline that is able to transfer various shot types to a new 2D video or a 3D virtual environment. The incorporation of a 3D engine workflow enables superior rendering and control abilities, which also achieve a higher rating in the user study.
[]
[]
[]
[]
400
401
SeaBird: Segmentation in Bird's View with Dice Loss Improves Monocular 3D Detection of Large Objects
Abhinav Kumar, Yuliang Guo, Xinyu Huang, Liu Ren, Xiaoming Liu
null
Monocular 3D detectors achieve remarkable performance on cars and smaller objects. However their performance drops on larger objects leading to fatal accidents. Some attribute the failures to training data scarcity or the receptive field requirements of large objects. In this paper we highlight this understudied problem of generalization to large objects. We find that modern frontal detectors struggle to generalize to large objects even on nearly balanced datasets. We argue that the cause of failure is the sensitivity of depth regression losses to noise of larger objects. To bridge this gap we comprehensively investigate regression and dice losses examining their robustness under varying error levels and object sizes. We mathematically prove that the dice loss leads to superior noise-robustness and model convergence for large objects compared to regression losses for a simplified case. Leveraging our theoretical insights we propose SeaBird (Segmentation in Bird's View) as the first step towards generalizing to large objects. SeaBird effectively integrates BEV segmentation on foreground objects for 3D detection with the segmentation head trained with the dice loss. SeaBird achieves SoTA results on the KITTI-360 leaderboard and improves existing detectors on the nuScenes leaderboard particularly for large objects.
[]
[]
[]
[]
401
402
Text-Driven Image Editing via Learnable Regions
http://arxiv.org/abs/2311.16432
Yuanze Lin, Yi-Wen Chen, Yi-Hsuan Tsai, Lu Jiang, Ming-Hsuan Yang
2311.16432
Language has emerged as a natural interface for image editing. In this paper we introduce a method for region-based image editing driven by textual prompts without the need for user-provided masks or sketches. Specifically our approach leverages an existing pre-trained text-to-image model and introduces a bounding box generator to identify the editing regions that are aligned with the textual prompts. We show that this simple approach enables flexible editing that is compatible with current image generation models and is able to handle complex prompts featuring multiple objects complex sentences or lengthy paragraphs. We conduct an extensive user study to compare our method against state-of-the-art methods. The experiments demonstrate the competitive performance of our method in manipulating images with high fidelity and realism that correspond to the provided language descriptions. Our project webpage can be found at: https://yuanzelin.me/LearnableRegions_page.
[]
[]
[]
[]
402
403
Relation Rectification in Diffusion Model
http://arxiv.org/abs/2403.20249
Yinwei Wu, Xingyi Yang, Xinchao Wang
2403.20249
Despite their exceptional generative abilities large T2I diffusion models much like skilled but careless artists often struggle with accurately depicting visual relationships between objects. This issue as we uncover through careful analysis arises from a misaligned text encoder that struggles to interpret specific relationships and differentiate the logical order of associated objects. To resolve this we introduce a novel task termed Relation Rectification aiming to refine the model to accurately represent a given relationship it initially fails to generate. To address this we propose an innovative solution utilizing a Heterogeneous Graph Convolutional Network (HGCN). It models the directional relationships between relation terms and corresponding objects within the input prompts. Specifically we optimize the HGCN on a pair of prompts with identical relational words but reversed object orders supplemented by a few reference images. The lightweight HGCN adjusts the text embeddings generated by the text encoder ensuring accurate reflection of the textual relation in the embedding space. Crucially our method retains the parameters of the text encoder and diffusion model preserving the model's robust performance on unrelated descriptions. We validated our approach on a newly curated dataset of diverse relational data demonstrating both quantitative and qualitative enhancements in generating images with precise visual relations. Project page: https://wuyinwei-hah.github.io/rrnet.github.io/ .
[]
[]
[]
[]
403
404
NOPE: Novel Object Pose Estimation from a Single Image
Van Nguyen Nguyen, Thibault Groueix, Georgy Ponimatkin, Yinlin Hu, Renaud Marlet, Mathieu Salzmann, Vincent Lepetit
null
The practicality of 3D object pose estimation remains limited for many applications due to the need for prior knowledge of a 3D model and a training period for new objects. To address this limitation we propose an approach that takes a single image of a new object as input and predicts the relative pose of this object in new images without prior knowledge of the object's 3D model and without requiring training time for new objects and categories. We achieve this by training a model to directly predict discriminative embeddings for viewpoints surrounding the object. This prediction is done using a simple U-Net architecture with attention and conditioned on the desired pose which yields extremely fast inference. We compare our approach to state-of-the-art methods and show it outperforms them both in terms of accuracy and robustness.
[]
[]
[]
[]
404
405
Mocap Everyone Everywhere: Lightweight Motion Capture With Smartwatches and a Head-Mounted Camera
http://arxiv.org/abs/2401.00847
Jiye Lee, Hanbyul Joo
2401.00847
We present a lightweight and affordable motion capture method based on two smartwatches and a head-mounted camera. In contrast to the existing approaches that use six or more expert-level IMU devices our approach is much more cost-effective and convenient. Our method can make wearable motion capture accessible to everyone everywhere enabling 3D full-body motion capture in diverse environments. As a key idea to overcome the extreme sparsity and ambiguities of sensor inputs with different modalities we integrate 6D head poses obtained from the head-mounted cameras for motion estimation. To enable capture in expansive indoor and outdoor scenes we propose an algorithm to track and update floor level changes to define head poses coupled with a multi-stage Transformer-based regression module. We also introduce novel strategies leveraging visual cues of egocentric images to further enhance the motion capture quality while reducing ambiguities. We demonstrate the performance of our method on various challenging scenarios including complex outdoor environments and everyday motions including object interactions and social interactions among multiple individuals.
[]
[]
[]
[]
405
406
Fast ODE-based Sampling for Diffusion Models in Around 5 Steps
http://arxiv.org/abs/2312.00094
Zhenyu Zhou, Defang Chen, Can Wang, Chun Chen
2312.00094
Sampling from diffusion models can be treated as solving the corresponding ordinary differential equations (ODEs), with the aim of obtaining an accurate solution with as few function evaluations (NFE) as possible. Recently, various fast samplers utilizing higher-order ODE solvers have emerged and achieved better performance than the initial first-order one. However, these numerical methods inherently result in certain approximation errors, which significantly degrade sample quality with extremely small NFE (e.g., around 5). In contrast, based on the geometric observation that each sampling trajectory almost lies in a two-dimensional subspace embedded in the ambient space, we propose the Approximate MEan-Direction Solver (AMED-Solver), which eliminates truncation errors by directly learning the mean direction for fast diffusion sampling. Besides, our method can be easily used as a plugin to further improve existing ODE-based samplers. Extensive experiments on image synthesis with resolutions ranging from 32 to 512 demonstrate the effectiveness of our method. With only 5 NFE, we achieve 6.61 FID on CIFAR-10, 10.74 FID on ImageNet 64x64, and 13.20 FID on LSUN Bedroom. Our code is available at https://github.com/zju-pi/diff-sampler.
[]
[]
[]
[]
406
407
Dual-View Visual Contextualization for Web Navigation
http://arxiv.org/abs/2402.04476
Jihyung Kil, Chan Hee Song, Boyuan Zheng, Xiang Deng, Yu Su, Wei-Lun Chao
2402.04476
Automatic web navigation aims to build a web agent that can follow language instructions to execute complex and diverse tasks on real-world websites. Existing work primarily takes HTML documents as input which define the contents and action spaces (i.e. actionable elements and operations) of webpages. Nevertheless HTML documents may not provide a clear task-related context for each element making it hard to select the right (sequence of) actions. In this paper we propose to contextualize HTML elements through their "dual views" in webpage screenshots: each HTML element has its corresponding bounding box and visual content in the screenshot. We build upon the insight---web developers tend to arrange task-related elements nearby on webpages to enhance user experiences---and propose to contextualize each element with its neighbor elements using both textual and visual features. The resulting representations of HTML elements are more informative for the agent to take action. We validate our method on the recently released Mind2Web dataset which features diverse navigation domains and tasks on real-world websites. Our method consistently outperforms the baseline in all the scenarios including cross-task cross-website and cross-domain ones.
[]
[]
[]
[]
407
408
Language-driven Grasp Detection
An Dinh Vuong, Minh Nhat Vu, Baoru Huang, Nghia Nguyen, Hieu Le, Thieu Vo, Anh Nguyen
null
Grasp detection is a persistent and intricate challenge with various industrial applications. Recently, many methods and datasets have been proposed to tackle the grasp detection problem. However, most of them do not consider using natural language as a condition to detect the grasp poses. In this paper, we introduce Grasp-Anything++, a new language-driven grasp detection dataset featuring 1M samples, over 3M objects, and upwards of 10M grasping instructions. We utilize foundation models to create a large-scale scene corpus with corresponding images and grasp prompts. We approach the language-driven grasp detection task as a conditional generation problem. Drawing on the success of diffusion models in generative tasks, and given that language plays a vital role in this task, we propose a new language-driven grasp detection method based on diffusion models. Our key contribution is the contrastive training objective, which explicitly contributes to the denoising process to detect the grasp pose given the language instructions. We show that our approach is theoretically well-grounded. Intensive experiments show that our method outperforms state-of-the-art approaches and allows real-world robotic grasping. Finally, we demonstrate that our large-scale dataset enables zero-shot grasp detection and is a challenging benchmark for future work.
[]
[]
[]
[]
408
409
Towards Modern Image Manipulation Localization: A Large-Scale Dataset and Novel Methods
Chenfan Qu, Yiwu Zhong, Chongyu Liu, Guitao Xu, Dezhi Peng, Fengjun Guo, Lianwen Jin
null
In recent years, image manipulation localization has attracted increasing attention due to its pivotal role in ensuring social media security. However, effectively identifying forged regions remains an open challenge. The high acquisition cost and the severe scarcity of high-quality data are major factors hindering the performance improvement of modern image manipulation localization systems. To address this issue, we propose a novel paradigm, termed CAAA, to automatically and accurately annotate manually forged images from the web at the pixel level. We further propose a novel metric, termed QES, to assist in filtering out unreliable annotations. With CAAA and QES, we construct a large-scale, diverse, and high-quality dataset comprising 123,150 manually forged images with mask annotations. Furthermore, we develop a new model, termed APSC-Net, for accurate image manipulation localization. Extensive experiments show that our method outperforms previous state-of-the-art methods and that our dataset significantly improves the performance of various models on widely-used benchmarks. The dataset and codes are publicly available at https://github.com/qcf-568/MIML.
[]
[]
[]
[]
409
410
Mitigating Noisy Correspondence by Geometrical Structure Consistency Learning
http://arxiv.org/abs/2405.16996
Zihua Zhao, Mengxi Chen, Tianjie Dai, Jiangchao Yao, Bo Han, Ya Zhang, Yanfeng Wang
2405.16996
Noisy correspondence that refers to mismatches in cross-modal data pairs is prevalent on human-annotated or web-crawled datasets. Prior approaches to leverage such data mainly consider the application of uni-modal noisy label learning without amending the impact on both cross-modal and intra-modal geometrical structures in multimodal learning. Actually we find that both structures are effective to discriminate noisy correspondence through structural differences when being well-established. Inspired by this observation we introduce a Geometrical Structure Consistency (GSC) method to infer the true correspondence. Specifically GSC ensures the preservation of geometrical structures within and between modalities allowing for the accurate discrimination of noisy samples based on structural differences. Utilizing these inferred true correspondence labels GSC refines the learning of geometrical structures by filtering out the noisy samples. Experiments across four cross-modal datasets confirm that GSC effectively identifies noisy samples and significantly outperforms the current leading methods. Source code is available at https://github.com/MediaBrain-SJTU/GSC.
[]
[]
[]
[]
410
411
CLiC: Concept Learning in Context
http://arxiv.org/abs/2311.17083
Mehdi Safaee, Aryan Mikaeili, Or Patashnik, Daniel Cohen-Or, Ali Mahdavi-Amiri
2311.17083
This paper addresses the challenge of learning a local visual pattern of an object from one image and generating images depicting objects with that pattern. Learning a localized concept and placing it on an object in a target image is a nontrivial task as the objects may have different orientations and shapes. Our approach builds upon recent advancements in visual concept learning. It involves acquiring a visual concept (e.g. an ornament) from a source image and subsequently applying it to an object (e.g. a chair) in a target image. Our key idea is to perform in-context concept learning acquiring the local visual concept within the broader context of the objects they belong to. To localize the concept learning we employ soft masks that contain both the concept within the mask and the surrounding image area. We demonstrate our approach through object generation within an image showcasing plausible embedding of in-context learned concepts. We also introduce methods for directing acquired concepts to specific locations within target images employing cross-attention mechanisms and establishing correspondences between source and target objects. The effectiveness of our method is demonstrated through quantitative and qualitative experiments along with comparisons against baseline techniques.
[]
[]
[]
[]
411
412
CAD-SIGNet: CAD Language Inference from Point Clouds using Layer-wise Sketch Instance Guided Attention
Mohammad Sadil Khan, Elona Dupont, Sk Aziz Ali, Kseniya Cherenkova, Anis Kacem, Djamila Aouada
null
Reverse engineering in the realm of Computer-Aided Design (CAD) has been a longstanding aspiration, though not yet entirely realized. Its primary aim is to uncover the CAD process behind a physical object given its 3D scan. We propose CAD-SIGNet, an end-to-end trainable and auto-regressive architecture to recover the design history of a CAD model, represented as a sequence of sketch-and-extrusion, from an input point cloud. Our model learns CAD visual-language representations by layer-wise cross-attention between point cloud and CAD language embeddings. In particular, a new Sketch instance Guided Attention (SGA) module is proposed in order to reconstruct the fine-grained details of the sketches. Thanks to its auto-regressive nature, CAD-SIGNet not only reconstructs a unique full design history of the corresponding CAD model given an input point cloud but also provides multiple plausible design choices. This allows for an interactive reverse engineering scenario by providing designers with multiple next-step choices along the design process. Extensive experiments on publicly available CAD datasets showcase the effectiveness of our approach against existing baseline models in two settings, namely full design history recovery and conditional auto-completion from point clouds.
[]
[]
[]
[]
412
413
Object Recognition as Next Token Prediction
http://arxiv.org/abs/2312.02142
Kaiyu Yue, Bor-Chun Chen, Jonas Geiping, Hengduo Li, Tom Goldstein, Ser-Nam Lim
2312.02142
We present an approach to pose object recognition as next token prediction. The idea is to apply a language decoder that auto-regressively predicts the text tokens from image embeddings to form labels. To ground this prediction process in auto-regression we customize a non-causal attention mask for the decoder incorporating two key features: modeling tokens from different labels to be independent and treating image tokens as a prefix. This masking mechanism inspires an efficient method -- one-shot sampling -- to simultaneously sample tokens of multiple labels in parallel and rank generated labels by their probabilities during inference. To further enhance the efficiency we propose a simple strategy to construct a compact decoder by simply discarding the intermediate blocks of a pretrained language model. This approach yields a decoder that matches the full model's performance while being notably more efficient. The code is available at https://github.com/kaiyuyue/nxtp.
[]
[]
[]
[]
413
414
CLIB-FIQA: Face Image Quality Assessment with Confidence Calibration
Fu-Zhao Ou, Chongyi Li, Shiqi Wang, Sam Kwong
null
Face Image Quality Assessment (FIQA) is pivotal for guaranteeing the accuracy of face recognition in unconstrained environments. Recent progress in deep quality-fitting-based methods that train models to align with quality anchors has shown promise in FIQA. However, these methods heavily depend on a recognition model to yield quality anchors and indiscriminately treat the confidence of inaccurate anchors as equivalent to that of accurate ones during FIQA model training, leading to a fitting bottleneck issue. This paper seeks a solution by putting forward the Confidence-Calibrated Face Image Quality Assessment (CLIB-FIQA) approach, underpinned by the synergistic interplay between quality anchors and objective quality factors such as blur, pose, expression, occlusion, and illumination. Specifically, we devise a joint learning framework built upon a vision-language alignment model, which leverages the joint distribution with multiple quality factors to facilitate the quality fitting of the FIQA model. Furthermore, to alleviate the issue of the model placing excessive trust in inaccurate quality anchors, we propose a confidence calibration method that corrects the quality distribution by exploiting these objective quality factors to the fullest extent, characterized as the merged-factor distribution during training. Experimental results on eight datasets reveal the superior performance of the proposed method.
[]
[]
[]
[]
414
415
DVMNet: Computing Relative Pose for Unseen Objects Beyond Hypotheses
http://arxiv.org/abs/2403.13683
Chen Zhao, Tong Zhang, Zheng Dang, Mathieu Salzmann
2403.13683
Determining the relative pose of an object between two images is pivotal to the success of generalizable object pose estimation. Existing approaches typically approximate the continuous pose representation with a large number of discrete pose hypotheses which incurs a computationally expensive process of scoring each hypothesis at test time. By contrast we present a Deep Voxel Matching Network (DVMNet) that eliminates the need for pose hypotheses and computes the relative object pose in a single pass. To this end we map the two input RGB images reference and query to their respective voxelized 3D representations. We then pass the resulting voxels through a pose estimation module where the voxels are aligned and the pose is computed in an end-to-end fashion by solving a least-squares problem. To enhance robustness we introduce a weighted closest voxel algorithm capable of mitigating the impact of noisy voxels. We conduct extensive experiments on the CO3D LINEMOD and Objaverse datasets demonstrating that our method delivers more accurate relative pose estimates for novel objects at a lower computational cost compared to state-of-the-art methods. Our code is released at: https://github.com/sailor-z/DVMNet.
[]
[]
[]
[]
415
416
Transcriptomics-guided Slide Representation Learning in Computational Pathology
http://arxiv.org/abs/2405.11618
Guillaume Jaume, Lukas Oldenburg, Anurag Vaidya, Richard J. Chen, Drew F.K. Williamson, Thomas Peeters, Andrew H. Song, Faisal Mahmood
2405.11618
Self-supervised learning (SSL) has been successful in building patch embeddings of small histology images (e.g. 224 x 224 pixels) but scaling these models to learn slide embeddings from the entirety of giga-pixel whole-slide images (WSIs) remains challenging. Here we leverage complementary information from gene expression profiles to guide slide representation learning using multi-modal pre-training. Expression profiles constitute highly detailed molecular descriptions of a tissue that we hypothesize offer a strong task-agnostic training signal for learning slide embeddings. Our slide and expression (S+E) pretraining strategy called TANGLE employs modality-specific encoders the outputs of which are aligned via contrastive learning. TANGLE was pre-trained on samples from three different organs: liver (n=6597 S+E pairs) breast (n=1020) and lung (n=1012) from two different species (Homo sapiens and Rattus norvegicus). Across three independent test datasets consisting of 1265 breast WSIs 1946 lung WSIs and 4584 liver WSIs TANGLE shows significantly better few-shot performance compared to supervised and SSL baselines. When assessed using prototype-based classification and slide retrieval TANGLE also shows a substantial performance improvement over all baselines. Code available at https://github.com/mahmoodlab/TANGLE.
[]
[]
[]
[]
416
417
Predicated Diffusion: Predicate Logic-Based Attention Guidance for Text-to-Image Diffusion Models
http://arxiv.org/abs/2311.16117
Kota Sueyoshi, Takashi Matsubara
2311.16117
Diffusion models have achieved remarkable success in generating high-quality diverse and creative images. However in text-based image generation they often struggle to accurately capture the intended meaning of the text. For instance a specified object might not be generated or an adjective might incorrectly alter unintended objects. Moreover we found that relationships indicating possession between objects are frequently overlooked. Despite the diversity of users' intentions in text existing methods often focus on only some aspects of these intentions. In this paper we propose Predicated Diffusion a unified framework designed to more effectively express users' intentions. It represents the intended meaning as propositions using predicate logic and treats the pixels in attention maps as fuzzy predicates. This approach provides a differentiable loss function that offers guidance for the image generation process to better fulfill the propositions. Comparative evaluations with existing methods demonstrated that Predicated Diffusion excels in generating images faithful to various text prompts while maintaining high image quality as validated by human evaluators and pretrained image-text models.
[]
[]
[]
[]
417
418
MuRF: Multi-Baseline Radiance Fields
http://arxiv.org/abs/2312.04565
Haofei Xu, Anpei Chen, Yuedong Chen, Christos Sakaridis, Yulun Zhang, Marc Pollefeys, Andreas Geiger, Fisher Yu
2312.04565
We present Multi-Baseline Radiance Fields (MuRF), a general feed-forward approach to solving sparse view synthesis under multiple different baseline settings (small and large baselines, and different numbers of input views). To render a target novel view, we discretize the 3D space into planes parallel to the target image plane and accordingly construct a target view frustum volume. Such a target volume representation is spatially aligned with the target view, which effectively aggregates relevant information from the input views for high-quality rendering. It also facilitates subsequent radiance field regression with a convolutional network thanks to its axis-aligned nature. The 3D context modeled by the convolutional network enables our method to synthesize sharper scene structures than prior works. Our MuRF achieves state-of-the-art performance across multiple different baseline settings and diverse scenarios, ranging from simple objects (DTU) to complex indoor and outdoor scenes (RealEstate10K and LLFF). We also show promising zero-shot generalization abilities on the Mip-NeRF 360 dataset, demonstrating the general applicability of MuRF.
[]
[]
[]
[]
418
419
CLIP-BEVFormer: Enhancing Multi-View Image-Based BEV Detector with Ground Truth Flow
Chenbin Pan, Burhaneddin Yaman, Senem Velipasalar, Liu Ren
null
Autonomous driving stands as a pivotal domain in computer vision shaping the future of transportation. Within this paradigm the backbone of the system plays a crucial role in interpreting the complex environment. However a notable challenge has been the loss of clear supervision when it comes to Bird's Eye View elements. To address this limitation we introduce CLIP-BEVFormer a novel approach that leverages the power of contrastive learning techniques to enhance the multi-view image-derived BEV backbones with ground truth information flow. We conduct extensive experiments on the challenging nuScenes dataset and showcase significant and consistent improvements over the SOTA. Specifically CLIP-BEVFormer achieves an impressive 8.5% and 9.2% enhancement in terms of NDS and mAP respectively over the previous best BEV model on the 3D object detection task.
[]
[]
[]
[]
419
420
CLOVA: A Closed-LOop Visual Assistant with Tool Usage and Update
http://arxiv.org/abs/2312.10908
Zhi Gao, Yuntao Du, Xintong Zhang, Xiaojian Ma, Wenjuan Han, Song-Chun Zhu, Qing Li
2312.10908
Utilizing large language models (LLMs) to compose off-the-shelf visual tools represents a promising avenue of research for developing robust visual assistants capable of addressing diverse visual tasks. However these methods often overlook the potential for continual learning typically by freezing the utilized tools thus limiting their adaptation to environments requiring new knowledge. To tackle this challenge we propose CLOVA a Closed-Loop Visual Assistant which operates within a framework encompassing inference reflection and learning phases. During the inference phase LLMs generate programs and execute corresponding tools to complete assigned tasks. In the reflection phase a multimodal global-local reflection scheme analyzes human feedback to determine which tools require updating. Lastly the learning phase employs three flexible approaches to automatically gather training data and introduces a novel prompt tuning scheme to update the tools allowing CLOVA to efficiently acquire new knowledge. Experimental findings demonstrate that CLOVA surpasses existing tool-usage methods by 5% in visual question answering and multiple-image reasoning by 10% in knowledge tagging and by 20% in image editing. These results underscore the significance of the continual learning capability in general visual assistants.
[]
[]
[]
[]
420
421
Depth Prompting for Sensor-Agnostic Depth Estimation
http://arxiv.org/abs/2405.11867
Jin-Hwi Park, Chanhwi Jeong, Junoh Lee, Hae-Gon Jeon
2405.11867
Dense depth maps have been used as a key element of visual perception tasks. There have been tremendous efforts to enhance the depth quality ranging from optimization-based to learning-based methods. Despite the remarkable progress for a long time their applicability in the real world is limited due to systematic measurement biases such as density sensing pattern and scan range. It is well-known that the biases make it difficult for these methods to achieve their generalization. We observe that learning a joint representation for input modalities (e.g. images and depth) which most recent methods adopt is sensitive to the biases. In this work we disentangle those modalities to mitigate the biases with prompt engineering. For this we design a novel depth prompt module to allow the desirable feature representation according to new depth distributions from either sensor types or scene configurations. Our depth prompt can be embedded into foundation models for monocular depth estimation. Through this embedding process our method helps the pretrained model to be free from restraint of depth scan range and to provide absolute scale depth maps. We demonstrate the effectiveness of our method through extensive evaluations. Source code is publicly available at https://github.com/JinhwiPark/DepthPrompting.
[]
[]
[]
[]
421
422
G3DR: Generative 3D Reconstruction in ImageNet
http://arxiv.org/abs/2403.00939
Pradyumna Reddy, Ismail Elezi, Jiankang Deng
2403.00939
We introduce a novel 3D generative method, Generative 3D Reconstruction (G3DR) in ImageNet, capable of generating diverse and high-quality 3D objects from single images, addressing the limitations of existing methods. At the heart of our framework is a novel depth regularization technique that enables the generation of scenes with high geometric fidelity. G3DR also leverages a pretrained language-vision model, such as CLIP, to enable reconstruction in novel views and improve the visual realism of generations. Additionally, G3DR designs a simple but effective sampling procedure to further improve the quality of generations. G3DR offers diverse and efficient 3D asset generation based on class or text conditioning. Despite its simplicity, G3DR is able to beat state-of-the-art methods, improving over them by up to 22% in perceptual metrics and 90% in geometry scores, while needing only half of the training time. Code is available at https://github.com/preddy5/G3DR.
[]
[]
[]
[]
422
423
MoML: Online Meta Adaptation for 3D Human Motion Prediction
Xiaoning Sun, Huaijiang Sun, Bin Li, Dong Wei, Weiqing Li, Jianfeng Lu
null
In the academic field the research on human motion prediction tasks mainly focuses on exploiting the observed information to forecast human movements accurately in the near future horizon. However a significant gap appears when it comes to the application field as current models are all trained offline with fixed parameters that are inherently suboptimal to handle the complex yet ever-changing nature of human behaviors. To bridge this gap in this paper we introduce the task of online meta adaptation for human motion prediction based on the insight that finding "smart weights" capable of swift adjustments to suit different motion contexts along the time is a key to improving predictive accuracy. We propose MoML which ingeniously borrows the bilevel optimization spirit of model-agnostic meta-learning to transform previous predictive mistakes into strong inductive biases to guide online adaptation. This is achieved by our MoAdapter blocks that can learn error information by facilitating efficient adaptation via a few gradient steps which fine-tunes our meta-learned "smart" initialization produced by the generic predictor. Considering real-time requirements in practice we further propose Fast-MoML a more efficient variant of MoML that features a closed-form solution instead of conventional gradient update. Experimental results show that our approach can effectively bring many existing offline motion prediction models online and improves their predictive accuracy.
[]
[]
[]
[]
423
424
CAT-DM: Controllable Accelerated Virtual Try-on with Diffusion Model
Jianhao Zeng, Dan Song, Weizhi Nie, Hongshuo Tian, Tongtong Wang, An-An Liu
null
Generative Adversarial Networks (GANs) dominate the research field in image-based virtual try-on but have not resolved problems such as unnatural deformation of garments and the blurry generation quality. While the generative quality of diffusion models is impressive achieving controllability poses a significant challenge when applying it to virtual try-on and multiple denoising iterations limit its potential for real-time applications. In this paper we propose Controllable Accelerated virtual Try-on with Diffusion Model (CAT-DM). To enhance the controllability a basic diffusion-based virtual try-on network is designed which utilizes ControlNet to introduce additional control conditions and improves the feature extraction of garment images. In terms of acceleration CAT-DM initiates a reverse denoising process with an implicit distribution generated by a pre-trained GAN-based model. Compared with previous try-on methods based on diffusion models CAT-DM not only retains the pattern and texture details of the in-shop garment but also reduces the sampling steps without compromising generation quality. Extensive experiments demonstrate the superiority of CAT-DM against both GAN-based and diffusion-based methods in producing more realistic images and accurately reproducing garment patterns.
[]
[]
[]
[]
424
425
Hyperspherical Classification with Dynamic Label-to-Prototype Assignment
http://arxiv.org/abs/2403.16937
Mohammad Saeed Ebrahimi Saadabadi, Ali Dabouei, Sahar Rahimi Malakshan, Nasser M. Nasrabadi
2403.16937
Aiming to enhance the utilization of metric space by the parametric softmax classifier, recent studies suggest replacing it with a non-parametric alternative. Although a non-parametric classifier may provide better metric space utilization, it introduces the challenge of capturing inter-class relationships. A shared characteristic among prior non-parametric classifiers is the static assignment of labels to prototypes during training, i.e., each prototype consistently represents a class throughout the training course. Orthogonal to previous works, we present a simple yet effective method to optimize the category assigned to each prototype (label-to-prototype assignment) during training. To this aim, we formalize the problem as a two-step optimization objective over network parameters and the label-to-prototype assignment mapping. We solve this optimization using a sequential combination of gradient descent and bipartite matching. We demonstrate the benefits of the proposed approach by conducting experiments on balanced and long-tail classification problems using different backbone network architectures. In particular, our method outperforms its competitors by 1.22% accuracy on CIFAR-100 and 2.15% on ImageNet-200 using a metric space dimension half the size of its competitors'. Code: https://github.com/msed-Ebrahimi/DL2PA_CVPR24
[]
[]
[]
[]
425
426
VTimeLLM: Empower LLM to Grasp Video Moments
http://arxiv.org/abs/2311.18445
Bin Huang, Xin Wang, Hong Chen, Zihan Song, Wenwu Zhu
2311.18445
Large language models (LLMs) have shown remarkable text understanding capabilities, which have been extended as Video LLMs to handle video data for comprehending visual details. However, existing Video LLMs can only provide a coarse description of the entire video, failing to capture the precise start and end time boundaries of specific events. In this paper, we solve this issue by proposing VTimeLLM, a novel Video LLM designed for fine-grained video moment understanding and reasoning with respect to time boundaries. Specifically, our VTimeLLM adopts a boundary-aware three-stage training strategy, which respectively utilizes image-text pairs for feature alignment, multiple-event videos to increase temporal-boundary awareness, and high-quality video-instruction tuning to further improve temporal understanding ability as well as align with human intents. Extensive experiments demonstrate that in fine-grained time-related comprehension tasks for videos, such as Temporal Video Grounding and Dense Video Captioning, VTimeLLM significantly outperforms existing Video LLMs. Besides, benefiting from its fine-grained temporal understanding of videos, VTimeLLM further beats existing Video LLMs on a video dialogue benchmark, showing its superior cross-modal understanding and reasoning abilities.
[]
[]
[]
[]
426
427
FLHetBench: Benchmarking Device and State Heterogeneity in Federated Learning
Junyuan Zhang, Shuang Zeng, Miao Zhang, Runxi Wang, Feifei Wang, Yuyin Zhou, Paul Pu Liang, Liangqiong Qu
null
Federated learning (FL) is a powerful technology that enables collaborative training of machine learning models without sharing private data among clients. The fundamental challenge in FL lies in learning over extremely heterogeneous data distributions device capacities and device state availabilities all of which adversely impact performance and communication efficiency. While data heterogeneity has been well-studied in the literature this paper introduces FLHetBench the first FL benchmark targeted toward understanding device and state heterogeneity. FLHetBench comprises two new sampling methods to generate real-world device and state databases with varying heterogeneity and new metrics for quantifying the success of FL methods under these real-world constraints. Using FLHetBench we conduct a comprehensive evaluation of existing methods and find that they struggle under these settings which inspires us to propose BiasPrompt+ a new method employing staleness-aware aggregation and fast weights to tackle these new heterogeneity challenges. Experiments on various FL tasks and datasets validate the effectiveness of our BiasPrompt+ method and highlight the value of FLHetBench in fostering the development of more efficient and robust FL solutions under real-world device and state constraints.
[]
[]
[]
[]
427
428
Flattening the Parent Bias: Hierarchical Semantic Segmentation in the Poincare Ball
Simon Weber, Barış Zöngür, Nikita Araslanov, Daniel Cremers
null
Hierarchy is a natural representation of semantic taxonomies including the ones routinely used in image segmentation. Indeed recent work on semantic segmentation reports improved accuracy from supervised training leveraging hierarchical label structures. Encouraged by these results we revisit the fundamental assumptions behind that work. We postulate and then empirically verify that the reasons for the observed improvement in segmentation accuracy may be entirely unrelated to the use of the semantic hierarchy. To demonstrate this we design a range of cross-domain experiments with a representative hierarchical approach. We find that on the new testing domains a flat (non-hierarchical) segmentation network in which the parents are inferred from the children has superior segmentation accuracy to the hierarchical approach across the board. Complementing these findings and inspired by the intrinsic properties of hyperbolic spaces we study a more principled approach to hierarchical segmentation using the Poincare ball model. The hyperbolic representation largely outperforms the previous (Euclidean) hierarchical approach as well and is on par with our flat Euclidean baseline in terms of segmentation accuracy. However it additionally exhibits surprisingly strong calibration quality of the parent nodes in the semantic hierarchy especially on the more challenging domains. Our combined analysis suggests that the established practice of hierarchical segmentation may be limited to in-domain settings whereas flat classifiers generalize substantially better especially if they are modeled in the hyperbolic space.
[]
[]
[]
[]
428
429
Privacy-Preserving Optics for Enhancing Protection in Face De-Identification
http://arxiv.org/abs/2404.00777
Jhon Lopez, Carlos Hinojosa, Henry Arguello, Bernard Ghanem
2404.00777
The modern surge in camera usage alongside widespread computer vision technology applications poses significant privacy and security concerns. Current artificial intelligence (AI) technologies aid in recognizing relevant events and assisting in daily tasks in homes offices hospitals etc. The need to access or process personal information for these purposes raises privacy concerns. While software-level solutions like face de-identification provide a good privacy/utility trade-off they present vulnerabilities to sniffing attacks. In this paper we propose a hardware-level face de-identification method to solve this vulnerability. Specifically our approach first learns an optical encoder along with a regression model to obtain a face heatmap while hiding the face identity from the source image. We also propose an anonymization framework that generates a new face using the privacy-preserving image face heatmap and a reference face image from a public dataset as input. We validate our approach with extensive simulations and hardware experiments.
[]
[]
[]
[]
429
430
SmartRefine: A Scenario-Adaptive Refinement Framework for Efficient Motion Prediction
http://arxiv.org/abs/2403.11492
Yang Zhou, Hao Shao, Letian Wang, Steven L. Waslander, Hongsheng Li, Yu Liu
2403.11492
Predicting the future motion of surrounding agents is essential for autonomous vehicles (AVs) to operate safely in dynamic human-robot-mixed environments. Context information such as road maps and surrounding agents' states provides crucial geometric and semantic information for motion behavior prediction. To this end recent works explore two-stage prediction frameworks where coarse trajectories are first proposed and then used to select critical context information for trajectory refinement. However they either incur a large amount of computation or bring limited improvement if not both. In this paper we introduce a novel scenario-adaptive refinement strategy named SmartRefine to refine prediction with minimal additional computation. Specifically SmartRefine can comprehensively adapt refinement configurations based on each scenario's properties and smartly chooses the number of refinement iterations by introducing a quality score to measure the prediction quality and remaining refinement potential of each scenario. SmartRefine is designed as a generic and flexible approach that can be seamlessly integrated into most state-of-the-art motion prediction models. Experiments on Argoverse (1 & 2) show that our method consistently improves the prediction accuracy of multiple state-of-the-art prediction models. Specifically by adding SmartRefine to QCNet we outperform all published ensemble-free works on the Argoverse 2 leaderboard (single agent track) at submission. Comprehensive studies are also conducted to ablate design choices and explore the mechanism behind multi-iteration refinement. Codes are available at https://github.com/opendilab/SmartRefine/.
[]
[]
[]
[]
430
431
MVBench: A Comprehensive Multi-modal Video Understanding Benchmark
http://arxiv.org/abs/2311.17005
Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang, Jilan Xu, Guo Chen, Ping Luo, Limin Wang, Yu Qiao
2311.17005
With the rapid development of Multi-modal Large Language Models (MLLMs) a number of diagnostic benchmarks have recently emerged to evaluate the comprehension capabilities of these models. However most benchmarks predominantly assess spatial understanding in the static image tasks while overlooking temporal understanding in the dynamic video tasks. To alleviate this issue we introduce a comprehensive Multi-modal Video understanding Benchmark namely MVBench which covers 20 challenging video tasks that cannot be effectively solved with a single frame. Specifically we first introduce a novel static-to-dynamic method to define these temporal-related tasks. By transforming various static tasks into dynamic ones we enable the systematic generation of video tasks that require a broad spectrum of temporal skills ranging from perception to cognition. Then guided by the task definition we automatically convert public video annotations into multiple-choice QA to evaluate each task. On one hand such a distinct paradigm allows us to build MVBench efficiently without much manual intervention. On the other hand it guarantees evaluation fairness with ground-truth video annotations avoiding the biased scoring of LLMs. Moreover we further develop a robust video MLLM baseline i.e. VideoChat2 by progressive multi-modal training with diverse instruction-tuning data. The extensive results on our MVBench reveal that the existing MLLMs are far from satisfactory in temporal understanding while our VideoChat2 largely surpasses these leading models by over 15% on MVBench.
[]
[]
[]
[]
431
432
Multi-Scale Video Anomaly Detection by Multi-Grained Spatio-Temporal Representation Learning
Menghao Zhang, Jingyu Wang, Qi Qi, Haifeng Sun, Zirui Zhuang, Pengfei Ren, Ruilong Ma, Jianxin Liao
null
Recent progress in video anomaly detection suggests that the features of appearance and motion play crucial roles in distinguishing abnormal patterns from normal ones. However, we note that the effect of the spatial scale of anomalies is ignored: many abnormal events occur in limited, localized regions, and severe background noise interferes with the learning of anomalous changes. Meanwhile, most existing methods are limited by coarse-grained modeling approaches, which are inadequate for learning the highly discriminative features needed to discriminate subtle differences between small-scale anomalies and normal patterns. To this end, this paper addresses multi-scale video anomaly detection by multi-grained spatio-temporal representation learning. We utilize video continuity to design three proxy tasks to perform feature learning at both coarse-grained and fine-grained levels, i.e., continuity judgment, discontinuity localization, and missing frame estimation. In particular, we formulate missing frame estimation as a contrastive learning task in feature space instead of a reconstruction task in RGB space to learn highly discriminative features. Experiments show that our proposed method outperforms state-of-the-art methods on four datasets, especially in scenes with small-scale anomalies.
[]
[]
[]
[]
432
433
An Aggregation-Free Federated Learning for Tackling Data Heterogeneity
http://arxiv.org/abs/2404.18962
Yuan Wang, Huazhu Fu, Renuga Kanagavelu, Qingsong Wei, Yong Liu, Rick Siow Mong Goh
2404.18962
The performance of Federated Learning (FL) hinges on the effectiveness of utilizing knowledge from distributed datasets. Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round. This process can cause client drift, especially with significant cross-client data heterogeneity, impacting model performance and convergence of the FL algorithm. To address these challenges, we introduce FedAF, a novel aggregation-free FL algorithm. In this framework, clients collaboratively learn condensed data by leveraging peer knowledge; the server subsequently trains the global model using the condensed data and soft labels received from the clients. FedAF inherently avoids the issue of client drift, enhances the quality of condensed data amid notable data heterogeneity, and improves global model performance. Extensive numerical studies on several popular benchmark datasets show that FedAF surpasses various state-of-the-art FL algorithms in handling label-skew and feature-skew data heterogeneity, leading to superior global model accuracy and faster convergence.
[]
[]
[]
[]
433
434
Generative Multimodal Models are In-Context Learners
http://arxiv.org/abs/2312.13286
Quan Sun, Yufeng Cui, Xiaosong Zhang, Fan Zhang, Qiying Yu, Yueze Wang, Yongming Rao, Jingjing Liu, Tiejun Huang, Xinlong Wang
2312.13286
Humans can easily solve multimodal tasks in context with only a few demonstrations or simple instructions which current multimodal systems largely struggle to imitate. In this work we demonstrate that by effectively scaling up generative multimodal models their task-agnostic in-context learning capabilities can be significantly enhanced. We introduce Emu2 a generative multimodal model with 37 billion parameters which serves as a base model and general-purpose interface for a variety of multimodal tasks. Emu2 not only achieves strong performance in few-shot setting but can also be instruct-tuned to follow specific instructions such as visual question answering and object-grounded image generation. Emu2 even emerges to solve tasks that require on-the-fly reasoning such as visual prompting which existing models are unlikely to handle. We identify additional tasks where Emu2's in-context learning can further improve and discuss its broader societal impact. Our code and models will be made publicly available to facilitate future research.
[]
[]
[]
[]
434
435
Synergistic Global-space Camera and Human Reconstruction from Videos
http://arxiv.org/abs/2405.14855
Yizhou Zhao, Tuanfeng Yang Wang, Bhiksha Raj, Min Xu, Jimei Yang, Chun-Hao Paul Huang
2405.14855
Remarkable strides have been made in reconstructing static scenes or human bodies from monocular videos. Yet the two problems have largely been approached independently without much synergy. Most visual SLAM methods can only reconstruct camera trajectories and scene structures up to scale while most HMR methods reconstruct human meshes in metric scale but fall short in reasoning with cameras and scenes. This work introduces Synergistic Camera and Human Reconstruction (SynCHMR) to marry the best of both worlds. Specifically we design Human-aware Metric SLAM to reconstruct metric-scale camera poses and scene point clouds using camera-frame HMR as a strong prior addressing depth scale and dynamic ambiguities. Conditioning on the dense scene recovered we further learn a Scene-aware SMPL Denoiser to enhance world-frame HMR by incorporating spatiotemporal coherency and dynamic scene constraints. Together they lead to consistent reconstructions of camera trajectories human meshes and dense scene point clouds in a common world frame.
[]
[]
[]
[]
435
436
Hierarchical Intra-modal Correlation Learning for Label-free 3D Semantic Segmentation
Xin Kang, Lei Chu, Jiahao Li, Xuejin Chen, Yan Lu
null
Recent methods for label-free 3D semantic segmentation aim to assist 3D model training by leveraging the open-world recognition ability of pre-trained vision language models. However these methods usually suffer from inconsistent and noisy pseudo-labels provided by the vision language models. To address this issue we present a hierarchical intra-modal correlation learning framework that captures visual and geometric correlations in 3D scenes at three levels: intra-set intra-scene and inter-scene to help learn more compact 3D representations. We refine pseudo-labels using intra-set correlations within each geometric consistency set and align features of visually and geometrically similar points using intra-scene and inter-scene correlation learning. We also introduce a feedback mechanism to distill the correlation learning capability into the 3D model. Experiments on both indoor and outdoor datasets show the superiority of our method. We achieve a state-of-the-art 36.6% mIoU on the ScanNet dataset and a 23.0% mIoU on the nuScenes dataset with improvements of 7.8% mIoU and 2.2% mIoU compared with previous SOTA. We also provide theoretical analysis and qualitative visualization results to discuss the mechanism and conduct thorough ablation studies to support the effectiveness of our framework.
[]
[]
[]
[]
436
437
Feature Re-Embedding: Towards Foundation Model-Level Performance in Computational Pathology
Wenhao Tang, Fengtao Zhou, Sheng Huang, Xiang Zhu, Yi Zhang, Bo Liu
null
Multiple instance learning (MIL) is the most widely used framework in computational pathology encompassing sub-typing diagnosis prognosis and more. However the existing MIL paradigm typically requires an offline instance feature extractor such as a pre-trained ResNet or a foundation model. This approach lacks the capability for feature fine-tuning within the specific downstream tasks limiting its adaptability and performance. To address this issue we propose a Re-embedded Regional Transformer (RRT) for re-embedding the instance features online which captures fine-grained local features and establishes connections across different regions. Unlike existing works that focus on pre-training powerful feature extractor or designing sophisticated instance aggregator RRT is tailored to re-embed instance features online. It serves as a portable module that can seamlessly integrate into mainstream MIL models. Extensive experimental results on common computational pathology tasks validate that: 1) feature re-embedding improves the performance of MIL models based on ResNet-50 features to the level of foundation model features and further enhances the performance of foundation model features; 2) the RRT can introduce more significant performance improvements to various MIL models; 3) RRT-MIL as an RRT-enhanced AB-MIL outperforms other latest methods by a large margin. The code is available at: https://github.com/DearCaat/RRT-MIL.
[]
[]
[]
[]
437
438
DiffSal: Joint Audio and Video Learning for Diffusion Saliency Prediction
http://arxiv.org/abs/2403.01226
Junwen Xiong, Peng Zhang, Tao You, Chuanyue Li, Wei Huang, Yufei Zha
2,403.01226
Audio-visual saliency prediction can draw support from diverse modality complements, but further performance enhancement is still challenged by customized architectures as well as task-specific loss functions. In recent studies, denoising diffusion models have shown more promise in unifying task frameworks owing to their inherent ability to generalize. Following this motivation, a novel Diffusion architecture for generalized audio-visual Saliency prediction (DiffSal) is proposed in this work, which formulates the prediction problem as a conditional generative task of the saliency map by utilizing input audio and video as the conditions. Based on the spatio-temporal audio-visual features, an extra network, Saliency-UNet, is designed to perform multi-modal attention modulation for progressive refinement of the ground-truth saliency map from the noisy map. Extensive experiments demonstrate that the proposed DiffSal achieves excellent performance across six challenging audio-visual benchmarks, with an average relative improvement of 6.3% over the previous state-of-the-art results on six metrics.
[]
[]
[]
[]
438
439
Revisiting Single Image Reflection Removal In the Wild
http://arxiv.org/abs/2311.17320
Yurui Zhu, Xueyang Fu, Peng-Tao Jiang, Hao Zhang, Qibin Sun, Jinwei Chen, Zheng-Jun Zha, Bo Li
2311.17320
This research focuses on the issue of single-image reflection removal (SIRR) in real-world conditions, examining it from two angles: the collection pipeline of real reflection pairs and the perception of real reflection locations. We devise an advanced reflection collection pipeline that is highly adaptable to a wide range of real-world reflection scenarios and incurs reduced costs in collecting large-scale aligned reflection pairs. In the process, we develop a large-scale, high-quality reflection dataset named Reflection Removal in the Wild (RRW). RRW contains over 14,950 high-resolution real-world reflection pairs, a dataset forty-five times larger than its predecessors. Regarding the perception of reflection locations, we identify that numerous virtual reflection objects visible in reflection images are not present in the corresponding ground-truth images. This observation, drawn from the aligned pairs, leads us to conceive the Maximum Reflection Filter (MaxRF). The MaxRF can accurately and explicitly characterize reflection locations from pairs of images. Building upon this, we design a reflection location-aware cascaded framework specifically tailored for SIRR. Powered by these innovative techniques, our solution achieves superior performance over current leading methods across multiple real-world benchmarks. Codes and datasets are available at https://github.com/zhuyr97/Reflection_RemoVal_CVPR2024.
[]
[]
[]
[]
439
440
3D Face Reconstruction with the Geometric Guidance of Facial Part Segmentation
http://arxiv.org/abs/2312.00311
Zidu Wang, Xiangyu Zhu, Tianshuo Zhang, Baiqin Wang, Zhen Lei
2,312.00311
3D Morphable Models (3DMMs) provide promising 3D face reconstructions in various applications. However, existing methods struggle to reconstruct faces with extreme expressions due to deficiencies in supervisory signals, such as sparse or inaccurate landmarks. Segmentation information contains effective geometric contexts for face reconstruction. Certain attempts intuitively depend on differentiable renderers to compare the rendered silhouettes of the reconstruction with the segmentation, which is prone to issues like local optima and gradient instability. In this paper, we fully utilize the facial part segmentation geometry by introducing Part Re-projection Distance Loss (PRDL). Specifically, PRDL transforms facial part segmentation into 2D points and re-projects the reconstruction onto the image plane. Subsequently, by introducing grid anchors and computing different statistical distances from these anchors to the point sets, PRDL establishes geometry descriptors to optimize the distribution of the point sets for face reconstruction (a toy sketch of this anchor-based descriptor follows this entry). PRDL exhibits a clear gradient compared to the renderer-based methods and presents state-of-the-art reconstruction performance in extensive quantitative and qualitative experiments. Our project is available at https://github.com/wang-zidu/3DDFA-V3.
[]
[]
[]
[]
440
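As a companion to the PRDL description above, here is a toy sketch of an anchor-based geometry descriptor, assuming the nearest-point distance as the per-anchor statistic (the paper mentions several statistical distances; this picks one for illustration). The function names, grid step, and loss choice are hypothetical.

```python
import torch

def grid_anchors(h, w, step=16):
    # lay a regular grid of anchor points over an h x w image plane
    ys, xs = torch.meshgrid(torch.arange(0, h, step, dtype=torch.float32),
                            torch.arange(0, w, step, dtype=torch.float32),
                            indexing="ij")
    return torch.stack([xs.reshape(-1), ys.reshape(-1)], dim=1)       # (A, 2)

def anchor_descriptor(anchors, points):
    # one simple statistic per anchor: distance to the nearest point of the set
    return torch.cdist(anchors, points).min(dim=1).values             # (A,)

def prdl_style_loss(seg_points, proj_points, h, w):
    """seg_points: 2D points sampled from a facial-part segmentation mask.
    proj_points: the reconstructed part's vertices projected onto the image."""
    anchors = grid_anchors(h, w)
    d_seg = anchor_descriptor(anchors, seg_points)
    d_proj = anchor_descriptor(anchors, proj_points)
    return torch.nn.functional.smooth_l1_loss(d_proj, d_seg)

# toy usage with random point sets on a 224x224 image plane
loss = prdl_style_loss(torch.rand(500, 2) * 224, torch.rand(400, 2) * 224, 224, 224)
```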
441
FreeU: Free Lunch in Diffusion U-Net
Chenyang Si, Ziqi Huang, Yuming Jiang, Ziwei Liu
null
In this paper, we uncover the untapped potential of the diffusion U-Net, which serves as a "free lunch" that substantially improves the generation quality on the fly. We initially investigate the key contributions of the U-Net architecture to the denoising process and identify that its main backbone primarily contributes to denoising, whereas its skip connections mainly introduce high-frequency features into the decoder module, causing a potential neglect of crucial functions intrinsic to the backbone network. Capitalizing on this discovery, we propose a simple yet effective method, termed "FreeU", which enhances generation quality without additional training or fine-tuning. Our key insight is to strategically re-weight the contributions sourced from the U-Net's skip connections and backbone feature maps to leverage the strengths of both components of the U-Net architecture (see the illustrative sketch after this entry). Promising results on image and video generation tasks demonstrate that FreeU can be readily integrated into existing diffusion models, e.g., Stable Diffusion, DreamBooth, and ControlNet, to improve the generation quality with only a few lines of code. All you need is to adjust two scaling factors during inference.
[]
[]
[]
[]
441
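The FreeU entry above hinges on two inference-time scaling factors, one for backbone feature maps and one for skip connections. The sketch below is one plausible realization under simplifying assumptions: the backbone map is scaled uniformly by `b`, and the skip map's low-frequency band is scaled by `s` via an FFT mask with a fixed cutoff. The official method's exact channel selection and filtering may differ.

```python
import torch

def fourier_scale(x, scale, cutoff=1):
    """Scale the low-frequency band of a (B, C, H, W) feature map by `scale`."""
    freq = torch.fft.fftshift(torch.fft.fftn(x, dim=(-2, -1)), dim=(-2, -1))
    b, c, h, w = x.shape
    mask = torch.ones_like(freq.real)
    mask[..., h // 2 - cutoff:h // 2 + cutoff, w // 2 - cutoff:w // 2 + cutoff] = scale
    freq = freq * mask
    return torch.fft.ifftn(torch.fft.ifftshift(freq, dim=(-2, -1)), dim=(-2, -1)).real

def freeu_reweight(backbone_feat, skip_feat, b=1.2, s=0.9):
    """Boost the backbone contribution by b and damp the skip low frequencies by s
    before the two maps are merged in a U-Net decoder block."""
    return backbone_feat * b, fourier_scale(skip_feat, s)

# toy usage inside a decoder block, just before concatenation
h_backbone, h_skip = torch.randn(1, 320, 32, 32), torch.randn(1, 320, 32, 32)
h_backbone, h_skip = freeu_reweight(h_backbone, h_skip)
x = torch.cat([h_backbone, h_skip], dim=1)
```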
442
Text Prompt with Normality Guidance for Weakly Supervised Video Anomaly Detection
http://arxiv.org/abs/2404.08531
Zhiwei Yang, Jing Liu, Peng Wu
2,404.08531
Weakly supervised video anomaly detection (WSVAD) is a challenging task. Generating fine-grained pseudo-labels based on weak labels and then self-training a classifier is currently a promising solution. However, existing methods use only the RGB visual modality and neglect category text information, which limits the generation of more accurate pseudo-labels and affects the performance of self-training. Inspired by the manual labeling process based on event descriptions, in this paper we propose a novel pseudo-label generation and self-training framework based on Text Prompt with Normality Guidance (TPWNG) for WSVAD. Our idea is to transfer the rich language-visual knowledge of the contrastive language-image pre-training (CLIP) model to align video event description text with the corresponding video frames and generate pseudo-labels (a minimal sketch of this alignment step follows this entry). Specifically, we first fine-tune CLIP for domain adaptation by designing two ranking losses and a distributional inconsistency loss. Further, we propose a learnable text prompt mechanism with the assistance of a normality visual prompt to further improve the matching accuracy between video event description text and video frames. Then, we design a pseudo-label generation module based on the normality guidance to infer reliable frame-level pseudo-labels. Finally, we introduce a temporal context self-adaptive learning module to learn the temporal dependencies of different video events more flexibly and accurately. Extensive experiments show that our method achieves state-of-the-art performance on two benchmark datasets, UCF-Crime and XD-Violence, demonstrating the effectiveness of our proposed method.
[]
[]
[]
[]
442
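The core alignment step described above, matching frames to event-description prompts with CLIP features to obtain frame-level pseudo-labels, can be sketched as follows. The sketch assumes precomputed CLIP embeddings and a simple two-prompt softmax; the ranking losses, normality guidance, and learnable prompts from the paper are not reproduced.

```python
import torch
import torch.nn.functional as F

def frame_pseudo_labels(frame_emb, anomaly_text_emb, normal_text_emb, tau=0.07):
    """frame_emb: (T, D) CLIP visual features of T frames.
    anomaly_text_emb / normal_text_emb: (D,) CLIP text features of the event
    description prompts. Returns soft frame-level anomaly scores in [0, 1]."""
    frame_emb = F.normalize(frame_emb, dim=-1)
    texts = F.normalize(torch.stack([normal_text_emb, anomaly_text_emb]), dim=-1)
    logits = frame_emb @ texts.t() / tau          # (T, 2) scaled cosine similarities
    probs = logits.softmax(dim=-1)
    return probs[:, 1]                            # probability of the anomaly prompt

# toy usage with random 512-d embeddings for a 64-frame clip
scores = frame_pseudo_labels(torch.randn(64, 512), torch.randn(512), torch.randn(512))
```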
443
SparseOcc: Rethinking Sparse Latent Representation for Vision-Based Semantic Occupancy Prediction
http://arxiv.org/abs/2404.09502
Pin Tang, Zhongdao Wang, Guoqing Wang, Jilai Zheng, Xiangxuan Ren, Bailan Feng, Chao Ma
2,404.09502
Vision-based perception for autonomous driving requires an explicit modeling of a 3D space where 2D latent representations are mapped and subsequent 3D operators are applied. However operating on dense latent spaces introduces a cubic time and space complexity which limits scalability in terms of perception range or spatial resolution. Existing approaches compress the dense representation using projections like Bird's Eye View (BEV) or Tri-Perspective View (TPV). Although efficient these projections result in information loss especially for tasks like semantic occupancy prediction. To address this we propose SparseOcc an efficient occupancy network inspired by sparse point cloud processing. It utilizes a lossless sparse latent representation with three key innovations. Firstly a 3D sparse diffuser performs latent completion using spatially decomposed 3D sparse convolutional kernels. Secondly a feature pyramid and sparse interpolation enhance scales with information from others. Finally the transformer head is redesigned as a sparse variant. SparseOcc achieves a remarkable 74.9% reduction on FLOPs over the dense baseline. Interestingly it also improves accuracy from 12.8% to 14.1% mIOU which in part can be attributed to the sparse representation's ability to avoid hallucinations on empty voxels.
[]
[]
[]
[]
443
444
SinSR: Diffusion-Based Image Super-Resolution in a Single Step
http://arxiv.org/abs/2311.14760
Yufei Wang, Wenhan Yang, Xinyuan Chen, Yaohui Wang, Lanqing Guo, Lap-Pui Chau, Ziwei Liu, Yu Qiao, Alex C. Kot, Bihan Wen
2,311.14760
While super-resolution (SR) methods based on diffusion models exhibit promising results their practical application is hindered by the substantial number of required inference steps. Recent methods utilize the degraded images in the initial state thereby shortening the Markov chain. Nevertheless these solutions either rely on a precise formulation of the degradation process or still necessitate a relatively lengthy generation path (e.g. 15 iterations). To enhance inference speed we propose a simple yet effective method for achieving single-step SR generation named SinSR. Specifically we first derive a deterministic sampling process from the most recent state-of-the-art (SOTA) method for accelerating diffusion-based SR. This allows the mapping between the input random noise and the generated high-resolution image to be obtained in a reduced and acceptable number of inference steps during training. We show that this deterministic mapping can be distilled into a student model that performs SR within only one inference step. Additionally we propose a novel consistency-preserving loss to simultaneously leverage the ground-truth image during the distillation process ensuring that the performance of the student model is not solely bound by the feature manifold of the teacher model resulting in further performance improvement. Extensive experiments conducted on synthetic and real-world datasets demonstrate that the proposed method can achieve comparable or even superior performance compared to both previous SOTA methods and the teacher model in just one sampling step resulting in a remarkable up to x10 speedup for inference. Our code will be released at https://github.com/wyf0912/SinSR/.
[]
[]
[]
[]
444
445
Frequency Decoupling for Motion Magnification via Multi-Level Isomorphic Architecture
http://arxiv.org/abs/2403.07347
Fei Wang, Dan Guo, Kun Li, Zhun Zhong, Meng Wang
2,403.07347
Video Motion Magnification (VMM) aims to reveal subtle and imperceptible motion information of objects in the macroscopic world. Prior methods directly model the motion field from the Eulerian perspective, either by representation learning that separates shape and texture or by multi-domain learning from phase fluctuations. Inspired by the frequency spectrum, we observe that the low-frequency components with stable energy always possess spatial structure and less noise, making them suitable for modeling the subtle motion field. To this end, we present FD4MM, a new paradigm of Frequency Decoupling for Motion Magnification with a Multi-level Isomorphic Architecture, which captures multi-level high-frequency details and a stable low-frequency structure (motion field) in video space (a simplified frequency-split sketch follows this entry). Since high-frequency details and subtle motions are susceptible to information degradation, due to their inherent subtlety and unavoidable external interference from noise, we carefully design Sparse High/Low-pass Filters to enhance the integrity of details and motion structures, and a Sparse Frequency Mixer to promote seamless recoupling. Besides, we innovatively design a contrastive regularization for this task to strengthen the model's ability to discriminate irrelevant features, reducing undesired motion magnification. Extensive experiments on both real-world and synthetic datasets show that our FD4MM outperforms SOTA methods. Meanwhile, FD4MM reduces FLOPs by 1.63x and boosts inference speed by 1.68x compared to the latest method. Our code is available at https://github.com/Jiafei127/FD4MM.
[]
[]
[]
[]
445
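To illustrate the frequency-decoupling idea above, the sketch below splits a frame into low- and high-frequency components with a fixed circular mask in the Fourier domain. This is only a stand-in for the paper's learned Sparse High/Low-pass Filters; the cutoff radius is an arbitrary assumption.

```python
import torch

def frequency_decouple(frame, radius=8):
    """Split a (B, C, H, W) frame into low- and high-frequency components using a
    fixed circular low-pass mask in the Fourier domain."""
    freq = torch.fft.fftshift(torch.fft.fft2(frame), dim=(-2, -1))
    b, c, h, w = frame.shape
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    dist = ((yy - h // 2) ** 2 + (xx - w // 2) ** 2).float().sqrt()
    low_mask = (dist <= radius).to(freq.real.dtype)
    low = torch.fft.ifft2(torch.fft.ifftshift(freq * low_mask, dim=(-2, -1))).real
    high = frame - low                       # residual keeps the fine details
    return low, high

low, high = frequency_decouple(torch.randn(1, 3, 128, 128))
```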
446
Systematic Comparison of Semi-supervised and Self-supervised Learning for Medical Image Classification
http://arxiv.org/abs/2307.08919
Zhe Huang, Ruijie Jiang, Shuchin Aeron, Michael C. Hughes
2,307.08919
In typical medical image classification problems, labeled data is scarce while unlabeled data is more available. Semi-supervised learning and self-supervised learning are two different research directions that can improve accuracy by learning from extra unlabeled data. Recent methods from both directions have reported significant gains on traditional benchmarks. Yet past benchmarks do not focus on medical tasks and rarely compare self- and semi-supervised methods together on an equal footing. Furthermore, past benchmarks often handle hyperparameter tuning suboptimally. First, they may not tune hyperparameters at all, leading to underfitting. Second, when tuning does occur, it often unrealistically uses a labeled validation set that is much larger than the training set. Therefore, currently published rankings might not always reflect practical utility. This study contributes a systematic evaluation of self- and semi-supervised methods with a unified experimental protocol, intended to guide a practitioner with scarce overall labeled data and a limited compute budget. We answer two key questions: Can hyperparameter tuning be effective with realistic-sized validation sets? If so, when all methods are tuned well, which self- or semi-supervised methods achieve the best accuracy? Our study compares 13 representative semi- and self-supervised methods to strong labeled-set-only baselines on 4 medical datasets. From 20000+ GPU hours of computation, we provide valuable best practices to resource-constrained practitioners: hyperparameter tuning is effective, and the semi-supervised method known as MixMatch delivers the most reliable gains across the 4 datasets.
[]
[]
[]
[]
446
447
ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models
Lukas Höllein, Aljaž Božić, Norman Müller, David Novotny, Hung-Yu Tseng, Christian Richardt, Michael Zollhöfer, Matthias Nießner
null
3D asset generation is getting massive amounts of attention inspired by the recent success on text-guided 2D content creation. Existing text-to-3D methods use pretrained text-to-image diffusion models in an optimization problem or fine-tune them on synthetic data which often results in non-photorealistic 3D objects without backgrounds. In this paper we present a method that leverages pretrained text-to-image models as a prior and learn to generate multi-view images in a single denoising process from real-world data. Concretely we propose to integrate 3D volume-rendering and cross-frame-attention layers into each block of the existing U-Net network of the text-to-image model. Moreover we design an autoregressive generation that renders more 3D-consistent images at any viewpoint. We train our model on real-world datasets of objects and showcase its capabilities to generate instances with a variety of high-quality shapes and textures in authentic surroundings. Compared to the existing methods the results generated by our method are consistent and have favorable visual quality (-30% FID -37% KID).
[]
[]
[]
[]
447
448
Hyperbolic Learning with Synthetic Captions for Open-World Detection
http://arxiv.org/abs/2404.05016
Fanjie Kong, Yanbei Chen, Jiarui Cai, Davide Modolo
2,404.05016
Open-world detection poses significant challenges, as it requires the detection of any object using either object class labels or free-form texts. Existing related works often use large-scale manually annotated caption datasets for training, which are extremely expensive to collect. Instead, we propose to transfer knowledge from vision-language models (VLMs) to enrich the open-vocabulary descriptions automatically. Specifically, we bootstrap dense synthetic captions using pre-trained VLMs to provide rich descriptions of different regions in images, and incorporate these captions to train a novel detector that generalizes to novel concepts. To mitigate the noise caused by hallucination in synthetic captions, we also propose a novel hyperbolic vision-language learning approach to impose a hierarchy between visual and caption embeddings (a minimal hyperbolic-distance sketch follows this entry). We call our detector "HyperLearner". We conduct extensive experiments on a wide variety of open-world detection benchmarks (COCO, LVIS, Object Detection in the Wild, RefCOCO), and our results show that our model consistently outperforms existing state-of-the-art methods, such as GLIP, GLIPv2, and Grounding DINO, when using the same backbone.
[]
[]
[]
[]
448
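The hyperbolic vision-language learning mentioned above relies on embedding features in hyperbolic space. Below is a small sketch with the standard Poincaré-ball distance plus a toy norm-ordering regularizer; the direction of the hierarchy (captions treated as more general than region features) and the margin are assumptions for illustration, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def poincare_distance(x, y, eps=1e-5):
    """Geodesic distance between points x, y (each (..., D), with norm < 1)
    on the Poincare ball, the usual model for hyperbolic embeddings."""
    sq = ((x - y) ** 2).sum(-1)
    nx = (x ** 2).sum(-1).clamp(max=1 - eps)
    ny = (y ** 2).sum(-1).clamp(max=1 - eps)
    return torch.acosh(1 + 2 * sq / ((1 - nx) * (1 - ny)) + eps)

def norm_order_loss(region_emb, caption_emb, margin=0.1):
    # toy hierarchy regularizer: more general items (captions) sit closer to the origin
    return torch.relu(caption_emb.norm(dim=-1) - region_emb.norm(dim=-1) + margin).mean()

# toy embeddings placed inside the unit ball
r = F.normalize(torch.randn(8, 64), dim=-1) * 0.6   # region features (more specific)
c = F.normalize(torch.randn(8, 64), dim=-1) * 0.3   # caption features (more general)
d = poincare_distance(r, c)
loss = norm_order_loss(r, c)
```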
449
Diffusion Models Without Attention
http://arxiv.org/abs/2311.18257
Jing Nathan Yan, Jiatao Gu, Alexander M. Rush
2,311.18257
In recent advancements in high-fidelity image generation Denoising Diffusion Probabilistic Models (DDPMs) have emerged as a key player. However their application at high resolutions presents significant computational challenges. Current methods such as patchifying expedite processes in UNet and Transformer architectures but at the expense of representational capacity. Addressing this we introduce the Diffusion State Space Model (DiffuSSM) an architecture that supplants attention mechanisms with a more scalable state space model backbone. This approach effectively handles higher resolutions without resorting to global compression thus preserving detailed image representation throughout the diffusion process. Our focus on FLOP-efficient architectures in diffusion training marks a significant step forward. Comprehensive evaluations on both ImageNet and LSUN datasets at two resolutions demonstrate that DiffuSSMs are on par or even outperform existing diffusion models with attention modules in FID and Inception Score metrics while significantly reducing total FLOP usage.
[]
[]
[]
[]
449
450
Interpretable Measures of Conceptual Similarity by Complexity-Constrained Descriptive Auto-Encoding
http://arxiv.org/abs/2402.08919
Alessandro Achille, Greg Ver Steeg, Tian Yu Liu, Matthew Trager, Carson Klingenberg, Stefano Soatto
2,402.08919
Quantifying the degree of similarity between images is a key copyright issue for image-based machine learning. In legal doctrine, however, determining the degree of similarity between works requires subjective analysis, and fact-finders (judges and juries) can demonstrate considerable variability in these subjective judgement calls. Images that are structurally similar can be deemed dissimilar, whereas images of completely different scenes can be deemed similar enough to support a claim of copying. We seek to define and compute a notion of "conceptual similarity" among images that captures high-level relations even among images that do not share repeated elements or visually similar components. The idea is to use a base multi-modal model to generate "explanations" (captions) of visual data at increasing levels of complexity. Then, similarity can be measured by the length of the caption needed to discriminate between the two images: two highly dissimilar images can be discriminated early in their description, whereas conceptually similar ones will need more detail to be distinguished. We operationalize this definition and show that it correlates with subjective (averaged human evaluation) assessment and beats existing baselines on both image-to-image and text-to-text similarity benchmarks. Beyond just providing a number, our method also offers interpretability by pointing to the specific level of granularity of the description where the source data is differentiated.
[]
[]
[]
[]
450
451
Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion
Kiran Chhatre, Radek Daněček, Nikos Athanasiou, Giorgio Becherini, Christopher Peters, Michael J. Black, Timo Bolkart
null
Existing methods for synthesizing 3D human gestures from speech have shown promising results but they do not explicitly model the impact of emotions on the generated gestures. Instead these methods directly output animations from speech without control over the expressed emotion. To address this limitation we present AMUSE an emotional speech-driven body animation model based on latent diffusion. Our observation is that content (i.e. gestures related to speech rhythm and word utterances) emotion and personal style are separable. To account for this AMUSE maps the driving audio to three disentangled latent vectors: one for content one for emotion and one for personal style. A latent diffusion model trained to generate gesture motion sequences is then conditioned on these latent vectors. Once trained AMUSE synthesizes 3D human gestures directly from speech with control over the expressed emotions and style by combining the content from the driving speech with the emotion and style of another speech sequence. Randomly sampling the noise of the diffusion model further generates variations of the gesture with the same emotional expressivity. Qualitative quantitative and perceptual evaluations demonstrate that AMUSE outputs realistic gesture sequences. Compared to the state of the art the generated gestures are better synchronized with the speech content and better represent the emotion expressed by the input speech. Our code is available at amuse.is.tue.mpg.de.
[]
[]
[]
[]
451
452
3D Feature Tracking via Event Camera
Siqi Li, Zhikuan Zhou, Zhou Xue, Yipeng Li, Shaoyi Du, Yue Gao
null
This paper presents the first 3D feature tracking method with the corresponding dataset. Our proposed method takes event streams from stereo event cameras as input to predict 3D trajectories of the target features with high-speed motion. To achieve this our method leverages a joint framework to predict the 2D feature motion offsets and the 3D feature spatial position simultaneously. A motion compensation module is leveraged to overcome the feature deformation. A patch matching module based on bi-polarity hypergraph modeling is proposed to robustly estimate the feature spatial position. Meanwhile we collect the first 3D feature tracking dataset with high-speed moving objects and ground truth 3D feature trajectories at 250 FPS named E-3DTrack which can be used as the first high-speed 3D feature tracking benchmark. Our code and dataset could be found at: https://github.com/lisiqi19971013/E-3DTrack.
[]
[]
[]
[]
452
453
Retrieval-Augmented Layout Transformer for Content-Aware Layout Generation
http://arxiv.org/abs/2311.13602
Daichi Horita, Naoto Inoue, Kotaro Kikuchi, Kota Yamaguchi, Kiyoharu Aizawa
2,311.13602
Content-aware graphic layout generation aims to automatically arrange visual elements along with a given content such as an e-commerce product image. In this paper we argue that the current layout generation approaches suffer from the limited training data for the high-dimensional layout structure. We show that a simple retrieval augmentation can significantly improve the generation quality. Our model which is named Retrieval-Augmented Layout Transformer (RALF) retrieves nearest neighbor layout examples based on an input image and feeds these results into an autoregressive generator. Our model can apply retrieval augmentation to various controllable generation tasks and yield high-quality layouts within a unified architecture. Our extensive experiments show that RALF successfully generates content-aware layouts in both constrained and unconstrained settings and significantly outperforms the baselines.
[]
[]
[]
[]
453
454
MSU-4S - The Michigan State University Four Seasons Dataset
Daniel Kent, Mohammed Alyaqoub, Xiaohu Lu, Hamed Khatounabadi, Kookjin Sung, Cole Scheller, Alexander Dalat, Asma bin Thabit, Roberto Whitley, Hayder Radha
null
Public datasets such as KITTI nuScenes and Waymo have played a key role in the research and development of autonomous vehicles and advanced driver assistance systems. However many of these datasets fail to incorporate a full range of driving conditions; some datasets only contain clear-weather conditions underrepresenting or entirely missing colder weather conditions such as snow or autumn scenes with bright colorful foliage. In this paper we present the Michigan State University Four Seasons (MSU-4S) Dataset which contains real-world collections of autonomous vehicle data from varied types of driving scenarios. These scenarios were recorded throughout a full range of seasons and capture clear rainy snowy and fall weather conditions at varying times of day. MSU-4S contains more than 100000 two- and three-dimensional frames for camera lidar and radar data as well as Global Navigation Satellite System (GNSS) wheel speed and steering data all annotated with weather time-of-day and time-of-year. Our data includes cluttered scenes that have large numbers of vehicles and pedestrians; and it also captures industrial scenes busy traffic thoroughfare with traffic lights and numerous signs and scenes with dense foliage. While providing a diverse set of scenes our data incorporate an important feature: virtually every scene and its corresponding lidar camera and radar frames were captured in four different seasons enabling unparalleled object detection analysis and testing of the domain shift problem across weather conditions. In that context we present detailed analyses for 3D and 2D object detection showing a strong domain shift effect among MSU-4S data segments collected across different conditions. MSU-4S will also enable advanced multimodal fusion research including different combinations of camera-lidar-radar fusion which continues to be of strong interest for the computer vision autonomous driving and ADAS development communities. The MSU-4S dataset is available online at https://egr.msu.edu/waves/msu4s.
[]
[]
[]
[]
454
455
Improving Plasticity in Online Continual Learning via Collaborative Learning
http://arxiv.org/abs/2312.00600
Maorong Wang, Nicolas Michel, Ling Xiao, Toshihiko Yamasaki
2,312.00600
Online Continual Learning (CL) solves the problem of learning the ever-emerging new classification tasks from a continuous data stream. Unlike its offline counterpart in online CL the training data can only be seen once. Most existing online CL research regards catastrophic forgetting (i.e. model stability) as almost the only challenge. In this paper we argue that the model's capability to acquire new knowledge (i.e. model plasticity) is another challenge in online CL. While replay-based strategies have been shown to be effective in alleviating catastrophic forgetting there is a notable gap in research attention toward improving model plasticity. To this end we propose Collaborative Continual Learning (CCL) a collaborative learning based strategy to improve the model's capability in acquiring new concepts. Additionally we introduce Distillation Chain (DC) a collaborative learning scheme to boost the training of the models. We adapt CCL-DC to existing representative online CL works. Extensive experiments demonstrate that even if the learners are well-trained with state-of-the-art online CL methods our strategy can still improve model plasticity dramatically and thereby improve the overall performance by a large margin. The source code of our work is available at https://github.com/maorong-wang/CCL-DC.
[]
[]
[]
[]
455
456
InstantBooth: Personalized Text-to-Image Generation without Test-Time Finetuning
http://arxiv.org/abs/2304.03411
Jing Shi, Wei Xiong, Zhe Lin, Hyun Joon Jung
2,304.03411
Recent advances in personalized image generation have enabled pre-trained text-to-image models to learn new concepts from specific image sets. However these methods often necessitate extensive test-time finetuning for each new concept leading to inefficiencies in both time and scalability. To address this challenge we introduce InstantBooth an innovative approach leveraging existing text-to-image models for instantaneous text-guided image personalization eliminating the need for test-time finetuning. This efficiency is achieved through two primary innovations. Firstly we utilize an image encoder that transforms input images into a global embedding to grasp the general concept. Secondly we integrate new adapter layers into the pre-trained model enhancing its ability to capture intricate identity details while maintaining language coherence. Significantly our model is trained exclusively on text-image pairs without reliance on concept-specific paired images. When benchmarked against existing finetuning-based personalization techniques like DreamBooth and Textual-Inversion InstantBooth not only shows comparable proficiency in aligning language with image maintaining image quality and preserving identity but also boasts a 100-fold increase in processing speed.
[]
[]
[]
[]
456
457
MaxQ: Multi-Axis Query for N:M Sparsity Network
http://arxiv.org/abs/2312.07061
Jingyang Xiang, Siqi Li, Junhao Chen, Zhuangzhi Chen, Tianxin Huang, Linpeng Peng, Yong Liu
2,312.07061
N:M sparsity has received increasing attention due to its remarkable performance and latency trade-off compared with structured and unstructured sparsity. However, existing N:M sparsity methods do not differentiate the relative importance of weights among blocks and leave important weights underappreciated. Besides, they directly apply N:M sparsity to the whole network, which causes severe information loss. Thus, they are still sub-optimal. In this paper, we propose an efficient and effective Multi-Axis Query methodology, dubbed MaxQ, to rectify these problems. During training, MaxQ employs a dynamic approach to generate soft N:M masks, considering the weight importance across multiple axes (a simplified soft-mask sketch follows this entry). This method enhances the weights with more importance and ensures more effective updates. Meanwhile, a sparsity strategy that gradually increases the percentage of N:M weight blocks is applied, which allows the network to heal from the pruning-induced damage progressively. During runtime, the N:M soft masks can be precomputed as constants and folded into the weights without causing any distortion to the sparse pattern or incurring additional computational overhead. Comprehensive experiments demonstrate that MaxQ achieves consistent improvements across diverse CNN architectures in various computer vision tasks, including image classification, object detection and instance segmentation. For ResNet50 with a 1:16 sparse pattern, MaxQ achieves 74.6% top-1 accuracy on ImageNet, improving by over 2.8% over the state-of-the-art. Codes and checkpoints are available at https://github.com/JingyangXiang/MaxQ.
[]
[]
[]
[]
457
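A simplified sketch of a soft N:M mask, in the spirit of the entry above, is given below: within each block of M weights, magnitudes above a soft threshold receive mask values approaching 1. The importance measure (plain magnitude), temperature, and thresholding rule are assumptions; MaxQ's multi-axis importance query is not reproduced.

```python
import torch

def soft_nm_mask(weight, n=2, m=4, tau=0.05):
    """Sketch of a soft N:M mask for a 2D weight (out_dim, in_dim): within every
    block of M consecutive input weights, magnitudes above the midpoint between
    the N-th and (N+1)-th largest get mask values > 0.5 (toward 1 as the gap grows)."""
    out_dim, in_dim = weight.shape
    assert in_dim % m == 0 and n < m
    blocks = weight.abs().reshape(out_dim, in_dim // m, m)
    topk = blocks.topk(n + 1, dim=-1).values               # sorted descending
    thresh = (topk[..., n - 1:n] + topk[..., n:n + 1]) / 2  # soft cut between rank n and n+1
    mask = torch.sigmoid((blocks - thresh) / tau)           # soft and differentiable
    return mask.reshape(out_dim, in_dim)

w = torch.randn(64, 128)
w_soft_sparse = w * soft_nm_mask(w)   # roughly 2 of every 4 weights survive
```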
458
Part-aware Unified Representation of Language and Skeleton for Zero-shot Action Recognition
Anqi Zhu, Qiuhong Ke, Mingming Gong, James Bailey
null
While remarkable progress has been made on supervised skeleton-based action recognition the challenge of zero-shot recognition remains relatively unexplored. In this paper we argue that relying solely on aligning label-level semantics and global skeleton features is insufficient to effectively transfer locally consistent visual knowledge from seen to unseen classes. To address this limitation we introduce Part-aware Unified Representation between Language and Skeleton (PURLS) to explore visual-semantic alignment at both local and global scales. PURLS introduces a new prompting module and a novel partitioning module to generate aligned textual and visual representations across different levels. The former leverages a pre-trained GPT-3 to infer refined descriptions of the global and local (body-part-based and temporal-interval-based) movements from the original action labels. The latter employs an adaptive sampling strategy to group visual features from all body joint movements that are semantically relevant to a given description. Our approach is evaluated on various skeleton/language backbones and three large-scale datasets i.e. NTU-RGB+D 60 NTU-RGB+D 120 and a newly curated dataset Kinetics-skeleton 200. The results showcase the universality and superior performance of PURLS surpassing prior skeleton-based solutions and standard baselines from other domains. The source codes can be accessed at https://github.com/azzh1/PURLS.
[]
[]
[]
[]
458
459
SD2Event:Self-supervised Learning of Dynamic Detectors and Contextual Descriptors for Event Cameras
Yuan Gao, Yuqing Zhu, Xinjun Li, Yimin Du, Tianzhu Zhang
null
Event cameras offer many advantages over traditional frame-based cameras, such as high dynamic range and low latency. Therefore, event cameras are widely applied in diverse computer vision applications, where event-based keypoint detection is a fundamental task. However, achieving robust event-based keypoint detection remains challenging because the ground truth of event keypoints is difficult to obtain, descriptors extracted by CNNs usually lack discriminative ability in the presence of intense noise, and fixed keypoint detectors are limited in detecting varied keypoint patterns. To address these challenges, a novel event-based keypoint detection method is proposed by learning dynamic detectors and contextual descriptors in a self-supervised manner (SD2Event), including a contextual feature descriptor learning (CFDL) module and a dynamic keypoint detector learning (DKDL) module. The proposed SD2Event enjoys several merits. First, the proposed CFDL module can model long-range contexts efficiently and effectively. Second, the DKDL module generates dynamic keypoint detectors, which can detect keypoints with diverse patterns across various event streams. Third, the proposed self-supervised signals can guide the model's adaptation to event data. Extensive experimental results on three challenging benchmarks show that our proposed method significantly outperforms state-of-the-art event-based keypoint detection methods.
[]
[]
[]
[]
459
460
Composing Object Relations and Attributes for Image-Text Matching
Khoi Pham, Chuong Huynh, Ser-Nam Lim, Abhinav Shrivastava
null
We study the visual semantic embedding problem for image-text matching. Most existing work utilizes a tailored cross-attention mechanism to perform local alignment across the two image and text modalities. This is computationally expensive even though it is more powerful than the unimodal dual-encoder approach. This work introduces a dual-encoder image-text matching model leveraging a scene graph to represent captions with nodes for objects and attributes interconnected by relational edges. Utilizing a graph attention network our model efficiently encodes object-attribute and object-object semantic relations resulting in a robust and fast-performing system. Representing caption as a scene graph offers the ability to utilize the strong relational inductive bias of graph neural networks to learn object-attribute and object-object relations effectively. To train the model we propose losses that align the image and caption both at the holistic level (image-caption) and the local level (image-object entity) which we show is key to the success of the model. Our model is termed Composition model for Object Relations and Attributes CORA. Experimental results on two prominent image-text retrieval benchmarks Flickr30K and MS-COCO demonstrate that CORA outperforms existing state-of-the-art computationally expensive cross-attention methods regarding recall score while achieving fast computation speed of the dual encoder. Our code is available at https://github.com/vkhoi/cora_cvpr24
[]
[]
[]
[]
460
461
Previously on ... From Recaps to Story Summarization
http://arxiv.org/abs/2405.11487
Aditya Kumar Singh, Dhruv Srivastava, Makarand Tapaswi
2,405.11487
We introduce multimodal story summarization by leveraging TV episode recaps - short video sequences interweaving key story moments from previous episodes to bring viewers up to speed. We propose PlotSnap a dataset featuring two crime thriller TV shows with rich recaps and long episodes of 40 minutes. Story summarization labels are unlocked by matching recap shots to corresponding sub-stories in the episode. We propose a hierarchical model TaleSumm that processes entire episodes by creating compact shot and dialog representations and predicts importance scores for each video shot and dialog utterance by enabling interactions between local story groups. Unlike traditional summarization our method extracts multiple plot points from long videos. We present a thorough evaluation on story summarization including promising cross-series generalization. TaleSumm also shows good results on classic video summarization benchmarks.
[]
[]
[]
[]
461
462
PaReNeRF: Toward Fast Large-scale Dynamic NeRF with Patch-based Reference
Xiao Tang, Min Yang, Penghui Sun, Hui Li, Yuchao Dai, Feng Zhu, Hojae Lee
null
With photo-realistic image generation Neural Radiance Field (NeRF) is widely used for large-scale dynamic scene reconstruction as autonomous driving simulator. However large-scale scene reconstruction still suffers from extremely long training time and rendering time. Low-resolution (LR) rendering combined with upsampling can alleviate this problem but it degrades image quality. In this paper we design a lightweight reference decoder which exploits prior information from known views to improve image reconstruction quality of new views. In addition to speed up prior information search we propose an optical flow and structural similarity based prior information search method. Results on KITTI and VKITTI2 datasets show that our method significantly outperforms the baseline method in terms of training speed rendering speed and rendering quality.
[]
[]
[]
[]
462
463
mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration
Qinghao Ye, Haiyang Xu, Jiabo Ye, Ming Yan, Anwen Hu, Haowei Liu, Qi Qian, Ji Zhang, Fei Huang
null
Multi-modal Large Language Models (MLLMs) have demonstrated impressive instruction abilities across various open-ended tasks. However previous methods have primarily focused on enhancing multi-modal capabilities. In this work we introduce a versatile multi-modal large language model mPLUG-Owl2 which effectively leverages modality collaboration to improve performance in both text and multi-modal tasks. mPLUG-Owl2 utilizes a modularized network design with the language decoder acting as a universal interface for managing different modalities. Specifically mPLUG-Owl2 incorporates shared functional modules to facilitate modality collaboration and introduces a modality-adaptive module that preserves modality-specific features. Extensive experiments reveal that mPLUG-Owl2 is capable of generalizing both text tasks and multi-modal tasks while achieving state-of-the-art performances with a single generalized model. Notably mPLUG-Owl2 is the first MLLM model that demonstrates the modality collaboration phenomenon in both pure-text and multi-modal scenarios setting a pioneering path in the development of future multi-modal foundation models.
[]
[]
[]
[]
463
464
Spectral and Polarization Vision: Spectro-polarimetric Real-world Dataset
http://arxiv.org/abs/2311.17396
Yujin Jeon, Eunsue Choi, Youngchan Kim, Yunseong Moon, Khalid Omer, Felix Heide, Seung-Hwan Baek
2,311.17396
Image datasets are essential not only in validating existing methods in computer vision but also in developing new methods. Many image datasets exist, consisting of trichromatic intensity images taken with RGB cameras, which are designed to replicate human vision. However, polarization and spectrum, the wave properties of light that animals in harsh environments and with limited brain capacity often rely on, remain underrepresented in existing datasets. Although there are previous spectro-polarimetric datasets, they have insufficient object diversity, limited illumination conditions, linear-only polarization data, and inadequate image counts. Here, we introduce two spectro-polarimetric datasets consisting of trichromatic Stokes images and hyperspectral Stokes images (the standard Stokes relations are sketched after this entry). These datasets encompass both linear and circular polarization; they introduce multiple spectral channels; and they feature a broad selection of real-world scenes. With our dataset in hand, we analyze the spectro-polarimetric image statistics, develop efficient representations of such high-dimensional data, and evaluate the spectral dependency of shape-from-polarization methods. As such, the proposed dataset promises a foundation for data-driven spectro-polarimetric imaging and vision research.
[]
[]
[]
[]
464
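For readers unfamiliar with the Stokes imagery referenced above, the snippet below computes the standard linear Stokes components from intensities behind linear polarizers at 0/45/90/135 degrees, plus the derived degree and angle of linear polarization. This shows textbook relations only, not the dataset's actual capture pipeline, and the circular component S3 requires additional optics.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes components from intensity images captured behind linear
    polarizers at 0, 45, 90, and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
    s1 = i0 - i90                               # horizontal vs. vertical
    s2 = i45 - i135                             # diagonal components
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-8)   # degree of linear pol.
    aolp = 0.5 * np.arctan2(s2, s1)                            # angle of linear pol.
    return s0, s1, s2, dolp, aolp

imgs = [np.random.rand(256, 256) for _ in range(4)]
s0, s1, s2, dolp, aolp = linear_stokes(*imgs)
```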
465
Learning by Correction: Efficient Tuning Task for Zero-Shot Generative Vision-Language Reasoning
http://arxiv.org/abs/2404.00909
Rongjie Li, Yu Wu, Xuming He
2,404.00909
Generative vision-language models (VLMs) have shown impressive performance in zero-shot vision-language tasks like image captioning and visual question answering. However, improving their zero-shot reasoning typically requires second-stage instruction tuning, which relies heavily on human-labeled or large-language-model-generated annotations, incurring high labeling costs. To tackle this challenge, we introduce Image-Conditioned Caption Correction (ICCC), a novel pre-training task designed to enhance VLMs' zero-shot performance without the need for labeled task-aware data. The ICCC task compels VLMs to rectify mismatches between visual and language concepts, thereby enhancing instruction following and text generation conditioned on visual inputs. Leveraging language structure and a lightweight dependency parser, we construct data samples for the ICCC task from image-text datasets with low labeling and computation costs. Experimental results on BLIP-2 and InstructBLIP demonstrate significant improvements in zero-shot image-text generation-based VL tasks through ICCC instruction tuning.
[]
[]
[]
[]
465
466
Supervised Anomaly Detection for Complex Industrial Images
http://arxiv.org/abs/2405.04953
Aimira Baitieva, David Hurych, Victor Besnier, Olivier Bernard
2,405.04953
Automating visual inspection in industrial production lines is essential for increasing product quality across various industries. Anomaly detection (AD) methods serve as robust tools for this purpose. However existing public datasets primarily consist of images without anomalies limiting the practical application of AD methods in production settings. To address this challenge we present (1) the Valeo Anomaly Dataset (VAD) a novel real-world industrial dataset comprising 5000 images including 2000 instances of challenging real defects across more than 20 subclasses. Acknowledging that traditional AD methods struggle with this dataset we introduce (2) Segmentation-based Anomaly Detector (SegAD). First SegAD leverages anomaly maps as well as segmentation maps to compute local statistics. Next SegAD uses these statistics and an optional supervised classifier score as input features for a Boosted Random Forest (BRF) classifier yielding the final anomaly score. Our SegAD achieves state-of-the-art performance on both VAD (+2.1% AUROC) and the VisA dataset (+0.4% AUROC). The code and the models are publicly available.
[]
[]
[]
[]
466
467
Open3DSG: Open-Vocabulary 3D Scene Graphs from Point Clouds with Queryable Objects and Open-Set Relationships
http://arxiv.org/abs/2402.12259
Sebastian Koch, Narunas Vaskevicius, Mirco Colosi, Pedro Hermosilla, Timo Ropinski
2,402.12259
Current approaches for 3D scene graph prediction rely on labeled datasets to train models for a fixed set of known object classes and relationship categories. We present Open3DSG an alternative approach to learn 3D scene graph prediction in an open world without requiring labeled scene graph data. We co-embed the features from a 3D scene graph prediction backbone with the feature space of powerful open world 2D vision language foundation models. This enables us to predict 3D scene graphs from 3D point clouds in a zero-shot manner by querying object classes from an open vocabulary and predicting the inter-object relationships from a grounded LLM with scene graph features and queried object classes as context. Open3DSG is the first 3D point cloud method to predict not only explicit open-vocabulary object classes but also open-set relationships that are not limited to a predefined label set making it possible to express rare as well as specific objects and relationships in the predicted 3D scene graph. Our experiments show that Open3DSG is effective at predicting arbitrary object classes as well as their complex inter-object relationships describing spatial supportive semantic and comparative relationships.
[]
[]
[]
[]
467
468
SURE: SUrvey REcipes for building reliable and robust deep networks
http://arxiv.org/abs/2403.00543
Yuting Li, Yingyi Chen, Xuanlong Yu, Dexiong Chen, Xi Shen
2,403.00543
In this paper we revisit techniques for uncertainty estimation within deep neural networks and consolidate a suite of techniques to enhance their reliability. Our investigation reveals that an integrated application of diverse techniques--spanning model regularization classifier and optimization--substantially improves the accuracy of uncertainty predictions in image classification tasks. The synergistic effect of these techniques culminates in our novel SURE approach. We rigorously evaluate SURE against the benchmark of failure prediction a critical testbed for uncertainty estimation efficacy. Our results showcase a consistently better performance than models that individually deploy each technique across various datasets and model architectures. When applied to real-world challenges such as data corruption label noise and long-tailed class distribution SURE exhibits remarkable robustness delivering results that are superior or on par with current state-of-the-art specialized methods. Particularly on Animal-10N and Food-101N for learning with noisy labels SURE achieves state-of-the-art performance without any task-specific adjustments. This work not only sets a new benchmark for robust uncertainty estimation but also paves the way for its application in diverse real-world scenarios where reliability is paramount. Our code is available at https://yutingli0606.github.io/SURE/.
[]
[]
[]
[]
468
469
PolarRec: Improving Radio Interferometric Data Reconstruction Using Polar Coordinates
Ruoqi Wang, Zhuoyang Chen, Jiayi Zhu, Qiong Luo, Feng Wang
null
In radio astronomy, visibility data, which are measurements of wave signals from radio telescopes, are transformed into images for observation of distant celestial objects. However, these resultant images usually contain both real sources and artifacts, due to signal sparsity and other factors. One way to obtain cleaner images is to reconstruct samples into dense forms before imaging. Unfortunately, existing reconstruction methods often miss some components of visibility in the frequency domain, so blurred object edges and persistent artifacts remain in the images. Furthermore, the computation overhead is high on irregular visibility samples due to the data skew. To address these problems, we propose PolarRec, a transformer-encoder-conditioned reconstruction pipeline with visibility samples converted into the polar coordinate system. This coordinate system matches the way in which radio telescopes observe a celestial area as the Earth rotates. As a result, visibility samples distribute more uniformly in the polar system than in the Cartesian space. Therefore, we propose to use radial distance in the loss function to help reconstruct complete visibility effectively. Also, we group visibility samples by their polar angles and propose a group-based encoding scheme to improve efficiency (a toy sketch of the polar conversion and grouping follows this entry). Our experiments demonstrate that PolarRec markedly improves imaging results by faithfully reconstructing all frequency components in the visibility domain, while significantly reducing the computation cost of visibility data encoding. The code is available at https://github.com/RapidsAtHKUST/PolarRec.
[]
[]
[]
[]
469
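Below is a toy sketch of the polar-coordinate treatment described above: converting (u, v) visibility coordinates to radius and angle, bucketing samples into angular sectors, and weighting a reconstruction loss by radial distance. The number of sectors and the up-weighting of far samples are assumptions for illustration, not the paper's exact scheme.

```python
import torch

def to_polar(u, v):
    """Convert visibility sample coordinates (u, v) to polar (radius, angle)."""
    r = torch.sqrt(u ** 2 + v ** 2)
    theta = torch.atan2(v, u)                    # angle in (-pi, pi]
    return r, theta

def group_by_angle(theta, n_groups=16):
    """Bucket samples into angular sectors (an assumed grouping for encoding)."""
    bins = ((theta + torch.pi) / (2 * torch.pi) * n_groups).long().clamp(max=n_groups - 1)
    return [torch.nonzero(bins == g, as_tuple=True)[0] for g in range(n_groups)]

def radial_weighted_l1(pred, target, r, alpha=1.0):
    """A radial-distance-aware reconstruction loss, one reading of the abstract:
    samples farther from the origin (higher spatial frequency) get larger weight."""
    w = 1.0 + alpha * r / (r.max() + 1e-8)
    return (w * (pred - target).abs()).mean()

u, v = torch.randn(1024), torch.randn(1024)
r, theta = to_polar(u, v)
groups = group_by_angle(theta)
loss = radial_weighted_l1(torch.randn(1024), torch.randn(1024), r)
```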
470
Affine Equivariant Networks Based on Differential Invariants
Yikang Li, Yeqing Qiu, Yuxuan Chen, Lingshen He, Zhouchen Lin
null
Convolutional neural networks benefit from translation equivariance achieving tremendous success. Equivariant networks further extend this property to other transformation groups. However most existing methods require discretization or sampling of groups leading to increased model sizes for larger groups such as the affine group. In this paper we build affine equivariant networks based on differential invariants from the viewpoint of symmetric PDEs without discretizing or sampling the group. To address the division-by-zero issue arising from fractional differential invariants of the affine group we construct a new kind of affine invariants by normalizing polynomial relative differential invariants to replace classical differential invariants. For further flexibility we design an equivariant layer which can be directly integrated into convolutional networks of various architectures. Moreover our framework for the affine group is also applicable to its continuous subgroups. We implement equivariant networks for the scale group the rotation-scale group and the affine group. Numerical experiments demonstrate the outstanding performance of our framework across classification tasks involving transformations of these groups. Remarkably under the out-of-distribution setting our model achieves a 3.37% improvement in accuracy over the main counterpart affConv on the affNIST dataset.
[]
[]
[]
[]
470
471
Selectively Informative Description can Reduce Undesired Embedding Entanglements in Text-to-Image Personalization
http://arxiv.org/abs/2403.15330
Jimyeong Kim, Jungwon Park, Wonjong Rhee
2,403.15330
In text-to-image personalization a timely and crucial challenge is the tendency of generated images overfitting to the biases present in the reference images. We initiate our study with a comprehensive categorization of the biases into background nearby-object tied-object substance (in style re-contextualization) and pose biases. These biases manifest in the generated images due to their entanglement into the subject embedding. This undesired embedding entanglement not only results in the reflection of biases from the reference images into the generated images but also notably diminishes the alignment of the generated images with the given generation prompt. To address this challenge we propose SID (Selectively Informative Description) a text description strategy that deviates from the prevalent approach of only characterizing the subject's class identification. SID is generated utilizing multimodal GPT-4 and can be seamlessly integrated into optimization-based models. We present comprehensive experimental results along with analyses of cross-attention maps subject-alignment non-subject-disentanglement and text-alignment.
[]
[]
[]
[]
471
472
Summarize the Past to Predict the Future: Natural Language Descriptions of Context Boost Multimodal Object Interaction Anticipation
http://arxiv.org/abs/2301.09209
Razvan-George Pasca, Alexey Gavryushin, Muhammad Hamza, Yen-Ling Kuo, Kaichun Mo, Luc Van Gool, Otmar Hilliges, Xi Wang
2,301.09209
We study object interaction anticipation in egocentric videos. This task requires an understanding of the spatio-temporal context formed by past actions on objects coined "action context". We propose TransFusion a multimodal transformer-based architecture for short-term object interaction anticipation. Our method exploits the representational power of language by summarizing the action context textually after leveraging pre-trained vision-language foundation models to extract the action context from past video frames. The summarized action context and the last observed video frame are processed by the multimodal fusion module to forecast the next object interaction. Experiments on the Ego4D next active object interaction dataset show the effectiveness of our multimodal fusion model and highlight the benefits of using the power of foundation models and language-based context summaries in a task where vision may appear to suffice. Our novel approach outperforms all state-of-the-art methods on both versions of the Ego4D dataset.
[]
[]
[]
[]
472
473
Transfer CLIP for Generalizable Image Denoising
http://arxiv.org/abs/2403.15132
Jun Cheng, Dong Liang, Shan Tan
2,403.15132
Image denoising is a fundamental task in computer vision. While prevailing deep learning-based supervised and self-supervised methods have excelled in eliminating in-distribution noise their susceptibility to out-of-distribution (OOD) noise remains a significant challenge. The recent emergence of contrastive language-image pre-training (CLIP) model has showcased exceptional capabilities in open-world image recognition and segmentation. Yet the potential for leveraging CLIP to enhance the robustness of low-level tasks remains largely unexplored. This paper uncovers that certain dense features extracted from the frozen ResNet image encoder of CLIP exhibit distortion-invariant and content-related properties which are highly desirable for generalizable denoising. Leveraging these properties we devise an asymmetrical encoder-decoder denoising network which incorporates dense features including the noisy image and its multi-scale features from the frozen ResNet encoder of CLIP into a learnable image decoder to achieve generalizable denoising. The progressive feature augmentation strategy is further proposed to mitigate feature overfitting and improve the robustness of the learnable decoder. Extensive experiments and comparisons conducted across diverse OOD noises including synthetic noise real-world sRGB noise and low-dose CT image noise demonstrate the superior generalization ability of our method.
[]
[]
[]
[]
473
474
Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models
http://arxiv.org/abs/2312.04410
Jiayi Guo, Xingqian Xu, Yifan Pu, Zanlin Ni, Chaofei Wang, Manushree Vasu, Shiji Song, Gao Huang, Humphrey Shi
2,312.04410
Recently, diffusion models have made remarkable progress in text-to-image (T2I) generation, synthesizing images with high fidelity and diverse content. Despite this advancement, latent space smoothness within diffusion models remains largely unexplored. Smooth latent spaces ensure that a perturbation of an input latent corresponds to a steady change in the output image. This property proves beneficial in downstream tasks, including image interpolation, inversion, and editing. In this work, we expose the non-smoothness of diffusion latent spaces by observing noticeable visual fluctuations resulting from minor latent variations. To tackle this issue, we propose Smooth Diffusion, a new category of diffusion models that can be simultaneously high-performing and smooth. Specifically, we introduce Step-wise Variation Regularization to enforce that the ratio between the variation of an arbitrary input latent and that of the output image is constant at any diffusion training step (a toy version of this regularizer follows this entry). In addition, we devise an interpolation standard deviation (ISTD) metric to effectively assess the latent space smoothness of a diffusion model. Extensive quantitative and qualitative experiments demonstrate that Smooth Diffusion stands out as a more desirable solution, not only for T2I generation but also across various downstream tasks. Smooth Diffusion is implemented as a plug-and-play Smooth-LoRA that works with various community models. Code is available at https://github.com/SHI-Labs/Smooth-Diffusion.
[]
[]
[]
[]
474
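A toy version of the step-wise variation regularization described above is sketched below: the input latent is perturbed, and the ratio of output variation to latent variation is pushed toward a constant (here, the batch-mean ratio). The perturbation scale, the choice of target constant, and the stand-in denoiser are assumptions; the paper's actual term may be formulated differently.

```python
import torch

def stepwise_variation_reg(denoiser, z, t, xi_scale=1e-2):
    """Perturb the latent and penalize deviation of the output-to-input variation
    ratio from a shared constant (the detached batch-mean ratio)."""
    delta = xi_scale * torch.randn_like(z)
    out, out_pert = denoiser(z, t), denoiser(z + delta, t)
    ratio = (out_pert - out).flatten(1).norm(dim=1) / delta.flatten(1).norm(dim=1)
    return ((ratio - ratio.mean().detach()) ** 2).mean()

# toy usage with a stand-in "denoiser"
denoiser = lambda z, t: torch.tanh(z) * (1 - t)
z = torch.randn(4, 4, 8, 8)
reg = stepwise_variation_reg(denoiser, z, t=torch.rand(4).view(-1, 1, 1, 1))
```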
475
Towards CLIP-driven Language-free 3D Visual Grounding via 2D-3D Relational Enhancement and Consistency
Yuqi Zhang, Han Luo, Yinjie Lei
null
3D visual grounding plays a crucial role in scene understanding with extensive applications in AR/VR. Despite the significant progress made in recent methods the requirement of dense textual descriptions for each individual object which is time-consuming and costly hinders their scalability. To mitigate reliance on text annotations during training researchers have explored language-free training paradigms in the 2D field via explicit text generation or implicit feature substitution. Nevertheless unlike 2D images the complexity of spatial relations in 3D coupled with the absence of robust 3D visual language pre-trained models makes it challenging to directly transfer previous strategies. To tackle the above issues in this paper we introduce a language-free training framework for 3D visual grounding. By utilizing the visual-language joint embedding in 2D large cross-modality model as a bridge we can expediently produce the pseudo-language features by leveraging the features of 2D images which are equivalent to that of real textual descriptions. We further develop a relation injection scheme with a Neighboring Relation-aware Modeling module and a Cross-modality Relation Consistency module aiming to enhance and preserve the complex relationships between the 2D and 3D embedding space. Extensive experiments demonstrate that our proposed language-free 3D visual grounding approach can obtain promising performance across three widely used datasets --ScanRefer Nr3D and Sr3D. Our codes are available at https://github.com/xibi777/3DLFVG
[]
[]
[]
[]
475
476
Optimal Transport Aggregation for Visual Place Recognition
http://arxiv.org/abs/2311.15937
Sergio Izquierdo, Javier Civera
2,311.15937
The task of Visual Place Recognition (VPR) aims to match a query image against references from an extensive database of images from different places, relying solely on visual cues. State-of-the-art pipelines focus on the aggregation of features extracted from a deep backbone in order to form a global descriptor for each image. In this context, we introduce SALAD (Sinkhorn Algorithm for Locally Aggregated Descriptors), which reformulates NetVLAD's soft assignment of local features to clusters as an optimal transport problem (a minimal Sinkhorn-style sketch follows this entry). In SALAD, we consider both feature-to-cluster and cluster-to-feature relations, and we also introduce a dustbin cluster designed to selectively discard features deemed non-informative, enhancing the overall descriptor quality. Additionally, we leverage and fine-tune DINOv2 as a backbone, which provides enhanced description power for the local features and dramatically reduces the required training time. As a result, our single-stage method not only surpasses single-stage baselines on public VPR datasets but also surpasses two-stage methods that add re-ranking at significantly higher cost.
[]
[]
[]
[]
476
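The optimal-transport view of feature-to-cluster assignment mentioned above can be sketched with a few Sinkhorn normalization steps and a dustbin column, as below. The entropic temperature, number of iterations, uniform marginals, and the way the dustbin is discarded are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def sinkhorn(scores, n_iters=5, eps=0.1):
    """Alternately normalize exp(scores / eps) so mass is balanced between local
    features (rows) and clusters + dustbin (columns)."""
    p = torch.exp(scores / eps)
    for _ in range(n_iters):
        p = p / p.sum(dim=1, keepdim=True)      # each local feature spreads unit mass
        p = p / p.sum(dim=0, keepdim=True)      # each column receives equal mass
    return p

def aggregate(features, scores, eps=0.1):
    """features: (N, D) local descriptors; scores: (N, K+1) affinities to K clusters
    plus one dustbin column whose assignments are simply discarded."""
    p = sinkhorn(scores, eps=eps)[:, :-1]                  # drop the dustbin column
    desc = torch.einsum("nk,nd->kd", p, features)          # (K, D) aggregated descriptor
    return torch.nn.functional.normalize(desc.flatten(), dim=0)

feats, scr = torch.randn(196, 256), torch.randn(196, 64 + 1)
global_desc = aggregate(feats, scr)
```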
477
FlowIE: Efficient Image Enhancement via Rectified Flow
Yixuan Zhu, Wenliang Zhao, Ao Li, Yansong Tang, Jie Zhou, Jiwen Lu
null
Image enhancement holds extensive applications in real-world scenarios, due to complex environments and the limitations of imaging devices. Conventional methods are often constrained by their tailored models, resulting in diminished robustness when confronted with challenging degradation conditions. In response, we propose FlowIE, a simple yet highly effective flow-based image enhancement framework that estimates straight-line paths from an elementary distribution to high-quality images. Unlike previous diffusion-based methods that suffer from long inference times, FlowIE constructs a linear many-to-one transport mapping via conditioned rectified flow (a minimal rectified-flow sketch follows this entry). The rectification straightens the trajectories of probability transfer, accelerating inference by an order of magnitude. This design enables FlowIE to fully exploit the rich knowledge in the pre-trained diffusion model, rendering it well-suited for various real-world applications. Moreover, we devise a faster inference algorithm, inspired by Lagrange's Mean Value Theorem, harnessing the midpoint tangent direction to optimize path estimation, ultimately yielding visually superior results. Thanks to these designs, FlowIE adeptly manages a diverse range of enhancement tasks within a concise sequence of fewer than 5 steps. Our contributions are rigorously validated through comprehensive experiments on synthetic and real-world datasets, unveiling the compelling efficacy and efficiency of our proposed FlowIE.
[]
[]
[]
[]
477
478
Aligning and Prompting Everything All at Once for Universal Visual Perception
http://arxiv.org/abs/2312.02153
Yunhang Shen, Chaoyou Fu, Peixian Chen, Mengdan Zhang, Ke Li, Xing Sun, Yunsheng Wu, Shaohui Lin, Rongrong Ji
2,312.02153
Vision foundation models have been explored recently to build general-purpose vision systems. However, the predominant paradigms, driven by casting instance-level tasks as an object-word alignment, bring heavy cross-modality interaction, which is not effective in prompting object detection and visual grounding. Another line of work that focuses on pixel-level tasks often encounters a large annotation gap between things and stuff, and suffers from mutual interference between foreground-object and background-class segmentation. In stark contrast to the prevailing methods, we present APE, a universal visual perception model for aligning and prompting everything all at once in an image to perform diverse tasks, i.e. detection, segmentation and grounding, as an instance-level sentence-object matching paradigm. Specifically, APE advances the convergence of detection and grounding by reformulating language-guided grounding as open-vocabulary detection, which efficiently scales up model prompting to thousands of category vocabularies and region descriptions while maintaining the effectiveness of cross-modality fusion. To bridge the granularity gap of different pixel-level tasks, APE equalizes semantic and panoptic segmentation to proxy instance learning by considering any isolated region as an individual instance. APE aligns vision and language representations on broad data with natural and challenging characteristics all at once, without task-specific fine-tuning. Extensive experiments on over 160 datasets demonstrate that, with only one suite of weights, APE outperforms (or is on par with) the state-of-the-art models, proving that an effective yet universal perception for aligning and prompting anything is indeed feasible. Codes and trained models are released at https://github.com/shenyunhang/APE.
[]
[]
[]
[]
478
479
Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities
http://arxiv.org/abs/2404.16456
Mingcheng Li, Dingkang Yang, Xiao Zhao, Shuaibing Wang, Yan Wang, Kun Yang, Mingyang Sun, Dongliang Kou, Ziyun Qian, Lihua Zhang
2,404.16456
Multimodal sentiment analysis (MSA) aims to understand human sentiment through multimodal data. Most MSA efforts are based on the assumption of modality completeness. However in real-world applications some practical factors cause uncertain modality missingness which drastically degrades the model's performance. To this end we propose a Correlation-decoupled Knowledge Distillation (CorrKD) framework for the MSA task under uncertain missing modalities. Specifically we present a sample-level contrastive distillation mechanism that transfers comprehensive knowledge containing cross-sample correlations to reconstruct missing semantics. Moreover a category-guided prototype distillation mechanism is introduced to capture cross-category correlations using category prototypes to align feature distributions and generate favorable joint representations. Eventually we design a response-disentangled consistency distillation strategy to optimize the sentiment decision boundaries of the student network through response disentanglement and mutual information maximization. Comprehensive experiments on three datasets indicate that our framework can achieve favorable improvements compared with several baselines.
[]
[]
[]
[]
479
480
Revisiting Adversarial Training at Scale
http://arxiv.org/abs/2401.04727
Zeyu Wang, Xianhang Li, Hongru Zhu, Cihang Xie
2,401.04727
The machine learning community has witnessed a drastic change in the training pipeline, pivoted by those "foundation models" with unprecedented scales. However, the field of adversarial training is lagging behind, predominantly centered around small model sizes like ResNet-50 and tiny and low-resolution datasets like CIFAR-10. To bridge this transformation gap, this paper provides a modern re-examination of adversarial training, investigating its potential benefits when applied at scale. Additionally, we introduce an efficient and effective training strategy to enable adversarial training with giant models and web-scale data at an affordable computing cost. We denote this newly introduced framework as AdvXL. Empirical results demonstrate that AdvXL establishes new state-of-the-art robust accuracy records under AutoAttack on ImageNet-1K. For example, by training on the DataComp-1B dataset, our AdvXL empowers a vanilla ViT-g model to substantially surpass the previous records of ℓ∞-, ℓ2- and ℓ1-robust accuracy by margins of 11.4%, 14.2% and 12.9%, respectively. This achievement posits AdvXL as a pioneering approach, charting a new trajectory for the efficient training of robust visual representations at significantly larger scales. Our code is available at https://github.com/UCSC-VLAA/AdvXL.
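For readers unfamiliar with the procedure being scaled up here, the sketch below shows a standard PGD adversarial-training step inside an ℓ∞ ball. The model, epsilon and step sizes are illustrative; this is generic background, not the AdvXL training recipe.

```python
# Background sketch of one PGD adversarial-training step (l_inf threat model).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=3):
    """Projected gradient ascent on the loss inside an l_inf ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    model.eval()
    x_adv = pgd_attack(model, x, y)            # craft adversarial examples
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)    # train on the perturbed batch
    loss.backward()
    optimizer.step()
    return loss.item()
```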
[]
[]
[]
[]
480
481
Towards Fairness-Aware Adversarial Learning
http://arxiv.org/abs/2402.17729
Yanghao Zhang, Tianle Zhang, Ronghui Mu, Xiaowei Huang, Wenjie Ruan
2,402.17729
Although adversarial training (AT) has proven effective in enhancing the model's robustness the recently revealed issue of fairness in robustness has not been well addressed i.e. the robust accuracy varies significantly among different categories. In this paper instead of uniformly evaluating the model's average class performance we delve into the issue of robust fairness by considering the worst-case distribution across various classes. We propose a novel learning paradigm named Fairness-Aware Adversarial Learning (FAAL). As a generalization of conventional AT we re-define the problem of adversarial training as a min-max-max framework to ensure both robustness and fairness of the trained model. Specifically by taking advantage of distributional robust optimization our method aims to find the worst distribution among different categories and the solution is guaranteed to obtain the upper bound performance with high probability. In particular FAAL can fine-tune an unfair robust model to be fair within only two epochs without compromising the overall clean and robust accuracies. Extensive experiments on various image datasets validate the superior performance and efficiency of the proposed FAAL compared to other state-of-the-art methods.
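One way to picture the "worst-case distribution across classes" described above is to weight per-class adversarial losses by a soft worst-case distribution on the probability simplex. The temperature and aggregation below are illustrative assumptions, not the paper's exact inner solver.

```python
# Minimal sketch of weighting per-class losses by a worst-case class distribution,
# the distributionally-robust flavour described in the abstract above.
import torch

def worst_case_class_weights(per_class_loss: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Higher-loss classes get more weight; as tau -> 0 this approaches the hard max."""
    return torch.softmax(per_class_loss / tau, dim=0)

def fairness_aware_loss(sample_losses: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """sample_losses: per-sample (e.g. adversarial) losses. Average them per class,
    then weight the class means by the worst-case distribution."""
    per_class = []
    for c in range(num_classes):
        mask = labels == c
        per_class.append(sample_losses[mask].mean() if mask.any() else sample_losses.new_zeros(()))
    per_class = torch.stack(per_class)
    weights = worst_case_class_weights(per_class.detach())  # weights are not back-propagated through
    return (weights * per_class).sum()

# Toy usage.
losses = torch.rand(32, requires_grad=True)
labels = torch.randint(0, 10, (32,))
total = fairness_aware_loss(losses, labels, num_classes=10)
total.backward()
```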
[]
[]
[]
[]
481
482
LoSh: Long-Short Text Joint Prediction Network for Referring Video Object Segmentation
http://arxiv.org/abs/2306.08736
Linfeng Yuan, Miaojing Shi, Zijie Yue, Qijun Chen
2,306.08736
Referring video object segmentation (RVOS) aims to segment the target instance referred by a given text expression in a video clip. The text expression normally contains a sophisticated description of the instance's appearance, action and relation with others. It is therefore rather difficult for an RVOS model to capture all these attributes correspondingly in the video; in fact, the model often favours the action- and relation-related visual attributes of the instance. This can end up with partial or even incorrect mask prediction of the target instance. We tackle this problem by taking a subject-centric short text expression from the original long text expression. The short one retains only the appearance-related information of the target instance, so that we can use it to focus the model's attention on the instance's appearance. We let the model make joint predictions using both the long and short text expressions; we insert a long-short cross-attention module to interact the joint features and a long-short prediction intersection loss to regulate the joint predictions. Besides the improvement on the linguistic part, we also introduce a forward-backward visual consistency loss, which utilizes optical flows to warp visual features between the annotated frames and their temporal neighbors for consistency. We build our method on top of two state-of-the-art pipelines. Extensive experiments on A2D-Sentences, Refer-YouTube-VOS, JHMDB-Sentences and Refer-DAVIS17 show impressive improvements of our method. Code is available here.
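One plausible form of the long-short prediction consistency described above is a soft-Dice agreement term between the mask predicted from the full expression and the mask predicted from its subject-centric short version. The exact intersection loss in the paper may differ; this is an illustrative sketch only.

```python
# Minimal sketch of a consistency term between long- and short-expression masks.
import torch

def soft_dice(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """p, q: (B, H, W) per-pixel probabilities in [0, 1]."""
    inter = (p * q).sum(dim=(-2, -1))
    total = p.sum(dim=(-2, -1)) + q.sum(dim=(-2, -1))
    return (2 * inter + eps) / (total + eps)

def long_short_intersection_loss(mask_long: torch.Tensor, mask_short: torch.Tensor) -> torch.Tensor:
    """Penalise disagreement between the two joint predictions."""
    return 1.0 - soft_dice(mask_long, mask_short).mean()

# Toy usage.
pred_long = torch.rand(2, 64, 64)
pred_short = torch.rand(2, 64, 64)
loss = long_short_intersection_loss(pred_long, pred_short)
```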
[]
[]
[]
[]
482
483
MirageRoom: 3D Scene Segmentation with 2D Pre-trained Models by Mirage Projection
Haowen Sun, Yueqi Duan, Juncheng Yan, Yifan Liu, Jiwen Lu
null
Nowadays, leveraging 2D images and pre-trained models to guide 3D point cloud feature representation has shown remarkable potential to boost the performance of 3D fundamental models. While some works rely on additional data such as 2D real-world images and their corresponding camera poses, recent studies target using point clouds exclusively by designing 3D-to-2D projections. However, in the indoor scene scenario, existing 3D-to-2D projection strategies suffer from severe occlusions and incoherence, and thus fail to contain sufficient information for the fine-grained point cloud segmentation task. In this paper we argue that the crux of the matter resides in the basic premise of existing projection strategies: that the medium is homogeneous, so projection rays propagate along straight lines and objects behind are occluded by those in front. Inspired by the phenomenon of mirage, where occluded objects are exposed by light rays distorted by a heterogeneous medium refraction rate, we propose MirageRoom, designing a parametric mirage projection with a heterogeneous medium to obtain a series of projected images with various degrees of distortion. We further develop a masked reprojection module across the 2D and 3D latent spaces to bridge the gap between the pre-trained 2D backbone and 3D point-wise features. Both quantitative and qualitative experimental results on S3DIS and ScanNet V2 demonstrate the effectiveness of our method.
[]
[]
[]
[]
483
484
In2SET: Intra-Inter Similarity Exploiting Transformer for Dual-Camera Compressive Hyperspectral Imaging
http://arxiv.org/abs/2312.13319
Xin Wang, Lizhi Wang, Xiangtian Ma, Maoqing Zhang, Lin Zhu, Hua Huang
2,312.13319
Dual-camera compressive hyperspectral imaging (DCCHI) offers the capability to reconstruct 3D hyperspectral image (HSI) by fusing compressive and panchromatic (PAN) image which has shown great potential for snapshot hyperspectral imaging in practice. In this paper we introduce a novel DCCHI reconstruction network intra-inter similarity exploiting Transformer (In2SET). Our key insight is to make full use of the PAN image to assist the reconstruction. To this end we propose to use the intra-similarity within the PAN image as a proxy for approximating the intra-similarity in the original HSI thereby offering an enhanced content prior for more accurate HSI reconstruction. Furthermore we propose to use the inter-similarity to align the features between HSI and PAN images thereby maintaining semantic consistency between the two modalities during the reconstruction process. By integrating In2SET into a PAN-guided deep unrolling (PGDU) framework our method substantially enhances the spatial-spectral fidelity and detail of the reconstructed images providing a more comprehensive and accurate depiction of the scene. Experiments conducted on both real and simulated datasets demonstrate that our approach consistently outperforms existing state-of-the-art methods in terms of reconstruction quality and computational complexity. The code is available at https://github.com/2JONAS/In2SET.
[]
[]
[]
[]
484
485
Dual Prototype Attention for Unsupervised Video Object Segmentation
http://arxiv.org/abs/2211.12036
Suhwan Cho, Minhyeok Lee, Seunghoon Lee, Dogyoon Lee, Heeseung Choi, Ig-Jae Kim, Sangyoun Lee
2,211.12036
Unsupervised video object segmentation (VOS) aims to detect and segment the most salient object in videos. The primary techniques used in unsupervised VOS are 1) the collaboration of appearance and motion information; and 2) temporal fusion between different frames. This paper proposes two novel prototype-based attention mechanisms inter-modality attention (IMA) and inter-frame attention (IFA) to incorporate these techniques via dense propagation across different modalities and frames. IMA densely integrates context information from different modalities based on a mutual refinement. IFA injects global context of a video to the query frame enabling a full utilization of useful properties from multiple frames. Experimental results on public benchmark datasets demonstrate that our proposed approach outperforms all existing methods by a substantial margin. The proposed two components are also thoroughly validated via ablative study.
[]
[]
[]
[]
485
486
Look-Up Table Compression for Efficient Image Restoration
Yinglong Li, Jiacheng Li, Zhiwei Xiong
null
Look-Up Table (LUT) has recently gained increasing attention for restoring High-Quality (HQ) images from Low-Quality (LQ) observations thanks to its high computational efficiency achieved through a "space for time" strategy of caching learned LQ-HQ pairs. However incorporating multiple LUTs for improved performance comes at the cost of a rapidly growing storage size which is ultimately restricted by the allocatable on-device cache size. In this work we propose a novel LUT compression framework to achieve a better trade-off between storage size and performance for LUT-based image restoration models. Based on the observation that most cached LQ image patches are distributed along the diagonal of a LUT we devise a Diagonal-First Compression (DFC) framework where diagonal LQ-HQ pairs are preserved and carefully re-indexed to maintain the representation capacity while non-diagonal pairs are aggressively subsampled to save storage. Extensive experiments on representative image restoration tasks demonstrate that our DFC framework significantly reduces the storage size of LUT-based models (including our new design) while maintaining their performance. For instance DFC saves up to 90% of storage at a negligible performance drop for x4 super-resolution. The source code is available on GitHub: https://github.com/leenas233/DFC.
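The storage arithmetic behind the diagonal-first idea above can be shown with a toy 2D LUT: keep the diagonal entries (similar input values) exactly and subsample the rest. The index layout, subsampling factor and lookup rule are illustrative assumptions, not the paper's actual DFC scheme.

```python
# Toy illustration of keeping diagonal LUT entries exactly and subsampling the rest.
import numpy as np

L = 256          # input levels per dimension
S = 4            # subsampling factor for off-diagonal entries
full_lut = np.random.randint(0, 256, size=(L, L), dtype=np.uint8)  # pretend 2D LUT

diag_band = np.diag(full_lut).copy()        # kept exactly
coarse = full_lut[::S, ::S].copy()          # subsampled remainder

def lookup(i: int, j: int) -> int:
    """Exact value on the diagonal, nearest subsampled value elsewhere."""
    if i == j:
        return int(diag_band[i])
    return int(coarse[i // S, j // S])

original_bytes = full_lut.nbytes
compressed_bytes = diag_band.nbytes + coarse.nbytes
print(f"storage: {original_bytes} -> {compressed_bytes} bytes "
      f"({100 * (1 - compressed_bytes / original_bytes):.1f}% saved)")
```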
[]
[]
[]
[]
486
487
TextNeRF: A Novel Scene-Text Image Synthesis Method based on Neural Radiance Fields
Jialei Cui, Jianwei Du, Wenzhuo Liu, Zhouhui Lian
null
Acquiring large-scale well-annotated datasets is essential for training robust scene text detectors yet the process is often resource-intensive and time-consuming. While some efforts have been made to explore the synthesis of scene text images a notable gap remains between synthetic and authentic data. In this paper we introduce a novel method that utilizes Neural Radiance Fields (NeRF) to model real-world scenes and emulate the data collection process by rendering images from diverse camera perspectives enriching the variability and realism of the synthesized data. A semi-supervised learning framework is proposed to categorize semantic regions within 3D scenes ensuring consistent labeling of text regions across various viewpoints. Our method also models the pose and view-dependent appearance of text regions thereby offering precise control over camera poses and significantly improving the realism of text insertion and editing within scenes. Employing our technique on real-world scenes has led to the creation of a novel scene text image dataset. Compared to other existing benchmarks the proposed dataset is distinctive in providing not only standard annotations such as bounding boxes and transcriptions but also the information of 3D pose attributes for text regions enabling a more detailed evaluation of the robustness of text detection algorithms. Through extensive experiments we demonstrate the effectiveness of our proposed method in enhancing the performance of scene text detectors.
[]
[]
[]
[]
487
488
Dr.Hair: Reconstructing Scalp-Connected Hair Strands without Pre-Training via Differentiable Rendering of Line Segments
Yusuke Takimoto, Hikari Takehara, Hiroyuki Sato, Zihao Zhu, Bo Zheng
null
In the film and gaming industries, achieving a realistic hair appearance typically involves the use of strands originating from the scalp. However, reconstructing these strands from observed surface images of hair presents significant challenges. The difficulty in acquiring Ground Truth (GT) data has led state-of-the-art learning-based methods to rely on pre-training with manually prepared synthetic CG data. This process is not only labor-intensive and costly but also introduces complications due to the domain gap when compared to real-world data. In this study, we propose an optimization-based approach that eliminates the need for pre-training. Our method represents hair strands as line segments growing from the scalp and optimizes them using a novel differentiable rendering algorithm. To robustly optimize a substantial number of slender explicit geometries, we introduce 3D orientation estimation utilizing global optimization, strand initialization based on Laplace's equation, and reparameterization that leverages geometric connectivity and spatial proximity. Unlike existing optimization-based methods, our method is capable of reconstructing internal hair flow in an absolute direction. Our method exhibits robust and accurate inverse rendering, surpassing the quality of existing methods and significantly improving processing speed.
[]
[]
[]
[]
488
489
Improving Training Efficiency of Diffusion Models via Multi-Stage Framework and Tailored Multi-Decoder Architecture
Huijie Zhang, Yifu Lu, Ismail Alkhouri, Saiprasad Ravishankar, Dogyoon Song, Qing Qu
null
Diffusion models, emerging as powerful deep generative tools, excel in various applications. They operate through a two-step process: introducing noise into training samples and then employing a model to convert random noise into new samples (e.g. images). However, their remarkable generative performance is hindered by slow training and sampling. This is due to the necessity of tracking extensive forward and reverse diffusion trajectories and employing a large model with numerous parameters across multiple timesteps (i.e. noise levels). To tackle these challenges, we present a multi-stage framework inspired by our empirical findings. These observations indicate the advantages of employing distinct parameters tailored to each timestep while retaining universal parameters shared across all timesteps. Our approach involves segmenting the time interval into multiple stages, where we employ a custom multi-decoder U-Net architecture that blends time-dependent models with a universally shared encoder. Our framework enables the efficient distribution of computational resources and mitigates inter-stage interference, which substantially improves training efficiency. Extensive numerical experiments affirm the effectiveness of our framework, showcasing significant training and sampling efficiency enhancements on three state-of-the-art diffusion models, including large-scale latent diffusion models. Furthermore, our ablation studies illustrate the impact of two important components in our framework: (i) a novel timestep clustering algorithm for stage division and (ii) an innovative multi-decoder U-Net architecture seamlessly integrating universal and customized hyperparameters.
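The "shared encoder, per-stage decoder" idea described above can be sketched by routing each sample to a decoder chosen from its timestep. The uniform stage boundaries and tiny layers below are illustrative assumptions, not the paper's architecture or its timestep-clustering algorithm.

```python
# Minimal sketch of a denoiser with a universal encoder and stage-specific decoders.
import torch
import torch.nn as nn

class MultiStageDenoiser(nn.Module):
    def __init__(self, channels=64, num_stages=3, num_timesteps=1000):
        super().__init__()
        self.num_timesteps = num_timesteps
        self.num_stages = num_stages
        # Universal parameters shared across all timesteps.
        self.encoder = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.SiLU())
        # Stage-specific parameters.
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.SiLU(),
                          nn.Conv2d(channels, 3, 3, padding=1))
            for _ in range(num_stages)
        )

    def stage_of(self, t: torch.Tensor) -> torch.Tensor:
        # Uniform split of the time axis; a clustering algorithm would pick these
        # boundaries in the paper.
        return torch.clamp(t * self.num_stages // self.num_timesteps, max=self.num_stages - 1)

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        h = self.encoder(x_t)
        out = torch.zeros_like(x_t)
        stages = self.stage_of(t)
        for s in range(self.num_stages):
            mask = stages == s
            if mask.any():
                out[mask] = self.decoders[s](h[mask])  # route samples to their stage decoder
        return out

# Toy usage.
model = MultiStageDenoiser()
x = torch.randn(4, 3, 32, 32)
t = torch.randint(0, 1000, (4,))
pred = model(x, t)
```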
[]
[]
[]
[]
489
490
In-Context Matting
http://arxiv.org/abs/2403.15789
He Guo, Zixuan Ye, Zhiguo Cao, Hao Lu
2,403.15789
We introduce in-context matting a novel task setting of image matting. Given a reference image of a certain foreground and guided priors such as points scribbles and masks in-context matting enables automatic alpha estimation on a batch of target images of the same foreground category without additional auxiliary input. This setting marries good performance in auxiliary input-based matting and ease of use in automatic matting which finds a good trade-off between customization and automation. To overcome the key challenge of accurate foreground matching we introduce IconMatting an in-context matting model built upon a pre-trained text-to-image diffusion model. Conditioned on inter- and intra-similarity matching IconMatting can make full use of reference context to generate accurate target alpha mattes. To benchmark the task we also introduce a novel testing dataset ICM-57 covering 57 groups of real-world images. Quantitative and qualitative results on the ICM-57 testing set show that IconMatting rivals the accuracy of trimap-based matting while retaining the automation level akin to automatic matting. Code is available at https://github.com/tiny-smart/in-context-matting.
[]
[]
[]
[]
490
491
Navigate Beyond Shortcuts: Debiased Learning Through the Lens of Neural Collapse
http://arxiv.org/abs/2405.05587
Yining Wang, Junjie Sun, Chenyue Wang, Mi Zhang, Min Yang
2,405.05587
Recent studies have noted an intriguing phenomenon termed Neural Collapse: when neural networks establish the right correlation between feature spaces and the training targets, their last-layer features, together with the classifier weights, collapse into a stable and symmetric structure. In this paper, we extend the investigation of Neural Collapse to biased datasets with imbalanced attributes. We observe that models easily fall into the pitfall of shortcut learning and form a biased, non-collapsed feature space in the early period of training, which is hard to reverse and limits the generalization capability. To tackle the root cause of biased classification, we follow the recent inspiration of prime training and propose an avoid-shortcut learning framework without additional training complexity. With well-designed shortcut primes based on the Neural Collapse structure, the models are encouraged to skip the pursuit of simple shortcuts and naturally capture the intrinsic correlations. Experimental results demonstrate that our method induces better convergence properties during training and achieves state-of-the-art generalization performance on both synthetic and real-world biased datasets.
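As background, the "stable and symmetric structure" referenced above is a simplex equiangular tight frame (ETF). The snippet constructs one for C classes and checks that all pairwise cosine similarities equal -1/(C-1). This is standard Neural Collapse background, not the paper's avoid-shortcut method.

```python
# Construct a simplex ETF and verify its equiangular property.
import torch

def simplex_etf(num_classes: int, dim: int) -> torch.Tensor:
    """Return a (num_classes, dim) matrix of ETF class vectors (requires dim >= num_classes)."""
    assert dim >= num_classes
    u, _ = torch.linalg.qr(torch.randn(dim, num_classes))            # orthonormal columns
    center = torch.eye(num_classes) - torch.ones(num_classes, num_classes) / num_classes
    m = (num_classes / (num_classes - 1)) ** 0.5 * (u @ center)      # (dim, C)
    return m.t()                                                     # (C, dim)

C, D = 10, 128
etf = simplex_etf(C, D)
unit = torch.nn.functional.normalize(etf, dim=1)
cos = unit @ unit.t()
off_diag = cos[~torch.eye(C, dtype=torch.bool)]
print(off_diag.min().item(), off_diag.max().item())  # both approximately -1/(C-1) = -0.111
```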
[]
[]
[]
[]
491
492
DiVa-360: The Dynamic Visual Dataset for Immersive Neural Fields
Cheng-You Lu, Peisen Zhou, Angela Xing, Chandradeep Pokhariya, Arnab Dey, Ishaan Nikhil Shah, Rugved Mavidipalli, Dylan Hu, Andrew I. Comport, Kefan Chen, Srinath Sridhar
null
Advances in neural fields are enabling high-fidelity capture of the shape and appearance of dynamic 3D scenes. However, their capabilities lag behind those offered by conventional representations such as 2D videos because of algorithmic challenges and the lack of large-scale multi-view real-world datasets. We address the dataset limitation with DiVa-360, a real-world 360° dynamic visual dataset that contains synchronized high-resolution and long-duration multi-view video sequences of table-scale scenes captured using a customized low-cost system with 53 cameras. It contains 21 object-centric sequences categorized by different motion types, 25 intricate hand-object interaction sequences, and 8 long-duration sequences, for a total of 17.4M image frames. In addition, we provide foreground-background segmentation masks, synchronized audio, and text descriptions. We benchmark the state-of-the-art dynamic neural field methods on DiVa-360 and provide insights about existing methods and future challenges on long-duration neural field capture.
[]
[]
[]
[]
492
493
A Subspace-Constrained Tyler's Estimator and its Applications to Structure from Motion
Feng Yu, Teng Zhang, Gilad Lerman
null
We present the subspace-constrained Tyler's estimator (STE) designed for recovering a low-dimensional subspace within a dataset that may be highly corrupted with outliers. STE is a fusion of the Tyler's M-estimator (TME) and a variant of the fast median subspace. Our theoretical analysis suggests that under a common inlier-outlier model STE can effectively recover the underlying subspace even when it contains a smaller fraction of inliers relative to other methods in the field of robust subspace recovery. We apply STE in the context of Structure from Motion (SfM) in two ways: for robust estimation of the fundamental matrix and for the removal of outlying cameras enhancing the robustness of the SfM pipeline. Numerical experiments confirm the state-of-the-art performance of our method in these applications. This research makes significant contributions to the field of robust subspace recovery particularly in the context of computer vision and 3D reconstruction.
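As background for the estimator this abstract builds on, the sketch below runs the classic Tyler's M-estimator fixed-point iteration: a robust scatter estimate that uses only sample directions, down-weighting points far from the current scatter. This is the vanilla TME, not the subspace-constrained variant proposed in the paper; the toy data and iteration settings are illustrative.

```python
# Background sketch of Tyler's M-estimator (TME) for robust scatter estimation.
import numpy as np

def tyler_m_estimator(X: np.ndarray, n_iters: int = 100, tol: float = 1e-8) -> np.ndarray:
    """X: (n_samples, dim) centred data. Returns a (dim, dim) scatter matrix
    with trace normalised to dim."""
    n, d = X.shape
    sigma = np.eye(d)
    for _ in range(n_iters):
        inv = np.linalg.inv(sigma)
        # Mahalanobis-type weights: points far from the current scatter are down-weighted.
        w = 1.0 / np.einsum("ij,jk,ik->i", X, inv, X)
        new_sigma = (d / n) * (X * w[:, None]).T @ X
        new_sigma *= d / np.trace(new_sigma)          # fix the scale ambiguity
        if np.linalg.norm(new_sigma - sigma) < tol:
            return new_sigma
        sigma = new_sigma
    return sigma

# Toy usage: noisy inliers near a 2D subspace of R^5 plus a few gross outliers.
rng = np.random.default_rng(0)
inliers = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(200, 5))
outliers = 10 * rng.normal(size=(20, 5))
scatter = tyler_m_estimator(np.vstack([inliers, outliers]))
# The smallest eigenvectors of `scatter` roughly span the subspace's orthogonal complement.
```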
[]
[]
[]
[]
493
494
FSC: Few-point Shape Completion
http://arxiv.org/abs/2403.07359
Xianzu Wu, Xianfeng Wu, Tianyu Luan, Yajing Bai, Zhongyuan Lai, Junsong Yuan
2,403.07359
While previous studies have demonstrated successful 3D object shape completion with a sufficient number of points, they often fail in scenarios where only a few points, e.g. tens of points, are observed. Surprisingly, via entropy analysis, we find that even a few points, e.g. 64 points, can retain substantial information to help recover the 3D shape of the object. To address the challenge of shape completion with very sparse point clouds, we then propose the Few-point Shape Completion (FSC) model, which contains a novel dual-branch feature extractor for handling extremely sparse inputs, coupling an extensive branch for maximal point utilization with a saliency branch for dynamic importance assignment. This model is further bolstered by a two-stage revision network that refines both the extracted features and the decoder output, enhancing the detail and authenticity of the completed point cloud. Our experiments demonstrate the feasibility of recovering 3D shapes from a few points. The proposed Few-point Shape Completion (FSC) model outperforms previous methods on both few-point inputs and many-point inputs, and shows good generalizability to different object categories.
[]
[]
[]
[]
494
495
CAD: Photorealistic 3D Generation via Adversarial Distillation
http://arxiv.org/abs/2312.06663
Ziyu Wan, Despoina Paschalidou, Ian Huang, Hongyu Liu, Bokui Shen, Xiaoyu Xiang, Jing Liao, Leonidas Guibas
2,312.06663
The increased demand for 3D data in AR/VR robotics and gaming applications gave rise to powerful generative pipelines capable of synthesizing high-quality 3D objects. Most of these models rely on the Score Distillation Sampling (SDS) algorithm to optimize a 3D representation such that the rendered image maintains a high likelihood as evaluated by a pre-trained diffusion model. However this distillation process involves finding a correct mode in the high-dimensional and large-variance distribution produced by the diffusion model. This task is challenging and often leads to issues such as over-saturation over-smoothing and Janus-like artifacts in the 3D generation. In this paper we propose a novel learning paradigm for 3D synthesis that utilizes pre-trained diffusion models. Instead of focusing on mode-seeking our method directly models the distribution discrepancy between multi-view renderings and diffusion priors in an adversarial manner which unlocks the generation of high-fidelity and photorealistic 3D content conditioned on a single image and prompt. Moreover by harnessing the latent space of GANs and expressive diffusion model priors our method enables a wide variety of 3D applications including single-view reconstruction high diversity generation and continuous 3D interpolation in open domain. Our experiments demonstrate the superiority of our pipeline compared to previous works in terms of generation quality and diversity.
[]
[]
[]
[]
495
496
Enhancing Vision-Language Pre-training with Rich Supervisions
http://arxiv.org/abs/2403.03346
Yuan Gao, Kunyu Shi, Pengkai Zhu, Edouard Belval, Oren Nuriel, Srikar Appalaraju, Shabnam Ghadar, Zhuowen Tu, Vijay Mahadevan, Stefano Soatto
2,403.03346
We propose Strongly Supervised pre-training with ScreenShots (S4) - a novel pre-training paradigm for Vision-Language Models using data from large-scale web screenshot rendering. Using web screenshots unlocks a treasure trove of visual and textual cues that are not present in using image-text pairs. In S4 we leverage the inherent tree-structured hierarchy of HTML elements and the spatial localization to carefully design 10 pre-training tasks with large scale annotated data. These tasks resemble downstream tasks across different domains and the annotations are cheap to obtain. We demonstrate that compared to current screenshot pre-training objectives our innovative pre-training method significantly enhances performance of image-to-text model in nine varied and popular downstream tasks - up to 76.1% improvements on Table Detection and at least 1% on Widget Captioning.
[]
[]
[]
[]
496
497
T-VSL: Text-Guided Visual Sound Source Localization in Mixtures
Tanvir Mahmud, Yapeng Tian, Diana Marculescu
null
Visual sound source localization poses a significant challenge in identifying the semantic region of each sounding source within a video. Existing self-supervised and weakly supervised source localization methods struggle to accurately distinguish the semantic regions of each sounding object particularly in multi-source mixtures. These methods often rely on audio-visual correspondence as guidance which can lead to substantial performance drops in complex multi-source localization scenarios. The lack of access to individual source sounds in multi-source mixtures during training exacerbates the difficulty of learning effective audio-visual correspondence for localization. To address this limitation in this paper we propose incorporating the text modality as an intermediate feature guide using tri-modal joint embedding models (e.g. AudioCLIP) to disentangle the semantic audio-visual source correspondence in multi-source mixtures. Our framework dubbed T-VSL begins by predicting the class of sounding entities in mixtures. Subsequently the textual representation of each sounding source is employed as guidance to disentangle fine-grained audio-visual source correspondence from multi-source mixtures leveraging the tri-modal AudioCLIP embedding. This approach enables our framework to handle a flexible number of sources and exhibits promising zero-shot transferability to unseen classes during test time. Extensive experiments conducted on the MUSIC VGGSound and VGGSound-Instruments datasets demonstrate significant performance improvements over state-of-the-art methods. Code is released at https://github.com/enyac-group/T-VSL/tree/main.
[]
[]
[]
[]
497
498
DemoCaricature: Democratising Caricature Generation with a Rough Sketch
http://arxiv.org/abs/2312.04364
Dar-Yen Chen, Ayan Kumar Bhunia, Subhadeep Koley, Aneeshan Sain, Pinaki Nath Chowdhury, Yi-Zhe Song
2,312.04364
In this paper we democratise caricature generation empowering individuals to effortlessly craft personalised caricatures with just a photo and a conceptual sketch. Our objective is to strike a delicate balance between abstraction and identity while preserving the creativity and subjectivity inherent in a sketch. To achieve this we present Explicit Rank-1 Model Editing alongside single-image personalisation selectively applying nuanced edits to cross-attention layers for a seamless merge of identity and style. Additionally we propose Random Mask Reconstruction to enhance robustness directing the model to focus on distinctive identity and style features. Crucially our aim is not to replace artists but to eliminate accessibility barriers allowing enthusiasts to engage in the artistry.
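As background for the rank-1 model editing mentioned above, the sketch below applies a rank-1 update W' = W + u vᵀ to a linear projection so that a chosen input direction maps to a chosen output while other directions are largely untouched. The way u and v are chosen here is a generic illustration, not the paper's Explicit Rank-1 Model Editing rule.

```python
# Generic rank-1 edit of a linear (e.g. cross-attention projection) weight.
import torch

def rank_one_edit(W: torch.Tensor, key: torch.Tensor, new_value: torch.Tensor) -> torch.Tensor:
    """W: (out, in). After the edit, W' @ key_hat == new_value for the unit-normalised key."""
    key = key / key.norm()
    residual = new_value - W @ key          # what the current layer gets wrong for this key
    return W + torch.outer(residual, key)   # rank-1 update

# Toy usage on a stand-in projection matrix.
W = torch.randn(8, 16)
key = torch.randn(16)
target = torch.randn(8)
W_edited = rank_one_edit(W, key, target)
print(torch.allclose(W_edited @ (key / key.norm()), target, atol=1e-5))  # True
```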
[]
[]
[]
[]
498
499
CapHuman: Capture Your Moments in Parallel Universes
http://arxiv.org/abs/2402.00627
Chao Liang, Fan Ma, Linchao Zhu, Yingying Deng, Yi Yang
2,402.00627
We concentrate on a novel human-centric image synthesis task: given only one reference facial photograph, a model is expected to generate specific individual images with diverse head positions, poses, facial expressions and illuminations in different contexts. To accomplish this goal, we argue that our generative model should be capable of the following favorable characteristics: (1) a strong visual and semantic understanding of our world and human society for basic object and human image generation; (2) generalizable identity preservation ability; (3) flexible and fine-grained head control. Recently, large pre-trained text-to-image diffusion models have shown remarkable results, serving as a powerful generative foundation. As a basis, we aim to unleash the above two capabilities of the pre-trained model. In this work, we present a new framework named CapHuman. We embrace the "encode then learn to align" paradigm, which enables generalizable identity preservation for new individuals without cumbersome tuning at inference. CapHuman encodes identity features and then learns to align them into the latent space. Moreover, we introduce the 3D facial prior to equip our model with control over the human head in a flexible and 3D-consistent manner. Extensive qualitative and quantitative analyses demonstrate that our CapHuman can produce well-identity-preserved, photo-realistic and high-fidelity portraits with content-rich representations and various head renditions, superior to established baselines. Code and checkpoint will be released at https://github.com/VamosC/CapHuman.
[]
[]
[]
[]
499