Column schema (with the dataset viewer's summary statistics):
Unnamed: 0 — int64, range 0 to 2.72k
title — string, length 14 to 153
Arxiv link — string, length 1 to 31
authors — string, length 5 to 1.5k
arxiv_id — float64, range 2k to 2.41k
abstract — string, length 435 to 2.86k
Model — string, 1 class
GitHub — string, 1 class
Space — string, 1 class
Dataset — string, 1 class
id — int64, range 0 to 2.72k
200
Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis
http://arxiv.org/abs/2403.01439
Xin Zhou, Dingkang Liang, Wei Xu, Xingkui Zhu, Yihan Xu, Zhikang Zou, Xiang Bai
2403.01439
Point cloud analysis has achieved outstanding performance by transferring point cloud pre-trained models. However, existing methods for model adaptation usually update all model parameters, i.e., the full fine-tuning paradigm, which is inefficient as it relies on high computational costs (e.g., training GPU memory) and massive storage space. In this paper, we aim to study parameter-efficient transfer learning for point cloud analysis with an ideal trade-off between task performance and parameter efficiency. To achieve this goal, we freeze the parameters of the default pre-trained models and then propose the Dynamic Adapter, which generates a dynamic scale for each token, considering the token significance to the downstream task. We further seamlessly integrate Dynamic Adapter with Prompt Tuning (DAPT) by constructing Internal Prompts, capturing the instance-specific features for interaction. Extensive experiments conducted on five challenging datasets demonstrate that the proposed DAPT achieves superior performance compared to the full fine-tuning counterparts while significantly reducing the trainable parameters and training GPU memory by 95% and 35%, respectively. Code is available at https://github.com/LMD0311/DAPT. (A rough sketch of the per-token dynamic scaling idea follows this record.)
[]
[]
[]
[]
200
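A minimal sketch of the "dynamic scale for each token" idea from the DAPT entry above, written as a generic PyTorch adapter; the module name, bottleneck width, and sigmoid gate are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DynamicAdapter(nn.Module):
    """Bottleneck adapter whose residual is re-weighted per token by a learned gate."""
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)   # only these small layers are trained;
        self.up = nn.Linear(bottleneck, dim)     # the pre-trained backbone stays frozen
        self.act = nn.GELU()
        self.gate = nn.Linear(dim, 1)            # predicts a per-token significance scale

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim) from a frozen pre-trained point-cloud transformer
        scale = torch.sigmoid(self.gate(tokens))          # (B, N, 1), one scale per token
        residual = self.up(self.act(self.down(tokens)))   # standard bottleneck adapter path
        return tokens + scale * residual                  # dynamically weighted residual

x = torch.randn(2, 128, 384)          # e.g. 128 point tokens of width 384
print(DynamicAdapter(384)(x).shape)   # torch.Size([2, 128, 384])
```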
201
Exploring Vision Transformers for 3D Human Motion-Language Models with Motion Patches
http://arxiv.org/abs/2405.04771
Qing Yu, Mikihiro Tanaka, Kent Fujiwara
2405.04771
To build a cross-modal latent space between 3D human motion and language, acquiring large-scale and high-quality human motion data is crucial. However, unlike the abundance of image data, the scarcity of motion data has limited the performance of existing motion-language models. To counter this, we introduce "motion patches", a new representation of motion sequences, and propose using Vision Transformers (ViT) as motion encoders via transfer learning, aiming to extract useful knowledge from the image domain and apply it to the motion domain. These motion patches, created by dividing and sorting skeleton joints based on body parts in motion sequences, are robust to varying skeleton structures and can be regarded as color image patches in ViT. We find that transfer learning with pre-trained weights of ViT obtained through training with 2D image data can boost the performance of motion analysis, presenting a promising direction for addressing the issue of limited motion data. Our extensive experiments show that the proposed motion patches, used jointly with ViT, achieve state-of-the-art performance in the benchmarks of text-to-motion retrieval and other novel challenging tasks, such as cross-skeleton recognition, zero-shot motion classification, and human interaction recognition, which are currently impeded by the lack of data.
[]
[]
[]
[]
201
202
Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring
http://arxiv.org/abs/2404.13153
Chengxu Liu, Xuan Wang, Xiangyu Xu, Ruhao Tian, Shuai Li, Xueming Qian, Ming-Hsuan Yang
2404.13153
Eliminating image blur produced by various kinds of motion has been a challenging problem. Dominant approaches rely heavily on model capacity to remove blurring by reconstructing residual from blurry observation in feature space. These practices not only prevent the capture of spatially variable motion in the real world but also ignore the tailored handling of various motions in image space. In this paper, we propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative (MISC) Filter. In particular, we use a motion estimation network to capture motion information from neighborhoods, thereby adaptively estimating spatially-variant motion flow, mask, kernels, weights, and offsets to obtain the MISC Filter. The MISC Filter first aligns the motion-induced blurring patterns to the motion middle along the predicted flow direction, and then collaboratively filters the aligned image through the predicted kernels, weights, and offsets to generate the output. This design can handle more generalized and complex motion in a spatially differentiated manner. Furthermore, we analyze the relationships between the motion estimation network and the residual reconstruction network. Extensive experiments on four widely used benchmarks demonstrate that our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance. Code is available at https://github.com/ChengxuLiu/MISCFilter.
[]
[]
[]
[]
202
203
DART: Implicit Doppler Tomography for Radar Novel View Synthesis
http://arxiv.org/abs/2403.03896
Tianshu Huang, John Miller, Akarsh Prabhakara, Tao Jin, Tarana Laroia, Zico Kolter, Anthony Rowe
2403.03896
Simulation is an invaluable tool for radio-frequency system designers that enables rapid prototyping of various algorithms for imaging, target detection, classification, and tracking. However, simulating realistic radar scans is a challenging task that requires an accurate model of the scene, radio frequency material properties, and a corresponding radar synthesis function. Rather than specifying these models explicitly, we propose DART - Doppler Aided Radar Tomography, a Neural Radiance Field-inspired method which uses radar-specific physics to create a reflectance and transmittance-based rendering pipeline for range-Doppler images. We then evaluate DART by constructing a custom data collection platform and collecting a novel radar dataset together with accurate position and instantaneous velocity measurements from lidar-based localization. In comparison to state-of-the-art baselines, DART synthesizes superior radar range-Doppler images from novel views across all datasets and additionally can be used to generate high-quality tomographic images.
[]
[]
[]
[]
203
204
Wonder3D: Single Image to 3D using Cross-Domain Diffusion
http://arxiv.org/abs/2310.15008
Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, Wenping Wang
2310.15008
In this work we introduce Wonder3D, a novel method for generating high-fidelity textured meshes from single-view images with remarkable efficiency. Recent methods based on the Score Distillation Sampling (SDS) loss have shown the potential to recover 3D geometry from 2D diffusion priors, but they typically suffer from time-consuming per-shape optimization and inconsistent geometry. In contrast, certain works directly produce 3D information via fast network inferences, but their results are often of low quality and lack geometric details. To holistically improve the quality, consistency, and efficiency of image-to-3D tasks, we propose a cross-domain diffusion model that generates multi-view normal maps and the corresponding color images. To ensure consistency, we employ a multi-view cross-domain attention mechanism that facilitates information exchange across views and modalities. Lastly, we introduce a geometry-aware normal fusion algorithm that extracts high-quality surfaces from the multi-view 2D representations in only 2 to 3 minutes. Our extensive evaluations demonstrate that our method achieves high-quality reconstruction results, robust generalization, and remarkable efficiency compared to prior works.
[]
[]
[]
[]
204
205
Genuine Knowledge from Practice: Diffusion Test-Time Adaptation for Video Adverse Weather Removal
http://arxiv.org/abs/2403.07684
Yijun Yang, Hongtao Wu, Angelica I. Aviles-Rivero, Yulun Zhang, Jing Qin, Lei Zhu
2403.07684
Real-world vision tasks frequently suffer from the appearance of unexpected adverse weather conditions, including rain, haze, snow, and raindrops. In the last decade, convolutional neural networks and vision transformers have yielded outstanding results in single-weather video removal. However, due to the absence of appropriate adaptation, most of them fail to generalize to other weather conditions. Although ViWS-Net is proposed to remove adverse weather conditions in videos with a single set of pre-trained weights, it is seriously blinded by seen weather at train-time and degenerates when coming to unseen weather during test-time. In this work, we introduce test-time adaptation into adverse weather removal in videos and propose the first framework that integrates test-time adaptation into the iterative diffusion reverse process. Specifically, we devise a diffusion-based network with a novel temporal noise model to efficiently explore frame-correlated information in degraded video clips at the training stage. During the inference stage, we introduce a proxy task named Diffusion Tubelet Self-Calibration to learn the primer distribution of the test video stream and optimize the model by approximating the temporal noise model for online adaptation. Experimental results on benchmark datasets demonstrate that our Test-Time Adaptation method with Diffusion-based network (Diff-TTA) outperforms state-of-the-art methods in terms of restoring videos degraded by seen weather conditions. Its generalizable capability is validated with unseen weather conditions in synthesized and real-world videos.
[]
[]
[]
[]
205
206
Gradient-based Parameter Selection for Efficient Fine-Tuning
Zhi Zhang, Qizhe Zhang, Zijun Gao, Renrui Zhang, Ekaterina Shutova, Shiji Zhou, Shanghang Zhang
null
With the growing size of pre-trained models, full fine-tuning and storing all the parameters for various downstream tasks is costly and infeasible. In this paper, we propose a new parameter-efficient fine-tuning method, Gradient-based Parameter Selection (GPS), demonstrating that only tuning a few selected parameters from the pre-trained model, while keeping the remainder of the model frozen, can generate similar or better performance compared with the full model fine-tuning method. Different from the existing popular and state-of-the-art parameter-efficient fine-tuning approaches, our method does not introduce any additional parameters and computational costs during both the training and inference stages. Another advantage is the model-agnostic and non-destructive property, which eliminates the need for any other design specific to a particular model. Compared with the full fine-tuning, GPS achieves 3.33% (91.78% vs. 88.45%, FGVC) and 9.61% (73.1% vs. 65.57%, VTAB) improvement of the accuracy with tuning only 0.36% parameters of the pre-trained model on average over 24 image classification tasks; it also demonstrates a significant improvement of 17% and 16.8% in mDice and mIoU, respectively, on the medical image segmentation task. Moreover, GPS achieves state-of-the-art performance compared with existing PEFT methods. The code will be available at https://github.com/FightingFighting/GPS.git. (A rough sketch of the gradient-based selection step follows this record.)
[]
[]
[]
[]
206
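A rough sketch of gradient-based parameter selection in the spirit of the GPS entry above: score parameters by gradient magnitude on a calibration batch, then keep only a small fraction trainable by masking the gradients of the rest. The per-tensor top-k choice and the 0.36% ratio are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

def build_gradient_masks(model: nn.Module, loss: torch.Tensor, keep_ratio: float = 0.0036):
    """Backprop one calibration loss and keep the largest-gradient entries per tensor."""
    loss.backward()
    masks = {}
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        k = max(1, int(keep_ratio * p.numel()))
        threshold = p.grad.abs().flatten().topk(k).values.min()
        masks[name] = (p.grad.abs() >= threshold).float()
    model.zero_grad()
    return masks

def apply_masks(model: nn.Module, masks: dict):
    # call after loss.backward() in each fine-tuning step: unselected entries get zero gradient
    for name, p in model.named_parameters():
        if name in masks and p.grad is not None:
            p.grad.mul_(masks[name])

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))
masks = build_gradient_masks(model, nn.functional.cross_entropy(model(x), y))
nn.functional.cross_entropy(model(x), y).backward()
apply_masks(model, masks)  # only the selected few entries keep their gradients
```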
207
Clustering for Protein Representation Learning
http://arxiv.org/abs/2404.00254
Ruijie Quan, Wenguan Wang, Fan Ma, Hehe Fan, Yi Yang
2404.00254
Protein representation learning is a challenging task that aims to capture the structure and function of proteins from their amino acid sequences. Previous methods largely ignored the fact that not all amino acids are equally important for protein folding and activity. In this article, we propose a neural clustering framework that can automatically discover the critical components of a protein by considering both its primary and tertiary structure information. Our framework treats a protein as a graph, where each node represents an amino acid and each edge represents a spatial or sequential connection between amino acids. We then apply an iterative clustering strategy to group the nodes into clusters based on their 1D and 3D positions and assign scores to each cluster. We select the highest-scoring clusters and use their medoid nodes for the next iteration of clustering, until we obtain a hierarchical and informative representation of the protein. We evaluate our method on four protein-related tasks: protein fold classification, enzyme reaction classification, gene ontology term prediction, and enzyme commission number prediction. Experimental results demonstrate that our method achieves state-of-the-art performance.
[]
[]
[]
[]
207
208
CorrMatch: Label Propagation via Correlation Matching for Semi-Supervised Semantic Segmentation
http://arxiv.org/abs/2306.04300
Boyuan Sun, Yuqi Yang, Le Zhang, Ming-Ming Cheng, Qibin Hou
2306.04300
This paper presents a simple but performant semi-supervised semantic segmentation approach called CorrMatch. Previous approaches mostly employ complicated training strategies to leverage unlabeled data but overlook the role of correlation maps in modeling the relationships between pairs of locations. We observe that the correlation maps not only enable clustering pixels of the same category easily but also contain good shape information, which previous works have omitted. Motivated by these, we aim to improve the use efficiency of unlabeled data by designing two novel label propagation strategies. First, we propose to conduct pixel propagation by modeling the pairwise similarities of pixels to spread the high-confidence pixels and dig out more. Then, we perform region propagation to enhance the pseudo labels with accurate class-agnostic masks extracted from the correlation maps. CorrMatch achieves great performance on popular segmentation benchmarks. Taking the DeepLabV3+ with ResNet-101 backbone as our segmentation model, we receive a 76%+ mIoU score on the Pascal VOC 2012 dataset with only 92 annotated images. Code is available at https://github.com/BBBBchan/CorrMatch. (A rough sketch of correlation-based label propagation follows this record.)
[]
[]
[]
[]
208
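A simplified sketch of the correlation-map idea in the CorrMatch entry above: build a pairwise similarity matrix over pixel features and use it to spread high-confidence prediction probabilities to strongly correlated locations. The confidence threshold, softmax normalization, and shapes are illustrative assumptions, not the paper's exact propagation rules.

```python
import torch
import torch.nn.functional as F

def propagate_pseudo_labels(feats: torch.Tensor, probs: torch.Tensor, conf_thresh: float = 0.95):
    # feats: (HW, C) pixel features; probs: (HW, K) softmax predictions for K classes
    feats = F.normalize(feats, dim=1)
    corr = feats @ feats.t()                  # (HW, HW) correlation map between locations
    corr = corr.softmax(dim=1)
    propagated = corr @ probs                 # spread predictions along strong correlations
    conf, _ = probs.max(dim=1)
    keep_original = (conf > conf_thresh).unsqueeze(1).float()
    # confident pixels keep their own prediction, the rest adopt the propagated one
    return keep_original * probs + (1 - keep_original) * propagated

feats = torch.randn(64, 32)                          # e.g. an 8x8 feature map, flattened
probs = torch.softmax(torch.randn(64, 21), dim=1)
print(propagate_pseudo_labels(feats, probs).shape)   # torch.Size([64, 21])
```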
209
Estimating Extreme 3D Image Rotations using Cascaded Attention
Shay Dekel, Yosi Keller, Martin Cadik
null
Estimating large, extreme inter-image rotations is critical for numerous computer vision domains involving images related by limited or non-overlapping fields of view. In this work, we propose an attention-based approach with a pipeline of novel algorithmic components. First, as rotation estimation pertains to image pairs, we introduce an inter-image distillation scheme using Decoders to improve embeddings. Second, whereas contemporary methods compute a 4D correlation volume (4DCV) encoding inter-image relationships, we propose an Encoder-based cross-attention approach between activation maps to compute an enhanced equivalent of the 4DCV. Finally, we present a cascaded Decoder-based technique for alternately refining the cross-attention and the rotation query. Our approach outperforms current state-of-the-art methods on extreme rotation estimation. We make our code publicly available.
[]
[]
[]
[]
209
210
RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D
http://arxiv.org/abs/2311.16918
Lingteng Qiu, Guanying Chen, Xiaodong Gu, Qi Zuo, Mutian Xu, Yushuang Wu, Weihao Yuan, Zilong Dong, Liefeng Bo, Xiaoguang Han
2311.16918
Lifting 2D diffusion for 3D generation is a challenging problem due to the lack of geometric prior and the complex entanglement of materials and lighting in natural images. Existing methods have shown promise by first creating the geometry through score-distillation sampling (SDS) applied to rendered surface normals, followed by appearance modeling. However, relying on a 2D RGB diffusion model to optimize surface normals is suboptimal due to the distribution discrepancy between natural images and normal maps, leading to instability in optimization. In this paper, recognizing that the normal and depth information effectively describe scene geometry and can be automatically estimated from images, we propose to learn a generalizable Normal-Depth diffusion model for 3D generation. We achieve this by training on the large-scale LAION dataset together with the generalizable image-to-depth and normal prior models. In an attempt to alleviate the mixed illumination effects in the generated materials, we introduce an albedo diffusion model to impose data-driven constraints on the albedo component. Our experiments show that, when integrated into existing text-to-3D pipelines, our models significantly enhance the detail richness, achieving state-of-the-art results. Our project page is at https://aigc3d.github.io/richdreamer/.
[]
[]
[]
[]
210
211
Adapt or Perish: Adaptive Sparse Transformer with Attentive Feature Refinement for Image Restoration
Shihao Zhou, Duosheng Chen, Jinshan Pan, Jinglei Shi, Jufeng Yang
null
Transformer-based approaches have achieved promising performance in image restoration tasks, given their ability to model long-range dependencies, which is crucial for recovering clear images. Though diverse efficient attention mechanism designs have addressed the intensive computations associated with using transformers, they often involve redundant information and noisy interactions from irrelevant regions by considering all available tokens. In this work, we propose an Adaptive Sparse Transformer (AST) to mitigate the noisy interactions of irrelevant areas and remove feature redundancy in both spatial and channel domains. AST comprises two core designs, i.e., an Adaptive Sparse Self-Attention (ASSA) block and a Feature Refinement Feed-forward Network (FRFN). Specifically, ASSA is adaptively computed using a two-branch paradigm, where the sparse branch is introduced to filter out the negative impacts of low query-key matching scores for aggregating features, while the dense one ensures sufficient information flow through the network for learning discriminative representations. Meanwhile, FRFN employs an enhance-and-ease scheme to eliminate feature redundancy in channels, enhancing the restoration of clear latent images. Experimental results on commonly used benchmarks have demonstrated the versatility and competitive performance of our method in several tasks, including rain streak removal, real haze removal, and raindrop removal. The code and pre-trained models are available at https://github.com/joshyZhou/AST.
[]
[]
[]
[]
211
212
VINECS: Video-based Neural Character Skinning
http://arxiv.org/abs/2307.00842
Zhouyingcheng Liao, Vladislav Golyanik, Marc Habermann, Christian Theobalt
2307.00842
Rigging and skinning clothed human avatars is a challenging task and traditionally requires a lot of manual work and expertise. Recent methods addressing it either generalize across different characters or focus on capturing the dynamics of a single character observed under different pose configurations. However, the former methods typically predict solely static skinning weights, which perform poorly for highly articulated poses, and the latter ones either require dense 3D character scans in different poses or cannot generate an explicit mesh with vertex correspondence over time. To address these challenges, we propose a fully automated approach for creating a fully rigged character with pose-dependent skinning weights, which can be solely learned from multi-view video. Therefore, we first acquire a rigged template, which is then statically skinned. Next, a coordinate-based MLP learns a skinning weights field parameterized over the position in a canonical pose space and the respective pose. Moreover, we introduce our pose- and view-dependent appearance field, allowing us to differentiably render and supervise the posed mesh using multi-view imagery. We show that our approach outperforms state-of-the-art while not relying on dense 4D scans. More details can be found on our project page.
[]
[]
[]
[]
212
213
Zero-shot Referring Expression Comprehension via Structural Similarity Between Images and Captions
http://arxiv.org/abs/2311.17048
Zeyu Han, Fangrui Zhu, Qianru Lao, Huaizu Jiang
2311.17048
Zero-shot referring expression comprehension aims at localizing bounding boxes in an image corresponding to provided textual prompts, which requires: (i) a fine-grained disentanglement of complex visual scene and textual context, and (ii) a capacity to understand relationships among disentangled entities. Unfortunately, existing large vision-language alignment (VLA) models, e.g., CLIP, struggle with both aspects and so cannot be directly used for this task. To mitigate this gap, we leverage large foundation models to disentangle both images and texts into triplets in the format of (subject, predicate, object). After that, grounding is accomplished by calculating the structural similarity matrix between visual and textual triplets with a VLA model, and subsequently propagating it to an instance-level similarity matrix. Furthermore, to equip VLA models with the ability of relationship understanding, we design a triplet-matching objective to fine-tune the VLA models on a collection of curated datasets containing abundant entity relationships. Experiments demonstrate that our visual grounding performance increases by up to 19.5% over the SOTA zero-shot model on RefCOCO/+/g. On the more challenging Who's Waldo dataset, our zero-shot approach achieves comparable accuracy to the fully supervised model. Code is available at https://github.com/Show-han/Zeroshot_REC.
[]
[]
[]
[]
213
214
Domain Prompt Learning with Quaternion Networks
http://arxiv.org/abs/2312.08878
Qinglong Cao, Zhengqin Xu, Yuntian Chen, Chao Ma, Xiaokang Yang
2312.08878
Prompt learning has emerged as an effective and data-efficient technique in large Vision-Language Models (VLMs). However, when adapting VLMs to specialized domains such as remote sensing and medical imaging, domain prompt learning remains underexplored. While large-scale domain-specific foundation models can help tackle this challenge, their concentration on a single vision level makes it challenging to prompt both vision and language modalities. To overcome this, we propose to leverage domain-specific knowledge from domain-specific foundation models to transfer the robust recognition ability of VLMs from generalized to specialized domains, using quaternion networks. Specifically, the proposed method involves using domain-specific vision features from domain-specific foundation models to guide the transformation of generalized contextual embeddings from the language branch into a specialized space within the quaternion networks. Moreover, we present a hierarchical approach that generates vision prompt features by analyzing intermodal relationships between hierarchical language prompt features and domain-specific vision features. In this way, quaternion networks can effectively mine the intermodal relationships in the specific domain, facilitating domain-specific vision-language contrastive learning. Extensive experiments on domain-specific datasets show that our proposed method achieves new state-of-the-art results in prompt learning.
[]
[]
[]
[]
214
215
BEHAVIOR Vision Suite: Customizable Dataset Generation via Simulation
http://arxiv.org/abs/2405.09546
Yunhao Ge, Yihe Tang, Jiashu Xu, Cem Gokmen, Chengshu Li, Wensi Ai, Benjamin Jose Martinez, Arman Aydin, Mona Anvari, Ayush K Chakravarthy, Hong-Xing Yu, Josiah Wong, Sanjana Srivastava, Sharon Lee, Shengxin Zha, Laurent Itti, Yunzhu Li, Roberto Martín-Martín, Miao Liu, Pengchuan Zhang, Ruohan Zhang, Li Fei-Fei, Jiajun Wu
2405.09546
The systematic evaluation and understanding of computer vision models under varying conditions require large amounts of data with comprehensive and customized labels, which real-world vision datasets rarely satisfy. While current synthetic data generators offer a promising alternative, particularly for embodied AI tasks, they often fall short for computer vision tasks due to low asset and rendering quality, limited diversity, and unrealistic physical properties. We introduce the BEHAVIOR Vision Suite (BVS), a set of tools and assets to generate fully customized synthetic data for systematic evaluation of computer vision models, based on the newly developed embodied AI benchmark BEHAVIOR-1K. BVS supports a large number of adjustable parameters at the scene level (e.g., lighting, object placement), the object level (e.g., joint configuration, attributes such as "filled" and "folded"), and the camera level (e.g., field of view, focal length). Researchers can arbitrarily vary these parameters during data generation to perform controlled experiments. We showcase three example application scenarios: systematically evaluating the robustness of models across different continuous axes of domain shift, evaluating scene understanding models on the same set of images, and training and evaluating simulation-to-real transfer for a novel vision task: unary and binary state prediction. Project website: https://behavior-vision-suite.github.io/
[]
[]
[]
[]
215
216
Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D Reconstruction with Transformers
http://arxiv.org/abs/2312.09147
Zi-Xin Zou, Zhipeng Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Yan-Pei Cao, Song-Hai Zhang
2312.09147
Recent advancements in 3D reconstruction from single images have been driven by the evolution of generative models. Prominent among these are methods based on Score Distillation Sampling (SDS) and the adaptation of diffusion models in the 3D domain. Despite their progress, these techniques often face limitations due to slow optimization or rendering processes, leading to extensive training and optimization times. In this paper, we introduce a novel approach for single-view reconstruction that efficiently generates a 3D model from a single image via feed-forward inference. Our method utilizes two transformer-based networks, namely a point decoder and a triplane decoder, to reconstruct 3D objects using a hybrid Triplane-Gaussian intermediate representation. This hybrid representation strikes a balance, achieving a faster rendering speed compared to implicit representations while simultaneously delivering superior rendering quality compared to explicit representations. The point decoder is designed for generating point clouds from single images, offering an explicit representation which is then utilized by the triplane decoder to query Gaussian features for each point. This design choice addresses the challenges associated with directly regressing explicit 3D Gaussian attributes, characterized by their non-structural nature. Subsequently, the 3D Gaussians are decoded by an MLP to enable rapid rendering through splatting. Both decoders are built upon a scalable transformer-based architecture and have been efficiently trained on large-scale 3D datasets. The evaluations conducted on both synthetic datasets and real-world images demonstrate that our method not only achieves higher quality but also ensures a faster runtime in comparison to previous state-of-the-art techniques. Please see our project page at https://zouzx.github.io/TriplaneGaussian/
[]
[]
[]
[]
216
217
WateRF: Robust Watermarks in Radiance Fields for Protection of Copyrights
http://arxiv.org/abs/2405.02066
Youngdong Jang, Dong In Lee, MinHyuk Jang, Jong Wook Kim, Feng Yang, Sangpil Kim
2405.02066
The advances in Neural Radiance Fields (NeRF) research offer extensive applications in diverse domains, but protecting their copyrights has not yet been researched in depth. Recently, NeRF watermarking has been considered one of the pivotal solutions for safely deploying NeRF-based 3D representations. However, existing methods are designed to apply only to implicit or explicit NeRF representations. In this work, we introduce an innovative watermarking method that can be employed in both representations of NeRF. This is achieved by fine-tuning NeRF to embed binary messages in the rendering process. In detail, we propose utilizing the discrete wavelet transform in the NeRF space for watermarking. Furthermore, we adopt a deferred back-propagation technique and introduce a combination with the patch-wise loss to improve rendering quality and bit accuracy with minimum trade-offs. We evaluate our method in three different aspects: capacity, invisibility, and robustness of the embedded watermarks in the 2D-rendered images. Our method achieves state-of-the-art performance with faster training speed over the compared state-of-the-art methods. Project page: https://kuai-lab.github.io/cvpr2024waterf/
[]
[]
[]
[]
217
218
Gaussian-Flow: 4D Reconstruction with Dynamic 3D Gaussian Particle
Youtian Lin, Zuozhuo Dai, Siyu Zhu, Yao Yao
null
We introduce Gaussian-Flow, a novel point-based approach for fast dynamic scene reconstruction and real-time rendering from both multi-view and monocular videos. In contrast to the prevalent NeRF-based approaches hampered by slow training and rendering speeds, our approach harnesses recent advancements in point-based 3D Gaussian Splatting (3DGS). Specifically, a novel Dual-Domain Deformation Model (DDDM) is proposed to explicitly model attribute deformations of each Gaussian point, where the time-dependent residual of each attribute is captured by a polynomial fitting in the time domain and a Fourier series fitting in the frequency domain. The proposed DDDM is capable of modeling complex scene deformations across long video footage, eliminating the need for training separate 3DGS for each frame or introducing an additional implicit neural field to model 3D dynamics. Moreover, the explicit deformation modeling for discretized Gaussian points ensures ultra-fast training and rendering of a 4D scene, which is comparable to the original 3DGS designed for static 3D reconstruction. Our proposed approach showcases a substantial efficiency improvement, achieving a 5x faster training speed compared to the per-frame 3DGS modeling. In addition, quantitative results demonstrate that the proposed Gaussian-Flow significantly outperforms previous leading methods in novel view rendering quality. (A toy sketch of the dual-domain residual follows this record.)
[]
[]
[]
[]
218
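A toy sketch of the dual-domain idea described in the Gaussian-Flow entry above: the time-dependent residual of a per-Gaussian attribute is modeled as a low-order polynomial plus a truncated Fourier series. The orders, parameter names, and the [0, 1] time range are assumptions for illustration, not the paper's configuration.

```python
import math
import torch
import torch.nn as nn

class DualDomainDeformation(nn.Module):
    """Per-point attribute residual = polynomial in time + truncated Fourier series."""
    def __init__(self, num_points: int, attr_dim: int = 3, poly_order: int = 3, num_freqs: int = 4):
        super().__init__()
        self.poly = nn.Parameter(torch.zeros(num_points, attr_dim, poly_order))
        self.sin_coef = nn.Parameter(torch.zeros(num_points, attr_dim, num_freqs))
        self.cos_coef = nn.Parameter(torch.zeros(num_points, attr_dim, num_freqs))
        self.register_buffer("freqs", 2 * math.pi * torch.arange(1, num_freqs + 1).float())
        self.register_buffer("powers", torch.arange(1, poly_order + 1).float())

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: scalar time in [0, 1]; returns residuals of shape (num_points, attr_dim)
        poly_basis = t ** self.powers            # (poly_order,)
        fourier_arg = self.freqs * t             # (num_freqs,)
        return (self.poly * poly_basis).sum(-1) \
             + (self.sin_coef * torch.sin(fourier_arg)).sum(-1) \
             + (self.cos_coef * torch.cos(fourier_arg)).sum(-1)

dddm = DualDomainDeformation(num_points=1000)
print(dddm(torch.tensor(0.25)).shape)   # torch.Size([1000, 3])
```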
219
Your Student is Better Than Expected: Adaptive Teacher-Student Collaboration for Text-Conditional Diffusion Models
http://arxiv.org/abs/2312.10835
Nikita Starodubcev, Dmitry Baranchuk, Artem Fedorov, Artem Babenko
2312.10835
Knowledge distillation methods have recently shown to be a promising direction to speed up the synthesis of large-scale diffusion models by requiring only a few inference steps. While several powerful distillation methods were recently proposed, the overall quality of student samples is typically lower compared to the teacher ones, which hinders their practical usage. In this work, we investigate the relative quality of samples produced by the teacher text-to-image diffusion model and its distilled student version. As our main empirical finding, we discover that a noticeable portion of student samples exhibit superior fidelity compared to the teacher ones, despite the approximate nature of the student. Based on this finding, we propose an adaptive collaboration between student and teacher diffusion models for effective text-to-image synthesis. Specifically, the distilled model produces an initial image sample, and then an oracle decides whether it needs further improvements with the teacher model. Extensive experiments demonstrate that the designed pipeline surpasses state-of-the-art text-to-image alternatives for various inference budgets in terms of human preference. Furthermore, the proposed approach can be naturally used in popular applications such as text-guided image editing and controllable generation. (A schematic sketch of this student-first pipeline follows this record.)
[]
[]
[]
[]
219
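A schematic sketch of the adaptive collaboration in the entry above: the distilled student generates first, a scoring oracle estimates sample quality, and only low-scoring samples are refined by the slower teacher. Every callable here is a placeholder, not the authors' models; the threshold is an assumption.

```python
def adaptive_generate(prompt, student, teacher, oracle, threshold=0.5):
    """Student-first generation with teacher refinement only when the oracle is unhappy."""
    image = student(prompt)                 # cheap few-step student sample
    if oracle(prompt, image) >= threshold:  # oracle judges the student output good enough
        return image
    return teacher(prompt, init=image)      # otherwise spend teacher steps to improve it

# toy stand-ins so the sketch runs end to end
student = lambda p: f"student_sample({p})"
teacher = lambda p, init: f"teacher_refined({init})"
oracle = lambda p, img: 0.3                 # pretend the student sample scores poorly
print(adaptive_generate("a corgi astronaut", student, teacher, oracle))
```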
220
DiVAS: Video and Audio Synchronization with Dynamic Frame Rates
Clara Fernandez-Labrador, Mertcan Akçay, Eitan Abecassis, Joan Massich, Christopher Schroers
null
Synchronization issues between audio and video are one of the most disturbing quality defects in film production and live broadcasting. Even a discrepancy as short as 45 milliseconds can degrade the viewer's experience enough to warrant manual quality checks over entire movies. In this paper, we study the automatic discovery of such issues. Specifically, we focus on the alignment of lip movements with spoken words, targeting realistic production scenarios which can include background noise and music, intricate head poses, excessive makeup, or scenes with multiple individuals where the speaker is unknown. Our model's robustness also extends to various media specifications, including different video frame rates and audio sample rates. To address these challenges, we present a model fully based on transformers that encodes face crops or full video frames and raw audio using timestamp information, identifies the speaker, and provides highly accurate synchronization predictions much faster than previous methods.
[]
[]
[]
[]
220
221
SHViT: Single-Head Vision Transformer with Memory Efficient Macro Design
http://arxiv.org/abs/2401.16456
Seokju Yun, Youngmin Ro
2401.16456
Recently, efficient Vision Transformers have shown great performance with low latency on resource-constrained devices. Conventionally, they use 4x4 patch embeddings and a 4-stage structure at the macro level, while utilizing sophisticated attention with a multi-head configuration at the micro level. This paper aims to address computational redundancy at all design levels in a memory-efficient manner. We discover that using a larger-stride patchify stem not only reduces memory access costs but also achieves competitive performance by leveraging token representations with reduced spatial redundancy from the early stages. Furthermore, our preliminary analyses suggest that attention layers in the early stages can be substituted with convolutions, and several attention heads in the latter stages are computationally redundant. To handle this, we introduce a single-head attention module that inherently prevents head redundancy and simultaneously boosts accuracy by parallelly combining global and local information. Building upon our solutions, we introduce SHViT, a Single-Head Vision Transformer that obtains a state-of-the-art speed-accuracy tradeoff. For example, on ImageNet-1k, our SHViT-S4 is 3.3x, 8.1x, and 2.4x faster than MobileViTv2 x1.0 on GPU, CPU, and iPhone 12 mobile device, respectively, while being 1.3% more accurate. For object detection and instance segmentation on MS COCO using a Mask R-CNN head, our model achieves performance comparable to FastViT-SA12 while exhibiting 3.8x and 2.0x lower backbone latency on GPU and mobile device, respectively. (A rough sketch of a single-head attention block follows this record.)
[]
[]
[]
[]
221
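An approximate sketch of a single-head attention module in the spirit of the SHViT entry above: one attention head operates on a subset of channels while the remaining channels pass through unchanged (preserving local information), and the two parts are recombined. The channel split ratio and projection sizes are assumptions, not the paper's exact block.

```python
import torch
import torch.nn as nn

class SingleHeadAttention(nn.Module):
    """Single-head attention over part of the channels, identity path for the rest."""
    def __init__(self, dim: int, attn_ratio: float = 0.25, qk_dim: int = 16):
        super().__init__()
        self.attn_dim = int(dim * attn_ratio)
        self.qk_dim = qk_dim
        self.qkv = nn.Linear(self.attn_dim, qk_dim * 2 + self.attn_dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim); only the first attn_dim channels go through global attention
        attn_part, local_part = x[..., :self.attn_dim], x[..., self.attn_dim:]
        q, k, v = self.qkv(attn_part).split([self.qk_dim, self.qk_dim, self.attn_dim], dim=-1)
        attn = (q @ k.transpose(-2, -1)) / self.qk_dim ** 0.5   # a single attention map
        attn_part = attn.softmax(dim=-1) @ v
        return self.proj(torch.cat([attn_part, local_part], dim=-1))

x = torch.randn(2, 196, 256)
print(SingleHeadAttention(256)(x).shape)   # torch.Size([2, 196, 256])
```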
222
HDRFlow: Real-Time HDR Video Reconstruction with Large Motions
http://arxiv.org/abs/2403.03447
Gangwei Xu, Yujin Wang, Jinwei Gu, Tianfan Xue, Xin Yang
2403.03447
Reconstructing High Dynamic Range (HDR) video from image sequences captured with alternating exposures is challenging, especially in the presence of large camera or object motion. Existing methods typically align low dynamic range sequences using optical flow or attention mechanisms for deghosting. However, they often struggle to handle large complex motions and are computationally expensive. To address these challenges, we propose a robust and efficient flow estimator tailored for real-time HDR video reconstruction, named HDRFlow. HDRFlow has three novel designs: an HDR-domain alignment loss (HALoss), an efficient flow network with a multi-size large kernel (MLK), and a new HDR flow training scheme. The HALoss supervises our flow network to learn an HDR-oriented flow for accurate alignment in saturated and dark regions. The MLK can effectively model large motions at a negligible cost. In addition, we incorporate the synthetic data Sintel into our training dataset, utilizing both its provided forward flow and backward flow generated by us to supervise our flow network, enhancing our performance in large motion regions. Extensive experiments demonstrate that our HDRFlow outperforms previous methods on standard benchmarks. To the best of our knowledge, HDRFlow is the first real-time HDR video reconstruction method for video sequences captured with alternating exposures, capable of processing 720p resolution inputs at 25 ms. (A loose sketch of an HDR-domain alignment loss follows this record.)
[]
[]
[]
[]
222
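A loose sketch of an HDR-domain alignment loss in the spirit of HALoss from the HDRFlow entry above: instead of comparing warped frames in the LDR domain, compare them after mu-law tonemapping of linear HDR values, so saturated and dark regions still contribute useful gradients. The mu value and the plain L1 form are assumptions; the warp itself would come from the flow network.

```python
import torch
import torch.nn.functional as F

def mu_law(x: torch.Tensor, mu: float = 5000.0) -> torch.Tensor:
    # standard mu-law tonemapping commonly used for HDR losses
    return torch.log1p(mu * x) / torch.log1p(torch.tensor(mu))

def hdr_alignment_loss(warped_linear: torch.Tensor, target_linear: torch.Tensor) -> torch.Tensor:
    # inputs: linear-domain HDR frames in [0, 1], e.g. a flow-warped frame and its reference
    return F.l1_loss(mu_law(warped_linear), mu_law(target_linear))

a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(hdr_alignment_loss(a, b))
```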
223
SPIDeRS: Structured Polarization for Invisible Depth and Reflectance Sensing
http://arxiv.org/abs/2312.04553
Tomoki Ichikawa, Shohei Nobuhara, Ko Nishino
2312.04553
Can we capture shape and reflectance in stealth? Such capability would be valuable for many application domains in vision, xR, robotics, and HCI. We introduce structured polarization for invisible depth and reflectance sensing (SPIDeRS), the first depth and reflectance sensing method using patterns of polarized light. The key idea is to modulate the angle of linear polarization (AoLP) of projected light at each pixel. The use of polarization makes it invisible and lets us recover not only depth but also directly surface normals and even reflectance. We implement SPIDeRS with a liquid crystal spatial light modulator (SLM) and a polarimetric camera. We derive a novel method for robustly extracting the projected structured polarization pattern from the polarimetric object appearance. We evaluate the effectiveness of SPIDeRS by applying it to a number of real-world objects. The results show that our method successfully reconstructs object shapes of various materials and is robust to diffuse reflection and ambient light. We also demonstrate relighting using recovered surface normals and reflectance. We believe SPIDeRS opens a new avenue of polarization use in visual sensing.
[]
[]
[]
[]
223
224
SuperNormal: Neural Surface Reconstruction via Multi-View Normal Integration
http://arxiv.org/abs/2312.04803
Xu Cao, Takafumi Taketomi
2312.04803
We present SuperNormal, a fast, high-fidelity approach to multi-view 3D reconstruction using surface normal maps. Within a few minutes, SuperNormal produces detailed surfaces on par with 3D scanners. We harness volume rendering to optimize a neural signed distance function (SDF) powered by multi-resolution hash encoding. To accelerate training, we propose directional finite difference and patch-based ray marching to approximate the SDF gradients numerically. While not compromising reconstruction quality, this strategy is nearly twice as efficient as analytical gradients and about three times faster than axis-aligned finite difference. Experiments on the benchmark dataset demonstrate the superiority of SuperNormal in efficiency and accuracy compared to existing multi-view photometric stereo methods. On our captured objects, SuperNormal produces more fine-grained geometry than recent neural 3D reconstruction methods. Our code is available at https://github.com/CyberAgentAILab/SuperNormal.git. (A small sketch of a finite-difference SDF gradient follows this record.)
[]
[]
[]
[]
224
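A small sketch of approximating SDF derivatives with finite differences, as referenced in the SuperNormal entry above. This shows only a plain central difference along an arbitrary direction; the paper's specific directional scheme, patch-based ray marching, and step size are not reproduced here.

```python
import torch

def sdf_directional_derivative(sdf, x: torch.Tensor, d: torch.Tensor, eps: float = 1e-3):
    # sdf: callable mapping (N, 3) points to (N,) signed distances; d: unit directions (N, 3)
    return (sdf(x + eps * d) - sdf(x - eps * d)) / (2 * eps)

sphere_sdf = lambda p: p.norm(dim=-1) - 1.0          # toy SDF of a unit sphere
x = torch.tensor([[0.0, 0.0, 2.0]])
d = torch.tensor([[0.0, 0.0, 1.0]])
print(sdf_directional_derivative(sphere_sdf, x, d))  # ~1.0: distance grows along +z
```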
225
Instance-aware Contrastive Learning for Occluded Human Mesh Reconstruction
Mi-Gyeong Gwon, Gi-Mun Um, Won-Sik Cheong, Wonjun Kim
null
A simple yet effective method for occlusion-robust 3D human mesh reconstruction from a single image is presented in this paper. Although many recent studies have shown remarkable improvement in human mesh reconstruction, it is still difficult to generate accurate meshes when person-to-person occlusion occurs, due to the ambiguity of who a body part belongs to. To address this problem, we propose an instance-aware contrastive learning scheme. Specifically, joint features belonging to the target human are trained to be proximate with the anchor feature (i.e., the feature extracted from the body center position). On the other hand, anchor features of different human instances are forced to be far apart so that joint features of each person can be clearly distinguished from others. By interpreting the joint possession based on such a contrastive learning scheme, the proposed method easily understands the spatial occupancy of body parts for each person in a given image and thus can reconstruct reliable human meshes even with severely overlapped cases between multiple persons. Experimental results on benchmark datasets demonstrate the robustness of the proposed method compared to previous approaches under person-to-person occlusions. The code and model are publicly available at: https://github.com/DCVL-3D/InstanceHMR_release.
[]
[]
[]
[]
225
226
ADFactory: An Effective Framework for Generalizing Optical Flow with NeRF
Han Ling, Quansen Sun, Yinghui Sun, Xian Xu, Xinfeng Li
null
A significant challenge facing current optical flow methods is the difficulty in generalizing them well to the real world. This is mainly due to the lack of large-scale real-world datasets, and existing self-supervised methods are limited by indirect loss and occlusions, resulting in fuzzy outcomes. To address this challenge, we introduce a novel optical flow training framework: the automatic data factory (ADF). ADF only requires RGB images as input to effectively train the optical flow network on the target data domain. Specifically, we use advanced NeRF technology to reconstruct scenes from photo groups collected by a monocular camera, and then calculate optical flow labels between camera pose pairs based on the rendering results. To eliminate erroneous labels caused by defects in the scene reconstructed by NeRF, we screen the generated labels from multiple aspects, such as optical flow matching accuracy, radiation field confidence, and depth consistency. The filtered labels can be directly used for network supervision. Experimentally, the generalization ability of ADF on KITTI surpasses existing self-supervised optical flow and monocular scene flow algorithms. In addition, ADF achieves impressive results in real-world zero-point generalization evaluations and surpasses most supervised methods.
[]
[]
[]
[]
226
227
Robust Noisy Correspondence Learning with Equivariant Similarity Consistency
Yuchen Yang, Likai Wang, Erkun Yang, Cheng Deng
null
The surge in multi-modal data has propelled cross-modal matching to the forefront of research interest. However, the challenge lies in the laborious and expensive process of curating a large and accurately matched multimodal dataset. Commonly sourced from the Internet, these datasets often suffer from a significant presence of mismatched data, impairing the performance of matching models. To address this problem, we introduce a novel regularization approach named Equivariant Similarity Consistency (ESC), which can facilitate robust clean and noisy data separation and improve the training for cross-modal matching. Intuitively, our method posits that the semantic variations caused by image changes should be proportional to those caused by text changes for any two matched samples. Accordingly, we first calculate the ESC by comparing image and text semantic variations between a set of elaborated anchor points and other undivided training data. Then, pairs with high ESC are filtered out as noisy correspondence pairs. We implement our method by combining the ESC with a traditional hinge-based triplet loss. Extensive experiments on three widely used datasets, including Flickr30K, MS-COCO, and Conceptual Captions, verify the effectiveness of our method. (A rough sketch of the consistency score follows this record.)
[]
[]
[]
[]
227
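A rough sketch of the consistency intuition in the ESC entry above: for a candidate pair, the image-side similarity variations against a set of anchors should track the text-side variations against the same anchors, and a large mismatch flags a likely noisy correspondence. The concrete score definition below is an illustrative assumption, not the paper's formula.

```python
import torch
import torch.nn.functional as F

def esc_score(img_emb, txt_emb, anchor_img, anchor_txt):
    # all inputs are L2-normalized embeddings; anchors: (A, D), candidate pair: (D,) each
    img_var = anchor_img @ img_emb        # (A,) similarity of the image to each image anchor
    txt_var = anchor_txt @ txt_emb        # (A,) similarity of the text to each text anchor
    return F.l1_loss(img_var, txt_var)    # low = consistent pair, high = suspect pair

img = F.normalize(torch.randn(256), dim=0)
txt = F.normalize(torch.randn(256), dim=0)
anchors_i = F.normalize(torch.randn(8, 256), dim=1)
anchors_t = F.normalize(torch.randn(8, 256), dim=1)
print(esc_score(img, txt, anchors_i, anchors_t))
```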
228
CommonCanvas: Open Diffusion Models Trained on Creative-Commons Images
Aaron Gokaslan, A. Feder Cooper, Jasmine Collins, Landan Seguin, Austin Jacobson, Mihir Patel, Jonathan Frankle, Cory Stephenson, Volodymyr Kuleshov
null
We train a set of open text-to-image (T2I) diffusion models on a dataset of curated Creative-Commons-licensed (CC) images, which yields models that are competitive with Stable Diffusion 2 (SD2). This task presents two challenges: (1) high-resolution CC images lack the captions necessary to train T2I models; (2) CC images are relatively scarce. To address these challenges, we use an intuitive transfer learning technique to produce a set of high-quality synthetic captions paired with our assembled CC images. We then develop a data- and compute-efficient training recipe that requires as little as 3% of the LAION data (i.e., roughly 70 million examples) needed to train existing SD2 models, but obtains the same quality. These results indicate that we have a sufficient number of CC images (also roughly 70 million) for training high-quality models. Our recipe also implements a variety of optimizations that achieve 2.71x training speed-ups, enabling rapid model iteration. We leverage this recipe to train several high-quality T2I models, which we dub the CommonCanvas family. Our largest model achieves comparable performance to SD2 on human evaluation, even though we use a synthetically captioned CC-image dataset that is only <3% the size of LAION for training. We release our models, data, and code on GitHub.
[]
[]
[]
[]
228
229
Prompt-Driven Referring Image Segmentation with Instance Contrasting
Chao Shang, Zichen Song, Heqian Qiu, Lanxiao Wang, Fanman Meng, Hongliang Li
null
Referring image segmentation (RIS) aims to segment the target referent described by natural language. Recently, large-scale pre-trained models, e.g., CLIP and SAM, have been successfully applied in many downstream tasks, but they are not well adapted to the RIS task due to inter-task differences. In this paper, we propose a new prompt-driven framework named Prompt-RIS, which bridges CLIP and SAM end-to-end and transfers their rich knowledge and powerful capabilities to the RIS task through prompt learning. To adapt CLIP to the pixel-level task, we first propose a Cross-Modal Prompting method, which acquires more comprehensive vision-language interaction and fine-grained text-to-pixel alignment by performing bidirectional prompting. Then, the prompt-tuned CLIP generates masks, points, and text prompts for SAM to generate more accurate mask predictions. Moreover, we further propose Instance Contrastive Learning to improve the model's discriminability to different instances and robustness to diverse languages describing the same instance. Extensive experiments demonstrate that the performance of our method outperforms the state-of-the-art methods consistently in both general and open-vocabulary settings.
[]
[]
[]
[]
229
230
Image Sculpting: Precise Object Editing with 3D Geometry Control
http://arxiv.org/abs/2401.01702
Jiraphon Yenphraphai, Xichen Pan, Sainan Liu, Daniele Panozzo, Saining Xie
2401.01702
We present Image Sculpting, a new framework for editing 2D images by incorporating tools from 3D geometry and graphics. This approach differs markedly from existing methods, which are confined to 2D spaces and typically rely on textual instructions, leading to ambiguity and limited control. Image Sculpting converts 2D objects into 3D, enabling direct interaction with their 3D geometry. Post-editing, these objects are re-rendered into 2D, merging into the original image to produce high-fidelity results through a coarse-to-fine enhancement process. The framework supports precise, quantifiable, and physically-plausible editing options, such as pose editing, rotation, translation, 3D composition, carving, and serial addition. It marks an initial step towards combining the creative freedom of generative models with the precision of graphics pipelines.
[]
[]
[]
[]
230
231
Compositional Video Understanding with Spatiotemporal Structure-based Transformers
Hoyeoung Yun, Jinwoo Ahn, Minseo Kim, Eun-Sol Kim
null
In this paper, we suggest a novel method to understand complex semantic structures through long video inputs. Conventional methods for understanding videos have been focused on short-term clips and trained to get visual representations for the short clips using convolutional neural networks or transformer architectures. However, most real-world videos are composed of long videos ranging from minutes to hours; therefore, it essentially brings limitations to understanding the overall semantic structures of the long videos by dividing them into small clips and learning the representations of them. We suggest a new algorithm to learn the multi-granular semantic structures of videos by defining spatiotemporal high-order relationships among object-based representations as semantic units. The proposed method includes a new transformer architecture capable of learning spatiotemporal graphs and a compositional learning method to learn disentangled features for each semantic unit. Using the suggested method, we resolve the challenging video task of compositional generalization, i.e., understanding of unseen videos. In experiments, we demonstrate new state-of-the-art performances on two challenging video datasets.
[]
[]
[]
[]
231
232
3D LiDAR Mapping in Dynamic Environments using a 4D Implicit Neural Representation
http://arxiv.org/abs/2405.03388
Xingguang Zhong, Yue Pan, Cyrill Stachniss, Jens Behley
2405.03388
Building accurate maps is a key building block to enable reliable localization, planning, and navigation of autonomous vehicles. We propose a novel approach for building accurate 3D maps of dynamic environments utilizing a sequence of LiDAR scans. To this end, we propose encoding the 4D scene into a novel spatio-temporal implicit neural map representation by fitting a time-dependent truncated signed distance function to each point. Using our representation, we can extract the static map by filtering the dynamic parts. Our neural representation is based on sparse feature grids, a globally shared decoder, and time-dependent basis functions, which can be jointly optimized in an unsupervised fashion. To learn this representation from a sequence of LiDAR scans, we design a simple yet efficient loss function to supervise the map optimization in a piecewise way. We evaluate our approach on various scenes containing moving objects in terms of the reconstruction quality of static maps and the segmentation of dynamic point clouds. The experimental results demonstrate that our method is capable of removing the dynamic part of the input point clouds while reconstructing accurate and complete large-scale 3D maps, outperforming several state-of-the-art methods for static map generation and scene reconstruction.
[]
[]
[]
[]
232
233
What, When, and Where? Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions
Brian Chen, Nina Shvetsova, Andrew Rouditchenko, Daniel Kondermann, Samuel Thomas, Shih-Fu Chang, Rogerio Feris, James Glass, Hilde Kuehne
null
Spatio-temporal grounding describes the task of localizing events in space and time, e.g., in video data, based on verbal descriptions only. Models for this task are usually trained with human-annotated sentences and bounding box supervision. This work addresses this task from a multimodal supervision perspective, proposing a framework for spatio-temporal action grounding trained on loose video and subtitle supervision only, without human annotation. To this end, we combine local representation learning, which focuses on leveraging fine-grained spatial information, with a global representation encoding that captures higher-level representations and incorporates both in a joint approach. To evaluate this challenging task in a real-life setting, a new benchmark dataset is proposed, providing dense spatio-temporal grounding annotations in long, untrimmed, multi-action instructional videos for over 5K events. We evaluate the proposed approach and other methods on the proposed and standard downstream tasks, showing that our method improves over current baselines in various settings, including spatial, temporal, and untrimmed multi-action spatio-temporal grounding.
[]
[]
[]
[]
233
234
FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects
http://arxiv.org/abs/2312.08344
Bowen Wen, Wei Yang, Jan Kautz, Stan Birchfield
2312.08344
We present FoundationPose, a unified foundation model for 6D object pose estimation and tracking, supporting both model-based and model-free setups. Our approach can be instantly applied at test-time to a novel object without fine-tuning, as long as its CAD model is given or a small number of reference images are captured. Thanks to the unified framework, the downstream pose estimation modules are the same in both setups, with a neural implicit representation used for efficient novel view synthesis when no CAD model is available. Strong generalizability is achieved via large-scale synthetic training, aided by a large language model (LLM), a novel transformer-based architecture, and a contrastive learning formulation. Extensive evaluation on multiple public datasets involving challenging scenarios and objects indicates that our unified approach outperforms existing methods specialized for each task by a large margin. In addition, it even achieves comparable results to instance-level methods despite the reduced assumptions. Project page: https://nvlabs.github.io/FoundationPose/
[]
[]
[]
[]
234
235
How Far Can We Compress Instant-NGP-Based NeRF?
Yihang Chen, Qianyi Wu, Mehrtash Harandi, Jianfei Cai
null
In recent years, Neural Radiance Field (NeRF) has demonstrated remarkable capabilities in representing 3D scenes. To expedite the rendering process, learnable explicit representations have been introduced for combination with implicit NeRF representation, which, however, results in a large storage space requirement. In this paper, we introduce the Context-based NeRF Compression (CNC) framework, which leverages highly efficient context models to provide a storage-friendly NeRF representation. Specifically, we excavate both level-wise and dimension-wise context dependencies to enable probability prediction for information entropy reduction. Additionally, we exploit hash collision and occupancy grids as strong prior knowledge for better context modeling. To the best of our knowledge, we are the first to construct and exploit context models for NeRF compression. We achieve a size reduction of 100x and 70x with improved fidelity against the baseline Instant-NGP on the Synthetic-NeRF and Tanks and Temples datasets, respectively. Additionally, we attain 86.7% and 82.3% storage size reduction against the SOTA NeRF compression method BiRF. Our code is available here: https://github.com/YihangChen-ee/CNC.
[]
[]
[]
[]
235
236
PFStorer: Personalized Face Restoration and Super-Resolution
http://arxiv.org/abs/2403.08436
Tuomas Varanka, Tapani Toivonen, Soumya Tripathy, Guoying Zhao, Erman Acar
2403.08436
Recent developments in face restoration have achieved remarkable results in producing high-quality and lifelike outputs. The stunning results, however, often fail to be faithful with respect to the identity of the person, as the models lack necessary context. In this paper, we explore the potential of personalized face restoration with diffusion models. In our approach, a restoration model is personalized using a few images of the identity, leading to tailored restoration with respect to the identity while retaining fine-grained details. By using independent trainable blocks for personalization, the rich prior of a base restoration model can be exploited to its fullest. To avoid the model relying on parts of identity left in the conditioning low-quality images, a generative regularizer is employed. With a learnable parameter, the model learns to balance between the details generated based on the input image and the degree of personalization. Moreover, we improve the training pipeline of face restoration models to enable an alignment-free approach. We showcase the robust capabilities of our approach in several real-world scenarios with multiple identities, demonstrating our method's ability to generate fine-grained details with faithful restoration. In the user study, we evaluate the perceptual quality and faithfulness of the generated details, with our method being voted best 61% of the time compared to the second best with 25% of the votes.
[]
[]
[]
[]
236
237
TextureDreamer: Image-Guided Texture Synthesis Through Geometry-Aware Diffusion
http://arxiv.org/abs/2401.09416
Yu-Ying Yeh, Jia-Bin Huang, Changil Kim, Lei Xiao, Thu Nguyen-Phuoc, Numair Khan, Cheng Zhang, Manmohan Chandraker, Carl S Marshall, Zhao Dong, Zhengqin Li
2401.09416
We present TextureDreamer, a novel image-guided texture synthesis method to transfer relightable textures from a small number of input images (3 to 5) to target 3D shapes across arbitrary categories. Texture creation is a pivotal challenge in vision and graphics. Industrial companies hire experienced artists to manually craft textures for 3D assets. Classical methods require densely sampled views and accurately aligned geometry, while learning-based methods are confined to category-specific shapes within the dataset. In contrast, TextureDreamer can transfer highly detailed, intricate textures from real-world environments to arbitrary objects with only a few casually captured images, potentially significantly democratizing texture creation. Our core idea, personalized geometry-aware score distillation (PGSD), draws inspiration from recent advancements in diffusion models, including personalized modeling for texture information extraction, score distillation for detailed appearance synthesis, and explicit geometry guidance with ControlNet. Our integration and several essential modifications substantially improve the texture quality. Experiments on real images spanning different categories show that TextureDreamer can successfully transfer highly realistic, semantically meaningful texture to arbitrary objects, surpassing the visual quality of previous state-of-the-art. Project page: https://texturedreamer.github.io
[]
[]
[]
[]
237
238
Boosting Image Quality Assessment through Efficient Transformer Adaptation with Local Feature Enhancement
Kangmin Xu, Liang Liao, Jing Xiao, Chaofeng Chen, Haoning Wu, Qiong Yan, Weisi Lin
null
Image Quality Assessment (IQA) constitutes a fundamental task within the field of computer vision yet it remains an unresolved challenge owing to the intricate distortion conditions diverse image contents and limited availability of data. Recently the community has witnessed the emergence of numerous large-scale pretrained foundation models. However it remains an open problem whether the scaling law in high-level tasks is also applicable to IQA tasks which are closely related to low-level clues. In this paper we demonstrate that with a proper injection of local distortion features a larger pretrained vision transformer (ViT) foundation model performs better in IQA tasks. Specifically since the large-scale pretrained ViT lacks local distortion structure and inductive bias we use a pretrained convolutional neural network (CNN) which is well known for capturing local structure to extract multi-scale image features. Further we propose a local distortion extractor to obtain local distortion features from the pretrained CNN and a local distortion injector to inject the local distortion features into ViT. By only training the extractor and injector our method can benefit from the rich knowledge in the powerful foundation models and achieve state-of-the-art performance on popular IQA datasets indicating that IQA is not only a low-level problem but also benefits from stronger high-level features drawn from large-scale pretrained models. Codes are publicly available at: https://github.com/NeosXu/LoDa.
[]
[]
[]
[]
238
239
Hyperbolic Anomaly Detection
Huimin Li, Zhentao Chen, Yunhao Xu, Junlin Hu
null
Anomaly detection is a challenging computer vision task in industrial scenarios. Advancements in deep learning constantly revolutionize vision-based anomaly detection methods and considerable progress has been made in both supervised and self-supervised anomaly detection. The commonly-used pipeline is to optimize the model by constraining the feature embeddings using a distance-based loss function. However these methods work in Euclidean space and they cannot well exploit data lying in non-Euclidean space. In this paper we are the first to explore the anomaly detection task in hyperbolic space a representative non-Euclidean space and propose a hyperbolic anomaly detection (HypAD) method. Specifically we first extract image features and then map them from Euclidean space to hyperbolic space where the hyperbolic distance metric is employed to optimize the proposed HypAD. Extensive experiments on the benchmarking datasets including MVTec AD and VisA show that our HypAD approach obtains the state-of-the-art performance demonstrating the effectiveness of our HypAD and the promise of investigating anomaly detection in hyperbolic space.
[]
[]
[]
[]
239
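The HypAD abstract above optimizes embeddings with a hyperbolic distance metric. Below is a minimal sketch of the Poincaré-ball distance commonly used for such embeddings; the projection step and the idea of scoring anomalies by distance to a "normal" prototype are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def project_to_ball(x, eps=1e-5):
    """Clip a Euclidean vector so it lies strictly inside the unit Poincare ball."""
    norm = np.linalg.norm(x)
    max_norm = 1.0 - eps
    return x * (max_norm / norm) if norm >= max_norm else x

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points of the Poincare ball (curvature -1)."""
    diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * diff / (denom + eps))

# Toy usage: the distance to a "normal" prototype could serve as an anomaly score.
feat = project_to_ball(np.array([0.6, 0.7]))
prototype = project_to_ball(np.array([0.1, 0.0]))
print(poincare_distance(feat, prototype))
```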
240
VLP: Vision Language Planning for Autonomous Driving
http://arxiv.org/abs/2401.05577
Chenbin Pan, Burhaneddin Yaman, Tommaso Nesti, Abhirup Mallik, Alessandro G Allievi, Senem Velipasalar, Liu Ren
2,401.05577
Autonomous driving is a complex and challenging task that aims at safe motion planning through scene understanding and reasoning. While vision-only autonomous driving methods have recently achieved notable performance through enhanced scene understanding several key issues including lack of reasoning low generalization performance and long-tail scenarios still need to be addressed. In this paper we present VLP a novel Vision-Language-Planning framework that exploits language models to bridge the gap between linguistic understanding and autonomous driving. VLP enhances autonomous driving systems by strengthening both the source memory foundation and the self-driving car's contextual understanding. VLP achieves state-of-the-art end-to-end planning performance on the challenging NuScenes dataset by achieving 35.9% and 60.5% reduction in terms of average L2 error and collision rates respectively compared to the previous best method. Moreover VLP shows improved performance in challenging long-tail scenarios and strong generalization capabilities when faced with new urban environments.
[]
[]
[]
[]
240
241
Attention Calibration for Disentangled Text-to-Image Personalization
http://arxiv.org/abs/2403.18551
Yanbing Zhang, Mengping Yang, Qin Zhou, Zhe Wang
2,403.18551
Recent thrilling progress in large-scale text-to-image (T2I) models has unlocked unprecedented synthesis quality of AI-generated content (AIGC) including image generation 3D and video composition. Further personalized techniques enable appealing customized production of a novel concept given only several images as reference. However an intriguing problem persists: Is it possible to capture multiple novel concepts from one single reference image? In this paper we identify that existing approaches fail to preserve visual consistency with the reference image and eliminate cross-influence from concepts. To alleviate this we propose an attention calibration mechanism to improve the concept-level understanding of the T2I model. Specifically we first introduce new learnable modifiers bound with classes to capture attributes of multiple concepts. Then the classes are separated and strengthened following the activation of the cross-attention operation ensuring comprehensive and self-contained concepts. Additionally we suppress the attention activation of different classes to mitigate mutual influence among concepts. Together our proposed method dubbed DisenDiff can learn disentangled multiple concepts from one single image and produce novel customized images with learned concepts. We demonstrate that our method outperforms the current state of the art in both qualitative and quantitative evaluations. More importantly our proposed techniques are compatible with LoRA and inpainting pipelines enabling more interactive experiences.
[]
[]
[]
[]
241
242
ProMark: Proactive Diffusion Watermarking for Causal Attribution
http://arxiv.org/abs/2403.09914
Vishal Asnani, John Collomosse, Tu Bui, Xiaoming Liu, Shruti Agarwal
2,403.09914
Generative AI (GenAI) is transforming creative workflows through the capability to synthesize and manipulate images via high-level prompts. Yet creatives are not well supported to receive recognition or reward for the use of their content in GenAI training. To this end we propose ProMark a causal attribution technique to attribute a synthetically generated image to its training data concepts like objects motifs templates artists or styles. The concept information is proactively embedded into the input training images using imperceptible watermarks and the diffusion models (unconditional or conditional) are trained to retain the corresponding watermarks in generated images. We show that we can embed as many as 2^16 unique watermarks into the training data and each training image can contain more than one watermark. ProMark can maintain image quality whilst outperforming correlation-based attribution. Finally several qualitative examples are presented providing the confidence that the presence of the watermark conveys a causative relationship between training data and synthetic images.
[]
[]
[]
[]
242
243
One-Shot Structure-Aware Stylized Image Synthesis
http://arxiv.org/abs/2402.17275
Hansam Cho, Jonghyun Lee, Seunggyu Chang, Yonghyun Jeong
2,402.17275
While GAN-based models have been successful in image stylization tasks they often struggle with structure preservation while stylizing a wide range of input images. Recently diffusion models have been adopted for image stylization but still lack the capability to maintain the original quality of input images. Building on this we propose OSASIS: a novel one-shot stylization method that is robust in structure preservation. We show that OSASIS is able to effectively disentangle the semantics from the structure of an image allowing it to control the level of content and style applied to a given input. We apply OSASIS to various experimental settings including stylization with out-of-domain reference images and stylization with text-driven manipulation. Results show that OSASIS outperforms other stylization methods especially for input images that were rarely encountered during training providing a promising solution to stylization via diffusion models.
[]
[]
[]
[]
243
244
GPT4Point: A Unified Framework for Point-Language Understanding and Generation
http://arxiv.org/abs/2312.02980
Zhangyang Qi, Ye Fang, Zeyi Sun, Xiaoyang Wu, Tong Wu, Jiaqi Wang, Dahua Lin, Hengshuang Zhao
2,312.0298
Multimodal Large Language Models (MLLMs) have excelled in 2D image-text comprehension and image generation but their understanding of the 3D world is notably deficient limiting progress in 3D language understanding and generation. To solve this problem we introduce GPT4Point an innovative and groundbreaking point-language multimodal model designed specifically for unified 3D object understanding and generation within the MLLM framework. As a powerful 3D MLLM GPT4Point can seamlessly execute a variety of point-text reference tasks such as point-cloud captioning and Q&A. Additionally GPT4Point is equipped with advanced capabilities for controllable 3D generation: it can produce high-quality results from low-quality point-text features while maintaining the geometric shapes and colors. To support the expansive needs of 3D object-text pairs we develop Pyramid-XL a point-language dataset annotation engine. It constructs a large-scale database of over 1M objects with varied text granularity levels from the Objaverse-XL dataset essential for training GPT4Point. A comprehensive benchmark has been proposed to evaluate 3D point-language understanding capabilities. In extensive evaluations GPT4Point has demonstrated superior performance in understanding and generation.
[]
[]
[]
[]
244
245
SemCity: Semantic Scene Generation with Triplane Diffusion
http://arxiv.org/abs/2403.07773
Jumin Lee, Sebin Lee, Changho Jo, Woobin Im, Juhyeong Seon, Sung-Eui Yoon
2,403.07773
We present "SemCity" a 3D diffusion model for semantic scene generation in real-world outdoor environments. Most 3D diffusion models focus on generating a single object synthetic indoor scenes or synthetic outdoor scenes while the generation of real-world outdoor scenes is rarely addressed. In this paper we concentrate on generating a real-outdoor scene through learning a diffusion model on a real-world outdoor dataset. In contrast to synthetic data real-outdoor datasets often contain more empty spaces due to sensor limitations causing challenges in learning real-outdoor distributions. To address this issue we exploit a triplane representation as a proxy form of scene distributions to be learned by our diffusion model. Furthermore we propose a triplane manipulation that integrates seamlessly with our triplane diffusion model. The manipulation improves our diffusion model's applicability in a variety of downstream tasks related to outdoor scene generation such as scene inpainting scene outpainting and semantic scene completion refinements. In experimental results we demonstrate that our triplane diffusion model shows meaningful generation results compared with existing work in a real-outdoor dataset SemanticKITTI. We also show our triplane manipulation facilitates seamlessly adding removing or modifying objects within a scene. Further it also enables the expansion of scenes toward a city-level scale. Finally we evaluate our method on semantic scene completion refinements where our diffusion model enhances predictions of semantic scene completion networks by learning scene distribution. Our code is available at https://github.com/zoomin-lee/SemCity.
[]
[]
[]
[]
245
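The SemCity abstract above learns a diffusion model over a triplane representation of outdoor scenes. As a generic illustration of how a triplane is queried (not the paper's code), the sketch below projects a 3D point onto the XY, XZ and YZ planes, bilinearly samples each feature plane and sums the results; the plane resolution, feature size and sum aggregation are made-up choices.

```python
import numpy as np

def bilinear_sample(plane, u, v):
    """Bilinearly sample an (H, W, C) feature plane at continuous coords (u, v) in [0, 1]."""
    H, W, _ = plane.shape
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[y0, x0] + wx * (1 - wy) * plane[y0, x1]
            + (1 - wx) * wy * plane[y1, x0] + wx * wy * plane[y1, x1])

def query_triplane(planes, point):
    """Aggregate features for a 3D point in [0, 1]^3 from three axis-aligned planes."""
    x, y, z = point
    f_xy = bilinear_sample(planes["xy"], x, y)
    f_xz = bilinear_sample(planes["xz"], x, z)
    f_yz = bilinear_sample(planes["yz"], y, z)
    return f_xy + f_xz + f_yz  # summation is one common aggregation choice

# Toy usage with random 32x32 planes holding 8-dim features.
rng = np.random.default_rng(0)
planes = {k: rng.normal(size=(32, 32, 8)) for k in ("xy", "xz", "yz")}
print(query_triplane(planes, (0.25, 0.5, 0.75)).shape)  # (8,)
```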
246
Improving Semantic Correspondence with Viewpoint-Guided Spherical Maps
http://arxiv.org/abs/2312.13216
Octave Mariotti, Oisin Mac Aodha, Hakan Bilen
2,312.13216
Recent self-supervised models produce visual features that are effective at encoding not only image-level but also pixel-level semantics. They have been reported to obtain impressive results for dense visual semantic correspondence estimation even outperforming fully-supervised methods. Nevertheless these models still fail in the presence of challenging image characteristics such as symmetries and repeated parts. To address these limitations we propose a new semantic correspondence estimation method that supplements state-of-the-art self-supervised features with 3D understanding via a weak geometric spherical prior. Compared to more involved 3D pipelines our model provides a simple and effective way of injecting informative geometric priors into the learned representation while requiring only weak viewpoint information. We also propose a new evaluation metric that better accounts for repeated-part and symmetry-induced mistakes. We show that our method succeeds in distinguishing between symmetric views and repeated parts across many object categories in the challenging SPair-71k dataset and also in generalizing to previously unseen classes in the AwA dataset.
[]
[]
[]
[]
246
247
MR-VNet: Media Restoration using Volterra Networks
Siddharth Roheda, Amit Unde, Loay Rashid
null
This research paper presents a novel class of restoration network architecture based on the Volterra series formulation. By incorporating non-linearity into the system response function through higher order convolutions instead of traditional activation functions we introduce a general framework for image/video restoration. Through extensive experimentation we demonstrate that our proposed architecture achieves state-of-the-art (SOTA) performance in the field of Image/Video Restoration. Moreover we establish that the recently introduced Non-Linear Activation Free Network (NAF-NET) can be considered a special case within the broader class of Volterra Neural Networks. These findings highlight the potential of Volterra Neural Networks as a versatile and powerful tool for addressing complex restoration tasks in computer vision.
[]
[]
[]
[]
247
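The MR-VNet abstract above injects non-linearity through higher-order convolutions of a Volterra series rather than activation functions. A minimal 1D second-order Volterra filter is sketched below to illustrate the idea; the kernel sizes and toy signal are illustrative assumptions, and the paper's networks operate on images and videos rather than 1D signals.

```python
import numpy as np

def volterra_filter_1d(x, h1, h2):
    """Second-order Volterra filter: a linear convolution term plus a quadratic
    term that couples pairs of delayed inputs via the kernel h2."""
    K = len(h1)
    assert h2.shape == (K, K)
    y = np.zeros_like(x, dtype=float)
    x_pad = np.concatenate([np.zeros(K - 1), x])  # causal zero-padding
    for n in range(len(x)):
        window = x_pad[n:n + K][::-1]              # x[n], x[n-1], ..., x[n-K+1]
        y[n] = h1 @ window + window @ h2 @ window  # 1st- and 2nd-order terms
    return y

# Toy usage: the quadratic term lets the filter respond non-linearly
# without any explicit activation function.
x = np.array([0.0, 1.0, 0.0, -1.0, 2.0])
h1 = np.array([0.5, 0.25, 0.1])
h2 = 0.05 * np.eye(3)
print(volterra_filter_1d(x, h1, h2))
```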
248
Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models
http://arxiv.org/abs/2403.17589
Yabin Zhang, Wenjie Zhu, Hui Tang, Zhiyuan Ma, Kaiyang Zhou, Lei Zhang
2,403.17589
With the emergence of pre-trained vision-language models like CLIP how to adapt them to various downstream classification tasks has garnered significant attention in recent research. The adaptation strategies can be typically categorized into three paradigms: zero-shot adaptation few-shot adaptation and the recently-proposed training-free few-shot adaptation. Most existing approaches are tailored for a specific setting and can only cater to one or two of these paradigms. In this paper we introduce a versatile adaptation approach that can effectively work under all three settings. Specifically we propose the dual memory networks that comprise dynamic and static memory components. The static memory caches training data knowledge enabling training-free few-shot adaptation while the dynamic memory preserves historical test features online during the testing process allowing for the exploration of additional data insights beyond the training set. This novel capability enhances model performance in the few-shot setting and enables model usability in the absence of training data. The two memory networks employ the same flexible memory interactive strategy which can operate in a training-free mode and can be further enhanced by incorporating learnable projection layers. Our approach is tested across 11 datasets under the three task settings. Remarkably in the zero-shot scenario it outperforms existing methods by over 3% and even shows superior results against methods utilizing external training data. Additionally our method exhibits robust performance against natural distribution shifts.
[]
[]
[]
[]
248
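The dual-memory abstract above classifies by reading from a static memory of cached training features and a dynamic memory of test-time features. The sketch below shows only a training-free flavor of such a memory interaction, in the spirit of cache-based adapters for CLIP; the similarity sharpening, blending weights and random features are hypothetical stand-ins, not the paper's exact formulation.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def memory_logits(query, mem_feats, mem_labels, num_classes, beta=5.0):
    """Training-free memory read-out: similarity-weighted vote over cached labels."""
    sims = query @ mem_feats.T                 # cosine sims (features pre-normalized)
    weights = np.exp(beta * (sims - 1.0))      # sharpen; equals 1 at perfect similarity
    one_hot = np.eye(num_classes)[mem_labels]
    return weights @ one_hot

def classify(query, zero_shot_logits, static_mem, dynamic_mem, num_classes, alpha=1.0):
    """Blend zero-shot logits with static (training) and dynamic (test-time) memories."""
    logits = zero_shot_logits.copy()
    for feats, labels in (static_mem, dynamic_mem):
        if len(labels) > 0:
            logits += alpha * memory_logits(query, feats, labels, num_classes)
    return int(np.argmax(logits))

# Toy usage with random 16-dim features and 3 classes.
rng = np.random.default_rng(0)
q = l2_normalize(rng.normal(size=16))
static = (l2_normalize(rng.normal(size=(6, 16))), np.array([0, 0, 1, 1, 2, 2]))
dynamic = (l2_normalize(rng.normal(size=(2, 16))), np.array([1, 2]))
print(classify(q, rng.normal(size=3), static, dynamic, num_classes=3))
```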
249
Single Mesh Diffusion Models with Field Latents for Texture Generation
http://arxiv.org/abs/2312.09250
Thomas W. Mitchel, Carlos Esteves, Ameesh Makadia
2,312.0925
We introduce a framework for intrinsic latent diffusion models operating directly on the surfaces of 3D shapes with the goal of synthesizing high-quality textures. Our approach is underpinned by two contributions: Field Latents a latent representation encoding textures as discrete vector fields on the mesh vertices and Field Latent Diffusion Models which learn to denoise a diffusion process in the learned latent space on the surface. We consider a single-textured-mesh paradigm where our models are trained to generate variations of a given texture on a mesh. We show the synthesized textures are of superior fidelity compared to those from existing single-textured-mesh generative models. Our models can also be adapted for user-controlled editing tasks such as inpainting and label-guided generation. The efficacy of our approach is due in part to the equivariance of our proposed framework under isometries allowing our models to seamlessly reproduce details across locally similar regions and opening the door to a notion of generative texture transfer. Code and visualizations are available at https://single-mesh-diffusion.github.io/.
[]
[]
[]
[]
249
250
LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge
http://arxiv.org/abs/2311.11860
Gongwei Chen, Leyang Shen, Rui Shao, Xiang Deng, Liqiang Nie
2,311.1186
Multimodal Large Language Models (MLLMs) have endowed LLMs with the ability to perceive and understand multi-modal signals. However most of the existing MLLMs mainly adopt vision encoders pretrained on coarsely aligned image-text pairs leading to insufficient extraction and reasoning of visual knowledge. To address this issue we devise a dual-Level vIsual knOwledge eNhanced Multimodal Large Language Model (LION) which empowers the MLLM by injecting visual knowledge in two levels. 1) Progressive incorporation of fine-grained spatial-aware visual knowledge. We design a vision aggregator that cooperates with region-level vision-language (VL) tasks to incorporate fine-grained spatial-aware visual knowledge into the MLLM. To alleviate the conflict between image-level and region-level VL tasks during incorporation we devise a dedicated stage-wise instruction-tuning strategy with mixture-of-adapters. This progressive incorporation scheme contributes to the mutual promotion between these two kinds of VL tasks. 2) Soft prompting of high-level semantic visual evidence. We facilitate the MLLM with high-level semantic visual evidence by leveraging diverse image tags. To mitigate the potential influence caused by imperfect predicted tags we propose a soft prompting method by embedding a learnable token into the tailored text instruction. Comprehensive experiments on several multi-modal benchmarks demonstrate the superiority of our model (e.g. improvement of 5% accuracy on VSR and 3% CIDEr on TextCaps over InstructBLIP 5% accuracy on RefCOCOg over Kosmos-2).
[]
[]
[]
[]
250
251
Learning to Select Views for Efficient Multi-View Understanding
Yunzhong Hou, Stephen Gould, Liang Zheng
null
Multiple camera view (multi-view) setups have proven useful in many computer vision applications. However the high computational cost associated with multiple views creates a significant challenge for end devices with limited computational resources. In modern CPUs pipelining breaks a longer job into steps and enables parallelism over sequential steps from multiple jobs. Inspired by this we study selective view pipelining for efficient multi-view understanding which breaks computation of multiple views into steps and only computes the most helpful views/steps in a parallel manner for the best efficiency. To this end we use reinforcement learning to learn a very light view selection module that analyzes the target object or scenario from initial views and selects the next-best-view for recognition or detection for pipeline computation. Experimental results on multi-view classification and detection tasks show that our approach achieves promising performance while using only 2 or 3 out of N available views significantly reducing computational costs while maintaining parallelism over GPU through selective view pipelining.
[]
[]
[]
[]
251
252
Consistency and Uncertainty: Identifying Unreliable Responses From Black-Box Vision-Language Models for Selective Visual Question Answering
http://arxiv.org/abs/2404.10193
Zaid Khan, Yun Fu
2,404.10193
The goal of selective prediction is to allow a model to abstain when it may not be able to deliver a reliable prediction which is important in safety-critical contexts. Existing approaches to selective prediction typically require access to the internals of a model require retraining a model or study only unimodal models. However the most powerful models (e.g. GPT-4) are typically only available as black boxes with inaccessible internals are not retrainable by end-users and are frequently used for multimodal tasks. We study the possibility of selective prediction for vision-language models in a realistic black-box setting. We propose using the principle of neighborhood consistency to identify unreliable responses from a black-box vision-language model in question answering tasks. We hypothesize that given only a visual question and model response the consistency of the model's responses over the neighborhood of a visual question will indicate reliability. It is impossible to directly sample neighbors in feature space in a black-box setting. Instead we show that it is possible to use a smaller proxy model to approximately sample from the neighborhood. We find that neighborhood consistency can be used to identify model responses to visual questions that are likely unreliable even in adversarial settings or settings that are out-of-distribution to the proxy model.
[]
[]
[]
[]
252
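The selective-VQA abstract above abstains when a black-box model is inconsistent over a neighborhood of a visual question. The sketch below shows only the decision rule, assuming caller-supplied `answer(image, question)` and `rephrase(question, k)` callables (the latter standing in for the proxy model that samples neighbors); both names and the agreement threshold are hypothetical.

```python
import random

def selective_answer(answer, rephrase, image, question, k=5, min_agreement=0.6):
    """Answer only when the black-box model is consistent over a neighborhood of
    rephrased questions; otherwise abstain (return None).

    `answer(image, question) -> str` and `rephrase(question, k) -> list[str]`
    are hypothetical callables supplied by the caller."""
    original = answer(image, question)
    neighbors = rephrase(question, k)
    responses = [answer(image, q) for q in neighbors]
    agreement = sum(r == original for r in responses) / max(len(responses), 1)
    return original if agreement >= min_agreement else None

# Toy usage with mocked black boxes: a consistent model answers, an erratic one abstains.
consistent = lambda img, q: "red" if "color" in q else "unsure"
erratic = lambda img, q: random.choice(["red", "blue", "green", "two"])
make_neighbors = lambda q, k: [f"{q} (rephrasing {i})" for i in range(k)]
print(selective_answer(consistent, make_neighbors, None, "what color is the car?"))  # "red"
print(selective_answer(erratic, make_neighbors, None, "what color is the car?"))     # usually None
```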
253
SAI3D: Segment Any Instance in 3D Scenes
http://arxiv.org/abs/2312.11557
Yingda Yin, Yuzheng Liu, Yang Xiao, Daniel Cohen-Or, Jingwei Huang, Baoquan Chen
2,312.11557
Advancements in 3D instance segmentation have traditionally been tethered to the availability of annotated datasets limiting their application to a narrow spectrum of object categories. Recent efforts have sought to harness vision-language models like CLIP for open-set semantic reasoning yet these methods struggle to distinguish between objects of the same categories and rely on specific prompts that are not universally applicable. In this paper we introduce SAI3D a novel zero-shot 3D instance segmentation approach that synergistically leverages geometric priors and semantic cues derived from Segment Anything Model (SAM). Our method partitions a 3D scene into geometric primitives which are then progressively merged into 3D instance segmentations that are consistent with the multi-view SAM masks. Moreover we design a hierarchical region-growing algorithm with a dynamic thresholding mechanism which largely improves the robustness of fine-grained 3D scene parsing. Empirical evaluations on ScanNet Matterport3D and the more challenging ScanNet++ datasets demonstrate the superiority of our approach. Notably SAI3D outperforms existing open-vocabulary baselines and even surpasses fully-supervised methods in class-agnostic segmentation on ScanNet++. Our project page is at https://yd-yin.github.io/SAI3D/.
[]
[]
[]
[]
253
254
Implicit Motion Function
Yue Gao, Jiahao Li, Lei Chu, Yan Lu
null
Recent advancements in video modeling extensively rely on optical flow to represent the relationships across frames but this approach often lacks efficiency and fails to model the probability of the intrinsic motion of objects. In addition conventional encoder-decoder frameworks in video processing focus on modeling the correlation in the encoder leading to limited generative capabilities and redundant intermediate representations. To address these challenges this paper proposes a novel Implicit Motion Function (IMF) method. Our approach utilizes a low-dimensional latent token as the implicit representation along with the use of cross-attention to implicitly model the correlation between frames. This enables the implicit modeling of temporal correlations and understanding of object motions. Our method not only improves sparsity and efficiency in representation but also explores the generative capabilities of the decoder by integrating correlation modeling within it. The IMF framework facilitates video editing and other generative tasks by allowing the direct manipulation of latent tokens. We validate the effectiveness of IMF through extensive experiments on multiple video tasks demonstrating superior performance in terms of reconstructed video quality compression efficiency and generation ability.
[]
[]
[]
[]
254
255
Unified Entropy Optimization for Open-Set Test-Time Adaptation
http://arxiv.org/abs/2404.06065
Zhengqing Gao, Xu-Yao Zhang, Cheng-Lin Liu
2,404.06065
Test-time adaptation (TTA) aims at adapting a model pre-trained on the labeled source domain to the unlabeled target domain. Existing methods usually focus on improving TTA performance under covariate shifts while neglecting semantic shifts. In this paper we delve into a realistic open-set TTA setting where the target domain may contain samples from unknown classes. Many state-of-the-art closed-set TTA methods perform poorly when applied to open-set scenarios which can be attributed to the inaccurate estimation of data distribution and model confidence. To address these issues we propose a simple but effective framework called unified entropy optimization (UniEnt) which is capable of simultaneously adapting to covariate-shifted in-distribution (csID) data and detecting covariate-shifted out-of-distribution (csOOD) data. Specifically UniEnt first mines pseudo-csID and pseudo-csOOD samples from test data followed by entropy minimization on the pseudo-csID data and entropy maximization on the pseudo-csOOD data. Furthermore we introduce UniEnt+ to alleviate the noise caused by hard data partition leveraging sample-level confidence. Extensive experiments on CIFAR benchmarks and Tiny-ImageNet-C show the superiority of our framework. The code is available at https://github.com/gaozhengqing/UniEnt.
[]
[]
[]
[]
255
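The UniEnt abstract above minimizes entropy on pseudo-csID samples and maximizes it on pseudo-csOOD samples during test-time adaptation. A hedged PyTorch-style sketch of that objective is below; the max-softmax threshold used to split the batch is an illustrative stand-in for the paper's distribution-based mining, not its actual rule.

```python
import torch
import torch.nn.functional as F

def entropy(logits):
    """Per-sample Shannon entropy of the softmax distribution."""
    p = F.softmax(logits, dim=-1)
    return -(p * torch.log(p + 1e-8)).sum(dim=-1)

def open_set_tta_loss(logits, ood_threshold=0.5):
    """Entropy minimization on pseudo-in-distribution samples and entropy
    maximization on pseudo-out-of-distribution samples (illustrative split)."""
    with torch.no_grad():
        msp = F.softmax(logits, dim=-1).max(dim=-1).values
        is_id = msp >= ood_threshold
    ent = entropy(logits)
    loss_id = ent[is_id].mean() if is_id.any() else logits.sum() * 0.0
    loss_ood = -ent[~is_id].mean() if (~is_id).any() else logits.sum() * 0.0
    return loss_id + loss_ood

# Toy usage: one adaptation step on a random batch of 10-class logits.
logits = torch.randn(32, 10, requires_grad=True)
loss = open_set_tta_loss(logits)
loss.backward()
print(float(loss), logits.grad.shape)
```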
256
TexOct: Generating Textures of 3D Models with Octree-based Diffusion
Jialun Liu, Chenming Wu, Xinqi Liu, Xing Liu, Jinbo Wu, Haotian Peng, Chen Zhao, Haocheng Feng, Jingtuo Liu, Errui Ding
null
This paper focuses on synthesizing high-quality and complete textures directly on the surface of 3D models within 3D space. 2D diffusion-based methods face challenges in generating 2D texture maps due to the infinite possibilities of UV mapping for a given 3D mesh. Utilizing point clouds helps circumvent variations arising from diverse mesh topologies and UV mappings. Nevertheless achieving dense point clouds to accurately represent texture details poses a challenge due to limited computational resources. To address these challenges we propose an efficient octree-based diffusion pipeline called TexOct. Our method starts by sampling a point cloud from the surface of a given 3D model with each point containing texture noise values. We utilize an octree structure to efficiently represent this point cloud. Additionally we introduce an innovative octree-based diffusion model that leverages the denoising capabilities of the Denoising Diffusion Probabilistic Model (DDPM). This model gradually reduces the texture noise on the octree nodes resulting in the restoration of fine texture. Experimental results on ShapeNet demonstrate that TexOct effectively generates high-quality 3D textures in both unconditional and text/image-conditional scenarios.
[]
[]
[]
[]
256
257
Anatomically Constrained Implicit Face Models
http://arxiv.org/abs/2312.07538
Prashanth Chandran, Gaspard Zoss
2,312.07538
Coordinate based implicit neural representations have gained rapid popularity in recent years as they have been successfully used in image geometry and scene modeling tasks. In this work we present a novel use case for such implicit representations in the context of learning anatomically constrained face models. Actor specific anatomically constrained face models are the state of the art in both facial performance capture and performance retargeting. Despite their practical success these anatomical models are slow to evaluate and often require extensive data capture to be built. We propose the anatomical implicit face model; an ensemble of implicit neural networks that jointly learn to model the facial anatomy and the skin surface with high-fidelity and can readily be used as a drop-in replacement to conventional blendshape models. Given an arbitrary set of skin surface meshes of an actor and only a neutral shape with estimated skull and jaw bones our method can recover a dense anatomical substructure which constrains every point on the facial surface. We demonstrate the usefulness of our approach in several tasks ranging from shape fitting and shape editing to performance retargeting.
[]
[]
[]
[]
257
258
Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning
http://arxiv.org/abs/2403.12030
Da-Wei Zhou, Hai-Long Sun, Han-Jia Ye, De-Chuan Zhan
2,403.1203
Class-Incremental Learning (CIL) requires a learning system to continually learn new classes without forgetting. Despite the strong performance of Pre-Trained Models (PTMs) in CIL a critical issue persists: learning new classes often results in the overwriting of old ones. Excessive modification of the network causes forgetting while minimal adjustments lead to an inadequate fit for new classes. As a result it is desired to figure out a way of efficient model updating without harming former knowledge. In this paper we propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL. To enable model updating without conflict we train a distinct lightweight adapter module for each new task aiming to create task-specific subspaces. These adapters span a high-dimensional feature space enabling joint decision-making across multiple subspaces. As data evolves the expanding subspaces render the old class classifiers incompatible with new-stage spaces. Correspondingly we design a semantic-guided prototype complement strategy that synthesizes old classes' new features without using any old class instance. Extensive experiments on seven benchmark datasets verify EASE's state-of-the-art performance. Code is available at: https://github.com/sun-hailong/CVPR24-Ease
[]
[]
[]
[]
258
259
Capturing Closely Interacted Two-Person Motions with Reaction Priors
Qi Fang, Yinghui Fan, Yanjun Li, Junting Dong, Dingwei Wu, Weidong Zhang, Kang Chen
null
In this paper we focus on capturing closely interacted two-person motions from monocular videos an important yet understudied topic. Unlike less-interacted motions closely interacted motions contain frequently occurring inter-human occlusions which pose significant challenges to existing capturing algorithms. To address this problem our key observation is that close physical interactions between two subjects typically happen under very specific situations (e.g. handshake hug etc.) and such situational contexts contain strong prior semantics to help infer the poses of occluded joints. In this spirit we introduce reaction priors which are invertible neural networks that bi-directionally model the pose probability distributions of one person given the pose of the other. The learned reaction priors are then incorporated into a query-based pose estimator which is a decoder-only Transformer with self-attentions on both intra-joint and inter-joint relationships. We demonstrate that our design achieves considerably higher performance than previous methods on multiple benchmarks. What's more as existing datasets lack sufficient cases of close human-human interactions we also build a new dataset called Dual-Human to better evaluate different methods. Dual-Human contains around 2k sequences of closely interacted two-person motions each with synthetic multi-view renderings contact annotations and text descriptions. We believe that this new public dataset can significantly promote further research in this area.
[]
[]
[]
[]
259
260
RobustSAM: Segment Anything Robustly on Degraded Images
Wei-Ting Chen, Yu-Jiet Vong, Sy-Yen Kuo, Sizhou Ma, Jian Wang
null
Segment Anything Model (SAM) has emerged as a transformative approach in image segmentation acclaimed for its robust zero-shot segmentation capabilities and flexible prompting system. Nonetheless its performance is challenged by images with degraded quality. Addressing this limitation we propose the Robust Segment Anything Model (RobustSAM) which enhances SAM's performance on low-quality images while preserving its promptability and zero-shot generalization. Our method leverages the pre-trained SAM model with only marginal parameter increments and computational requirements. The additional parameters of RobustSAM can be optimized within 30 hours on eight GPUs demonstrating its feasibility and practicality for typical research laboratories. We also introduce the Robust-Seg dataset a collection of 688K image-mask pairs with different degradations designed to train and evaluate our model optimally. Extensive experiments across various segmentation tasks and datasets confirm RobustSAM's superior performance especially under zero-shot conditions underscoring its potential for extensive real-world application. Additionally our method has been shown to effectively improve the performance of SAM-based downstream tasks such as single image dehazing and deblurring.
[]
[]
[]
[]
260
261
MultiDiff: Consistent Novel View Synthesis from a Single Image
Norman Müller, Katja Schwarz, Barbara Rössle, Lorenzo Porzi, Samuel Rota Bulò, Matthias Nießner, Peter Kontschieder
null
We introduce MultiDiff a novel approach for consistent novel view synthesis of scenes from a single RGB image. The task of synthesizing novel views from a single reference image is highly ill-posed by nature as there exist multiple plausible explanations for unobserved areas. To address this issue we incorporate strong priors in the form of monocular depth predictors and video-diffusion models. Monocular depth enables us to condition our model on warped reference images for the target views increasing geometric stability. The video-diffusion prior provides a strong proxy for 3D scenes allowing the model to learn continuous and pixel-accurate correspondences across generated images. In contrast to approaches relying on autoregressive image generation that are prone to drifts and error accumulation MultiDiff jointly synthesizes a sequence of frames yielding high-quality and multi-view consistent results -- even for long-term scene generation with large camera movements while reducing inference time by an order of magnitude. For additional consistency and image quality improvements we introduce a novel structured noise distribution. Our experimental results demonstrate that MultiDiff outperforms state-of-the-art methods on the challenging real-world datasets RealEstate10K and ScanNet. Finally our model naturally supports multi-view consistent editing without the need for further tuning.
[]
[]
[]
[]
261
262
In-N-Out: Faithful 3D GAN Inversion with Volumetric Decomposition for Face Editing
Yiran Xu, Zhixin Shu, Cameron Smith, Seoung Wug Oh, Jia-Bin Huang
null
3D-aware GANs offer new capabilities for view synthesis while preserving the editing functionalities of their 2D counterparts. GAN inversion is a crucial step that seeks the latent code to reconstruct input images or videos subsequently enabling diverse editing tasks through manipulation of this latent code. However a model pre-trained on a particular dataset (e.g. FFHQ) often has difficulty reconstructing images with out-of-distribution (OOD) objects such as faces with heavy make-up or occluding objects. We address this issue by explicitly modeling OOD objects from the input in 3D-aware GANs. Our core idea is to represent the image using two individual neural radiance fields: one for the in-distribution content and the other for the out-of-distribution object. The final reconstruction is achieved by optimizing the composition of these two radiance fields with carefully designed regularization. We demonstrate that our explicit decomposition alleviates the inherent trade-off between reconstruction fidelity and editability. We evaluate reconstruction accuracy and editability of our method on challenging real face images and videos and showcase favorable results against other baselines.
[]
[]
[]
[]
262
263
Atom-Level Optical Chemical Structure Recognition with Limited Supervision
http://arxiv.org/abs/2404.01743
Martijn Oldenhof, Edward De Brouwer, Adam Arany, Yves Moreau
2,404.01743
Identifying the chemical structure from a graphical representation or image of a molecule is a challenging pattern recognition task that would greatly benefit drug development. Yet existing methods for chemical structure recognition do not typically generalize well and show diminished effectiveness when confronted with domains where data is sparse or costly to generate such as hand-drawn molecule images. To address this limitation we propose a new chemical structure recognition tool that delivers state-of-the-art performance and can adapt to new domains with a limited number of data samples and supervision. Unlike previous approaches our method provides atom-level localization and can therefore segment the image into the different atoms and bonds. Our model is the first model to perform OCSR with atom-level entity detection with only SMILES supervision. Through rigorous and extensive benchmarking we demonstrate the preeminence of our chemical structure recognition approach in terms of data efficiency accuracy and atom-level entity prediction.
[]
[]
[]
[]
263
264
L4D-Track: Language-to-4D Modeling Towards 6-DoF Tracking and Shape Reconstruction in 3D Point Cloud Stream
Jingtao Sun, Yaonan Wang, Mingtao Feng, Yulan Guo, Ajmal Mian, Mike Zheng Shou
null
3D visual language multi-modal modeling plays an important role in actual human-computer interaction. However the inaccessibility of large-scale 3D-language pairs restricts their applicability in real-world scenarios. In this paper we aim to handle a real-time multi-task for 6-DoF pose tracking of unknown objects leveraging a 3D-language pre-training scheme from a series of 3D point cloud video streams while simultaneously performing 3D shape reconstruction in the current observation. To this end we present a generic Language-to-4D modeling paradigm termed L4D-Track that tackles zero-shot 6-DoF tracking and shape reconstruction by learning pairwise implicit 3D representation and multi-level multi-modal alignment. Our method constitutes two core parts. 1) Pairwise Implicit 3D Space Representation that establishes spatial-temporal to language coherence descriptions across continuous 3D point cloud video. 2) Language-to-4D Association and Contrastive Alignment enables multi-modality semantic connections between 3D point cloud video and language. Our method trained exclusively on the public NOCS-REAL275 dataset achieves promising results on two public benchmarks. This not only shows powerful generalization performance but also proves its remarkable capability in zero-shot inference.
[]
[]
[]
[]
264
265
General Point Model Pretraining with Autoencoding and Autoregressive
Zhe Li, Zhangyang Gao, Cheng Tan, Bocheng Ren, Laurence T. Yang, Stan Z. Li
null
The pre-training architectures of large language models encompass various types including autoencoding models autoregressive models and encoder-decoder models. We posit that any modality can potentially benefit from a large language model as long as it undergoes vector quantization to become discrete tokens. Inspired by the General Language Model we propose a General Point Model (GPM) that seamlessly integrates autoencoding and autoregressive tasks in a point cloud transformer. This model is versatile allowing fine-tuning for downstream point cloud representation tasks as well as unconditional and conditional generation tasks. GPM enhances masked prediction in autoencoding through various forms of mask padding tasks leading to improved performance in point cloud understanding. Additionally GPM demonstrates highly competitive results in unconditional point cloud generation tasks even exhibiting the potential for conditional generation tasks by modifying the input's conditional information. Compared to models like Point-BERT MaskPoint and PointMAE our GPM achieves superior performance in point cloud understanding tasks. Furthermore the integration of autoregressive and autoencoding within the same transformer underscores its versatility across different downstream tasks.
[]
[]
[]
[]
265
266
Combining Frame and GOP Embeddings for Neural Video Representation
Jens Eirik Saethre, Roberto Azevedo, Christopher Schroers
null
Implicit neural representations (INRs) were recently proposed as a new video compression paradigm with existing approaches performing on par with HEVC. However such methods only perform well in limited settings e.g. specific model sizes fixed aspect ratios and low-motion videos. We address this issue by proposing T-NeRV a hybrid video INR that combines frame-specific embeddings with GOP-specific features providing a lever for content-specific fine-tuning. We employ entropy-constrained training to jointly optimize our model for rate and distortion and demonstrate that T-NeRV can thereby automatically adjust this lever during training effectively fine-tuning itself to the target content. We evaluate T-NeRV on the UVG dataset where it achieves state-of-the-art results on the video representation task outperforming previous works by up to 3dB PSNR on challenging high-motion sequences. Further our method improves on the compression performance of previous methods and is the first video INR to outperform HEVC on all UVG sequences.
[]
[]
[]
[]
266
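The T-NeRV abstract above trains a video INR with an entropy constraint so rate and distortion are optimized jointly. The sketch below shows the generic form of such an objective, reconstruction error plus a differentiable bit-rate proxy weighted by lambda; the factorized Gaussian rate proxy and the weighting are assumptions for illustration, not the paper's entropy model.

```python
import torch

def rate_proxy_bits(params, sigma=1.0):
    """Differentiable proxy for the bit cost of parameters under a factorized
    Gaussian entropy model (an illustrative assumption)."""
    nats = 0.5 * (params / sigma) ** 2 + 0.5 * torch.log(torch.tensor(2 * torch.pi * sigma ** 2))
    return nats.sum() / torch.log(torch.tensor(2.0))  # nats -> bits

def rate_distortion_loss(pred, target, params, lam=1e-6):
    """Joint objective L = D + lambda * R used for entropy-constrained training."""
    distortion = torch.mean((pred - target) ** 2)
    rate = rate_proxy_bits(params)
    return distortion + lam * rate

# Toy usage with a random "frame" and a single weight tensor.
pred, target = torch.rand(3, 64, 64), torch.rand(3, 64, 64)
weights = torch.randn(1000, requires_grad=True)
loss = rate_distortion_loss(pred, target, weights)
loss.backward()
print(float(loss))
```

Raising lambda trades reconstruction quality for a smaller bitstream, which is the lever the abstract describes tuning per sequence.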
267
LiDAR-based Person Re-identification
http://arxiv.org/abs/2312.03033
Wenxuan Guo, Zhiyu Pan, Yingping Liang, Ziheng Xi, Zhicheng Zhong, Jianjiang Feng, Jie Zhou
2,312.03033
Camera-based person re-identification (ReID) systems have been widely applied in the field of public security. However cameras often lack the perception of 3D morphological information of humans and are susceptible to various limitations such as inadequate illumination complex background and personal privacy. In this paper we propose a LiDAR-based ReID framework ReID3D that utilizes a pre-training strategy to retrieve features of 3D body shape and introduces a Graph-based Complementary Enhancement Encoder for extracting comprehensive features. Due to the lack of LiDAR datasets we build LReID the first LiDAR-based person ReID dataset which is collected in several outdoor scenes with variations in natural conditions. Additionally we introduce LReID-sync a simulated pedestrian dataset designed for pre-training encoders with tasks of point cloud completion and shape parameter learning. Extensive experiments on LReID show that ReID3D achieves exceptional performance with a rank-1 accuracy of 94.0 highlighting the significant potential of LiDAR in addressing person ReID tasks. To the best of our knowledge we are the first to propose a solution for LiDAR-based ReID. The code and dataset are available at https://github.com/GWxuan/ReID3D.
[]
[]
[]
[]
267
268
Fantastic Animals and Where to Find Them: Segment Any Marine Animal with Dual SAM
http://arxiv.org/abs/2404.04996
Pingping Zhang, Tianyu Yan, Yang Liu, Huchuan Lu
2,404.04996
As an important pillar of underwater intelligence Marine Animal Segmentation (MAS) involves segmenting animals within marine environments. Previous methods don't excel in extracting long-range contextual features and overlook the connectivity between discrete pixels. Recently Segment Anything Model (SAM) offers a universal framework for general segmentation tasks. Unfortunately trained with natural images SAM does not obtain the prior knowledge from marine images. In addition the single-position prompt of SAM is very insufficient for prior guidance. To address these issues we propose a novel feature learning framework named Dual-SAM for high-performance MAS. To this end we first introduce a dual structure with SAM's paradigm to enhance feature learning of marine images. Then we propose a Multi-level Coupled Prompt (MCP) strategy to instruct comprehensive underwater prior information and enhance the multi-level features of SAM's encoder with adapters. Subsequently we design a Dilated Fusion Attention Module (DFAM) to progressively integrate multi-level features from SAM's encoder. Finally instead of directly predicting the masks of marine animals we propose a Criss-Cross Connectivity Prediction (C3P) paradigm to capture the inter-connectivity between discrete pixels. With dual decoders it generates pseudo-labels and achieves mutual supervision for complementary feature representations resulting in considerable improvements over previous techniques. Extensive experiments verify that our proposed method achieves state-of-the-art performances on five widely-used MAS datasets. The code is available at https://github.com/Drchip61/Dual SAM.
[]
[]
[]
[]
268
269
Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners
http://arxiv.org/abs/2402.17723
Yazhou Xing, Yingqing He, Zeyue Tian, Xintao Wang, Qifeng Chen
2,402.17723
Video and audio content creation serves as the core technique for the movie industry and professional users. Recently existing diffusion-based methods tackle video and audio generation separately which hinders the technique transfer from academia to industry. In this work we aim at filling the gap with a carefully designed optimization-based framework for cross-visual-audio and joint-visual-audio generation. We observe the powerful generation ability of off-the-shelf video or audio generation models. Thus instead of training the giant models from scratch we propose to bridge the existing strong models with a shared latent representation space. Specifically we propose a multimodality latent aligner with the pre-trained ImageBind model. Our latent aligner shares a similar core as the classifier guidance that guides the diffusion denoising process during inference time. Through carefully designed optimization strategy and loss functions we show the superior performance of our method on joint video-audio generation visual-steered audio generation and audio-steered visual generation tasks. The project website can be found at https://yzxing87.github.io/Seeing-and-Hearing/.
[]
[]
[]
[]
269
270
Model Adaptation for Time Constrained Embodied Control
Jaehyun Song, Minjong Yoo, Honguk Woo
null
When adopting a deep learning model for embodied agents it is required that the model structure be optimized for specific tasks and operational conditions. Such optimization can be static such as model compression or dynamic such as adaptive inference. Yet these techniques have not been fully investigated for embodied control systems subject to time constraints which necessitate sequential decision-making for multiple tasks each with distinct inference latency limitations. In this paper we present MoDeC a time constraint-aware embodied control framework using the modular model adaptation. We formulate model adaptation to varying operational conditions on resource and time restrictions as dynamic routing on a modular network incorporating these conditions as part of multi-task objectives. Our evaluation across several vision-based embodied environments demonstrates the robustness of MoDeC showing that it outperforms other model adaptation methods in both performance and adherence to time constraints in robotic manipulation and autonomous driving applications.
[]
[]
[]
[]
270
271
Objects as Volumes: A Stochastic Geometry View of Opaque Solids
http://arxiv.org/abs/2312.15406
Bailey Miller, Hanyu Chen, Alice Lai, Ioannis Gkioulekas
2,312.15406
We develop a theory for the representation of opaque solids as volumes. Starting from a stochastic representation of opaque solids as random indicator functions we prove the conditions under which such solids can be modeled using exponential volumetric transport. We also derive expressions for the volumetric attenuation coefficient as a functional of the probability distributions of the underlying indicator functions. We generalize our theory to account for isotropic and anisotropic scattering at different parts of the solid and for representations of opaque solids as stochastic implicit surfaces. We derive our volumetric representation from first principles which ensures that it satisfies physical constraints such as reciprocity and reversibility. We use our theory to explain compare and correct previous volumetric representations as well as propose meaningful extensions that lead to improved performance in 3D reconstruction tasks.
[]
[]
[]
[]
271
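The stochastic-geometry abstract above models opaque solids with exponential volumetric transport, where visibility along a ray is governed by an attenuation coefficient. As a worked illustration of that transport model (standard volume rendering, not the paper's derivation), the sketch below numerically computes the transmittance T = exp(-∫σ ds) along a discretized ray and the per-segment rendering weights.

```python
import numpy as np

def transmittance(sigma, dt):
    """Transmittance after each ray segment under exponential transport:
    T_i = exp(-sum_{j<=i} sigma_j * dt_j)."""
    optical_depth = np.cumsum(sigma * dt)
    return np.exp(-optical_depth)

def segment_alpha(sigma, dt):
    """Opacity contributed by each segment, 1 - exp(-sigma * dt)."""
    return 1.0 - np.exp(-sigma * dt)

# Toy usage: a ray crossing a dense slab between samples 40 and 60.
sigma = np.zeros(100)
sigma[40:60] = 5.0                    # high attenuation inside the solid
dt = np.full(100, 0.01)               # uniform step length along the ray
T = transmittance(sigma, dt)
weights = segment_alpha(sigma, dt) * np.concatenate([[1.0], T[:-1]])
print(T[39], T[59], weights.sum())    # ~1 before the slab, ~exp(-1) after it
```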
272
ActiveDC: Distribution Calibration for Active Finetuning
http://arxiv.org/abs/2311.07634
Wenshuai Xu, Zhenghui Hu, Yu Lu, Jinzhou Meng, Qingjie Liu, Yunhong Wang
2,311.07634
The pretraining-finetuning paradigm has gained popularity in various computer vision tasks. In this paradigm the emergence of active finetuning arises due to the abundance of large-scale data and costly annotation requirements. Active finetuning involves selecting a subset of data from an unlabeled pool for annotation facilitating subsequent finetuning. However the use of a limited number of training samples can lead to a biased distribution potentially resulting in model overfitting. In this paper we propose a new method called ActiveDC for the active finetuning tasks. Firstly we select samples for annotation by optimizing the distribution similarity between the subset to be selected and the entire unlabeled pool in continuous space. Secondly we calibrate the distribution of the selected samples by exploiting implicit category information in the unlabeled pool. The feature visualization provides an intuitive sense of the effectiveness of our approach to distribution calibration. We conducted extensive experiments on three image classification datasets with different sampling ratios. The results indicate that ActiveDC consistently outperforms the baseline performance in all image classification tasks. The improvement is particularly significant when the sampling ratio is low with performance gains of up to 10%. Our code will be released.
[]
[]
[]
[]
272
273
Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling
http://arxiv.org/abs/2403.01053
Jianan Fan, Dongnan Liu, Hang Chang, Heng Huang, Mei Chen, Weidong Cai
2,403.01053
Machine learning holds tremendous promise for transforming the fundamental practice of scientific discovery by virtue of its data-driven nature. With the ever-increasing stream of research data collection it would be appealing to autonomously explore patterns and insights from observational data for discovering novel classes of phenotypes and concepts. However in the biomedical domain there are several challenges inherently presented in the accumulated data which hamper the progress of novel class discovery. The non-i.i.d. data distribution accompanied by the severe imbalance among different groups of classes essentially leads to ambiguous and biased semantic representations. In this work we present a geometry-constrained probabilistic modeling treatment to resolve the identified issues. First we propose to parameterize the approximated posterior of instance embedding as a marginal von Mises-Fisher distribution to account for the interference of distributional latent bias. Then we incorporate a suite of critical geometric properties to impose proper constraints on the layout of the constructed embedding space which in turn minimizes the uncontrollable risk for unknown class learning and structuring. Furthermore a spectral graph-theoretic method is devised to estimate the number of potential novel classes. It inherits two intriguing merits compared to existing approaches namely high computational efficiency and flexibility for taxonomy-adaptive estimation. Extensive experiments across various biomedical scenarios substantiate the effectiveness and general applicability of our method.
[]
[]
[]
[]
273
274
MVHumanNet: A Large-scale Dataset of Multi-view Daily Dressing Human Captures
http://arxiv.org/abs/2312.02963
Zhangyang Xiong, Chenghong Li, Kenkun Liu, Hongjie Liao, Jianqiao Hu, Junyi Zhu, Shuliang Ning, Lingteng Qiu, Chongjie Wang, Shijie Wang, Shuguang Cui, Xiaoguang Han
2,312.02963
In this era the success of large language models and text-to-image models can be attributed to the driving force of large-scale datasets. However in the realm of 3D vision while remarkable progress has been made with models trained on large-scale synthetic and real-captured object data like Objaverse and MVImgNet a similar level of progress has not been observed in the domain of human-centric tasks partially due to the lack of a large-scale human dataset. Existing datasets of high-fidelity 3D human capture continue to be mid-sized due to the significant challenges in acquiring large-scale high-quality 3D human data. To bridge this gap we present MVHumanNet a dataset that comprises multi-view human action sequences of 4500 human identities. The primary focus of our work is on collecting human data that features a large number of diverse identities and everyday clothing using a multi-view human capture system which facilitates easily scalable data collection. Our dataset contains 9000 daily outfits 60000 motion sequences and 645 million frames with extensive annotations including human masks camera parameters 2D and 3D keypoints SMPL/SMPLX parameters and corresponding textual descriptions. To explore the potential of MVHumanNet in various 2D and 3D visual tasks we conducted pilot studies on view-consistent action recognition human NeRF reconstruction text-driven view-unconstrained human image generation as well as 2D view-unconstrained human image and 3D avatar generation. Extensive experiments demonstrate the performance improvements and effective applications enabled by the scale provided by MVHumanNet. As the current largest-scale 3D human dataset we hope that the release of MVHumanNet data with annotations will foster further innovations in the domain of 3D human-centric tasks at scale.
[]
[]
[]
[]
274
275
Communication-Efficient Federated Learning with Accelerated Client Gradient
http://arxiv.org/abs/2201.03172
Geeho Kim, Jinkyu Kim, Bohyung Han
2,201.03172
Federated learning often suffers from slow and unstable convergence due to the heterogeneous characteristics of participating client datasets. Such a tendency is aggravated when the client participation ratio is low since the information collected from the clients has large variations. To address this challenge we propose a simple but effective federated learning framework which improves the consistency across clients and facilitates the convergence of the server model. This is achieved by making the server broadcast a global model with a lookahead gradient. This strategy enables the proposed approach to convey the projected global update information to participants effectively without additional client memory and extra communication costs. We also regularize local updates by aligning each client with the overshot global model to reduce bias and improve the stability of our algorithm. We provide the theoretical convergence rate of our algorithm and demonstrate remarkable performance gains in terms of accuracy and communication efficiency compared to the state-of-the-art methods especially with low client participation rates. The source code is available at our project page.
[]
[]
[]
[]
275
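The federated-learning abstract above has the server broadcast a global model overshot with a lookahead gradient and aligns local updates with that broadcast model. The sketch below captures that control flow on a quadratic toy problem; the lookahead and proximal coefficients, the local optimizer and the aggregation rule are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def local_update(w_init, grad_fn, anchor, lr=0.1, beta=0.01, steps=5):
    """Client SGD on its local loss plus a proximal pull toward the broadcast
    (overshot) global model `anchor`."""
    w = w_init.copy()
    for _ in range(steps):
        w -= lr * (grad_fn(w) + beta * (w - anchor))
    return w

def server_round(w_global, momentum, client_grad_fns, lam=0.85):
    """One communication round: broadcast a lookahead model, aggregate client deltas."""
    lookahead = w_global + lam * momentum           # overshoot with the accumulated update
    client_models = [local_update(lookahead, g, lookahead) for g in client_grad_fns]
    avg_delta = np.mean([w - lookahead for w in client_models], axis=0)
    momentum = lam * momentum + avg_delta           # accumulate the global update direction
    return w_global + momentum, momentum

# Toy usage: two clients with different quadratic minima (heterogeneous data).
grads = [lambda w: 2 * (w - 1.0), lambda w: 2 * (w + 3.0)]
w, m = np.zeros(1), np.zeros(1)
for _ in range(20):
    w, m = server_round(w, m, grads)
print(w)  # settles near the average of the client optima (around -1)
```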
276
LLMs are Good Action Recognizers
http://arxiv.org/abs/2404.00532
Haoxuan Qu, Yujun Cai, Jun Liu
2,404.00532
Skeleton-based action recognition has attracted lots of research attention. Recently to build an accurate skeleton-based action recognizer a variety of works have been proposed. Among them some works use large model architectures as backbones of their recognizers to boost the skeleton data representation capability while some other works pre-train their recognizers on external data to enrich the knowledge. In this work we observe that large language models which have been extensively used in various natural language processing tasks generally hold both large model architectures and rich implicit knowledge. Motivated by this we propose a novel LLM-AR framework in which we investigate treating the Large Language Model as an Action Recognizer. In our framework we propose a linguistic projection process to project each input action signal (i.e. each skeleton sequence) into its "sentence format" (i.e. an "action sentence"). Moreover we also incorporate our framework with several designs to further facilitate this linguistic projection process. Extensive experiments demonstrate the efficacy of our proposed framework.
[]
[]
[]
[]
276
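The toy sketch below only illustrates what a "linguistic projection" of a skeleton sequence into an "action sentence" could look like, by quantizing per-frame pose features against a codebook and emitting one pseudo-word per frame. The codebook, vocabulary and feature dimensions are hypothetical and this is not the LLM-AR module itself.

```python
import torch


def pose_to_sentence(poses, codebook, vocab):
    """poses: (T, D) per-frame pose features; codebook: (K, D); vocab: K strings."""
    dists = torch.cdist(poses, codebook)   # (T, K) distances to each codeword
    ids = dists.argmin(dim=1)              # nearest codeword per frame
    return " ".join(vocab[i] for i in ids.tolist())


torch.manual_seed(0)
codebook = torch.randn(8, 16)              # hypothetical codebook
vocab = [f"tok{i}" for i in range(8)]      # hypothetical pseudo-words
print(pose_to_sentence(torch.randn(5, 16), codebook, vocab))
```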
277
NoiseCLR: A Contrastive Learning Approach for Unsupervised Discovery of Interpretable Directions in Diffusion Models
http://arxiv.org/abs/2312.05390
Yusuf Dalva, Pinar Yanardag
2,312.0539
Generative models have been very popular in recent years for their image generation capabilities. GAN-based models are highly regarded for their disentangled latent space which is a key feature contributing to their success in controlled image editing. On the other hand diffusion models have emerged as powerful tools for generating high-quality images. However the latent space of diffusion models is not as thoroughly explored or understood. Existing methods that aim to explore the latent space of diffusion models usually rely on text prompts to pinpoint specific semantics. However this approach may be restrictive in areas such as art fashion or specialized fields like medicine where suitable text prompts might not be available or easy to conceive thus limiting the scope of existing work. In this paper we propose an unsupervised method to discover latent semantics in text-to-image diffusion models without relying on text prompts. Our method takes a small set of unlabeled images from specific domains such as faces or cats and a pre-trained diffusion model and discovers diverse semantics in an unsupervised fashion using a contrastive learning objective. Moreover the learned directions can be applied simultaneously either within the same domain (such as various types of facial edits) or across different domains (such as applying cat and face edits within the same image) without interfering with each other. Our extensive experiments show that our method achieves highly disentangled edits outperforming both diffusion-based and GAN-based latent space editing methods.
[]
[]
[]
[]
277
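As a loose illustration of the contrastive objective mentioned in the NoiseCLR entry above, the sketch below computes an InfoNCE-style loss that pulls together edits produced by the same learned latent direction and pushes apart edits from different directions. The feature extraction and the hookup to a diffusion model are omitted; function and variable names are assumptions.

```python
import torch
import torch.nn.functional as F


def direction_contrastive_loss(edit_feats, direction_ids, temperature=0.07):
    """edit_feats: (N, D) features of edited samples; direction_ids: (N,) ints."""
    z = F.normalize(edit_feats, dim=1)
    sim = z @ z.t() / temperature                         # (N, N) cosine similarities
    same_dir = direction_ids.unsqueeze(0) == direction_ids.unsqueeze(1)
    eye = torch.eye(len(z), dtype=torch.bool)
    pos = same_dir & ~eye                                 # same direction, not self
    exp_sim = sim.exp().masked_fill(eye, 0.0)
    # -log( sum over positives / sum over all other samples ) per anchor.
    loss = -(exp_sim * pos).sum(1).clamp_min(1e-12).log() + exp_sim.sum(1).log()
    return loss.mean()


feats = torch.randn(12, 64)
dirs = torch.arange(12) % 4                               # 4 hypothetical directions
print(direction_contrastive_loss(feats, dirs))
```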
278
SpecNeRF: Gaussian Directional Encoding for Specular Reflections
http://arxiv.org/abs/2312.13102
Li Ma, Vasu Agrawal, Haithem Turki, Changil Kim, Chen Gao, Pedro Sander, Michael Zollhöfer, Christian Richardt
2,312.13102
Neural radiance fields have achieved remarkable performance in modeling the appearance of 3D scenes. However existing approaches still struggle with the view-dependent appearance of glossy surfaces especially under complex lighting of indoor environments. Unlike existing methods which typically assume distant lighting like an environment map we propose a learnable Gaussian directional encoding to better model the view-dependent effects under near-field lighting conditions. Importantly our new directional encoding captures the spatially-varying nature of near-field lighting and emulates the behavior of prefiltered environment maps. As a result it enables the efficient evaluation of preconvolved specular color at any 3D location with varying roughness coefficients. We further introduce a data-driven geometry prior that helps alleviate the shape radiance ambiguity in reflection modeling. We show that our Gaussian directional encoding and geometry prior significantly improve the modeling of challenging specular reflections in neural radiance fields which helps decompose appearance into more physically meaningful components.
[]
[]
[]
[]
278
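The sketch below is one plausible form of a learnable Gaussian directional encoding: a bank of Gaussian lobes on the sphere whose widths grow with a roughness value, loosely emulating a prefiltered environment lookup. The exact parameterization used in SpecNeRF may differ; treat this as an assumption-laden illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GaussianDirEncoding(nn.Module):
    """A bank of learnable Gaussian lobes over the sphere; lobe widths are
    inflated by a per-sample roughness so rougher surfaces see blurrier
    directional features."""

    def __init__(self, num_lobes=32):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(num_lobes, 3))
        self.log_sigma = nn.Parameter(torch.zeros(num_lobes))

    def forward(self, dirs, roughness):
        """dirs: (N, 3) view/reflection directions; roughness: (N, 1)."""
        d = F.normalize(dirs, dim=1)
        centers = F.normalize(self.mu, dim=1)
        dist2 = (d.unsqueeze(1) - centers).pow(2).sum(-1)        # (N, K)
        var = self.log_sigma.exp().pow(2) + roughness.pow(2)     # broadcast to (N, K)
        return torch.exp(-0.5 * dist2 / var)


enc = GaussianDirEncoding()
print(enc(torch.randn(4, 3), torch.rand(4, 1)).shape)  # torch.Size([4, 32])
```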
279
Improving Subject-Driven Image Synthesis with Subject-Agnostic Guidance
http://arxiv.org/abs/2405.01356
Kelvin C.K. Chan, Yang Zhao, Xuhui Jia, Ming-Hsuan Yang, Huisheng Wang
2,405.01356
In subject-driven text-to-image synthesis the synthesis process tends to be heavily influenced by the reference images provided by users often overlooking crucial attributes detailed in the text prompt. In this work we propose Subject-Agnostic Guidance (SAG) a simple yet effective solution to remedy the problem. We show that through constructing a subject-agnostic condition and applying our proposed dual classifier-free guidance one could obtain outputs consistent with both the given subject and input text prompts. We validate the efficacy of our approach through both optimization-based and encoder-based methods. Additionally we demonstrate its applicability in second-order customization methods where an encoder-based model is fine-tuned with DreamBooth. Our approach is conceptually simple and requires only minimal code modifications but leads to substantial quality improvements as evidenced by our evaluations and user studies.
[]
[]
[]
[]
279
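A hedged sketch of a dual classifier-free guidance combination follows: guide from the unconditional prediction toward the subject-agnostic (text-only) condition, then from there toward the full subject-aware condition. The weighting scheme and guidance scales are assumptions, not necessarily the paper's formulation.

```python
import torch


def dual_cfg(eps_uncond, eps_agnostic, eps_subject, w_text=7.5, w_subject=3.0):
    """Combine three noise predictions of identical shape: unconditional,
    subject-agnostic (text only) and subject-aware (text + reference)."""
    return (eps_uncond
            + w_text * (eps_agnostic - eps_uncond)       # enforce the text prompt
            + w_subject * (eps_subject - eps_agnostic))  # then inject the subject


e = torch.randn(1, 4, 64, 64)
print(dual_cfg(e, 1.1 * e, 1.2 * e).shape)
```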
280
Diffusion Model Alignment Using Direct Preference Optimization
http://arxiv.org/abs/2311.12908
Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, Nikhil Naik
2,311.12908
Large language models (LLMs) are fine-tuned using human comparison data with Reinforcement Learning from Human Feedback (RLHF) methods to make them better aligned with users' preferences. In contrast to LLMs human preference learning has not been widely explored in text-to-image diffusion models; the best existing approach is to fine-tune a pretrained model using carefully curated high quality images and captions to improve visual appeal and text alignment. We propose Diffusion-DPO a method to align diffusion models to human preferences by directly optimizing on human comparison data. Diffusion-DPO is adapted from the recently developed Direct Preference Optimization (DPO) a simpler alternative to RLHF which directly optimizes a policy that best satisfies human preferences under a classification objective. We re-formulate DPO to account for a diffusion model notion of likelihood utilizing the evidence lower bound to derive a differentiable objective. Using the Pick-a-pic dataset of 851K crowdsourced pairwise preferences we fine-tune the base model of the state-of-the-art Stable Diffusion XL (SDXL)-1.0 model with Diffusion-DPO. Our fine-tuned base model significantly outperforms both base SDXL-1.0 and the larger SDXL-1.0 model consisting of an additional refinement model in human evaluation improving visual appeal and prompt alignment. We also develop a variant that uses AI feedback and has comparable performance to training on human preferences opening the door for scaling of diffusion model alignment methods.
[]
[]
[]
[]
280
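The sketch below shows a DPO-style objective adapted to diffusion in the spirit of the entry above: compare denoising errors of the trained model and a frozen reference on preferred and dispreferred latents, and push the margin through a logistic loss. The per-timestep weighting and the constant beta are simplified assumptions.

```python
import torch
import torch.nn.functional as F


def diffusion_dpo_loss(err_w_theta, err_w_ref, err_l_theta, err_l_ref, beta=5000.0):
    """Each input: per-sample mean squared denoising error, shape (B,).
    w = preferred ("winner") image latents, l = dispreferred ("loser") ones;
    theta = model being trained, ref = frozen reference model."""
    margin = (err_w_theta - err_w_ref) - (err_l_theta - err_l_ref)
    # Reward lower-than-reference error on winners and higher error on losers.
    return -F.logsigmoid(-beta * margin).mean()


b = torch.rand(8)
print(diffusion_dpo_loss(0.9 * b, b, 1.1 * b, b))
```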
281
Interactive Continual Learning: Fast and Slow Thinking
http://arxiv.org/abs/2403.02628
Biqing Qi, Xinquan Chen, Junqi Gao, Dong Li, Jianxing Liu, Ligang Wu, Bowen Zhou
2,403.02628
Advanced life forms sustained by the synergistic interaction of neural cognitive mechanisms continually acquire and transfer knowledge throughout their lifespan. In contrast contemporary machine learning paradigms exhibit limitations in emulating the facets of continual learning (CL). Nonetheless the emergence of large language models (LLMs) presents promising avenues for realizing CL via interactions with these models. Drawing on Complementary Learning System theory this paper presents a novel Interactive Continual Learning (ICL) framework enabled by collaborative interactions among models of various sizes. Specifically we assign the ViT model as System1 and multimodal LLM as System2. To enable the memory module to deduce tasks from class information and enhance Set2Set retrieval we propose the Class-Knowledge-Task Multi-Head Attention (CKT-MHA). Additionally to improve memory retrieval in System1 through enhanced geometric representation we introduce the CL-vMF mechanism based on the von Mises-Fisher (vMF) distribution. Meanwhile we introduce the von Mises-Fisher Outlier Detection and Interaction (vMF-ODI) strategy to identify hard examples thus enhancing collaboration between System1 and System2 for complex reasoning realization. Comprehensive evaluation of our proposed ICL demonstrates significant resistance to forgetting and superior performance relative to existing methods. Code is available at github.com/ICL.
[]
[]
[]
[]
281
282
ZeroNVS: Zero-Shot 360-Degree View Synthesis from a Single Image
http://arxiv.org/abs/2310.17994
Kyle Sargent, Zizhang Li, Tanmay Shah, Charles Herrmann, Hong-Xing Yu, Yunzhi Zhang, Eric Ryan Chan, Dmitry Lagun, Li Fei-Fei, Deqing Sun, Jiajun Wu
2,310.17994
We introduce a 3D-aware diffusion model ZeroNVS for single-image novel view synthesis for in-the-wild scenes. While existing methods are designed for single objects with masked backgrounds we propose new techniques to address challenges introduced by in-the-wild multi-object scenes with complex backgrounds. Specifically we train a generative prior on a mixture of data sources that capture object-centric indoor and outdoor scenes. To address issues from data mixture such as depth-scale ambiguity we propose a novel camera conditioning parameterization and normalization scheme. Further we observe that Score Distillation Sampling (SDS) tends to truncate the distribution of complex backgrounds during distillation of 360-degree scenes and propose "SDS anchoring" to improve the diversity of synthesized novel views. Our model sets a new state-of-the-art result in LPIPS on the DTU dataset in the zero-shot setting even outperforming methods specifically trained on DTU. We further adapt the challenging Mip-NeRF 360 dataset as a new benchmark for single-image novel view synthesis and demonstrate strong performance in this setting. Code and models will be publicly available.
[]
[]
[]
[]
282
283
Restoration by Generation with Constrained Priors
http://arxiv.org/abs/2312.17161
Zheng Ding, Xuaner Zhang, Zhuowen Tu, Zhihao Xia
2,312.17161
The inherent generative power of denoising diffusion models makes them well-suited for image restoration tasks where the objective is to find the optimal high-quality image within the generative space that closely resembles the input image. We propose a method to adapt a pretrained diffusion model for image restoration by simply adding noise to the input image to be restored and then denoise. Our method is based on the observation that the space of a generative model needs to be constrained. We impose this constraint by finetuning the generative model with a set of anchor images that capture the characteristics of the input image. With the constrained space we can then leverage the sampling strategy used for generation to do image restoration. We evaluate against previous methods and show superior performances on multiple real-world restoration datasets in preserving identity and image quality. We also demonstrate an important and practical application on personalized restoration where we use a personal album as the anchor images to constrain the generative space. This approach allows us to produce results that accurately preserve high-frequency details which previous works are unable to do. Project webpage: https://gen2res.github.io.
[]
[]
[]
[]
283
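A minimal sketch of the "add noise, then denoise" restoration loop described above follows. The denoise_step callable and the noise schedule are placeholders standing in for a diffusion model assumed to have been finetuned on a few anchor images.

```python
import torch


@torch.no_grad()
def restore(x_degraded, denoise_step, alphas_cumprod, t_start):
    """x_degraded: (B, C, H, W); denoise_step(x, t) -> x at t-1 (placeholder for
    one reverse step of the anchor-finetuned model); alphas_cumprod: (T,) noise
    schedule; t_start: intermediate timestep to diffuse to."""
    a_bar = alphas_cumprod[t_start]
    # Forward-diffuse the degraded input to timestep t_start.
    x = a_bar.sqrt() * x_degraded + (1 - a_bar).sqrt() * torch.randn_like(x_degraded)
    # Run the reverse process from t_start back to 0 with the constrained model.
    for t in range(t_start, 0, -1):
        x = denoise_step(x, t)
    return x
```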
284
Snapshot Lidar: Fourier Embedding of Amplitude and Phase for Single-Image Depth Reconstruction
Sarah Friday, Yunzi Shi, Yaswanth Cherivirala, Vishwanath Saragadam, Adithya Pediredla
null
Amplitude modulated continuous-wave time-of-flight (AMCW-ToF) cameras are finding applications as flash Lidars in autonomous navigation robotics and AR/VR applications. A conventional CW-ToF camera requires illuminating the scene with a temporally varying light source and demodulating a set of quadrature measurements to recover the scene's depth and intensity. Capturing the four measurements in sequence renders the system slow invariably causing inaccuracies in depth estimates due to motion in the scene or the camera. To mitigate this problem we propose a snapshot Lidar that captures amplitude and phase simultaneously as a single time-of-flight hologram. Uniquely our approach requires minimal changes to existing CW-ToF imaging hardware. To demonstrate the efficacy of the proposed system we design and build a lab prototype and evaluate it under varying scene geometries illumination conditions and compare the reconstructed depth measurements against conventional techniques. We rigorously evaluate the robustness of our system on diverse real-world scenes to show that our technique results in a significant reduction in data bandwidth with minimal loss in reconstruction accuracy. As high-resolution CW-ToF cameras are becoming ubiquitous increasing their temporal resolution by four times enables robust real-time capture of geometries of dynamic scenes.
[]
[]
[]
[]
284
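For context, the sketch below implements the conventional four-quadrature AMCW-ToF decode that the entry above contrasts against (not the proposed single-shot hologram): amplitude and phase, hence depth, recovered from four phase-shifted correlation samples.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s


def four_bucket_decode(q0, q90, q180, q270, mod_freq_hz=20e6):
    """qX: correlation samples at 0/90/180/270 degree phase shifts (ndarrays)."""
    phase = np.arctan2(q270 - q90, q0 - q180)              # wrapped phase
    amplitude = 0.5 * np.hypot(q270 - q90, q0 - q180)
    depth = np.mod(phase, 2 * np.pi) * C / (4 * np.pi * mod_freq_hz)
    return depth, amplitude


q = [np.random.rand(4, 4) for _ in range(4)]
depth, amp = four_bucket_decode(*q)
print(depth.shape, amp.shape)
```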
285
Convolutional Prompting meets Language Models for Continual Learning
http://arxiv.org/abs/2403.20317
Anurag Roy, Riddhiman Moulick, Vinay K. Verma, Saptarshi Ghosh, Abir Das
2,403.20317
Continual Learning (CL) enables machine learning models to learn from continuously shifting new training data in the absence of data from old tasks. Recently pre-trained vision transformers combined with prompt tuning have shown promise for overcoming catastrophic forgetting in CL. These approaches rely on a pool of learnable prompts which can be inefficient in sharing knowledge across tasks leading to inferior performance. In addition the lack of fine-grained layer-specific prompts does not allow them to fully express the strength of the prompts for CL. We address these limitations by proposing ConvPrompt a novel convolutional prompt creation mechanism that maintains layer-wise shared embeddings enabling both layer-specific learning and better concept transfer across tasks. The intelligent use of convolution enables us to maintain a low parameter overhead without compromising performance. We further leverage Large Language Models to generate fine-grained text descriptions of each category which are used to get task similarity and dynamically decide the number of prompts to be learned. Extensive experiments demonstrate the superiority of ConvPrompt which improves the SOTA by 3% with significantly less parameter overhead. We also perform strong ablation over various modules to disentangle the importance of different components.
[]
[]
[]
[]
285
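A rough sketch of convolutional prompt creation from shared embeddings follows: one shared prompt bank plus a lightweight depthwise convolution per transformer layer. Dimensions, kernel size and the absence of the LLM-driven prompt-count logic are all simplifying assumptions, so this is not ConvPrompt's actual module.

```python
import torch
import torch.nn as nn


class ConvPromptBank(nn.Module):
    """One shared prompt embedding bank plus a small depthwise convolution per
    transformer layer that turns it into layer-specific prompts."""

    def __init__(self, num_prompts=10, dim=768, num_layers=12, kernel=3):
        super().__init__()
        self.shared = nn.Parameter(0.02 * torch.randn(num_prompts, dim))
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel, padding=kernel // 2, groups=dim)
            for _ in range(num_layers)
        )

    def layer_prompts(self, layer_idx):
        x = self.shared.t().unsqueeze(0)                 # (1, dim, num_prompts)
        return self.convs[layer_idx](x).squeeze(0).t()   # (num_prompts, dim)


bank = ConvPromptBank()
print(bank.layer_prompts(0).shape)  # torch.Size([10, 768])
```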
286
Blur-aware Spatio-temporal Sparse Transformer for Video Deblurring
Huicong Zhang, Haozhe Xie, Hongxun Yao
null
Video deblurring relies on leveraging information from other frames in the video sequence to restore the blurred regions in the current frame. Mainstream approaches employ bidirectional feature propagation spatio-temporal transformers or a combination of both to extract information from the video sequence. However limitations in memory and computational resources constrain the temporal window length of the spatio-temporal transformer preventing the extraction of longer temporal contextual information from the video sequence. Additionally bidirectional feature propagation is highly sensitive to inaccurate optical flow in blurry frames leading to error accumulation during the propagation process. To address these issues we propose BSSTNet Blur-aware Spatio-temporal Sparse Transformer Network. It introduces the blur map which converts the originally dense attention into a sparse form enabling a more extensive utilization of information throughout the entire video sequence. Specifically BSSTNet (1) uses a longer temporal window in the transformer leveraging information from more distant frames to restore the blurry pixels in the current frame. (2) introduces bidirectional feature propagation guided by blur maps which reduces error accumulation caused by blurry frames. The experimental results demonstrate that the proposed BSSTNet outperforms the state-of-the-art methods on the GoPro and DVD datasets.
[]
[]
[]
[]
286
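The toy function below illustrates blur-guided sparse attention: only tokens flagged as blurry by a blur map act as queries while the rest pass through unchanged, thinning out the dense spatio-temporal attention. It is an illustration of the idea, not BSSTNet's operator.

```python
import torch
import torch.nn.functional as F


def blur_sparse_attention(tokens, blur_score, threshold=0.5):
    """tokens: (N, D) flattened spatio-temporal tokens; blur_score: (N,) in [0, 1]."""
    idx = (blur_score > threshold).nonzero(as_tuple=True)[0]
    if idx.numel() == 0:
        return tokens                                     # nothing blurry: pass through
    q = tokens[idx]                                       # sparse queries (blurry tokens)
    attn = F.softmax(q @ tokens.t() / tokens.shape[1] ** 0.5, dim=-1)
    out = tokens.clone()
    out[idx] = attn @ tokens                              # refine only the blurry tokens
    return out


x = torch.randn(16, 32)
print(blur_sparse_attention(x, torch.rand(16)).shape)
```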
287
Towards Learning a Generalist Model for Embodied Navigation
http://arxiv.org/abs/2312.02010
Duo Zheng, Shijia Huang, Lin Zhao, Yiwu Zhong, Liwei Wang
2,312.0201
Building a generalist agent that can interact with the world is an ultimate goal for humans thus spurring the research for embodied navigation where an agent is required to navigate according to instructions or respond to queries. Despite the major progress attained previous works primarily focus on task-specific agents and lack generalizability to unseen scenarios. Recently LLMs have presented remarkable capabilities across various fields and provided a promising opportunity for embodied navigation. Drawing on this we propose the first generalist model for embodied navigation NaviLLM. It adapts LLMs to embodied navigation by introducing schema-based instruction. The schema-based instruction flexibly casts various tasks into generation problems thereby unifying a wide range of tasks. This approach allows us to integrate diverse data sources from various datasets into the training equipping NaviLLM with a wide range of capabilities required by embodied navigation. We conduct extensive experiments to evaluate the performance and generalizability of our model. The experimental results demonstrate that our unified model achieves state-of-the-art performance on CVDN SOON and ScanQA. Specifically it surpasses the previous state-of-the-art method by a significant margin of 29% in goal progress on CVDN. Moreover our model also demonstrates strong generalizability and presents impressive results on unseen tasks e.g. embodied question answering and 3D captioning.
[]
[]
[]
[]
287
288
DiffusionPoser: Real-time Human Motion Reconstruction From Arbitrary Sparse Sensors Using Autoregressive Diffusion
http://arxiv.org/abs/2308.16682
Tom Van Wouwe, Seunghwan Lee, Antoine Falisse, Scott Delp, C. Karen Liu
2,308.16682
Motion capture from a limited number of body-worn sensors such as inertial measurement units (IMUs) and pressure insoles has important applications in health human performance and entertainment. Recent work has focused on accurately reconstructing whole-body motion from a specific sensor configuration using six IMUs. While a common goal across applications is to use the minimal number of sensors to achieve required accuracy the optimal arrangement of the sensors might differ from application to application. We propose a single diffusion model DiffusionPoser which reconstructs human motion in real-time from an arbitrary combination of sensors including IMUs placed at specified locations and pressure insoles. Unlike existing methods our model grants users the flexibility to determine the number and arrangement of sensors tailored to the specific activity of interest without the need for retraining. A novel autoregressive inferencing scheme ensures real-time motion reconstruction that closely aligns with measured sensor signals. The generative nature of DiffusionPoser ensures realistic behavior even for degrees-of-freedom not directly measured. Qualitative results can be found on our project website.
[]
[]
[]
[]
288
289
MANUS: Markerless Grasp Capture using Articulated 3D Gaussians
http://arxiv.org/abs/2312.02137
Chandradeep Pokhariya, Ishaan Nikhil Shah, Angela Xing, Zekun Li, Kefan Chen, Avinash Sharma, Srinath Sridhar
2,312.02137
Understanding how we grasp objects with our hands has important applications in areas like robotics and mixed reality. However this challenging problem requires accurate modeling of the contact between hands and objects. To capture grasps existing methods use skeletons meshes or parametric models that do not represent hand shape accurately resulting in inaccurate contacts. We present MANUS a method for Markerless Hand-Object Grasp Capture using Articulated 3D Gaussians. We build a novel articulated 3D Gaussians representation that extends 3D Gaussian splatting for high-fidelity representation of articulating hands. Since our representation uses Gaussian primitives optimized from the multi-view pixel-aligned losses it enables us to efficiently and accurately estimate contacts between the hand and the object. For the most accurate results our method requires tens of camera views that current datasets do not provide. We therefore build MANUS-Grasps a new dataset that contains hand-object grasps viewed from 50+ cameras across 30+ scenes 3 subjects and comprising over 7M frames. In addition to extensive qualitative results we also show that our method outperforms others on a quantitative contact evaluation method that uses paint transfer from the object to the hand.
[]
[]
[]
[]
289
290
Distilling Semantic Priors from SAM to Efficient Image Restoration Models
http://arxiv.org/abs/2403.16368
Quan Zhang, Xiaoyu Liu, Wei Li, Hanting Chen, Junchao Liu, Jie Hu, Zhiwei Xiong, Chun Yuan, Yunhe Wang
2,403.16368
In image restoration (IR) leveraging semantic priors from segmentation models has been a common approach to improve performance. The recent segment anything model (SAM) has emerged as a powerful tool for extracting advanced semantic priors to enhance IR tasks. However the computational cost of SAM is prohibitive for IR compared to existing smaller IR models. The incorporation of SAM for extracting semantic priors considerably hampers the model inference efficiency. To address this issue we propose a general framework to distill SAM's semantic knowledge to boost existing IR models without interfering with their inference process. Specifically our proposed framework consists of the semantic priors fusion (SPF) scheme and the semantic priors distillation (SPD) scheme. SPF fuses two kinds of information between the restored image predicted by the original IR model and the semantic mask predicted by SAM for the refined restored image. SPD leverages a self-distillation manner to distill the fused semantic priors to boost the performance of original IR models. Additionally we design a semantic-guided relation (SGR) module for SPD which ensures semantic feature representation space consistency to fully distill the priors. We demonstrate the effectiveness of our framework across multiple IR models and tasks including deraining deblurring and denoising.
[]
[]
[]
[]
290
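A loose sketch of the fusion-then-distillation recipe follows: fuse the restored image with a semantic mask to obtain a refined target, then train the plain IR model to mimic it with a simple L1 objective. The NaiveFusion block is a trivial placeholder, not the paper's SPF or SGR modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NaiveFusion(nn.Module):
    """Trivial stand-in for a semantic-priors fusion block: mixes the restored
    image with a semantic mask through one convolution."""

    def __init__(self, channels=3, num_masks=1):
        super().__init__()
        self.mix = nn.Conv2d(channels + num_masks, channels, 3, padding=1)

    def forward(self, restored, sem_mask):
        return restored + self.mix(torch.cat([restored, sem_mask], dim=1))


def distill_loss(student_out, fused_out):
    # The plain IR model learns to mimic the semantically refined output,
    # so the segmentation model is no longer needed at inference time.
    return F.l1_loss(student_out, fused_out.detach())


fuse = NaiveFusion()
img, mask = torch.randn(1, 3, 64, 64), torch.rand(1, 1, 64, 64)
print(distill_loss(img, fuse(img, mask)))
```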
291
Learning Intra-view and Cross-view Geometric Knowledge for Stereo Matching
http://arxiv.org/abs/2402.19270
Rui Gong, Weide Liu, Zaiwang Gu, Xulei Yang, Jun Cheng
2,402.1927
Geometric knowledge has been shown to be beneficial for the stereo matching task. However prior attempts to integrate geometric insights into stereo matching algorithms have largely focused on geometric knowledge from single images while crucial cross-view factors such as occlusion and matching uniqueness have been overlooked. To address this gap we propose a novel Intra-view and Cross-view Geometric knowledge learning Network (ICGNet) specifically crafted to assimilate both intra-view and cross-view geometric knowledge. ICGNet harnesses the power of interest points to serve as a channel for intra-view geometric understanding. Simultaneously it employs the correspondences among these points to capture cross-view geometric relationships. This dual incorporation empowers the proposed ICGNet to leverage both intra-view and cross-view geometric knowledge in its learning process substantially improving its ability to estimate disparities. Our extensive experiments demonstrate the superiority of the ICGNet over contemporary leading models. The code will be available at https://github.com/DFSDDDDD1199/ICGNet.
[]
[]
[]
[]
291
292
Rethinking the Evaluation Protocol of Domain Generalization
http://arxiv.org/abs/2305.15253
Han Yu, Xingxuan Zhang, Renzhe Xu, Jiashuo Liu, Yue He, Peng Cui
2,305.15253
Domain generalization aims to solve the challenge of Out-of-Distribution (OOD) generalization by leveraging common knowledge learned from multiple training domains to generalize to unseen test domains. To accurately evaluate the OOD generalization ability it is required that test data information is unavailable. However the current domain generalization protocol may still have potential test data information leakage. This paper examines the risks of test data information leakage from two aspects of the current evaluation protocol: supervised pretraining on ImageNet and oracle model selection. We propose modifications to the current protocol: we should employ self-supervised pretraining or train from scratch instead of the current supervised pretraining and we should use multiple test domains. These would result in a more precise evaluation of OOD generalization ability. We also rerun the algorithms with the modified protocol and introduce new leaderboards to encourage future research in domain generalization with a fairer comparison.
[]
[]
[]
[]
292
293
Aligning Logits Generatively for Principled Black-Box Knowledge Distillation
http://arxiv.org/abs/2205.10490
Jing Ma, Xiang Xiang, Ke Wang, Yuchuan Wu, Yongbin Li
2,205.1049
Black-Box Knowledge Distillation (B2KD) is a formulated problem for cloud-to-edge model compression with invisible data and models hosted on the server. B2KD faces challenges such as limited Internet exchange and edge-cloud disparity of data distributions. In this paper we formalize a two-step workflow consisting of deprivatization and distillation and theoretically provide a new optimization direction from logits to cell boundary different from direct logits alignment. With its guidance we propose a new method Mapping-Emulation KD (MEKD) that distills a black-box cumbersome model into a lightweight one. Our method does not differentiate between treating soft or hard responses and consists of: 1) deprivatization: emulating the inverse mapping of the teacher function with a generator and 2) distillation: aligning low-dimensional logits of the teacher and student models by reducing the distance of high-dimensional image points. For different teacher-student pairs our method yields inspiring distillation performance on various benchmarks and outperforms the previous state-of-the-art approaches.
[]
[]
[]
[]
293
294
BerfScene: Bev-conditioned Equivariant Radiance Fields for Infinite 3D Scene Generation
http://arxiv.org/abs/2312.02136
Qihang Zhang, Yinghao Xu, Yujun Shen, Bo Dai, Bolei Zhou, Ceyuan Yang
2,312.02136
Generating large-scale 3D scenes cannot be achieved by simply applying existing 3D object synthesis techniques since 3D scenes usually hold complex spatial configurations and consist of a number of objects at varying scales. We thus propose a practical and efficient 3D representation that incorporates an equivariant radiance field with the guidance of a bird's-eye view (BEV) map. Concretely objects of synthesized 3D scenes could be easily manipulated through steering the corresponding BEV maps. Moreover by adequately incorporating positional encoding and low-pass filters into the generator the representation becomes equivariant to the given BEV map. Such equivariance allows us to produce large-scale even infinite-scale 3D scenes via synthesizing local scenes and then stitching them with smooth consistency. Extensive experiments on 3D scene datasets demonstrate the effectiveness of our approach. Our project website is at: https://zqh0253.github.io/BerfScene.
[]
[]
[]
[]
294
295
3D Facial Expressions through Analysis-by-Neural-Synthesis
http://arxiv.org/abs/2404.04104
George Retsinas, Panagiotis P. Filntisis, Radek Danecek, Victoria F. Abrevaya, Anastasios Roussos, Timo Bolkart, Petros Maragos
2,404.04104
While existing methods for 3D face reconstruction from in-the-wild images excel at recovering the overall face shape they commonly miss subtle extreme asymmetric or rarely observed expressions. We improve upon these methods with SMIRK (Spatial Modeling for Image-based Reconstruction of Kinesics) which faithfully reconstructs expressive 3D faces from images. We identify two key limitations in existing methods: shortcomings in their self-supervised training formulation and a lack of expression diversity in the training images. For training most methods employ differentiable rendering to compare a predicted face mesh with the input image along with a plethora of additional loss functions. This differentiable rendering loss not only has to provide supervision to optimize for 3D face geometry camera albedo and lighting which is an ill-posed optimization problem but the domain gap between rendering and input image further hinders the learning process. Instead SMIRK replaces the differentiable rendering with a neural rendering module that given the rendered predicted mesh geometry and sparsely sampled pixels of the input image generates a face image. As the neural rendering gets color information from sampled image pixels supervising with neural rendering-based reconstruction loss can focus solely on the geometry. Further it enables us to generate images of the input identity with varying expressions while training. These are then utilized as input to the reconstruction model and used as supervision with ground truth geometry. This effectively augments the training data and enhances the generalization for diverse expressions. Our qualitative quantitative and particularly our perceptual evaluations demonstrate that SMIRK achieves the new state-of-the-art performance on accurate expression reconstruction. For our method's source code demo video and more please visit our project webpage: https://georgeretsi.github.io/smirk/.
[]
[]
[]
[]
295
296
HoloVIC: Large-scale Dataset and Benchmark for Multi-Sensor Holographic Intersection and Vehicle-Infrastructure Cooperative
http://arxiv.org/abs/2403.02640
Cong Ma, Lei Qiao, Chengkai Zhu, Kai Liu, Zelong Kong, Qing Li, Xueqi Zhou, Yuheng Kan, Wei Wu
2,403.0264
Vehicle-to-everything (V2X) has been a popular topic in the field of Autonomous Driving in recent years and vehicle-infrastructure cooperation (VIC) has become one of the important research areas. The complexity of traffic conditions such as blind spots and occlusion greatly limits the perception capabilities of single-view roadside sensing systems. To further enhance the accuracy of roadside perception and provide better information to the vehicle side in this paper we constructed holographic intersections with various layouts to build a large-scale multi-sensor holographic vehicle-infrastructure cooperation dataset called HoloVIC. Our dataset includes 3 different types of sensors (Camera Lidar Fisheye) and employs 4 sensor layouts based on the different intersections. Each intersection is equipped with 6-18 sensors to capture synchronous data while autonomous vehicles pass through these intersections to collect VIC data. HoloVIC contains in total 100k+ synchronous frames from different sensors. Additionally we annotated 3D bounding boxes based on Camera Fisheye and Lidar. We also associate the IDs of the same objects across different devices and consecutive frames in sequence. Based on HoloVIC we formulated four tasks to facilitate the development of related research and provide benchmarks for these tasks.
[]
[]
[]
[]
296
297
Unleashing the Potential of SAM for Medical Adaptation via Hierarchical Decoding
http://arxiv.org/abs/2403.18271
Zhiheng Cheng, Qingyue Wei, Hongru Zhu, Yan Wang, Liangqiong Qu, Wei Shao, Yuyin Zhou
2,403.18271
The Segment Anything Model (SAM) has garnered significant attention for its versatile segmentation abilities and intuitive prompt-based interface. However its application in medical imaging presents challenges requiring either substantial training costs and extensive medical datasets for full model fine-tuning or high-quality prompts for optimal performance. This paper introduces H-SAM: a prompt-free adaptation of SAM tailored for efficient fine-tuning of medical images via a two-stage hierarchical decoding procedure. In the initial stage H-SAM employs SAM's original decoder to generate a prior probabilistic mask guiding a more intricate decoding process in the second stage. Specifically we propose two key designs: 1) A class-balanced mask-guided self-attention mechanism addressing the unbalanced label distribution enhancing image embedding; 2) A learnable mask cross-attention mechanism spatially modulating the interplay among different image regions based on the prior mask. Moreover the inclusion of a hierarchical pixel decoder in H-SAM enhances its proficiency in capturing fine-grained and localized details. This approach enables SAM to effectively integrate learned medical priors facilitating enhanced adaptation for medical image segmentation with limited samples. Our H-SAM demonstrates a 4.78% improvement in average Dice compared to existing prompt-free SAM variants for multi-organ segmentation using only 10% of 2D slices. Notably without using any unlabeled data H-SAM even outperforms state-of-the-art semi-supervised models relying on extensive unlabeled training data across various medical datasets. Our code is available at https://github.com/Cccccczh404/H-SAM.
[]
[]
[]
[]
297
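A schematic sketch of a prompt-free two-stage hierarchical decode follows: a first head predicts a prior probabilistic mask that then conditions a second, finer head. Both heads are plain convolutions standing in for SAM's decoders, so this only conveys the control flow.

```python
import torch
import torch.nn as nn


class TwoStageDecoder(nn.Module):
    """Stage 1 predicts a coarse prior mask from the image embedding; stage 2
    re-decodes the embedding concatenated with that prior. Both stages are
    simple conv heads here, not SAM's real decoders."""

    def __init__(self, dim=256, num_classes=9):
        super().__init__()
        self.stage1 = nn.Conv2d(dim, num_classes, 1)
        self.stage2 = nn.Conv2d(dim + num_classes, num_classes, 3, padding=1)

    def forward(self, emb):
        prior = self.stage1(emb).softmax(dim=1)     # prior probabilistic mask
        fused = torch.cat([emb, prior], dim=1)      # prior guides the finer decode
        return self.stage2(fused), prior


logits, prior = TwoStageDecoder()(torch.randn(1, 256, 64, 64))
print(logits.shape, prior.shape)
```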
298
Puff-Net: Efficient Style Transfer with Pure Content and Style Feature Fusion Network
Sizhe Zheng, Pan Gao, Peng Zhou, Jie Qin
null
Style transfer aims to render an image with the artistic features of a style image while maintaining the original structure. Various methods have been put forward for this task but some challenges still exist. For instance it is difficult for CNN-based methods to handle global information and long-range dependencies between input images for which transformer-based methods have been proposed. Although transformers can better model the relationship between content and style images they require high-cost hardware and time-consuming inference. To address these issues we design a novel transformer model that includes only encoders thus significantly reducing the computational cost. In addition we also find that existing style transfer methods may lead to images that are under-stylized or missing content. In order to achieve better stylization we design a content feature extractor and a style feature extractor. Then we can feed pure content and style images into the transformer. Finally we propose a network model termed Puff-Net i.e. efficient style transfer with pure content and style feature fusion network. Through qualitative and quantitative experiments we demonstrate the advantages of our model compared to state-of-the-art ones in the literature. The code is available at https://github.com/ZszYmy9/Puff-Net.
[]
[]
[]
[]
298
299
Towards Progressive Multi-Frequency Representation for Image Warping
Jun Xiao, Zihang Lyu, Cong Zhang, Yakun Ju, Changjian Shui, Kin-Man Lam
null
Image warping a classic task in computer vision aims to use geometric transformations to change the appearance of images. Recent methods learn the resampling kernels for warping through neural networks to estimate missing values in irregular grids which however fail to capture local variations in deformed content and produce images with distortion and less high-frequency details. To address this issue this paper proposes an effective method namely MFR to learn Multi-Frequency Representations from input images for image warping. Specifically we propose a progressive filtering network to learn image representations from different frequency subbands and generate deformable images in a coarse-to-fine manner. Furthermore we employ learnable Gabor wavelet filters to improve the model's capability to learn local spatial-frequency representations. Comprehensive experiments including homography transformation equirectangular to perspective projection and asymmetric image super-resolution demonstrate that the proposed MFR significantly outperforms state-of-the-art image warping methods. Our method also showcases superior generalization to out-of-distribution domains where the generated images are equipped with rich details and less distortion thereby achieving high visual quality. The source code is available at https://github.com/junxiao01/MFR.
[]
[]
[]
[]
299
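Since the entry above relies on learnable Gabor wavelet filters, the sketch below builds a small bank of 2D Gabor kernels and applies them with a convolution to obtain oriented spatial-frequency responses. Parameter choices are illustrative and the learnable wrapping is omitted.

```python
import math
import torch
import torch.nn.functional as F


def gabor_kernel(size, theta, sigma, freq):
    """Real-valued 2D Gabor kernel of shape (size, size)."""
    half = size // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    xr = xs * math.cos(theta) + ys * math.sin(theta)
    yr = -xs * math.sin(theta) + ys * math.cos(theta)
    return torch.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * torch.cos(2 * math.pi * freq * xr)


def gabor_responses(image, num_orient=4, size=7, sigma=2.0, freq=0.25):
    """image: (B, 1, H, W) -> (B, num_orient, H, W) oriented filter responses."""
    kernels = torch.stack([gabor_kernel(size, k * math.pi / num_orient, sigma, freq)
                           for k in range(num_orient)]).unsqueeze(1)
    return F.conv2d(image, kernels, padding=size // 2)


print(gabor_responses(torch.randn(1, 1, 32, 32)).shape)  # torch.Size([1, 4, 32, 32])
```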