Dataset schema (column name, type, and value or length range):
Unnamed: 0 (int64, 0 to 2.72k)
title (string, length 14 to 153)
Arxiv link (string, length 1 to 31)
authors (string, length 5 to 1.5k)
arxiv_id (float64, 2k to 2.41k)
abstract (string, length 435 to 2.86k)
Model (string, 1 class)
GitHub (string, 1 class)
Space (string, 1 class)
Dataset (string, 1 class)
id (int64, 0 to 2.72k)
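A minimal loading sketch, assuming the records below are stored as a CSV with the columns listed in the schema above (the file name is hypothetical): it reads the table with pandas and restores the arxiv_id column, which was stored as float64 and therefore lost its canonical string form (e.g. 2405.06600 appears as 2405.066).

```python
import pandas as pd

# Hypothetical file name; the schema above defines the expected columns.
df = pd.read_csv("cvpr_papers.csv")

def restore_arxiv_id(value):
    """Recover the canonical arXiv identifier string from the float64 column.

    Modern arXiv IDs have the form YYMM.NNNNN, so the fractional part is
    zero-padded back to five digits (e.g. 2405.066 -> "2405.06600").
    Stray thousands separators are stripped if the column was read as text.
    """
    if pd.isna(value):
        return None
    value = float(str(value).replace(",", ""))
    return f"{value:.5f}"

df["arxiv_id"] = df["arxiv_id"].apply(restore_arxiv_id)
print(df[["id", "title", "arxiv_id"]].head())
```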
700
Efficient Detection of Long Consistent Cycles and its Application to Distributed Synchronization
Shaohan Li, Yunpeng Shi, Gilad Lerman
null
Group synchronization plays a crucial role in global pipelines for Structure from Motion (SfM). Its formulation is nonconvex, and it is faced with highly corrupted measurements. Cycle consistency has been effective in addressing these challenges. However, computationally efficient solutions are needed for cycles longer than three, especially in practical scenarios where 3-cycles are unavailable. To overcome this computational bottleneck, we propose an algorithm for group synchronization that leverages information from cycles of lengths ranging from three to six, with a complexity of O(n^3) (or O(n^2.373) when using a faster matrix multiplication algorithm). We establish non-trivial theory for this and related methods that achieves competitive sample complexity, assuming the uniform corruption model. To advocate the practical need for our method, we consider distributed group synchronization, which requires at least 4-cycles, and we illustrate state-of-the-art performance by our method in this context.
[]
[]
[]
[]
700
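The abstract above ties its O(n^3) complexity to matrix multiplication. As a hedged illustration of that connection only (not the authors' algorithm), the sketch below uses powers of the adjacency matrix to count, for every edge (i, j), the closed walks of length 3 to 6 passing through it: (A^{k-1})_{ij} counts walks of length k-1 from i to j, and each additional power costs one n-by-n matrix product.

```python
import numpy as np

def edge_cycle_counts(adjacency: np.ndarray, max_length: int = 6) -> dict:
    """For each length k in {3, ..., max_length}, return a matrix whose
    (i, j) entry is the number of closed walks of length k using edge (i, j).

    (A^{k-1})_{ij} counts walks of length k-1 from i to j; multiplying
    element-wise by A keeps only pairs that are actually connected, closing
    the walk into a length-k cycle. For k >= 4 this over-counts non-simple
    cycles, which a practical method would have to correct for.
    """
    A = adjacency.astype(float)
    counts = {}
    walk = A.copy()              # A^1
    for k in range(3, max_length + 1):
        walk = walk @ A          # now A^{k-1}; one O(n^3) matrix product
        counts[k] = walk * A     # restrict to pairs joined by an edge
    return counts

# Toy example: a 5-node graph containing one 3-cycle and one 4-cycle.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]:
    A[i, j] = A[j, i] = 1
print(edge_cycle_counts(A)[3])
```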
701
LayoutFormer: Hierarchical Text Detection Towards Scene Text Understanding
Min Liang, Jia-Wei Ma, Xiaobin Zhu, Jingyan Qin, Xu-Cheng Yin
null
Existing scene text detectors generally focus on accurately detecting single-level (i.e., word-level, line-level, or paragraph-level) text entities without exploring the relationships among different levels of text entities. To comprehensively understand scene texts, detecting multi-level texts while exploring their contextual information is critical. To this end, we propose a unified framework (dubbed LayoutFormer) for hierarchical text detection, which simultaneously conducts multi-level text detection and predicts the geometric layouts for promoting scene text understanding. In LayoutFormer, WordDecoder, LineDecoder, and ParaDecoder are responsible for word-level, line-level, and paragraph-level text prediction, respectively. Meanwhile, WordDecoder and ParaDecoder adaptively learn word-line and line-paragraph relationships, respectively. In addition, we propose a Prior Location Sampler, applied to multi-scale features, that adaptively selects a few representative foreground features for updating text queries. It improves hierarchical detection performance while significantly reducing the computational cost. Comprehensive experiments verify that our method achieves state-of-the-art performance on single-level and hierarchical text detection.
[]
[]
[]
[]
701
702
Vlogger: Make Your Dream A Vlog
http://arxiv.org/abs/2401.09414
Shaobin Zhuang, Kunchang Li, Xinyuan Chen, Yaohui Wang, Ziwei Liu, Yu Qiao, Yali Wang
2401.09414
In this work, we present Vlogger, a generic AI system for generating a minute-level video blog (i.e., vlog) from user descriptions. Unlike short videos of a few seconds, a vlog often contains a complex storyline with diversified scenes, which is challenging for most existing video generation approaches. To break through this bottleneck, our Vlogger smartly leverages a Large Language Model (LLM) as Director and decomposes the long video generation task of a vlog into four key stages, where we invoke various foundation models to play the critical roles of vlog professionals, including (1) Script, (2) Actor, (3) ShowMaker, and (4) Voicer. With such a design mimicking human beings, our Vlogger can generate vlogs through explainable cooperation of top-down planning and bottom-up shooting. Moreover, we introduce a novel video diffusion model, ShowMaker, which serves as a videographer in our Vlogger for generating the video snippet of each shooting scene. By incorporating Script and Actor attentively as textual and visual prompts, it can effectively enhance spatial-temporal coherence in the snippet. Besides, we design a concise mixed training paradigm for ShowMaker, boosting its capacity for both T2V generation and prediction. Finally, extensive experiments show that our method achieves state-of-the-art performance on zero-shot T2V generation and prediction tasks. More importantly, Vlogger can generate over 5-minute vlogs from open-world descriptions without loss of video coherence on script and actor.
[]
[]
[]
[]
702
703
CodedEvents: Optimal Point-Spread-Function Engineering for 3D-Tracking with Event Cameras
Sachin Shah, Matthew A. Chan, Haoming Cai, Jingxi Chen, Sakshum Kulshrestha, Chahat Deep Singh, Yiannis Aloimonos, Christopher A. Metzler
null
Point-spread-function (PSF) engineering is a well-established computational imaging technique that uses phase masks and other optical elements to embed extra information (e.g., depth) into the images captured by conventional CMOS image sensors. To date, however, PSF engineering has not been applied to neuromorphic event cameras, a powerful new image sensing technology that responds to changes in the log-intensity of light. This paper establishes theoretical limits (Cramér-Rao bounds) on 3D point localization and tracking with PSF-engineered event cameras. Using these bounds, we first demonstrate that existing Fisher phase masks are already near-optimal for localizing static flashing point sources (e.g., blinking fluorescent molecules). We then demonstrate that existing designs are sub-optimal for tracking moving point sources and proceed to use our theory to design optimal phase masks and binary amplitude masks for this task. To overcome the non-convexity of the design problem, we leverage novel implicit neural representation based parameterizations of the phase and amplitude masks. We demonstrate the efficacy of our designs through extensive simulations. We also validate our method with a simple prototype.
[]
[]
[]
[]
703
704
GLOW: Global Layout Aware Attacks on Object Detection
Jun Bao, Buyu Liu, Kui Ren, Jun Yu
null
Adversarial attacks aim to perturb images such that a predictor outputs incorrect results. Due to the limited research on structured attacks, imposing consistency checks on natural multi-object scenes is a practical defense against conventional adversarial attacks. More desirable attacks should be able to fool defenses with such consistency checks. Therefore, we present the first approach, GLOW, that copes with various attack requests by generating global layout-aware adversarial attacks, in which both categorical and geometric layout constraints are explicitly established. Specifically, we focus on object detection tasks: given a victim image, GLOW first localizes victim objects according to target labels, and then generates multiple attack plans together with their context-consistency scores. On the one hand, GLOW is capable of handling various types of requests, including single or multiple victim objects, with or without specified victim objects. On the other hand, it produces a consistency score for each attack plan, reflecting the overall contextual consistency, where both the semantic category and the global scene layout are considered. We conduct our experiments on MS COCO and Pascal. Extensive experimental results demonstrate that we achieve about 30% average relative improvement over state-of-the-art methods on the conventional single-object attack request; moreover, this superiority also holds across more generic attack requests under both white-box and zero-query black-box settings. Finally, we conduct a comprehensive human analysis, which not only further validates our claim but also provides strong evidence that our evaluation metrics reflect human reviews well.
[]
[]
[]
[]
704
705
Learning Discriminative Dynamics with Label Corruption for Noisy Label Detection
http://arxiv.org/abs/2405.19902
Suyeon Kim, Dongha Lee, SeongKu Kang, Sukang Chae, Sanghwan Jang, Hwanjo Yu
2405.19902
Label noise, commonly found in real-world datasets, has a detrimental impact on a model's generalization. To effectively detect incorrectly labeled instances, previous works have mostly relied on distinguishable training signals, such as the training loss, as indicators to differentiate between clean and noisy labels. However, these training signals only incompletely reveal the model's behavior and do not generalize well to various noise types, resulting in limited detection accuracy. In this paper, we propose the DynaCor framework, which distinguishes incorrectly labeled instances from correctly labeled ones based on the dynamics of the training signals. To cope with the absence of supervision for clean and noisy labels, DynaCor first introduces a label corruption strategy that augments the original dataset with intentionally corrupted labels, enabling indirect simulation of the model's behavior on noisy labels. Then, DynaCor learns to identify clean and noisy instances by inducing two clearly distinguishable clusters from the latent representations of training dynamics. Our comprehensive experiments show that DynaCor outperforms the state-of-the-art competitors and exhibits strong robustness to various noise types and noise rates.
[]
[]
[]
[]
705
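A minimal sketch of the two ingredients the abstract above describes, under loose assumptions and without the paper's learned latent representations: (1) intentionally corrupt a small fraction of labels, and (2) cluster per-sample training dynamics (here, raw per-epoch loss curves) into two groups; samples falling into the same cluster as the intentionally corrupted ones are flagged as likely mislabeled. The dataset, model, and feature choice are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def flag_noisy_samples(loss_dynamics: np.ndarray, corrupted_mask: np.ndarray):
    """loss_dynamics: (num_samples, num_epochs) per-sample training losses.
    corrupted_mask: boolean mask of samples whose labels we corrupted on purpose.

    Clusters the loss curves into two groups and flags the cluster that the
    intentionally corrupted samples mostly fall into as the 'noisy' cluster.
    """
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(loss_dynamics)
    noisy_cluster = np.bincount(clusters[corrupted_mask]).argmax()
    return clusters == noisy_cluster

# Toy usage with synthetic dynamics: clean samples converge, noisy ones do not.
rng = np.random.default_rng(0)
clean = np.exp(-np.linspace(0, 5, 20))[None, :] + 0.05 * rng.normal(size=(80, 20))
noisy = 1.0 + 0.1 * rng.normal(size=(20, 20))
dynamics = np.vstack([clean, noisy])
corrupted = np.zeros(100, dtype=bool)
corrupted[90:] = True  # pretend we corrupted the last 10 labels ourselves
print(flag_noisy_samples(dynamics, corrupted).sum(), "samples flagged as noisy")
```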
706
Neural 3D Strokes: Creating Stylized 3D Scenes with Vectorized 3D Strokes
http://arxiv.org/abs/2311.15637
Hao-Bin Duan, Miao Wang, Yan-Xun Li, Yong-Liang Yang
2311.15637
We present Neural 3D Strokes, a novel technique to generate stylized images of a 3D scene at arbitrary novel views from multi-view 2D images. Different from existing methods, which apply stylization to trained neural radiance fields at the voxel level, our approach draws inspiration from image-to-painting methods, simulating the progressive painting process of human artwork with vector strokes. We develop a palette of stylized 3D strokes from basic primitives and splines, and consider the 3D scene stylization task as a multi-view reconstruction process based on these 3D stroke primitives. Instead of directly searching for the parameters of these 3D strokes, which would be too costly, we introduce a differentiable renderer that allows optimizing stroke parameters using gradient descent, and propose a training scheme to alleviate the vanishing gradient issue. Extensive evaluation demonstrates that our approach effectively synthesizes 3D scenes with significant geometric and aesthetic stylization while maintaining a consistent appearance across different views. Our method can be further integrated with style loss and image-text contrastive models to extend its applications, including color transfer and text-driven 3D scene drawing. Results and code are available at http://buaavrcg.github.io/Neural3DStrokes.
[]
[]
[]
[]
706
707
SIRA: Scalable Inter-frame Relation and Association for Radar Perception
Ryoma Yataka, Pu Wang, Petros Boufounos, Ryuhei Takahashi
null
Conventional radar feature extraction faces limitations due to low spatial resolution, noise, multipath reflection, the presence of ghost targets, and motion blur. Such limitations can be exacerbated by nonlinear object motion, particularly from an ego-centric viewpoint. It becomes evident that, to address these challenges, the key lies in exploiting temporal feature relations over an extended horizon and enforcing spatial motion consistency for effective association. To this end, this paper proposes SIRA (Scalable Inter-frame Relation and Association) with two designs. First, inspired by the Swin Transformer, we introduce an extended temporal relation, generalizing the existing temporal relation layer from two consecutive frames to multiple inter-frames, with temporally regrouped window attention for scalability. Second, we propose a motion consistency track, built on the concept of a pseudo-tracklet generated from observational data, for better trajectory prediction and subsequent object association. Our approach achieves 58.11 mAP@0.5 for oriented object detection and 47.79 MOTA for multiple object tracking on the Radiate dataset, surpassing the previous state-of-the-art by margins of +4.11 mAP@0.5 and +9.94 MOTA, respectively.
[]
[]
[]
[]
707
708
VOODOO 3D: Volumetric Portrait Disentanglement For One-Shot 3D Head Reenactment
http://arxiv.org/abs/2312.04651
Phong Tran, Egor Zakharov, Long-Nhat Ho, Anh Tuan Tran, Liwen Hu, Hao Li
2312.04651
We present a 3D-aware one-shot head reenactment method based on a fully volumetric neural disentanglement framework for source appearance and driver expressions. Our method is real-time and produces high-fidelity, view-consistent output, suitable for 3D teleconferencing systems based on holographic displays. Existing cutting-edge 3D-aware reenactment methods often use neural radiance fields or 3D meshes to produce view-consistent appearance encoding, but at the same time they rely on linear face models, such as 3DMM, to achieve their disentanglement from facial expressions. As a result, their reenactment results often exhibit identity leakage from the driver or have unnatural expressions. To address these problems, we propose a neural self-supervised disentanglement approach that lifts both the source image and the driver video frame into a shared 3D volumetric representation based on tri-planes. This representation can then be freely manipulated with expression tri-planes extracted from the driving images and rendered from an arbitrary view using neural radiance fields. We achieve this disentanglement via self-supervised learning on a large in-the-wild video dataset. We further introduce a highly effective fine-tuning approach to improve the generalizability of the 3D lifting using the same real-world data. We demonstrate state-of-the-art performance on a wide range of datasets and also showcase high-quality 3D-aware head reenactment on highly challenging and diverse subjects, including non-frontal head poses and complex expressions for both source and driver.
[]
[]
[]
[]
708
709
Visual Fact Checker: Enabling High-Fidelity Detailed Caption Generation
http://arxiv.org/abs/2404.19752
Yunhao Ge, Xiaohui Zeng, Jacob Samuel Huffman, Tsung-Yi Lin, Ming-Yu Liu, Yin Cui
2404.19752
Existing automatic captioning methods for visual content face challenges such as lack of detail, content hallucination, and poor instruction following. In this work, we propose VisualFactChecker (VFC), a flexible, training-free pipeline that generates high-fidelity and detailed captions for both 2D images and 3D objects. VFC consists of three steps: 1) proposal, where image-to-text captioning models propose multiple initial captions; 2) verification, where a large language model (LLM) utilizes tools such as object detection and VQA models to fact-check the proposed captions; and 3) captioning, where an LLM generates the final caption by summarizing the caption proposals and the fact-checking results. In this step, VFC can flexibly generate captions in various styles, following complex instructions. We conduct comprehensive captioning evaluations using four metrics: 1) CLIP-Score for image-text similarity; 2) CLIP-Image-Score for measuring the image-image similarity between the original image and the image reconstructed by a text-to-image model from the caption; 3) a human study on Amazon Mechanical Turk; and 4) GPT-4V for fine-grained evaluation. Evaluation results show that VFC outperforms state-of-the-art open-source captioning methods for 2D images on the COCO dataset and 3D assets on the Objaverse dataset. Our study demonstrates that by combining open-source models into a pipeline, we can attain captioning capability comparable to proprietary models such as GPT-4V, despite being over 10x smaller in model size.
[]
[]
[]
[]
709
710
Communication-Efficient Collaborative Perception via Information Filling with Codebook
http://arxiv.org/abs/2405.04966
Yue Hu, Juntong Peng, Sifei Liu, Junhao Ge, Si Liu, Siheng Chen
2405.04966
Collaborative perception empowers each agent to improve its perceptual ability through the exchange of perceptual messages with other agents. It inherently results in a fundamental trade-off between perception ability and communication cost. To address this bottleneck, our core idea is to optimize the collaborative messages from two key aspects: representation and selection. The proposed codebook-based message representation enables the transmission of integer codes rather than high-dimensional feature maps. The proposed information-filling-driven message selection optimizes local messages to collectively fill each agent's information demand, preventing information overflow among multiple agents. By integrating these two designs, we propose CodeFilling, a novel communication-efficient collaborative perception system, which significantly advances the perception-communication trade-off and is inclusive to both homogeneous and heterogeneous collaboration settings. We evaluate CodeFilling on both a real-world dataset, DAIR-V2X, and a new simulation dataset, OPV2VH+. Results show that CodeFilling outperforms the previous SOTA Where2comm on DAIR-V2X/OPV2VH+ with 1333/1206x lower communication volume. Our code is available at https://github.com/PhyllisH/CodeFilling.
[]
[]
[]
[]
710
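The codebook-based message representation described above amounts to vector quantization: instead of sending a high-dimensional feature map, each feature vector is replaced by the index of its nearest codeword, and the receiver looks the vectors back up in a shared codebook. The sketch below illustrates that idea only; the codebook size, feature dimensions, and the fact that the codebook would normally be learned are placeholders, not the paper's configuration.

```python
import numpy as np

def encode(features: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each feature vector (N, D) to the index of its nearest codeword (K, D)."""
    # Squared Euclidean distances via ||f||^2 + ||c||^2 - 2 f.c (memory-friendly).
    d2 = ((features ** 2).sum(1, keepdims=True)
          + (codebook ** 2).sum(1)
          - 2.0 * features @ codebook.T)
    return d2.argmin(axis=1).astype(np.uint16)   # integer codes to transmit

def decode(codes: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Reconstruct approximate features from the transmitted integer codes."""
    return codebook[codes]

# Toy example: a 64x64 feature map with 128 channels, quantized with 256 codewords.
rng = np.random.default_rng(0)
feature_map = rng.normal(size=(64 * 64, 128)).astype(np.float32)
codebook = rng.normal(size=(256, 128)).astype(np.float32)   # would be learned in practice

codes = encode(feature_map, codebook)
recon = decode(codes, codebook)
print("bytes sent:", codes.nbytes, "vs raw feature map:", feature_map.nbytes)
```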
711
DiPrompT: Disentangled Prompt Tuning for Multiple Latent Domain Generalization in Federated Learning
http://arxiv.org/abs/2403.08506
Sikai Bai, Jie Zhang, Song Guo, Shuaicheng Li, Jingcai Guo, Jun Hou, Tao Han, Xiaocheng Lu
2403.08506
Federated learning (FL) has emerged as a powerful paradigm for learning from decentralized data, and federated domain generalization further considers that the test dataset (target domain) is absent from the decentralized training data (source domains). However, most existing FL methods assume that domain labels are provided during training, and their evaluation imposes explicit constraints on the number of domains, which must strictly match the number of clients. Because of the underutilization of numerous edge devices and the need for additional cross-client domain annotations in the real world, such restrictions may be impractical and involve potential privacy leaks. In this paper, we propose an efficient and novel approach, called Disentangled Prompt Tuning (DiPrompT), that tackles the above restrictions by learning adaptive prompts for domain generalization in a distributed manner. Specifically, we first design two types of prompts: a global prompt to capture general knowledge across all clients, and domain prompts to capture domain-specific knowledge. They eliminate the restriction of a one-to-one mapping between source domains and local clients. Furthermore, a dynamic query metric is introduced to automatically search for the suitable domain label for each sample, which includes two sub-step text-image alignments based on prompt tuning, without labor-intensive annotation. Extensive experiments on multiple datasets demonstrate that our DiPrompT achieves superior domain generalization performance over state-of-the-art FL methods when domain labels are not provided, and even outperforms many centralized learning methods that use domain labels.
[]
[]
[]
[]
711
712
MVD-Fusion: Single-view 3D via Depth-consistent Multi-view Generation
Hanzhe Hu, Zhizhuo Zhou, Varun Jampani, Shubham Tulsiani
null
We present MVD-Fusion: a method for single-view 3D inference via generative modeling of multi-view-consistent RGB-D images. While recent methods pursuing 3D inference advocate learning novel-view generative models, these generations are not 3D-consistent and require a distillation process to produce a 3D output. We instead cast the task of 3D inference as directly generating mutually consistent multiple views and build on the insight that additionally inferring depth can provide a mechanism for enforcing this consistency. Specifically, we train a denoising diffusion model to generate multi-view RGB-D images given a single RGB input image, and leverage the (intermediate noisy) depth estimates to obtain reprojection-based conditioning that maintains multi-view consistency. We train our model using the large-scale synthetic dataset Objaverse as well as the real-world CO3D dataset, which comprises generic camera viewpoints. We demonstrate that our approach can yield more accurate synthesis than recent state-of-the-art methods, including distillation-based 3D inference and prior multi-view generation methods. We also evaluate the geometry induced by our multi-view depth prediction and find that it yields a more accurate representation than other direct 3D inference approaches.
[]
[]
[]
[]
712
713
Effective Video Mirror Detection with Inconsistent Motion Cues
Alex Warren, Ke Xu, Jiaying Lin, Gary K.L. Tam, Rynson W.H. Lau
null
Image-based mirror detection has recently undergone rapid research due to its significance in applications such as robotic navigation, semantic segmentation, and scene reconstruction. Recently, VMD-Net was proposed as the first video mirror detection technique, modeling dual correspondences between the inside and outside of the mirror both spatially and temporally. However, this approach is not reliable, as correspondences can occur completely inside or outside of the mirrors. In addition, the proposed dataset VMD-D contains many small mirrors, limiting its applicability to real-world scenarios. To address these problems, we developed a more challenging dataset that includes mirrors of various shapes and sizes at different locations in the frames, providing a better reflection of real-world scenarios. Next, we observed that the motions between the inside and outside of the mirror are often inconsistent. For instance, when moving in front of a mirror, the motion inside the mirror is often much smaller than the motion outside, due to increased depth perception. With these observations, we propose modeling inconsistent motion cues to detect mirrors, along with a new network with two novel modules. The Motion Attention Module (MAM) explicitly models inconsistent motions around mirrors via optical flow, and the Motion-Guided Edge Detection Module (MEDM) uses motions to guide mirror edge feature learning. Experimental results on our proposed dataset show that our method outperforms the state-of-the-art. The code and dataset are available at https://github.com/AlexAnthonyWarren/MG-VMD.
[]
[]
[]
[]
713
714
Multi-Object Tracking in the Dark
http://arxiv.org/abs/2405.06600
Xinzhe Wang, Kang Ma, Qiankun Liu, Yunhao Zou, Ying Fu
2405.06600
Low-light scenes are prevalent in real-world applications (e.g., autonomous driving and surveillance at night). Recently, multi-object tracking in various practical use cases has received much attention, but multi-object tracking in dark scenes is rarely considered. In this paper, we focus on multi-object tracking in dark scenes. To address the lack of datasets, we first build a Low-light Multi-Object Tracking (LMOT) dataset. LMOT provides well-aligned low-light video pairs, captured by our dual-camera system, and high-quality multi-object tracking annotations for all videos. Then, we propose a low-light multi-object tracking method, termed LTrack. We introduce an adaptive low-pass downsample module to enhance the low-frequency components of images outside the sensor noise. A degradation suppression learning strategy enables the model to learn invariant information under noise disturbance and image quality degradation. These components improve the robustness of multi-object tracking in dark scenes. We conducted a comprehensive analysis of our LMOT dataset and the proposed LTrack. Experimental results demonstrate the superiority of the proposed method and its competitiveness in real night low-light scenes. Dataset and code: https://github.com/ying-fu/LMOT
[]
[]
[]
[]
714
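As a hedged illustration of the low-pass-then-downsample idea mentioned above (the paper's module is adaptive and learned; this sketch is a fixed Gaussian stand-in), the snippet blurs an image with a Gaussian kernel before striding, which preserves low-frequency scene content while suppressing the high-frequency sensor noise that plain subsampling would alias.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lowpass_downsample(image: np.ndarray, factor: int = 2, sigma: float = 1.0) -> np.ndarray:
    """Blur with a Gaussian (low-pass) and then subsample by `factor`.

    Plain subsampling keeps high-frequency noise and causes aliasing; filtering
    first keeps the low-frequency content that survives in low-light frames.
    """
    blurred = gaussian_filter(image.astype(np.float32), sigma=(sigma, sigma, 0))
    return blurred[::factor, ::factor]

# Toy usage on a synthetic noisy low-light frame (H, W, C).
rng = np.random.default_rng(0)
frame = 0.1 * rng.random((256, 256, 3)) + 0.05 * rng.normal(size=(256, 256, 3))
small = lowpass_downsample(frame, factor=2, sigma=1.5)
print(frame.shape, "->", small.shape)
```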
715
UniHuman: A Unified Model For Editing Human Images in the Wild
http://arxiv.org/abs/2312.14985
Nannan Li, Qing Liu, Krishna Kumar Singh, Yilin Wang, Jianming Zhang, Bryan A. Plummer, Zhe Lin
2312.14985
Human image editing includes tasks like changing a person's pose, their clothing, or editing the image according to a text prompt. However, prior work often tackles these tasks separately, overlooking the benefit of mutual reinforcement from learning them jointly. In this paper, we propose UniHuman, a unified model that addresses multiple facets of human image editing in real-world settings. To enhance the model's generation quality and generalization capacity, we leverage guidance from human visual encoders and introduce a lightweight pose-warping module that can exploit different pose representations, accommodating unseen textures and patterns. Furthermore, to bridge the disparity between existing human editing benchmarks and real-world data, we curated 400K high-quality human image-text pairs for training and collected 2K human images for out-of-domain testing, both encompassing diverse clothing styles, backgrounds, and age groups. Experiments on both in-domain and out-of-domain test sets demonstrate that UniHuman outperforms task-specific models by a significant margin. In user studies, UniHuman is preferred by the users in an average of 77% of cases. Our project is available at https://github.com/NannanLi999/UniHuman.
[]
[]
[]
[]
715
716
DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model
http://arxiv.org/abs/2404.01342
Lirui Zhao, Yue Yang, Kaipeng Zhang, Wenqi Shao, Yuxin Zhang, Yu Qiao, Ping Luo, Rongrong Ji
2404.01342
Text-to-image (T2I) generative models have attracted significant attention and found extensive applications within and beyond academic research. For example, the Civitai community, a platform for T2I innovation, currently hosts an impressive array of 74,492 distinct models. However, this diversity presents a formidable challenge in selecting the most appropriate model and parameters, a process that typically requires numerous trials. Drawing inspiration from the tool-usage research of large language models (LLMs), we introduce DiffAgent, an LLM agent designed to screen for the accurate selection in seconds via API calls. DiffAgent leverages a novel two-stage training framework, SFTA, enabling it to accurately align T2I API responses with user input in accordance with human preferences. To train and evaluate DiffAgent's capabilities, we present DABench, a comprehensive dataset encompassing an extensive range of T2I APIs from the community. Our evaluations reveal that DiffAgent not only excels in identifying the appropriate T2I API but also underscores the effectiveness of the SFTA training framework. Codes are available at https://github.com/OpenGVLab/DiffAgent.
[]
[]
[]
[]
716
717
In Search of a Data Transformation That Accelerates Neural Field Training
http://arxiv.org/abs/2311.17094
Junwon Seo, Sangyoon Lee, Kwang In Kim, Jaeho Lee
2311.17094
A neural field is an emerging paradigm in data representation that trains a neural network to approximate a given signal. A key obstacle preventing its widespread adoption is the encoding speed: generating neural fields requires overfitting a neural network, which can take a significant number of SGD steps to reach the desired fidelity level. In this paper, we delve into the impact of data transformations on the speed of neural field training, specifically focusing on how permuting pixel locations affects the convergence speed of SGD. Counterintuitively, we find that randomly permuting the pixel locations can considerably accelerate the training. To explain this phenomenon, we examine neural field training through the lens of PSNR curves, loss landscapes, and error patterns. Our analyses suggest that the random pixel permutations remove the easy-to-fit patterns, which facilitate easy optimization in the early stage but hinder capturing fine details of the signal.
[]
[]
[]
[]
717
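A minimal sketch of the comparison the abstract above describes, under simplifying assumptions (a tiny coordinate MLP, a synthetic signal, plain Adam): the same set of coordinate-value pairs is fitted twice, once with the original coordinate-to-pixel assignment and once with the pixel values randomly permuted across coordinates. Whether the permuted copy converges faster depends on the signal and the fidelity target; the snippet only sets up the comparison.

```python
import torch
import torch.nn as nn

def make_field(hidden=64):
    # A small coordinate MLP mapping (x, y) -> value.
    return nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, 1))

def fit(coords, values, steps=500, seed=0):
    torch.manual_seed(seed)
    model = make_field()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(coords) - values) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()

# Synthetic 32x32 "image": a smooth 2D signal on a coordinate grid.
xs = torch.linspace(-1, 1, 32)
coords = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1).reshape(-1, 2)
values = torch.sin(3 * coords[:, :1]) * torch.cos(3 * coords[:, 1:])

perm = torch.randperm(values.shape[0])
print("original loss:", fit(coords, values))
print("permuted loss:", fit(coords, values[perm]))   # values shuffled across locations
```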
718
Zero-Painter: Training-Free Layout Control for Text-to-Image Synthesis
Marianna Ohanyan, Hayk Manukyan, Zhangyang Wang, Shant Navasardyan, Humphrey Shi
null
We present Zero-Painter, a novel training-free framework for layout-conditional text-to-image synthesis that facilitates the creation of detailed and controlled imagery from textual prompts. Our method utilizes object masks and individual descriptions, coupled with a global text prompt, to generate images with high fidelity. Zero-Painter employs a two-stage process involving our novel Prompt-Adjusted Cross-Attention (PACA) and Region-Grouped Cross-Attention (ReGCA) blocks, ensuring precise alignment of generated objects with textual prompts and mask shapes. Our extensive experiments demonstrate that Zero-Painter surpasses current state-of-the-art methods in preserving textual details and adhering to mask shapes. We will make the code and the models publicly available.
[]
[]
[]
[]
718
719
DiffLoc: Diffusion Model for Outdoor LiDAR Localization
Wen Li, Yuyang Yang, Shangshu Yu, Guosheng Hu, Chenglu Wen, Ming Cheng, Cheng Wang
null
Absolute pose regression (APR) estimates global pose in an end-to-end manner, achieving impressive results in learning-based LiDAR localization. However, compared to the top-performing methods reliant on 3D-3D correspondence matching, APR's accuracy still has room for improvement. We recognize that APR's lack of robust feature learning and of an iterative denoising process leads to suboptimal results. In this paper, we propose DiffLoc, a novel framework that formulates LiDAR localization as a conditional generation of poses. First, we propose to utilize a foundation model and a static-object-aware pool to learn robust features. Second, we incorporate the iterative denoising process into APR via a diffusion model conditioned on the learned, geometrically robust features. In addition, owing to the unique nature of diffusion models, we propose to adapt our model to two additional applications: (1) using multiple inferences to evaluate pose uncertainty, and (2) seamlessly introducing geometric constraints on the denoising steps to improve prediction accuracy. Extensive experiments conducted on the Oxford Radar RobotCar and NCLT datasets demonstrate that DiffLoc outperforms the state-of-the-art methods. In particular, on the NCLT dataset, we achieve 35% and 34.7% improvements in position and orientation accuracy, respectively. Our code is released at https://github.com/liw95/DiffLoc.
[]
[]
[]
[]
719
720
Towards 3D Vision with Low-Cost Single-Photon Cameras
http://arxiv.org/abs/2403.17801
Fangzhou Mu, Carter Sifferman, Sacha Jungerman, Yiquan Li, Mark Han, Michael Gleicher, Mohit Gupta, Yin Li
2403.17801
We present a method for reconstructing the 3D shape of arbitrary Lambertian objects based on measurements from miniature, energy-efficient, low-cost single-photon cameras. These cameras, operating as time-resolved image sensors, illuminate the scene with a very fast pulse of diffuse light and record the shape of that pulse as it returns from the scene at high temporal resolution. We propose to model this image formation process, account for its non-idealities, and adapt neural rendering to reconstruct 3D geometry from a set of spatially distributed sensors with known poses. We show that our approach can successfully recover complex 3D shapes from simulated data. We further demonstrate 3D object reconstruction from real-world captures, utilizing measurements from a commodity proximity sensor. Our work draws a connection between image-based modeling and active range scanning and offers a step towards 3D vision with single-photon cameras.
[]
[]
[]
[]
720
721
WonderJourney: Going from Anywhere to Everywhere
http://arxiv.org/abs/2312.03884
Hong-Xing Yu, Haoyi Duan, Junhwa Hur, Kyle Sargent, Michael Rubinstein, William T. Freeman, Forrester Cole, Deqing Sun, Noah Snavely, Jiajun Wu, Charles Herrmann
2312.03884
We introduce WonderJourney, a modular framework for perpetual 3D scene generation. Unlike prior work on view generation that focuses on a single type of scene, we start at any user-provided location (given by a text description or an image) and generate a journey through a long sequence of diverse yet coherently connected 3D scenes. We leverage an LLM to generate textual descriptions of the scenes in this journey, a text-driven point cloud generation pipeline to make a compelling and coherent sequence of 3D scenes, and a large VLM to verify the generated scenes. We show compelling, diverse visual results across various scene types and styles, forming imaginary "wonderjourneys". Project website: https://kovenyu.com/WonderJourney.
[]
[]
[]
[]
721
722
On Scaling Up a Multilingual Vision and Language Model
http://arxiv.org/abs/2305.18565
Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, Jialin Wu, Carlos Riquelme Ruiz, Sebastian Goodman, Xiao Wang, Yi Tay, Siamak Shakeri, Mostafa Dehghani, Daniel Salz, Mario Lucic, Michael Tschannen, Arsha Nagrani, Hexiang Hu, Mandar Joshi, Bo Pang, Ceslee Montgomery, Paulina Pietrzyk, Marvin Ritter, AJ Piergiovanni, Matthias Minderer, Filip Pavetic, Austin Waters, Gang Li, Ibrahim Alabdulmohsin, Lucas Beyer, Julien Amelot, Kenton Lee, Andreas Peter Steiner, Yang Li, Daniel Keysers, Anurag Arnab, Yuanzhong Xu, Keran Rong, Alexander Kolesnikov, Mojtaba Seyedhosseini, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut
2305.18565
We explore the boundaries of scaling up a multilingual vision and language model, both in terms of the size of its components and the breadth of its training task mixture. Our model achieves new levels of performance on a wide range of varied and complex tasks, including multiple image-based captioning and question-answering tasks, image-based document understanding, and few-shot (in-context) learning, as well as object detection, video question answering, and video captioning. Our model advances the state of the art on most vision-and-language benchmarks considered (20+ of them). Finally, we observe emerging capabilities, such as complex counting and multilingual object detection, that are not explicitly in the training mix.
[]
[]
[]
[]
722
723
Day-Night Cross-domain Vehicle Re-identification
Hongchao Li, Jingong Chen, Aihua Zheng, Yong Wu, Yonglong Luo
null
Previous advances in vehicle re-identification (ReID) are mostly reported under favorable lighting conditions, while cross-day-and-night performance is neglected, which greatly hinders the development of related traffic intelligence applications. This work instead develops a novel Day-Night Dual-domain Modulation (DNDM) vehicle re-identification framework for day-night cross-domain traffic scenarios. Specifically, a unique night-domain glare suppression module is provided to attenuate the headlight glare in raw nighttime vehicle images. To enhance vehicle features under low-light environments, we propose a dual-domain structure enhancement module in the feature extractor, which enhances geometric structures between appearance features. To alleviate day-night domain discrepancies, we develop a cross-domain class awareness module that facilitates the interaction between appearance and structure features in both domains. In this work, we address the Day-Night cross-domain ReID (DN-ReID) problem and provide a new cross-domain dataset named DN-Wild, including day and night images of 2,286 identities, giving in total 85,945 daytime images and 54,952 nighttime images. Furthermore, we also take into account the matter of balance between day and night samples and provide a dataset called DN-348. Exhaustive experiments demonstrate the robustness of the proposed framework on the DN-ReID problem. The code and benchmark are released at https://github.com/chenjingong/DN-ReID.
[]
[]
[]
[]
723
724
4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling
Sherwin Bahmani, Ivan Skorokhodov, Victor Rong, Gordon Wetzstein, Leonidas Guibas, Peter Wonka, Sergey Tulyakov, Jeong Joon Park, Andrea Tagliasacchi, David B. Lindell
null
Recent breakthroughs in text-to-4D generation rely on pre-trained text-to-image and text-to-video models to generate dynamic 3D scenes. However, current text-to-4D methods face a three-way tradeoff between the quality of scene appearance, 3D structure, and motion. For example, text-to-image models and their 3D-aware variants are trained on internet-scale image datasets and can be used to produce scenes with realistic appearance and 3D structure, but no motion. Text-to-video models are trained on relatively smaller video datasets and can produce scenes with motion, but poorer appearance and 3D structure. While these models have complementary strengths, they also have opposing weaknesses, making it difficult to combine them in a way that alleviates this three-way tradeoff. Here, we introduce hybrid score distillation sampling, an alternating optimization procedure that blends supervision signals from multiple pre-trained diffusion models and incorporates the benefits of each for high-fidelity text-to-4D generation. Using hybrid SDS, we demonstrate synthesis of 4D scenes with compelling appearance, 3D structure, and motion.
[]
[]
[]
[]
724
725
Adversarial Distillation Based on Slack Matching and Attribution Region Alignment
Shenglin Yin, Zhen Xiao, Mingxuan Song, Jieyi Long
null
Adversarial distillation (AD) is a highly effective method for enhancing the robustness of small models. Contrary to expectations, a high-performing teacher model does not always result in a more robust student model, for two main reasons. First, when there are significant differences between the predictions of the teacher model and the student model, exact matching of the predicted values using KL divergence interferes with training, leading to the poor performance of existing methods. Second, matching based solely on the output prevents the student model from fully understanding the behavior of the teacher model. To address these challenges, this paper proposes a novel AD method named SmaraAD. During the training process, we help the student model better understand the teacher model's behavior by aligning the attribution region that the student model focuses on with that of the teacher model. Concurrently, we relax the condition of exact matching in the KL divergence and replace it with a more flexible matching criterion, thereby enhancing the model's robustness. Extensive experiments substantiate the effectiveness of our method in improving the robustness of small models, outperforming previous SOTA methods.
[]
[]
[]
[]
725
726
Boosting Spike Camera Image Reconstruction from a Perspective of Dealing with Spike Fluctuations
Rui Zhao, Ruiqin Xiong, Jing Zhao, Jian Zhang, Xiaopeng Fan, Zhaofei Yu, Tiejun Huang
null
As bio-inspired vision sensors with ultra-high speed, spike cameras exhibit great potential in recording dynamic scenes with high-speed motion or drastic light changes. Different from traditional cameras, each pixel in a spike camera records the arrival of photons continuously by firing binary spikes at an ultra-fine temporal granularity. In this process, multiple factors impact the imaging, including the Poisson arrival of photons, thermal noise from circuits, and quantization effects in the spike readout. These factors introduce fluctuations to the spikes, making the recorded spike intervals unstable and unable to reflect accurate light intensities. In this paper, we present an approach to deal with spike fluctuations and boost spike camera image reconstruction. We first analyze the quantization effects and reveal the unbiased-estimation property of the reciprocal of the differential of spike firing time (DSFT). Based on this, we propose a spike representation module that uses DSFT of multiple orders for fluctuation suppression, where a higher-order DSFT indicates the spike integration duration between multiple spikes. We also propose a module for inter-moment feature alignment at multiple granularities. The coarser alignment is based on patch-level cross-attention with a local search strategy, and the finer alignment is based on deformable convolution at the pixel level. Experimental results demonstrate the effectiveness of our method on both synthetic and real-captured data. The source code and dataset are available at https://github.com/ruizhao26/BSF.
[]
[]
[]
[]
726
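A rough numeric illustration of the DSFT quantity mentioned above, under the simplifying assumption that a pixel's spike train is given as a sorted array of firing times and that the order-k DSFT spans the last k inter-spike intervals (my reading, not necessarily the paper's exact definition): the reciprocal of the spanned interval, scaled by the order, serves as a light-intensity estimate, and larger orders trade temporal resolution for lower fluctuation.

```python
import numpy as np

def dsft_intensity(spike_times: np.ndarray, order: int = 1) -> np.ndarray:
    """Estimate light intensity at each spike from spike firing times.

    The order-k DSFT at spike i is taken here as the span t_i - t_{i-k};
    intensity is estimated as k / (t_i - t_{i-k}). Larger orders average over
    more spikes, suppressing fluctuations at the cost of temporal resolution.
    """
    t = np.asarray(spike_times, dtype=float)
    spans = t[order:] - t[:-order]            # order-k differentials
    return order / spans                       # intensity estimates per spike

# Toy spike train: roughly constant intensity with timing jitter.
rng = np.random.default_rng(0)
intervals = np.clip(0.02 + 0.005 * rng.normal(size=200), 1e-3, None)
times = np.cumsum(intervals)
for k in (1, 2, 4):
    est = dsft_intensity(times, order=k)
    print(f"order {k}: mean={est.mean():.1f}, std={est.std():.2f}")
```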
727
Text-guided Explorable Image Super-resolution
http://arxiv.org/abs/2403.01124
Kanchana Vaishnavi Gandikota, Paramanand Chandramouli
2403.01124
In this paper, we introduce the problem of zero-shot text-guided exploration of the solutions to open-domain image super-resolution. Our goal is to allow users to explore diverse, semantically accurate reconstructions that preserve data consistency with the low-resolution inputs for different large downsampling factors, without explicitly training for these specific degradations. We propose two approaches for zero-shot text-guided super-resolution: i) modifying the generative process of text-to-image (T2I) diffusion models to promote consistency with the low-resolution inputs, and ii) incorporating language guidance into zero-shot diffusion-based restoration methods. We show that the proposed approaches result in diverse solutions that match the semantic meaning provided by the text prompt while preserving data consistency with the degraded inputs. We evaluate the proposed baselines for the task of extreme super-resolution and demonstrate advantages in terms of restoration quality, diversity, and explorability of solutions.
[]
[]
[]
[]
727
728
FreeControl: Training-Free Spatial Control of Any Text-to-Image Diffusion Model with Any Condition
http://arxiv.org/abs/2312.07536
Sicheng Mo, Fangzhou Mu, Kuan Heng Lin, Yanli Liu, Bochen Guan, Yin Li, Bolei Zhou
2312.07536
Recent approaches such as ControlNet offer users fine-grained spatial control over text-to-image (T2I) diffusion models. However, auxiliary modules have to be trained for each spatial condition type, model architecture, and checkpoint, putting them at odds with the diverse intents and preferences a human designer would like to convey to AI models during the content creation process. In this work, we present FreeControl, a training-free approach for controllable T2I generation that supports multiple conditions, architectures, and checkpoints simultaneously. FreeControl enforces structure guidance to facilitate global alignment with a guidance image, and appearance guidance to collect visual details from images generated without control. Extensive qualitative and quantitative experiments demonstrate the superior performance of FreeControl across a variety of pre-trained T2I models. In particular, FreeControl enables convenient training-free control over many different architectures and checkpoints, handles the challenging input conditions on which most existing training-free methods fail, and achieves competitive synthesis quality compared to training-based approaches. Project page: https://genforce.github.io/freecontrol/.
[]
[]
[]
[]
728
729
VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models
http://arxiv.org/abs/2312.00845
Hyeonho Jeong, Geon Yeong Park, Jong Chul Ye
2312.00845
Text-to-video diffusion models have advanced video generation significantly. However, customizing these models to generate videos with tailored motions presents a substantial challenge. Specifically, they encounter hurdles in (1) accurately reproducing motion from a target video and (2) creating diverse visual variations. For example, straightforward extensions of static image customization methods to video often lead to intricate entanglement of appearance and motion data. To tackle this, we present the Video Motion Customization (VMC) framework, a novel one-shot tuning approach crafted to adapt the temporal attention layers within video diffusion models. Our approach introduces a novel motion distillation objective using residual vectors between consecutive noisy latent frames as a motion reference. The diffusion process then preserves low-frequency motion trajectories while mitigating high-frequency, motion-unrelated noise in image space. We validate our method against state-of-the-art video generative models across diverse real-world motions and contexts. Our code and data can be found at https://video-motion-customization.github.io/
[]
[]
[]
[]
729
730
Holodeck: Language Guided Generation of 3D Embodied AI Environments
http://arxiv.org/abs/2312.09067
Yue Yang, Fan-Yun Sun, Luca Weihs, Eli VanderBilt, Alvaro Herrasti, Winson Han, Jiajun Wu, Nick Haber, Ranjay Krishna, Lingjie Liu, Chris Callison-Burch, Mark Yatskar, Aniruddha Kembhavi, Christopher Clark
2312.09067
3D simulated environments play a critical role in Embodied AI, but their creation requires expertise and extensive manual effort, restricting their diversity and scope. To mitigate this limitation, we present Holodeck, a system that generates 3D environments to match a user-supplied prompt fully automatically. Holodeck can generate diverse scenes, e.g., arcades, spas, and museums, adjust the designs for styles, and can capture the semantics of complex queries such as "apartment for a researcher with a cat" and "office of a professor who is a fan of Star Wars". Holodeck leverages a large language model (i.e., GPT-4) for common-sense knowledge about what the scene might look like, and uses a large collection of 3D assets from Objaverse to populate the scene with diverse objects. To address the challenge of positioning objects correctly, we prompt GPT-4 to generate spatial relational constraints between objects and then optimize the layout to satisfy those constraints. Our large-scale human evaluation shows that annotators prefer Holodeck over manually designed procedural baselines in residential scenes and that Holodeck can produce high-quality outputs for diverse scene types. We also demonstrate an exciting application of Holodeck in Embodied AI, training agents to navigate in novel scenes like music rooms and daycares without human-constructed data, which is a significant step forward in developing general-purpose embodied agents.
[]
[]
[]
[]
730
731
Distilled Datamodel with Reverse Gradient Matching
http://arxiv.org/abs/2404.14006
Jingwen Ye, Ruonan Yu, Songhua Liu, Xinchao Wang
2404.14006
The proliferation of large-scale AI models trained on extensive datasets has revolutionized machine learning. With these models taking on increasingly central roles in various applications, the need to understand their behavior and enhance interpretability has become paramount. To investigate the impact of changes in training data on a pre-trained model, a common approach is leave-one-out retraining. This entails systematically altering the training dataset by removing specific samples to observe the resulting changes within the model. However, retraining the model for each altered dataset presents a significant computational challenge, given the need to perform this operation for every dataset variation. In this paper, we introduce an efficient framework for assessing data impact, comprising offline training and online evaluation stages. During the offline training phase, we approximate the influence of training data on the target model through a distilled synset, formulated as a reversed gradient matching problem. For online evaluation, we expedite the leave-one-out process using the synset, which is then utilized to compute the attribution matrix based on the evaluation objective. Experimental evaluations, including training data attribution and assessments of data quality, demonstrate that our proposed method achieves comparable model behavior evaluation while significantly speeding up the process compared to the direct retraining method.
[]
[]
[]
[]
731
732
DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models
http://arxiv.org/abs/2402.19481
Muyang Li, Tianle Cai, Jiaxin Cao, Qinsheng Zhang, Han Cai, Junjie Bai, Yangqing Jia, Kai Li, Song Han
2402.19481
Diffusion models have achieved great success in synthesizing high-quality images. However, generating high-resolution images with diffusion models is still challenging due to the enormous computational costs, resulting in prohibitive latency for interactive applications. In this paper, we propose DistriFusion to tackle this problem by leveraging parallelism across multiple GPUs. Our method splits the model input into multiple patches and assigns each patch to a GPU. However, naively implementing such an algorithm breaks the interaction between patches and loses fidelity, while incorporating such an interaction incurs tremendous communication overhead. To overcome this dilemma, we observe the high similarity between the inputs from adjacent diffusion steps and propose displaced patch parallelism, which takes advantage of the sequential nature of the diffusion process by reusing the pre-computed feature maps from the previous timestep to provide context for the current step. Therefore, our method supports asynchronous communication, which can be pipelined with computation. Extensive experiments show that our method can be applied to the recent Stable Diffusion XL with no quality degradation and achieve up to a 6.1x speedup on eight NVIDIA A100s compared to one. Our code is publicly available at https://github.com/mit-han-lab/distrifuser.
[]
[]
[]
[]
732
733
Improving the Generalization of Segmentation Foundation Model under Distribution Shift via Weakly Supervised Adaptation
http://arxiv.org/abs/2312.03502
Haojie Zhang, Yongyi Su, Xun Xu, Kui Jia
2312.03502
The success of large language models has inspired the computer vision community to explore image segmentation foundation models that are able to zero/few-shot generalize through prompt engineering. Segment Anything (SAM), among others, is the state-of-the-art image segmentation foundation model, demonstrating strong zero/few-shot generalization. Despite this success, recent studies reveal the weakness of SAM under strong distribution shift. In particular, SAM performs awkwardly on corrupted natural images, camouflaged images, medical images, etc. Motivated by these observations, we aim to develop a self-training-based strategy to adapt SAM to the target distribution. Given the unique challenges of a large source dataset, high computation cost, and incorrect pseudo labels, we propose a weakly supervised self-training architecture with anchor regularization and low-rank finetuning to improve the robustness and computational efficiency of adaptation. We validate the effectiveness on five types of downstream segmentation tasks, including natural clean/corrupted images, medical images, camouflaged images, and robotic images. Our proposed method is task-agnostic in nature and outperforms pre-trained SAM and state-of-the-art domain adaptation methods on almost all downstream tasks with the same testing prompt inputs.
[]
[]
[]
[]
733
734
Pseudo Label Refinery for Unsupervised Domain Adaptation on Cross-dataset 3D Object Detection
http://arxiv.org/abs/2404.19384
Zhanwei Zhang, Minghao Chen, Shuai Xiao, Liang Peng, Hengjia Li, Binbin Lin, Ping Li, Wenxiao Wang, Boxi Wu, Deng Cai
2404.19384
Recent self-training techniques have shown notable improvements in unsupervised domain adaptation for 3D object detection (3D UDA). These techniques typically select pseudo labels, i.e., 3D boxes, to supervise models for the target domain. However, this selection process inevitably introduces unreliable 3D boxes, in which 3D points cannot be definitively assigned as foreground or background. Previous techniques mitigate this by reweighting these boxes as pseudo labels, but such boxes can still poison the training process. To resolve this problem, in this paper we propose a novel pseudo label refinery framework. Specifically, in the selection process, to improve the reliability of pseudo boxes, we propose a complementary augmentation strategy. This strategy involves either removing all points within an unreliable box or replacing it with a high-confidence box. Moreover, the point numbers of instances in high-beam datasets are considerably higher than those in low-beam datasets, which also degrades the quality of pseudo labels during the training process. We alleviate this issue by generating additional proposals and aligning RoI features across different domains. Experimental results demonstrate that our method effectively enhances the quality of pseudo labels and consistently surpasses the state-of-the-art methods on six autonomous driving benchmarks. Code will be available at https://github.com/Zhanwei-Z/PERE.
[]
[]
[]
[]
734
735
Reconstructing Hands in 3D with Transformers
http://arxiv.org/abs/2312.05251
Georgios Pavlakos, Dandan Shan, Ilija Radosavovic, Angjoo Kanazawa, David Fouhey, Jitendra Malik
2312.05251
We present an approach that can reconstruct hands in 3D from monocular input. Our approach for Hand Mesh Recovery, HaMeR, follows a fully transformer-based architecture and can analyze hands with significantly increased accuracy and robustness compared to previous work. The key to HaMeR's success lies in scaling up both the data used for training and the capacity of the deep network for hand reconstruction. For training data, we combine multiple datasets that contain 2D or 3D hand annotations. For the deep model, we use a large-scale Vision Transformer architecture. Our final model consistently outperforms the previous baselines on popular 3D hand pose benchmarks. To further evaluate the effect of our design in non-controlled settings, we annotate existing in-the-wild datasets with 2D hand keypoint annotations. On this newly collected dataset of annotations, HInt, we demonstrate significant improvements over existing baselines. We make our code, data, and models available on the project website: https://geopavlakos.github.io/hamer/.
[]
[]
[]
[]
735
736
AZ-NAS: Assembling Zero-Cost Proxies for Network Architecture Search
Junghyup Lee, Bumsub Ham
null
Training-free network architecture search (NAS) aims to discover high-performing networks with zero-cost proxies that capture network characteristics related to the final performance. However, network rankings estimated by previous training-free NAS methods have shown weak correlations with the actual performance. To address this issue, we propose AZ-NAS, a novel approach that leverages an ensemble of various zero-cost proxies to substantially enhance the correlation between the predicted ranking of networks and the ground truth in terms of performance. To achieve this, we introduce four novel zero-cost proxies that are complementary to each other, analyzing distinct traits of architectures from the viewpoints of expressivity, progressivity, trainability, and complexity. The proxy scores can be obtained simultaneously within a single forward and backward pass, making the overall NAS process highly efficient. In order to integrate the rankings predicted by our proxies effectively, we introduce a non-linear ranking aggregation method that highlights the networks ranked highly and consistently across all the proxies. Experimental results conclusively demonstrate the efficacy and efficiency of AZ-NAS, outperforming state-of-the-art methods on standard benchmarks, all while maintaining a reasonable runtime cost.
[]
[]
[]
[]
736
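As an illustration of non-linear ranking aggregation in the spirit described above (a generic log-based aggregation, not necessarily the AZ-NAS formula): each proxy's scores are converted to normalized ranks, and candidates are ordered by the sum of log-ranks, which rewards architectures that rank highly under every proxy far more than a plain average of ranks would.

```python
import numpy as np

def aggregate_rankings(proxy_scores: np.ndarray) -> np.ndarray:
    """proxy_scores: (num_proxies, num_candidates), higher score = better.

    Converts each proxy's scores to normalized ranks in (0, 1] (1 = best) and
    sums their logs. A single poor rank drags the sum down sharply, so the
    aggregation favors candidates ranked highly by *every* proxy.
    Returns the aggregated score per candidate (higher = better).
    """
    num_proxies, num_candidates = proxy_scores.shape
    ranks = proxy_scores.argsort(axis=1).argsort(axis=1) + 1   # 1..N, N = best
    normalized = ranks / num_candidates
    return np.log(normalized).sum(axis=0)

# Toy example: candidate 2 is decent under every proxy, candidate 0 is great
# under one proxy but poor under another.
scores = np.array([[9.0, 1.0, 7.0, 3.0],
                   [2.0, 8.0, 7.5, 1.0],
                   [6.0, 5.0, 7.0, 2.0]])
agg = aggregate_rankings(scores)
print("aggregated:", agg, "-> best candidate:", int(agg.argmax()))
```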
737
Correspondence-Free Non-Rigid Point Set Registration Using Unsupervised Clustering Analysis
Mingyang Zhao, Jingen Jiang, Lei Ma, Shiqing Xin, Gaofeng Meng, Dong-Ming Yan
null
This paper presents a novel non-rigid point set registration method that is inspired by unsupervised clustering analysis. Unlike previous approaches that treat the source and target point sets as separate entities, we develop a holistic framework in which they are formulated as clustering centroids and clustering members, respectively. We then adopt Tikhonov regularization with an ℓ1-induced Laplacian kernel, instead of the commonly used Gaussian kernel, to ensure smooth and more robust displacement fields. Our formulation delivers closed-form solutions, theoretical guarantees, independence from dimensions, and the ability to handle large deformations. Subsequently, we introduce a clustering-improved Nyström method to effectively reduce the computational complexity and storage of the Gram matrix to linear, while providing a rigorous bound for the low-rank approximation. Our method achieves highly accurate results across various scenarios and surpasses competitors by a significant margin, particularly on shapes with substantial deformations. Additionally, we demonstrate the versatility of our method in challenging tasks such as shape transfer and medical registration.
[]
[]
[]
[]
737
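A small, generic sketch of the Nyström low-rank approximation of a Gram matrix that the abstract above builds on (the clustering-improved landmark selection is the paper's contribution and is replaced here by plain random landmarks, and a Gaussian kernel is used purely for illustration rather than the ℓ1-induced Laplacian kernel): with m landmark points, the full n x n Gram matrix K is approximated by C W^+ C^T, where C is the n x m block of kernel values against the landmarks and W is the m x m landmark block, so storage and multiplication become linear in n.

```python
import numpy as np

def gaussian_kernel(X: np.ndarray, Y: np.ndarray, beta: float = 1.0) -> np.ndarray:
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-beta * d2)

def nystrom(X: np.ndarray, num_landmarks: int = 50, beta: float = 1.0, seed: int = 0):
    """Return factors (C, W_pinv) such that K is approximately C @ W_pinv @ C.T.

    Landmarks are chosen uniformly at random here; a clustering-based choice
    (as in the paper) typically gives a better approximation.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=num_landmarks, replace=False)
    C = gaussian_kernel(X, X[idx], beta)         # (n, m)
    W = gaussian_kernel(X[idx], X[idx], beta)    # (m, m)
    return C, np.linalg.pinv(W)

# Toy check of the approximation error on a small point set.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
C, W_pinv = nystrom(X, num_landmarks=100)
K_full = gaussian_kernel(X, X)
err = np.linalg.norm(K_full - C @ W_pinv @ C.T) / np.linalg.norm(K_full)
print(f"relative approximation error: {err:.3f}")
```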
738
Improving Physics-Augmented Continuum Neural Radiance Field-Based Geometry-Agnostic System Identification with Lagrangian Particle Optimization
Takuhiro Kaneko
null
Geometry-agnostic system identification is a technique for identifying the geometry and physical properties of an object from video sequences without any geometric assumptions. Recently, physics-augmented continuum neural radiance fields (PAC-NeRF) has demonstrated promising results for this technique by utilizing a hybrid Eulerian-Lagrangian representation, in which the geometry is represented by the Eulerian grid representations of NeRF, the physics is described by a material point method (MPM), and they are connected via Lagrangian particles. However, a notable limitation of PAC-NeRF is that its performance is sensitive to the learning of the geometry from the first frames, owing to its two-step optimization. First, the grid representations are optimized with the first frames of the video sequences, and then the physical properties are optimized through the video sequences utilizing the fixed first-frame grid representations. This limitation can be critical when learning the geometric structure is difficult, for example, in a few-shot (sparse view) setting. To overcome this limitation, we propose Lagrangian particle optimization (LPO), in which the positions and features of particles are optimized through video sequences in Lagrangian space. This method allows for the optimization of the geometric structure across the entire video sequence within the physical constraints imposed by the MPM. The experimental results demonstrate that LPO is useful for geometric correction and physical identification in sparse-view settings.
[]
[]
[]
[]
738
739
BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP
http://arxiv.org/abs/2311.16194
Jiawang Bai, Kuofeng Gao, Shaobo Min, Shu-Tao Xia, Zhifeng Li, Wei Liu
2,311.16194
Contrastive Vision-Language Pre-training known as CLIP has shown promising effectiveness in addressing downstream image recognition tasks. However recent works revealed that the CLIP model can be implanted with a downstream-oriented backdoor. On downstream tasks one victim model performs well on clean samples but predicts a specific target class whenever a specific trigger is present. For injecting a backdoor existing attacks depend on a large amount of additional data to maliciously fine-tune the entire pre-trained CLIP model which makes them inapplicable to data-limited scenarios. In this work motivated by the recent success of learnable prompts we address this problem by injecting a backdoor into the CLIP model in the prompt learning stage. Our method named BadCLIP is built on a novel and effective mechanism in backdoor attacks on CLIP i.e. influencing both the image and text encoders with the trigger. It consists of a learnable trigger applied to images and a trigger-aware context generator such that the trigger can change text features via trigger-aware prompts resulting in a powerful and generalizable attack. Extensive experiments conducted on 11 datasets verify that the clean accuracy of BadCLIP is similar to those of advanced prompt learning methods and the attack success rate is higher than 99% in most cases. BadCLIP is also generalizable to unseen classes and shows a strong generalization capability under cross-dataset and cross-domain settings. The code is available at https://github.com/jiawangbai/BadCLIP.
[]
[]
[]
[]
739
740
Beyond Image Super-Resolution for Image Recognition with Task-Driven Perceptual Loss
http://arxiv.org/abs/2404.01692
Jaeha Kim, Junghun Oh, Kyoung Mu Lee
2,404.01692
In real-world scenarios, image recognition tasks such as semantic segmentation and object detection often pose greater challenges due to the lack of information available in low-resolution (LR) content. Image super-resolution (SR) is one of the promising solutions for addressing these challenges. However, due to the ill-posed nature of SR, it is difficult for typical SR methods to restore task-relevant high-frequency content, which may dilute the advantage of applying SR at all. Therefore, in this paper we propose Super-Resolution for Image Recognition (SR4IR), which effectively guides the generation of SR images that are beneficial to achieving satisfactory image recognition performance when processing LR images. The critical component of SR4IR is the task-driven perceptual (TDP) loss, which enables the SR network to acquire task-specific knowledge from a network tailored for a specific task. Moreover, we propose a cross-quality patch mix and an alternate training framework that significantly enhance the efficacy of the TDP loss by addressing potential problems when employing it. Through extensive experiments, we demonstrate that SR4IR achieves outstanding task performance by generating SR images useful for a specific image recognition task, including semantic segmentation, object detection, and image classification. The implementation code is available at https://github.com/JaehaKim97/SR4IR. (A minimal sketch of a task-driven perceptual loss follows this entry.)
[]
[]
[]
[]
740
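As a rough illustration of a task-driven perceptual loss, the sketch below compares the super-resolved and ground-truth images in the feature space of a recognition network. The backbone, the feature layer, and the L1 distance are assumptions made for illustration; the paper's exact formulation, the cross-quality patch mix, and the alternate training schedule are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

# Frozen feature extractor standing in for the recognition ("task") network.
backbone = resnet18(weights=None)
feature_net = nn.Sequential(*list(backbone.children())[:-2]).eval()
for p in feature_net.parameters():
    p.requires_grad_(False)

def tdp_loss(sr_img, hr_img):
    # Task-driven perceptual loss: distance between task-network features of the
    # super-resolved image and of the ground-truth high-resolution image.
    return F.l1_loss(feature_net(sr_img), feature_net(hr_img))

sr = torch.rand(2, 3, 128, 128)        # placeholder SR network output
hr = torch.rand(2, 3, 128, 128)        # ground-truth high-resolution image
print(float(tdp_loss(sr, hr)))
```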
741
PELA: Learning Parameter-Efficient Models with Low-Rank Approximation
http://arxiv.org/abs/2310.10700
Yangyang Guo, Guangzhi Wang, Mohan Kankanhalli
2,310.107
Applying a pre-trained large model to downstream tasks is prohibitive under resource-constrained conditions. Recent dominant approaches for addressing efficiency issues involve adding a few learnable parameters to the fixed backbone model. This strategy, however, leads to more challenges in loading large models for downstream fine-tuning with limited resources. In this paper, we propose a novel method for increasing the parameter efficiency of pre-trained models by introducing an intermediate pre-training stage. To this end, we first employ low-rank approximation to compress the original large model and then devise a feature distillation module and a weight perturbation regularization module. These modules are specifically designed to enhance the low-rank model. In particular, we update only the low-rank model while freezing the backbone parameters during pre-training. This allows for direct and efficient utilization of the low-rank model for downstream fine-tuning tasks. The proposed method achieves efficiency in both the number of required parameters and computation time while maintaining comparable results with minimal modifications to the backbone architecture. Specifically, when applied to three vision-only and one vision-language Transformer models, our approach often demonstrates a mere 0.6-point decrease in performance while reducing the original parameter size by 1/3 to 2/3. (A minimal low-rank factorization sketch follows this entry.)
[]
[]
[]
[]
741
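The core compression step, replacing a dense weight matrix with a low-rank factorization, can be sketched as below. The rank, the layer shape, and the reconstruction check are arbitrary illustrations; PELA's feature distillation and weight perturbation regularization modules are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(768, 3072))        # a dense layer weight, e.g. a transformer FFN

def low_rank_factorize(W, rank):
    # Truncated SVD: W ~= A @ B with A of shape (d_out, r) and B of shape (r, d_in).
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]
    B = Vt[:rank]
    return A, B

rank = 128
A, B = low_rank_factorize(W, rank)

orig_params = W.size
lowrank_params = A.size + B.size
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {orig_params} -> {lowrank_params}, relative error: {err:.3f}")
```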
742
XCube: Large-Scale 3D Generative Modeling using Sparse Voxel Hierarchies
Xuanchi Ren, Jiahui Huang, Xiaohui Zeng, Ken Museth, Sanja Fidler, Francis Williams
null
We present XCube a novel generative model for high-resolution sparse 3D voxel grids with arbitrary attributes. Our model can generate millions of voxels with a finest effective resolution of up to 1024^3 in a feed-forward fashion without time-consuming test-time optimization. To achieve this we employ a hierarchical voxel latent diffusion model which generates progressively higher resolution grids in a coarse-to-fine manner using a custom framework built on the highly efficient VDB data structure. Apart from generating high-resolution objects we demonstrate the effectiveness of XCube on large outdoor scenes at scales of 100m x 100m with a voxel size as small as 10cm. We observe clear qualitative and quantitative improvements over past approaches. In addition to unconditional generation we show that our model can be used to solve a variety of tasks such as user-guided editing scene completion from a single scan and text-to-3D.
[]
[]
[]
[]
742
743
PixelRNN: In-pixel Recurrent Neural Networks for End-to-end-optimized Perception with Neural Sensors
http://arxiv.org/abs/2304.05440
Haley M. So, Laurie Bose, Piotr Dudek, Gordon Wetzstein
2,304.0544
Conventional image sensors digitize high-resolution images at fast frame rates, producing a large amount of data that needs to be transmitted off the sensor for further processing. This is challenging for perception systems operating on edge devices because communication is power-inefficient and induces latency. Fueled by innovations in stacked image sensor fabrication, emerging sensor-processors offer programmability and processing capabilities directly on the sensor. We exploit these capabilities by developing an efficient recurrent neural network architecture, PixelRNN, that encodes spatio-temporal features on the sensor using purely binary operations. PixelRNN reduces the amount of data to be transmitted off the sensor by factors of up to 256 compared to the raw sensor data, while offering competitive accuracy for hand gesture recognition and lip reading tasks. We experimentally validate PixelRNN using a prototype implementation on the SCAMP-5 sensor-processor platform. (A toy sketch of a binary recurrent update follows this entry.)
[]
[]
[]
[]
743
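A toy version of a recurrent update restricted to binary values is sketched below; the gate structure, sizes, and thresholding rule are assumptions for illustration and do not reproduce the PixelRNN architecture or its SCAMP-5 implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 32, 32                       # spatial size of the in-pixel state
state = np.zeros((H, W), dtype=np.uint8)

# Binary weights (+1 / -1): one 3x3 kernel for the input frame, one for the state.
w_x = rng.choice([-1, 1], size=(3, 3))
w_h = rng.choice([-1, 1], size=(3, 3))

def binary_conv(img, kernel):
    # "Same" convolution with zero padding, implemented with shifted additions only.
    out = np.zeros_like(img, dtype=np.int32)
    padded = np.pad(img.astype(np.int32), 1)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + H, dx:dx + W]
    return out

def step(state, frame):
    # Recurrent update using only additions and a threshold, so both the state
    # and the emitted feature stay binary.
    pre = binary_conv(frame, w_x) + binary_conv(state, w_h)
    return (pre > 0).astype(np.uint8)

for _ in range(10):                 # feed a short stream of binary frames
    frame = (rng.random((H, W)) > 0.5).astype(np.uint8)
    state = step(state, frame)
print("active pixels in final state:", int(state.sum()))
```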
744
Reconstruction-free Cascaded Adaptive Compressive Sensing
Chenxi Qiu, Tao Yue, Xuemei Hu
null
Scene-aware Adaptive Compressive Sensing (ACS) has constituted a persistent pursuit holding substantial promise for the enhancement of Compressive Sensing (CS) performance. Cascaded ACS furnishes a proficient multi-stage framework for adaptively allocating the CS sampling based on previous CS measurements. However reconstruction is commonly required for analyzing and steering the successive CS sampling which bottlenecks the ACS speed and impedes the practical application in time-sensitive scenarios. Addressing this challenge we propose a reconstruction-free cascaded ACS method which requires NO reconstruction during the adaptive sampling process. A lightweight Score Network (ScoreNet) is proposed to directly determine the ACS allocation with previous CS measurements and a differentiable adaptive sampling module is proposed for end-to-end training. For image reconstruction we propose a Multi-Grid Spatial-Attention Network (MGSANet) that could facilitate efficient multi-stage training and inferencing. By introducing the reconstruction-fidelity supervision outside the loop of the multi-stage sampling process ACS can be efficiently optimized and achieve high imaging fidelity. The effectiveness of the proposed method is demonstrated with extensive quantitative and qualitative experiments compared with the state-of-the-art CS algorithms.
[]
[]
[]
[]
744
745
Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch
Xidong Wu, Shangqian Gao, Zeyu Zhang, Zhenzhen Li, Runxue Bao, Yanfu Zhang, Xiaoqian Wang, Heng Huang
null
Current techniques for deep neural network (DNN) pruning often involve intricate multi-step processes that require domain-specific expertise, making their widespread adoption challenging. To address this limitation, Only-Train-Once (OTO) and OTOv2 were proposed to eliminate the need for additional fine-tuning steps by directly training and compressing a general DNN from scratch. Nevertheless, the static design of optimizers (in OTO) can lead to convergence to poor local optima. In this paper, we propose Auto-Train-Once (ATO), an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs. During the model training phase, our approach not only trains the target model but also leverages a controller network as an architecture generator to guide the learning of target model weights. Furthermore, we develop a novel stochastic gradient algorithm that enhances the coordination between model training and controller network training, thereby improving pruning performance. We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures (including ResNet18, ResNet34, ResNet50, ResNet56, and MobileNetv2) on standard benchmark datasets (CIFAR-10, CIFAR-100, and ImageNet).
[]
[]
[]
[]
745
746
Constructing and Exploring Intermediate Domains in Mixed Domain Semi-supervised Medical Image Segmentation
http://arxiv.org/abs/2404.08951
Qinghe Ma, Jian Zhang, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao
2,404.08951
Both limited annotation and domain shift are prevalent challenges in medical image segmentation. Traditional semi-supervised segmentation and unsupervised domain adaptation methods address one of these issues separately. However, the coexistence of limited annotation and domain shift is quite common, which motivates us to introduce a novel and challenging scenario: Mixed Domain Semi-supervised medical image Segmentation (MiDSS). In this scenario, we handle data from multiple medical centers, with limited annotations available for a single domain and a large amount of unlabeled data from multiple domains. We find that the key to solving the problem lies in how to generate reliable pseudo labels for the unlabeled data in the presence of domain shift with respect to the labeled data. To tackle this issue, we employ Unified Copy-Paste (UCP) between images to construct intermediate domains, facilitating knowledge transfer from the domain of labeled data to the domains of unlabeled data. To fully utilize the information within the intermediate domain, we propose a symmetric Guidance training strategy (SymGD), which additionally offers direct guidance to unlabeled data by merging pseudo labels from intermediate samples. Subsequently, we introduce a Training Process aware Random Amplitude MixUp (TP-RAM) to progressively incorporate style-transition components into intermediate samples. Compared with existing state-of-the-art approaches, our method achieves a notable 13.57% improvement in Dice score on the Prostate dataset, as demonstrated on three public datasets. Our code is available at https://github.com/MQinghe/MiDSS. (A toy copy-paste sketch follows this entry.)
[]
[]
[]
[]
746
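The intermediate-domain construction via copy-paste can be pictured with the toy function below, which pastes a random rectangle from a labeled image (and its label) into an unlabeled image (and its pseudo label). The rectangle sampling and sizes are simplifications; SymGD and TP-RAM are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def copy_paste(img_lab, lab, img_unl, pseudo, frac=0.5):
    """Paste a random rectangle of the labeled sample into the unlabeled one,
    producing an intermediate-domain image and its mixed supervision."""
    h, w = img_lab.shape[:2]
    ph, pw = int(h * frac), int(w * frac)
    y = rng.integers(0, h - ph + 1)
    x = rng.integers(0, w - pw + 1)

    mixed_img = img_unl.copy()
    mixed_lab = pseudo.copy()
    mixed_img[y:y + ph, x:x + pw] = img_lab[y:y + ph, x:x + pw]
    mixed_lab[y:y + ph, x:x + pw] = lab[y:y + ph, x:x + pw]
    return mixed_img, mixed_lab

img_lab = rng.random((256, 256, 3))
lab = rng.integers(0, 2, size=(256, 256))
img_unl = rng.random((256, 256, 3))
pseudo = rng.integers(0, 2, size=(256, 256))   # from a teacher/EMA model in practice

mixed_img, mixed_lab = copy_paste(img_lab, lab, img_unl, pseudo)
print(mixed_img.shape, mixed_lab.shape)
```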
747
DUSt3R: Geometric 3D Vision Made Easy
http://arxiv.org/abs/2312.14132
Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, Jerome Revaud
2,312.14132
Multi-view stereo reconstruction (MVS) in the wild requires first estimating the camera intrinsic and extrinsic parameters. These are usually tedious and cumbersome to obtain, yet they are mandatory to triangulate corresponding pixels in 3D space, which is at the core of all best-performing MVS algorithms. In this work, we take an opposite stance and introduce DUSt3R, a radically novel paradigm for Dense and Unconstrained Stereo 3D Reconstruction of arbitrary image collections, operating without prior information about camera calibration or viewpoint poses. We cast the pairwise reconstruction problem as a regression of pointmaps, relaxing the hard constraints of usual projective camera models. We show that this formulation smoothly unifies the monocular and binocular reconstruction cases. In the case where more than two images are provided, we further propose a simple yet effective global alignment strategy that expresses all pairwise pointmaps in a common reference frame. We base our network architecture on standard Transformer encoders and decoders, allowing us to leverage powerful pretrained models. Our formulation directly provides a 3D model of the scene as well as depth information, but interestingly, we can seamlessly recover from it pixel matches, focal lengths, and relative and absolute camera poses. Extensive experiments on all these tasks showcase how DUSt3R effectively unifies various 3D vision tasks, setting new performance records on monocular and multi-view depth estimation as well as relative pose estimation. In summary, DUSt3R makes many geometric 3D vision tasks easy. Code and models at https://github.com/naver/dust3r
[]
[]
[]
[]
747
748
From Isolated Islands to Pangea: Unifying Semantic Space for Human Action Understanding
http://arxiv.org/abs/2304.00553
Yong-Lu Li, Xiaoqian Wu, Xinpeng Liu, Zehao Wang, Yiming Dou, Yikun Ji, Junyi Zhang, Yixing Li, Xudong Lu, Jingru Tan, Cewu Lu
2,304.00553
Action understanding matters for intelligent agents and has attracted long-term attention. It can be formed as the mapping from the action physical space to the semantic space. Typically researchers built action datasets according to idiosyncratic choices to define classes and push the envelope of benchmarks respectively. Thus datasets are incompatible with each other like "Isolated Islands" due to semantic gaps and various class granularities e.g. do housework in dataset A and wash plate in dataset B. We argue that a more principled semantic space is an urgent need to concentrate the community efforts and enable us to use all datasets together to pursue generalizable action learning. To this end we design a structured action semantic space in view of verb taxonomy hierarchy and covering massive actions. By aligning the classes of previous datasets to our semantic space we gather (image/video/skeleton/MoCap) datasets into a unified database in a unified label system i.e. bridging "isolated islands" into a "Pangea". Accordingly we propose a novel model mapping from the physical space to semantic space to fully use Pangea. In extensive experiments our new system shows significant superiority especially in transfer learning. Our code and data will be made public at https://mvig-rhos.com/pangea.
[]
[]
[]
[]
748
749
Bootstrapping Autonomous Driving Radars with Self-Supervised Learning
http://arxiv.org/abs/2312.04519
Yiduo Hao, Sohrab Madani, Junfeng Guan, Mohammed Alloulah, Saurabh Gupta, Haitham Hassanieh
2,312.04519
The perception of autonomous vehicles using radars has attracted increased research interest due to its ability to operate in fog and bad weather. However, training radar models is hindered by the cost and difficulty of annotating large-scale radar data. To overcome this bottleneck, we propose a self-supervised learning framework to leverage the large amount of unlabeled radar data to pre-train radar-only embeddings for self-driving perception tasks. The proposed method combines radar-to-radar and radar-to-vision contrastive losses to learn a general representation from unlabeled radar heatmaps paired with their corresponding camera images. When used for downstream object detection, we demonstrate that the proposed self-supervision framework can improve the accuracy of state-of-the-art supervised baselines by 5.8% in mAP. Code is available at https://github.com/yiduohao/Radical. (A simplified contrastive-loss sketch follows this entry.)
[]
[]
[]
[]
749
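A simplified version of combining radar-to-radar and radar-to-vision contrastive terms is sketched below with a standard InfoNCE objective; the encoders are placeholders, the temperature and the equal weighting are arbitrary, and the actual framework in the paper may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(z_a, z_b, temperature=0.1):
    # Standard InfoNCE: matching rows (i of z_a, i of z_b) are positives.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

batch, dim = 16, 128
z_radar_1 = rng.normal(size=(batch, dim))   # radar heatmap embedding (augmented view 1)
z_radar_2 = rng.normal(size=(batch, dim))   # radar heatmap embedding (augmented view 2)
z_vision = rng.normal(size=(batch, dim))    # embedding of the paired camera image

loss = info_nce(z_radar_1, z_radar_2) + info_nce(z_radar_1, z_vision)
print(float(loss))
```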
750
Robust Distillation via Untargeted and Targeted Intermediate Adversarial Samples
Junhao Dong, Piotr Koniusz, Junxi Chen, Z. Jane Wang, Yew-Soon Ong
null
Adversarially robust knowledge distillation aims to compress large-scale models into lightweight models while preserving adversarial robustness and natural performance on a given dataset. Existing methods typically align probability distributions of natural and adversarial samples between teacher and student models but they overlook intermediate adversarial samples along the "adversarial path" formed by the multi-step gradient ascent of a sample towards the decision boundary. Such paths capture rich information about the decision boundary. In this paper we propose a novel adversarially robust knowledge distillation approach by incorporating such adversarial paths into the alignment process. Recognizing the diverse impacts of intermediate adversarial samples (ranging from benign to noisy) we propose an adaptive weighting strategy to selectively emphasize informative adversarial samples thus ensuring efficient utilization of lightweight model capacity. Moreover we propose a dual-branch mechanism exploiting two following insights: (i) complementary dynamics of adversarial paths obtained by targeted and untargeted adversarial learning and (ii) inherent differences between the gradient ascent path from class c_i towards the nearest class boundary and the gradient descent path from a specific class c_j towards the decision region of c_i (i \neq j). Comprehensive experiments demonstrate the effectiveness of our method on lightweight models under various settings.
[]
[]
[]
[]
750
751
USE: Universal Segment Embeddings for Open-Vocabulary Image Segmentation
Xiaoqi Wang, Wenbin He, Xiwei Xuan, Clint Sebastian, Jorge Piazentin Ono, Xin Li, Sima Behpour, Thang Doan, Liang Gou, Han-Wei Shen, Liu Ren
null
The open-vocabulary image segmentation task involves partitioning images into semantically meaningful segments and classifying them with flexible text-defined categories. The recent vision-based foundation models such as the Segment Anything Model (SAM) have shown superior performance in generating class-agnostic image segments. The main challenge in open-vocabulary image segmentation now lies in accurately classifying these segments into text-defined categories. In this paper we introduce the Universal Segment Embedding (USE) framework to address this challenge. This framework is comprised of two key components: 1) a data pipeline designed to efficiently curate a large amount of segment-text pairs at various granularities and 2) a universal segment embedding model that enables precise segment classification into a vast range of text-defined categories. The USE model can not only help open-vocabulary image segmentation but also facilitate other downstream tasks (e.g. querying and ranking). Through comprehensive experimental studies on semantic segmentation and part segmentation benchmarks we demonstrate that the USE framework outperforms state-of-the-art open-vocabulary segmentation methods.
[]
[]
[]
[]
751
752
Functional Diffusion
http://arxiv.org/abs/2311.15435
Biao Zhang, Peter Wonka
2,311.15435
We propose functional diffusion a generative diffusion model focused on infinite-dimensional function data samples. In contrast to previous work functional diffusion works on samples that are represented by functions with a continuous domain. Functional diffusion can be seen as an extension of classical diffusion models to an infinite-dimensional domain. Functional diffusion is very versatile as images videos audio 3D shapes deformations etc. can be handled by the same framework with minimal changes. In addition functional diffusion is especially suited for irregular data or data defined in non-standard domains. In our work we derive the necessary foundations for functional diffusion and propose a first implementation based on the transformer architecture. We show generative results on complicated signed distance functions and deformation functions defined on 3D surfaces.
[]
[]
[]
[]
752
753
Soften to Defend: Towards Adversarial Robustness via Self-Guided Label Refinement
http://arxiv.org/abs/2403.09101
Zhuorong Li, Daiwei Yu, Lina Wei, Canghong Jin, Yun Zhang, Sixian Chan
2,403.09101
Adversarial training (AT) is currently one of the most effective ways to obtain robustness of deep neural networks against adversarial attacks. However, most AT methods suffer from robust overfitting, i.e., a significant generalization gap in adversarial robustness between the training and testing curves. In this paper, we first identify a connection between robust overfitting and the excessive memorization of noisy labels in AT from the view of gradient norm. Since such label noise is mainly caused by distribution mismatch and improper label assignment, we are motivated to propose a label refinement approach for AT. Specifically, our Self-Guided Label Refinement first self-refines a more accurate and informative label distribution from over-confident hard labels, and then it calibrates the training by dynamically incorporating knowledge from self-distilled models into the current model, thus requiring no external teachers. Empirical results demonstrate that our method can simultaneously boost the standard accuracy and robust performance across multiple benchmark datasets, attack types, and architectures. In addition, we provide a set of analyses from the perspective of information theory to dive into our method and suggest the importance of soft labels for robust generalization. (A minimal label-softening sketch follows this entry.)
[]
[]
[]
[]
753
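One way to picture self-guided label refinement is the interpolation below between a one-hot label and the model's own temperature-scaled prediction; the mixing weight, temperature, and use of an EMA "self-distilled" model are placeholder assumptions rather than the paper's exact recipe.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def refine_labels(hard_labels, self_logits, num_classes, alpha=0.7, temperature=2.0):
    """Soften over-confident hard labels with the model's own predictions."""
    one_hot = np.eye(num_classes)[hard_labels]
    self_probs = softmax(self_logits / temperature, axis=1)   # e.g. from an EMA model
    return alpha * one_hot + (1.0 - alpha) * self_probs

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=4)
logits = rng.normal(size=(4, 10))
soft = refine_labels(labels, logits, num_classes=10)
print(soft.sum(axis=1))    # each refined label is still a valid distribution
```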
754
Weakly Supervised Monocular 3D Detection with a Single-View Image
http://arxiv.org/abs/2402.19144
Xueying Jiang, Sheng Jin, Lewei Lu, Xiaoqin Zhang, Shijian Lu
2,402.19144
Monocular 3D detection (M3D) aims for precise 3D object localization from a single-view image which usually involves labor-intensive annotation of 3D detection boxes. Weakly supervised M3D has recently been studied to obviate the 3D annotation process by leveraging many existing 2D annotations but it often requires extra training data such as LiDAR point clouds or multi-view images which greatly degrades its applicability and usability in various applications. We propose SKD-WM3D a weakly supervised monocular 3D detection framework that exploits depth information to achieve M3D with a single-view image exclusively without any 3D annotations or other training data. One key design in SKD-WM3D is a self-knowledge distillation framework which transforms image features into 3D-like representations by fusing depth information and effectively mitigates the inherent depth ambiguity in monocular scenarios with little computational overhead in inference. In addition we design an uncertainty-aware distillation loss and a gradient-targeted transfer modulation strategy which facilitate knowledge acquisition and knowledge transfer respectively. Extensive experiments show that SKD-WM3D surpasses the state-of-the-art clearly and is even on par with many fully supervised methods.
[]
[]
[]
[]
754
755
Pose-Guided Self-Training with Two-Stage Clustering for Unsupervised Landmark Discovery
http://arxiv.org/abs/2403.16194
Siddharth Tourani, Ahmed Alwheibi, Arif Mahmood, Muhammad Haris Khan
2,403.16194
Unsupervised landmarks discovery (ULD) for an object category is a challenging computer vision problem. In pursuit of developing a robust ULD framework we explore the potential of a recent paradigm of self-supervised learning algorithms known as diffusion models. Some recent works have shown that these models implicitly contain important correspondence cues. Towards harnessing the potential of diffusion models for ULD task we make the following core contributions. First we propose a ZeroShot ULD baseline based on simple clustering of random pixel locations with nearest neighbour matching. It delivers better results than the existing ULD methods. Second motivated by the ZeroShot performance we develop a ULD algorithm based on diffusion features using self-training and clustering which also outperforms prior methods by notable margins. Third we introduce a new proxy task based on generating latent pose codes and also propose a two-stage clustering mechanism to facilitate effective pseudo-labeling resulting in a significant performance improvement. Overall our approach consistently outperforms state-of-the-art methods on four challenging benchmarks AFLW MAFL CatHeads and LS3D by significant margins.
[]
[]
[]
[]
755
756
Learning from Synthetic Human Group Activities
http://arxiv.org/abs/2306.16772
Che-Jui Chang, Danrui Li, Deep Patel, Parth Goel, Honglu Zhou, Seonghyeon Moon, Samuel S. Sohn, Sejong Yoon, Vladimir Pavlovic, Mubbasir Kapadia
2,306.16772
The study of complex human interactions and group activities has become a focal point in human-centric computer vision. However progress in related tasks is often hindered by the challenges of obtaining large-scale labeled datasets from real-world scenarios. To address the limitation we introduce M3Act a synthetic data generator for multi-view multi-group multi-person human atomic actions and group activities. Powered by Unity Engine M3Act features multiple semantic groups highly diverse and photorealistic images and a comprehensive set of annotations which facilitates the learning of human-centered tasks across single-person multi-person and multi-group conditions. We demonstrate the advantages of M3Act across three core experiments. The results suggest our synthetic dataset can significantly improve the performance of several downstream methods and replace real-world datasets to reduce cost. Notably M3Act improves the state-of-the-art MOTRv2 on DanceTrack dataset leading to a hop on the leaderboard from 10th to 2nd place. Moreover M3Act opens new research for controllable 3D group activity generation. We define multiple metrics and propose a competitive baseline for the novel task. Our code and data are available at our project page: http://cjerry1243.github.io/M3Act.
[]
[]
[]
[]
756
757
Blind Image Quality Assessment Based on Geometric Order Learning
Nyeong-Ho Shin, Seon-Ho Lee, Chang-Su Kim
null
A novel approach to blind image quality assessment, called quality comparison network (QCN), is proposed in this paper, which sorts the feature vectors of input images according to their quality scores in an embedding space. QCN employs comparison transformers (CTs) and score pivots, which act as the centroids of feature vectors of similar-quality images. Each CT updates the score pivots and the feature vectors of input images based on their ordered correlation. To this end, we adopt four loss functions. Then, we estimate the quality score of a test image by searching for the nearest score pivot to its feature vector in the embedding space. Extensive experiments show that the proposed QCN algorithm yields excellent image quality assessment performance on various datasets. Furthermore, QCN achieves strong performance in cross-dataset evaluation, demonstrating its superb generalization capability. The source codes are available at https://github.com/nhshin-mcl/QCN. (A minimal nearest-pivot lookup sketch follows this entry.)
[]
[]
[]
[]
757
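The final scoring step, looking up the nearest score pivot in the embedding space, is easy to sketch; the embedding dimension, pivot scores, and Euclidean distance below are illustrative stand-ins for whatever the trained comparison transformers actually produce.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, num_pivots = 64, 10

pivots = rng.normal(size=(num_pivots, dim))       # centroids learned by the CTs
pivot_scores = np.linspace(1.0, 5.0, num_pivots)  # quality score attached to each pivot

def predict_quality(feature):
    # Assign the test image the score of its nearest pivot in the embedding space.
    dists = np.linalg.norm(pivots - feature, axis=1)
    return pivot_scores[int(dists.argmin())]

test_feature = rng.normal(size=dim)               # embedding of a test image
print("predicted quality score:", predict_quality(test_feature))
```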
758
Text Grouping Adapter: Adapting Pre-trained Text Detector for Layout Analysis
http://arxiv.org/abs/2405.07481
Tianci Bi, Xiaoyi Zhang, Zhizheng Zhang, Wenxuan Xie, Cuiling Lan, Yan Lu, Nanning Zheng
2,405.07481
Significant progress has been made in scene text detection models since the rise of deep learning, but scene text layout analysis, which aims to group detected text instances as paragraphs, has not kept pace. Previous works either treat text detection and grouping with separate models or train a unified model from scratch. None of them makes full use of already well-trained text detectors and easily obtainable detection datasets. In this paper, we present Text Grouping Adapter (TGA), a module that enables the utilization of various pre-trained text detectors to learn layout analysis, allowing us to adopt a well-trained text detector right off the shelf or fine-tune it efficiently. Designed to be compatible with various text detector architectures, TGA takes detected text regions and image features as universal inputs to assemble text instance features. To capture broader contextual information for layout analysis, we propose to predict text group masks from text instance features by one-to-many assignment. Our comprehensive experiments demonstrate that, even with frozen pre-trained models, incorporating our TGA into various pre-trained text detectors and text spotters can achieve superior layout analysis performance while simultaneously inheriting generalized text detection ability from pre-training. With full parameter fine-tuning, we can further improve layout analysis performance.
[]
[]
[]
[]
758
759
Generalizable Whole Slide Image Classification with Fine-Grained Visual-Semantic Interaction
http://arxiv.org/abs/2402.19326
Hao Li, Ying Chen, Yifei Chen, Rongshan Yu, Wenxian Yang, Liansheng Wang, Bowen Ding, Yuchen Han
2,402.19326
Whole Slide Image (WSI) classification is often formulated as a Multiple Instance Learning (MIL) problem. Recently Vision-Language Models (VLMs) have demonstrated remarkable performance in WSI classification. However existing methods leverage coarse-grained pathogenetic descriptions for visual representation supervision which are insufficient to capture the complex visual appearance of pathogenetic images hindering the generalizability of models on diverse downstream tasks. Additionally processing high-resolution WSIs can be computationally expensive. In this paper we propose a novel "Fine-grained Visual-Semantic Interaction" (FiVE) framework for WSI classification. It is designed to enhance the model's generalizability by leveraging the interaction between localized visual patterns and fine-grained pathological semantics. Specifically with meticulously designed queries we start by utilizing a large language model to extract fine-grained pathological descriptions from various non-standardized raw reports. The output descriptions are then reconstructed into fine-grained labels used for training. By introducing a Task-specific Fine-grained Semantics (TFS) module we enable prompts to capture crucial visual information in WSIs which enhances representation learning and augments generalization capabilities significantly. Furthermore given that pathological visual patterns are redundantly distributed across tissue slices we sample a subset of visual instances during training. Our method demonstrates robust generalizability and strong transferability dominantly outperforming the counterparts on the TCGA Lung Cancer dataset with at least 9.19% higher accuracy in few-shot experiments. The code is available at: https://github.com/ls1rius/WSI_FiVE.
[]
[]
[]
[]
759
760
THRONE: An Object-based Hallucination Benchmark for the Free-form Generations of Large Vision-Language Models
http://arxiv.org/abs/2405.05256
Prannay Kaul, Zhizhong Li, Hao Yang, Yonatan Dukler, Ashwin Swaminathan, C. J. Taylor, Stefano Soatto
2,405.05256
Mitigating hallucinations in large vision-language models (LVLMs) remains an open problem. Recent benchmarks do not address hallucinations in open-ended free-form responses, which we term "Type I hallucinations". Instead, they focus on hallucinations responding to very specific question formats---typically a multiple-choice response regarding a particular object or attribute---which we term "Type II hallucinations". Additionally, such benchmarks often require external API calls to models which are subject to change. In practice, we observe that a reduction in Type II hallucinations does not lead to a reduction in Type I hallucinations; rather, the two forms of hallucinations are often anti-correlated. To address this, we propose THRONE, a novel object-based automatic framework for quantitatively evaluating Type I hallucinations in LVLM free-form outputs. We use public language models (LMs) to identify hallucinations in LVLM responses and compute informative metrics. By evaluating a large selection of recent LVLMs using public datasets, we show that an improvement in existing metrics does not lead to a reduction in Type I hallucinations, and that established benchmarks for measuring Type I hallucinations are incomplete. Finally, we provide a simple and effective data augmentation method to reduce Type I and Type II hallucinations as a strong baseline.
[]
[]
[]
[]
760
761
Wired Perspectives: Multi-View Wire Art Embraces Generative AI
http://arxiv.org/abs/2311.15421
Zhiyu Qu, Lan Yang, Honggang Zhang, Tao Xiang, Kaiyue Pang, Yi-Zhe Song
2,311.15421
Creating multi-view wire art (MVWA) a static 3D sculpture with diverse interpretations from different viewpoints is a complex task even for skilled artists. In response we present DreamWire an AI system enabling everyone to craft MVWA easily. Users express their vision through text prompts or scribbles freeing them from intricate 3D wire organisation. Our approach synergises 3D Bezier curves Prim's algorithm and knowledge distillation from diffusion models or their variants (e.g. ControlNet). This blend enables the system to represent 3D wire art ensuring spatial continuity and overcoming data scarcity. Extensive evaluation and analysis are conducted to shed insight on the inner workings of the proposed system including the trade-off between connectivity and visual aesthetics.
[]
[]
[]
[]
761
762
LUWA Dataset: Learning Lithic Use-Wear Analysis on Microscopic Images
http://arxiv.org/abs/2403.13171
Jing Zhang, Irving Fang, Hao Wu, Akshat Kaushik, Alice Rodriguez, Hanwen Zhao, Juexiao Zhang, Zhuo Zheng, Radu Iovita, Chen Feng
2,403.13171
Lithic Use-Wear Analysis (LUWA) using microscopic images is an underexplored vision-for-science research area. It seeks to distinguish the worked material which is critical for understanding archaeological artifacts material interactions tool functionalities and dental records. However this challenging task goes beyond the well-studied image classification problem for common objects. It is affected by many confounders owing to the complex wear mechanism and microscopic imaging which makes it difficult even for human experts to identify the worked material successfully. In this paper we investigate the following three questions on this unique vision task for the first time:(i) How well can state-of-the-art pre-trained models (like DINOv2) generalize to the rarely seen domain? (ii) How can few-shot learning be exploited for scarce microscopic images? (iii) How do the ambiguous magnification and sensing modality influence the classification accuracy? To study these we collaborated with archaeologists and built the first open-source and the largest LUWA dataset containing 23130 microscopic images with different magnifications and sensing modalities. Extensive experiments show that existing pre-trained models notably outperform human experts but still leave a large gap for improvements. Most importantly the LUWA dataset provides an underexplored opportunity for vision and learning communities and complements existing image classification problems on common objects.
[]
[]
[]
[]
762
763
Generalizing 6-DoF Grasp Detection via Domain Prior Knowledge
Haoxiang Ma, Modi Shi, Boyang Gao, Di Huang
null
We focus on the generalization ability of the 6-DoF grasp detection method in this paper. While learning-based grasp detection methods can predict grasp poses for unseen objects using the grasp distribution learned from the training set they often exhibit a significant performance drop when encountering objects with diverse shapes and structures. To enhance the grasp detection methods' generalization ability we incorporate domain prior knowledge of robotic grasping enabling better adaptation to objects with significant shape and structure differences. More specifically we employ the physical constraint regularization during the training phase to guide the model towards predicting grasps that comply with the physical rule on grasping. For the unstable grasp poses predicted on novel objects we design a contact-score joint optimization using the projection contact map to refine these poses in cluttered scenarios. Extensive experiments conducted on the GraspNet-1billion benchmark demonstrate a substantial performance gain on the novel object set and the real-world grasping experiments also demonstrate the effectiveness of our generalizing 6-DoF grasp detection method.
[]
[]
[]
[]
763
764
The Audio-Visual Conversational Graph: From an Egocentric-Exocentric Perspective
http://arxiv.org/abs/2312.12870
Wenqi Jia, Miao Liu, Hao Jiang, Ishwarya Ananthabhotla, James M. Rehg, Vamsi Krishna Ithapu, Ruohan Gao
2,312.1287
In recent years, the thriving development of research related to egocentric videos has provided a unique perspective for the study of conversational interactions, where both visual and audio signals play a crucial role. While most prior works focus on learning about behaviors that directly involve the camera wearer, we introduce the Ego-Exocentric Conversational Graph Prediction problem, marking the first attempt to infer exocentric conversational interactions from egocentric videos. We propose a unified multi-modal framework---Audio-Visual Conversational Attention (AV-CONV)---for the joint prediction of conversation behaviors---speaking and listening---for both the camera wearer and all other social partners present in the egocentric video. Specifically, we adopt the self-attention mechanism to model the representations across time, across subjects, and across modalities. To validate our method, we conduct experiments on a challenging egocentric video dataset that includes multi-speaker and multi-conversation scenarios. Our results demonstrate the superior performance of our method compared to a series of baselines. We also present detailed ablation studies to assess the contribution of each component in our model. Project page: https://vjwq.github.io/AV-CONV/.
[]
[]
[]
[]
764
765
Byzantine-robust Decentralized Federated Learning via Dual-domain Clustering and Trust Bootstrapping
Peng Sun, Xinyang Liu, Zhibo Wang, Bo Liu
null
Decentralized federated learning (DFL) facilitates collaborative model training across multiple connected clients without a central coordination server thereby avoiding the single point of failure in traditional centralized federated learning (CFL). However DFL exhibits heightened susceptibility to Byzantine attacks owing to the lack of a responsible central server. Furthermore a benign client in DFL may be dominated by Byzantine clients (more than half of its neighbors are malicious) posing significant challenges for robust model training. In this work we propose DFL-Dual a novel Byzantine-robust DFL method through dual-domain client clustering and trust bootstrapping. Specifically we first propose to leverage both data-domain and model-domain distance metrics to identify client discrepancies. Then we design a trust evaluation mechanism centered on benign clients which enables them to evaluate their neighbors. Building upon the dual-domain distance metric and trust evaluation mechanism we further develop a two-stage clustering and trust bootstrapping technique to exclude Byzantine clients from local model aggregation. We extensively evaluate the proposed DFL-Dual method through rigorous experimentation demonstrating its remarkable performance superiority over existing robust CFL and DFL schemes.
[]
[]
[]
[]
765
766
Leveraging Camera Triplets for Efficient and Accurate Structure-from-Motion
Lalit Manam, Venu Madhav Govindu
null
In Structure-from-Motion (SfM) the underlying viewgraphs of unordered image collections generally have a highly redundant set of edges that can be sparsified for efficiency without significant loss of reconstruction quality. Often there are also false edges due to incorrect image retrieval and repeated structures (symmetries) that give rise to ghosting and superimposed reconstruction artifacts. We present a unified method to simultaneously sparsify the viewgraph and remove false edges. We propose a scoring mechanism based on camera triplets that identifies edge redundancy as well as false edges. Our edge selection is formulated as an optimization problem which can be provably solved using a simple thresholding scheme. This results in a highly efficient algorithm which can be incorporated as a pre-processing step into any SfM pipeline making it practically usable. We demonstrate the utility of our method on generic and ambiguous datasets that cover the range of small medium and large-scale datasets all with different statistical properties. Sparsification of generic datasets using our method significantly reduces reconstruction time while maintaining the accuracy of the reconstructions as well as removing ghosting artifacts. For ambiguous datasets our method removes false edges thereby avoiding incorrect superimposed reconstructions.
[]
[]
[]
[]
766
767
SimDA: Simple Diffusion Adapter for Efficient Video Generation
http://arxiv.org/abs/2308.09710
Zhen Xing, Qi Dai, Han Hu, Zuxuan Wu, Yu-Gang Jiang
2,308.0971
The recent wave of AI-generated content has witnessed the great development and success of Text-to-Image (T2I) technologies. By contrast, Text-to-Video (T2V) still falls short of expectations despite attracting increasing interest. Existing works either train from scratch or adapt a large T2I model to videos, both of which are computation- and resource-expensive. In this work, we propose a Simple Diffusion Adapter (SimDA) that fine-tunes only 24M out of the 1.1B parameters of a strong T2I model, adapting it to video generation in a parameter-efficient way. In particular, we adapt the T2I model for T2V by designing lightweight spatial and temporal adapters for transfer learning. Besides, we change the original spatial attention to the proposed Latent-Shift Attention (LSA) for temporal consistency. With a similar model architecture, we further train a video super-resolution model to generate high-definition (1024 x 1024) videos. In addition to T2V generation in the wild, SimDA can also be utilized for one-shot video editing with only 2 minutes of tuning. In doing so, our method minimizes the training effort with extremely few tunable parameters for model adaptation. (A generic adapter-module sketch follows this entry.)
[]
[]
[]
[]
767
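A generic bottleneck adapter of the kind used for parameter-efficient transfer is sketched below; the hidden sizes, activation, and placement are illustrative and do not reproduce SimDA's specific spatial/temporal adapter designs or its Latent-Shift Attention.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add.
    Only these few parameters are trained; the host T2I weights stay frozen."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)      # start as an identity mapping for stability
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

x = torch.randn(2, 16, 320)                 # (batch, tokens, channels) from a frozen block
adapter = Adapter(dim=320)
print(adapter(x).shape)                     # torch.Size([2, 16, 320])
print(sum(p.numel() for p in adapter.parameters()), "trainable adapter parameters")
```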
768
Multi-view Aggregation Network for Dichotomous Image Segmentation
http://arxiv.org/abs/2404.07445
Qian Yu, Xiaoqi Zhao, Youwei Pang, Lihe Zhang, Huchuan Lu
2,404.07445
Dichotomous Image Segmentation (DIS) has recently emerged towards high-precision object segmentation from high-resolution natural images. When designing an effective DIS model, the main challenge is how to balance the semantic dispersion of high-resolution targets in the small receptive field and the loss of high-precision details in the large receptive field. Existing methods rely on tedious multiple encoder-decoder streams and stages to gradually complete the global localization and local refinement. The human visual system captures regions of interest by observing them from multiple views. Inspired by this, we model DIS as a multi-view object perception problem and provide a parsimonious multi-view aggregation network (MVANet), which unifies the feature fusion of the distant view and close-up view into a single stream with one encoder-decoder structure. With the help of the proposed multi-view complementary localization and refinement modules, our approach establishes long-range, profound visual interactions across multiple views, allowing the features of the detailed close-up view to focus on highly slender structures. Experiments on the popular DIS-5K dataset show that our MVANet significantly outperforms state-of-the-art methods in both accuracy and speed. The source code and datasets will be publicly available at https://github.com/qianyu-dlut/MVANet.
[]
[]
[]
[]
768
769
A Recipe for Scaling up Text-to-Video Generation with Text-free Videos
http://arxiv.org/abs/2312.15770
Xiang Wang, Shiwei Zhang, Hangjie Yuan, Zhiwu Qing, Biao Gong, Yingya Zhang, Yujun Shen, Changxin Gao, Nong Sang
2,312.1577
Diffusion-based text-to-video generation has witnessed impressive progress in the past year yet still falls behind text-to-image generation. One of the key reasons is the limited scale of publicly available data (e.g. 10M video-text pairs in WebVid10M vs. 5B image-text pairs in LAION) considering the high cost of video captioning. Instead it could be far easier to collect unlabeled clips from video platforms like YouTube. Motivated by this we come up with a novel text-to-video generation framework termed TF-T2V which can directly learn with text-free videos. The rationale behind is to separate the process of text decoding from that of temporal modeling. To this end we employ a content branch and a motion branch which are jointly optimized with weights shared. Following such a pipeline we study the effect of doubling the scale of training set (i.e. video-only WebVid10M) with some randomly collected text-free videos and are encouraged to observe the performance improvement (FID from 9.67 to 8.19 and FVD from 484 to 441) demonstrating the scalability of our approach. We also find that our model could enjoy sustainable performance gain (FID from 8.19 to 7.64 and FVD from 441 to 366) after reintroducing some text labels for training. Finally we validate the effectiveness and generalizability of our ideology on both native text-to-video generation and compositional video synthesis paradigms. Code and models will be publicly available at here.
[]
[]
[]
[]
769
770
Molecular Data Programming: Towards Molecule Pseudo-labeling with Systematic Weak Supervision
Xin Juan, Kaixiong Zhou, Ninghao Liu, Tianlong Chen, Xin Wang
null
The great advancement of molecular machine learning is premised on a considerable amount of labeled data. In many real-world scenarios, labeled molecules are limited in quantity or laborious to derive. Recent pseudo-labeling methods are usually designed based on a single source of domain knowledge, thereby failing to capture comprehensive molecular configurations and limiting their adaptability to generalize across diverse biochemical contexts. To this end, we introduce an innovative paradigm for molecule pseudo-labeling, named Molecular Data Programming (MDP). In particular, we adopt systematic supervision sources by crafting multiple graph labeling functions, which cover various molecular structural knowledge from graph kernels, molecular fingerprints, and topological features. Each of them creates uncertain and biased labels for the unlabeled molecules. To address the decision conflicts among the diverse pseudo-labels, we design a label synchronizer to differentiably model confidences and correlations between the labeling functions, which yields probabilistic molecular labels adapted to specific applications. These probabilistic molecular labels are used to train a molecular classifier to improve its generalization capability. On eight benchmark datasets, we empirically demonstrate the effectiveness of MDP on weakly supervised molecule classification tasks. (A toy labeling-function aggregation sketch follows this entry.)
[]
[]
[]
[]
770
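The data-programming idea of combining several noisy labeling functions can be sketched with a simple confidence-weighted vote; the labeling functions and weights below are placeholders, and the actual label synchronizer in MDP models confidences and correlations differentiably rather than with fixed weights.

```python
import numpy as np

rng = np.random.default_rng(0)
num_molecules, num_classes, num_lfs = 100, 2, 3
ABSTAIN = -1

# Placeholder outputs of three labeling functions (e.g. based on graph kernels,
# fingerprints, and topological features); -1 means the function abstains.
lf_votes = rng.choice([ABSTAIN, 0, 1], size=(num_lfs, num_molecules), p=[0.3, 0.35, 0.35])
lf_weights = np.array([0.9, 0.6, 0.75])       # per-function confidence (learned in MDP)

def synchronize(votes, weights):
    """Confidence-weighted vote producing probabilistic pseudo-labels."""
    probs = np.zeros((votes.shape[1], num_classes))
    for lf in range(votes.shape[0]):
        for c in range(num_classes):
            probs[:, c] += weights[lf] * (votes[lf] == c)
    probs += 1e-8                              # avoid all-zero rows when every LF abstains
    return probs / probs.sum(axis=1, keepdims=True)

pseudo = synchronize(lf_votes, lf_weights)
print(pseudo[:5])                              # probabilistic labels for the first molecules
```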
771
RadSimReal: Bridging the Gap Between Synthetic and Real Data in Radar Object Detection With Simulation
http://arxiv.org/abs/2404.18150
Oded Bialer, Yuval Haitman
2,404.1815
Object detection in radar imagery with neural networks shows great potential for improving autonomous driving. However, obtaining annotated datasets from real radar images, which are crucial for training these networks, is challenging, especially in scenarios with long-range detection and adverse weather and lighting conditions where radar performance excels. To address this challenge, we present RadSimReal, an innovative physical radar simulation capable of generating synthetic radar images with accompanying annotations for various radar types and environmental conditions, all without the need for real data collection. Remarkably, our findings demonstrate that training object detection models on RadSimReal data and subsequently evaluating them on real-world data produces performance levels comparable to models trained and tested on real data from the same dataset, and even achieves better performance when testing across different real datasets. RadSimReal offers advantages over other physical radar simulations in that it does not necessitate knowledge of radar design details, which are often not disclosed by radar suppliers, and it has a faster run-time. This innovative tool has the potential to advance the development of computer vision algorithms for radar-based autonomous driving applications.
[]
[]
[]
[]
771
772
No More Ambiguity in 360° Room Layout via Bi-Layout Estimation
Yu-Ju Tsai, Jin-Cheng Jhang, Jingjing Zheng, Wei Wang, Albert Y. C. Chen, Min Sun, Cheng-Hao Kuo, Ming-Hsuan Yang
null
Inherent ambiguity in layout annotations poses significant challenges to developing accurate 360° room layout estimation models. To address this issue we propose a novel Bi-Layout model capable of predicting two distinct layout types. One stops at ambiguous regions while the other extends to encompass all visible areas. Our model employs two global context embeddings where each embedding is designed to capture specific contextual information for each layout type. With our novel feature guidance module the image feature retrieves relevant context from these embeddings generating layout-aware features for precise bi-layout predictions. A unique property of our Bi-Layout model is its ability to inherently detect ambiguous regions by comparing the two predictions. To circumvent the need for manual correction of ambiguous annotations during testing we also introduce a new metric for disambiguating ground truth layouts. Our method demonstrates superior performance on benchmark datasets notably outperforming leading approaches. Specifically on the MatterportLayout dataset it improves 3DIoU from 81.70% to 82.57% across the full test set and notably from 54.80% to 59.97% in subsets with significant ambiguity.
[]
[]
[]
[]
772
773
Residual Denoising Diffusion Models
http://arxiv.org/abs/2308.13712
Jiawei Liu, Qiang Wang, Huijie Fan, Yinong Wang, Yandong Tang, Liangqiong Qu
2,308.13712
We propose residual denoising diffusion models (RDDM), a novel dual diffusion process that decouples the traditional single denoising diffusion process into residual diffusion and noise diffusion. This dual diffusion framework expands the denoising-based diffusion models, initially uninterpretable for image restoration, into a unified and interpretable model for both image generation and restoration by introducing residuals. Specifically, our residual diffusion represents directional diffusion from the target image to the degraded input image and explicitly guides the reverse generation process for image restoration, while noise diffusion represents random perturbations in the diffusion process. The residual prioritizes certainty while the noise emphasizes diversity, enabling RDDM to effectively unify tasks with varying certainty or diversity requirements, such as image generation and restoration. We demonstrate that our sampling process is consistent with that of DDPM and DDIM through coefficient transformation, and we propose a partially path-independent generation process to better understand the reverse process. Notably, our RDDM enables a generic UNet, trained with only an L1 loss and a batch size of 1, to compete with state-of-the-art image restoration methods. We provide code and pre-trained models to encourage further exploration, application, and development of our innovative framework (https://github.com/nachifur/RDDM). (A rough sketch of a residual-plus-noise forward step follows this entry.)
[]
[]
[]
[]
773
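The dual (residual + noise) forward process can be pictured roughly as below, where a clean target drifts toward a degraded input while noise is injected. The linear schedules and the exact mixing form are assumptions for illustration only, not RDDM's derived coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100
alphas = np.linspace(0.0, 1.0, T)     # residual (directional) schedule, assumed linear
betas = np.linspace(0.0, 0.4, T)      # noise schedule, assumed linear

x0 = rng.random((64, 64))             # clean target image
x_degraded = 0.3 * x0 + 0.1           # stand-in degraded observation
residual = x_degraded - x0            # direction from the target toward the degraded input

def forward_sample(t):
    """State at step t: part-way along the residual direction, plus random noise."""
    eps = rng.normal(size=x0.shape)
    return x0 + alphas[t] * residual + betas[t] * eps

x_mid, x_end = forward_sample(T // 2), forward_sample(T - 1)
print("||x_mid - degraded|| =", round(float(np.linalg.norm(x_mid - x_degraded)), 2),
      " ||x_end - degraded|| =", round(float(np.linalg.norm(x_end - x_degraded)), 2))
```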
774
Towards Accurate and Robust Architectures via Neural Architecture Search
http://arxiv.org/abs/2405.05502
Yuwei Ou, Yuqi Feng, Yanan Sun
2,405.05502
To defend deep neural networks from adversarial attacks adversarial training has been drawing increasing attention for its effectiveness. However the accuracy and robustness resulting from the adversarial training are limited by the architecture because adversarial training improves accuracy and robustness by adjusting the weight connection affiliated to the architecture. In this work we propose ARNAS to search for accurate and robust architectures for adversarial training. First we design an accurate and robust search space in which the placement of the cells and the proportional relationship of the filter numbers are carefully determined. With the design the architectures can obtain both accuracy and robustness by deploying accurate and robust structures to their sensitive positions respectively. Then we propose a differentiable multi-objective search strategy performing gradient descent towards directions that are beneficial for both natural loss and adversarial loss thus the accuracy and robustness can be guaranteed at the same time. We conduct comprehensive experiments in terms of white-box attacks black-box attacks and transferability. Experimental results show that the searched architecture has the strongest robustness with the competitive accuracy and breaks the traditional idea that NAS-based architectures cannot transfer well to complex tasks in robustness scenarios. By analyzing outstanding architectures searched we also conclude that accurate and robust neural architectures tend to deploy different structures near the input and output which has great practical significance on both hand-crafting and automatically designing of accurate and robust architectures.
[]
[]
[]
[]
774
775
Closely Interactive Human Reconstruction with Proxemics and Physics-Guided Adaption
http://arxiv.org/abs/2404.11291
Buzhen Huang, Chen Li, Chongyang Xu, Liang Pan, Yangang Wang, Gim Hee Lee
2,404.11291
Existing multi-person human reconstruction approaches mainly focus on recovering accurate poses or avoiding penetration but overlook the modeling of close interactions. In this work we tackle the task of reconstructing closely interactive humans from a monocular video. The main challenge of this task comes from insufficient visual information caused by depth ambiguity and severe inter-person occlusion. In view of this we propose to leverage knowledge from proxemic behavior and physics to compensate the lack of visual information. This is based on the observation that human interaction has specific patterns following the social proxemics. Specifically we first design a latent representation based on Vector Quantised-Variational AutoEncoder (VQ-VAE) to model human interaction. A proxemics and physics guided diffusion model is then introduced to denoise the initial distribution. We design the diffusion model as dual branch with each branch representing one individual such that the interaction can be modeled via cross attention. With the learned priors of VQ-VAE and physical constraint as the additional information our proposed approach is capable of estimating accurate poses that are also proxemics and physics plausible. Experimental results on Hi4D 3DPW and CHI3D demonstrate that our method outperforms existing approaches. The code is available at https://github.com/boycehbz/HumanInteraction.
[]
[]
[]
[]
775
776
A Noisy Elephant in the Room: Is Your Out-of-Distribution Detector Robust to Label Noise?
http://arxiv.org/abs/2404.01775
Galadrielle Humblot-Renaux, Sergio Escalera, Thomas B. Moeslund
2,404.01775
The ability to detect unfamiliar or unexpected images is essential for safe deployment of computer vision systems. In the context of classification the task of detecting images outside of a model's training domain is known as out-of-distribution (OOD) detection. While there has been a growing research interest in developing post-hoc OOD detection methods there has been comparably little discussion around how these methods perform when the underlying classifier is not trained on a clean carefully curated dataset. In this work we take a closer look at 20 state-of-the-art OOD detection methods in the (more realistic) scenario where the labels used to train the underlying classifier are unreliable (e.g. crowd-sourced or web-scraped labels). Extensive experiments across different datasets noise types & levels architectures and checkpointing strategies provide insights into the effect of class label noise on OOD detection and show that poor separation between incorrectly classified ID samples vs. OOD samples is an overlooked yet important limitation of existing methods. Code: https://github.com/glhr/ood-labelnoise
[]
[]
[]
[]
776
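To make the evaluation setting above concrete, here is a small, self-contained sketch of one common post-hoc OOD score (maximum softmax probability) and a rank-based AUROC between ID and OOD scores, computed on random logits as stand-ins. It is not the paper's benchmark code; the score, data, and metric implementation are illustrative.

# Illustrative MSP scoring and AUROC on synthetic logits (not the paper's benchmark).
import numpy as np

def msp_score(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)                      # higher = more "in-distribution"

def auroc(id_scores, ood_scores):
    # rank-based AUROC: probability that an ID sample scores above an OOD sample
    scores = np.concatenate([id_scores, ood_scores])
    labels = np.concatenate([np.ones_like(id_scores), np.zeros_like(ood_scores)])
    order = scores.argsort()
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = len(id_scores), len(ood_scores)
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

id_logits = np.random.randn(1000, 10) + 2 * np.eye(10)[np.random.randint(0, 10, 1000)]
ood_logits = np.random.randn(500, 10)
print("AUROC:", auroc(msp_score(id_logits), msp_score(ood_logits)))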
777
VideoMAC: Video Masked Autoencoders Meet ConvNets
http://arxiv.org/abs/2402.19082
Gensheng Pei, Tao Chen, Xiruo Jiang, Huafeng Liu, Zeren Sun, Yazhou Yao
2,402.19082
Recently, the advancement of self-supervised learning techniques like masked autoencoders (MAE) has greatly influenced visual representation learning for images and videos. Nevertheless, it is worth noting that the predominant approaches in existing masked image/video modeling rely excessively on resource-intensive vision transformers (ViTs) as the feature encoder. In this paper, we propose a new approach, termed VideoMAC, which combines video masked autoencoders with resource-friendly ConvNets. Specifically, VideoMAC employs symmetric masking on randomly sampled pairs of video frames. To prevent the issue of mask pattern dissipation, we utilize ConvNets, implemented with sparse convolutional operators, as encoders. Simultaneously, we present a simple yet effective masked video modeling (MVM) approach: a dual-encoder architecture comprising an online encoder and an exponential moving average target encoder, aimed at facilitating inter-frame reconstruction consistency in videos. Additionally, we demonstrate that VideoMAC, which empowers classical (ResNet) / modern (ConvNeXt) convolutional encoders to harness the benefits of MVM, outperforms ViT-based approaches on downstream tasks, including video object segmentation (+5.2% / 6.4% J&F), body part propagation (+6.3% / 3.1% mIoU), and human pose tracking (+10.2% / 11.1% PCK@0.1).
[]
[]
[]
[]
777
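A compact sketch of the dual-encoder idea in the VideoMAC entry above: an online encoder and an exponential-moving-average target encoder process a symmetrically masked frame pair, and the loss encourages inter-frame reconstruction consistency. The tiny dense ConvNet, pixel-level masking, and momentum value stand in for the paper's sparse-convolution encoder and patch-level masking.

# Hypothetical online/EMA-target encoders with a shared (symmetric) mask on a frame pair.
import copy
import torch
import torch.nn as nn

online = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 3, 3, padding=1))
target = copy.deepcopy(online)
for p in target.parameters():
    p.requires_grad_(False)

def ema_update(momentum=0.996):
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.mul_(momentum).add_(po.detach(), alpha=1 - momentum)

frame_t, frame_t1 = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.6).float()       # same (symmetric) mask for both frames
rec_t = online(frame_t * mask)                        # online branch reconstructs frame t
with torch.no_grad():
    rec_t1 = target(frame_t1 * mask)                  # EMA branch encodes frame t+1
loss = ((rec_t - rec_t1) ** 2).mean()                 # inter-frame reconstruction consistency
loss.backward()
ema_update()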
778
Taming Stable Diffusion for Text to 360 Panorama Image Generation
Cheng Zhang, Qianyi Wu, Camilo Cruz Gambardella, Xiaoshui Huang, Dinh Phung, Wanli Ouyang, Jianfei Cai
null
Generative models e.g. Stable Diffusion have enabled the creation of photorealistic images from text prompts. Yet the generation of 360-degree panorama images from text remains a challenge particularly due to the dearth of paired text-panorama data and the domain gap between panorama and perspective images. In this paper we introduce a novel dual-branch diffusion model named PanFusion to generate a 360-degree image from a text prompt. We leverage the stable diffusion model as one branch to provide prior knowledge in natural image generation and register it to another panorama branch for holistic image generation. We propose a unique cross-attention mechanism with projection awareness to minimize distortion during the collaborative denoising process. Our experiments validate that PanFusion surpasses existing methods and thanks to its dual-branch structure can integrate additional constraints like room layout for customized panorama outputs.
[]
[]
[]
[]
778
779
3DSFLabelling: Boosting 3D Scene Flow Estimation by Pseudo Auto-labelling
http://arxiv.org/abs/2402.18146
Chaokang Jiang, Guangming Wang, Jiuming Liu, Hesheng Wang, Zhuang Ma, Zhenqiang Liu, Zhujin Liang, Yi Shan, Dalong Du
2,402.18146
Learning 3D scene flow from LiDAR point clouds presents significant difficulties including poor generalization from synthetic datasets to real scenes scarcity of real-world 3D labels and poor performance on real sparse LiDAR point clouds. We present a novel approach from the perspective of auto-labelling aiming to generate a large number of 3D scene flow pseudo labels for real-world LiDAR point clouds. Specifically we employ the assumption of rigid body motion to simulate potential object-level rigid movements in autonomous driving scenarios. By updating different motion attributes for multiple anchor boxes the rigid motion decomposition is obtained for the whole scene. Furthermore we developed a novel 3D scene flow data augmentation method for global and local motion. By perfectly synthesizing target point clouds based on augmented motion parameters we easily obtain lots of 3D scene flow labels in point clouds highly consistent with real scenarios. On multiple real-world datasets including LiDAR KITTI nuScenes and Argoverse our method outperforms all previous supervised and unsupervised methods without requiring manual labelling. Impressively our method achieves a tenfold reduction in EPE3D metric on the LiDAR KITTI dataset reducing it from 0.190m to a mere 0.008m error.
[]
[]
[]
[]
779
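The numpy sketch below illustrates the rigid-motion assumption behind the pseudo-labelling described above: points falling inside an (assumed) anchor box are moved by a sampled rotation and translation, and the displacement is taken as the scene-flow label while the synthesized target frame is the moved point cloud. Box handling, the sampled motion, and the single-object setup are simplifications.

# Illustrative rigid-motion pseudo label for one anchor box (toy data, not the paper's pipeline).
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-10, 10, size=(2048, 3))                               # toy LiDAR sweep
in_box = np.all(np.abs(points - np.array([2.0, 1.0, 0.0])) < 1.5, axis=1)   # one axis-aligned anchor box

theta = np.deg2rad(5.0)                                   # sampled yaw for this "object"
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.8, 0.1, 0.0])                             # sampled translation

flow = np.zeros_like(points)                              # background: zero flow (static scene)
flow[in_box] = points[in_box] @ R.T + t - points[in_box]  # rigid displacement as pseudo label
target_points = points + flow                             # synthesized target point cloud
print(flow[in_box][:3])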
780
Unsigned Orthogonal Distance Fields: An Accurate Neural Implicit Representation for Diverse 3D Shapes
http://arxiv.org/abs/2403.01414
Yujie Lu, Long Wan, Nayu Ding, Yulong Wang, Shuhan Shen, Shen Cai, Lin Gao
2,403.01414
Neural implicit representation of geometric shapes has witnessed considerable advancements in recent years. However, common distance-field-based implicit representations, specifically the signed distance field (SDF) for watertight shapes or the unsigned distance field (UDF) for arbitrary shapes, routinely suffer from degraded reconstruction accuracy when converting to explicit surface points and meshes. In this paper, we introduce a novel neural implicit representation based on unsigned orthogonal distance fields (UODFs). In UODFs, the minimal unsigned distance from any spatial point to the shape surface is defined solely in one orthogonal direction, contrasting with the multi-directional determination made by SDF and UDF. Consequently, every point in the 3D UODFs can directly access its closest surface points along three orthogonal directions. This distinctive feature enables accurate reconstruction of surface points without interpolation errors. We verify the effectiveness of UODFs through a range of reconstruction examples, extending from simple watertight or non-watertight shapes to complex shapes that include hollows, internal or assembling structures.
[]
[]
[]
[]
780
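To make the UODF read-out concrete, the sketch below uses an analytic unit sphere as a stand-in for a learned field: for a query point, each of the three orthogonal fields returns an unsigned distance along one axis, and the nearest surface hit is recovered by stepping along that axis. The interface and the sphere example are assumptions for illustration.

# Analytic stand-in for a learned UODF of a unit sphere (assumed interface, not the paper's code).
import numpy as np

def uodf_sphere(p, axis, radius=1.0):
    # minimal unsigned distance from p to the sphere surface along the given axis direction
    rest = np.delete(p, axis)
    if np.sum(rest ** 2) > radius ** 2:
        return None                              # this axis-aligned line misses the sphere
    root = np.sqrt(radius ** 2 - np.sum(rest ** 2))
    return min(abs(root - p[axis]), abs(-root - p[axis]))

q = np.array([0.3, 0.2, 0.1])                    # query point inside the sphere
for axis in range(3):
    d = uodf_sphere(q, axis)
    for sign in (+1, -1):
        hit = q.copy()
        hit[axis] += sign * d
        if abs(np.linalg.norm(hit) - 1.0) < 1e-6:  # keep the direction that lands on the surface
            print(f"axis {axis}: surface point {hit}")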
781
Modular Blind Video Quality Assessment
http://arxiv.org/abs/2402.19276
Wen Wen, Mu Li, Yabin Zhang, Yiting Liao, Junlin Li, Li Zhang, Kede Ma
2,402.19276
Blind video quality assessment (BVQA) plays a pivotal role in evaluating and improving the viewing experience of end-users across a wide range of video-based platforms and services. Contemporary deep learning-based models primarily analyze video content in its aggressively subsampled format, while being blind to the impact of the actual spatial resolution and frame rate on video quality. In this paper, we propose a modular BVQA model and a method of training it to improve its modularity. Our model comprises a base quality predictor, a spatial rectifier, and a temporal rectifier, responding to visual content and distortion, spatial resolution, and frame rate changes on video quality, respectively. During training, the spatial and temporal rectifiers are dropped out with some probability to render the base quality predictor a standalone BVQA model, which should work better with the rectifiers. Extensive experiments on both professionally-generated content and user-generated content video databases show that our quality model achieves superior or comparable performance to current methods. Additionally, the modularity of our model offers an opportunity to analyze existing video quality databases in terms of their spatial and temporal complexity.
[]
[]
[]
[]
781
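A hypothetical sketch of the modular design described above: a base quality score is corrected by spatial and temporal rectifiers, and during training each rectifier is randomly dropped (replaced by the identity) so the base predictor also works standalone. The multiplicative fusion, feature dimensions, and drop probability are assumptions, not the paper's exact formulation.

# Hypothetical modular BVQA head with rectifier dropout (assumed fusion and dimensions).
import torch
import torch.nn as nn

class ModularBVQA(nn.Module):
    def __init__(self, feat_dim=64, p_drop=0.5):
        super().__init__()
        self.base = nn.Linear(feat_dim, 1)          # base quality from subsampled content features
        self.spatial = nn.Linear(4, 1)              # rectifier from resolution statistics
        self.temporal = nn.Linear(4, 1)             # rectifier from frame-rate statistics
        self.p_drop = p_drop

    def forward(self, content_feat, spatial_feat, temporal_feat):
        q = self.base(content_feat)
        s = torch.exp(self.spatial(spatial_feat))   # positive multiplicative correction
        t = torch.exp(self.temporal(temporal_feat))
        if self.training and torch.rand(()) < self.p_drop:
            s = torch.ones_like(s)                  # dropped: base predictor must stand alone
        if self.training and torch.rand(()) < self.p_drop:
            t = torch.ones_like(t)
        return q * s * t

model = ModularBVQA().train()
print(model(torch.randn(2, 64), torch.randn(2, 4), torch.randn(2, 4)).shape)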
782
Question Aware Vision Transformer for Multimodal Reasoning
http://arxiv.org/abs/2402.05472
Roy Ganz, Yair Kittenplon, Aviad Aberdam, Elad Ben Avraham, Oren Nuriel, Shai Mazor, Ron Litman
2,402.05472
Vision-Language (VL) models have gained significant research focus, enabling remarkable advances in multimodal reasoning. These architectures typically comprise a vision encoder, a Large Language Model (LLM), and a projection module that aligns visual features with the LLM's representation space. Despite their success, a critical limitation persists: the vision encoding process remains decoupled from user queries, often given in the form of image-related questions. Consequently, the resulting visual features may not be optimally attuned to the query-specific elements of the image. To address this, we introduce QA-ViT, a Question Aware Vision Transformer approach for multimodal reasoning, which embeds question awareness directly within the vision encoder. This integration results in dynamic visual features that focus on the image aspects relevant to the posed question. QA-ViT is model-agnostic and can be incorporated efficiently into any VL architecture. Extensive experiments demonstrate the effectiveness of applying our method to various multimodal architectures, leading to consistent improvement across diverse tasks and showcasing its potential for enhancing visual and scene-text understanding.
[]
[]
[]
[]
782
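The sketch below shows one straightforward way to embed question awareness inside a vision encoder block, in the spirit of the QA-ViT entry above: visual tokens cross-attend to encoded question tokens after their usual self-attention. Layer sizes, the placement of the cross-attention, and the question projection are illustrative assumptions, not the published architecture.

# Hypothetical question-aware vision block (assumed shapes and layer sizes).
import torch
import torch.nn as nn

class QuestionAwareBlock(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, visual_tokens, question_tokens):
        v = visual_tokens + self.self_attn(visual_tokens, visual_tokens, visual_tokens)[0]
        v = v + self.cross_attn(v, question_tokens, question_tokens)[0]  # condition vision on the question
        return v + self.mlp(v)

visual = torch.randn(2, 196, 256)     # 14x14 patch tokens
question = torch.randn(2, 12, 256)    # embedded question tokens (assumed already projected to dim)
print(QuestionAwareBlock()(visual, question).shape)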
783
OST: Refining Text Knowledge with Optimal Spatio-Temporal Descriptor for General Video Recognition
http://arxiv.org/abs/2312.00096
Tongjia Chen, Hongshan Yu, Zhengeng Yang, Zechuan Li, Wei Sun, Chen Chen
2,312.00096
Due to the resource-intensive nature of training vision-language models on expansive video data, a majority of studies have centered on adapting pre-trained image-language models to the video domain. Dominant pipelines propose to tackle the visual discrepancies with additional temporal learners, while overlooking the substantial discrepancy between web-scale descriptive narratives and concise action category names, leading to a less distinct semantic space and potential performance limitations. In this work, we prioritize the refinement of text knowledge to facilitate generalizable video recognition. To address the limitations of the less distinct semantic space of category names, we prompt a large language model (LLM) to augment action class names into Spatio-Temporal Descriptors, thus bridging the textual discrepancy and serving as a knowledge base for general recognition. Moreover, to assign the best descriptors to different video instances, we propose the Optimal Descriptor Solver, formulating video recognition as solving the optimal matching flow across frame-level representations and descriptors. Comprehensive evaluations in zero-shot, few-shot, and fully supervised video recognition highlight the effectiveness of our approach. Our best model achieves a state-of-the-art zero-shot accuracy of 75.1% on Kinetics-600.
[]
[]
[]
[]
783
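Since the entry above formulates descriptor assignment as an optimal matching flow, the sketch below shows a generic entropy-regularized Sinkhorn solver matching frame-level features to class descriptors under uniform marginals. Shapes, the cost (1 minus cosine similarity), and the regularization strength are assumptions; this is not the paper's Optimal Descriptor Solver.

# Generic Sinkhorn matching between frame embeddings and descriptors (illustrative only).
import torch

def sinkhorn(cost, n_iters=100, eps=0.1):
    K = torch.exp(-cost / eps)                              # Gibbs kernel
    a = torch.full((cost.shape[0],), 1.0 / cost.shape[0])   # uniform marginal over frames
    b = torch.full((cost.shape[1],), 1.0 / cost.shape[1])   # uniform marginal over descriptors
    v = torch.ones(cost.shape[1])
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return torch.diag(u) @ K @ torch.diag(v)                # transport plan: rows frames, cols descriptors

frames = torch.nn.functional.normalize(torch.randn(8, 512), dim=-1)        # 8 frame embeddings
descriptors = torch.nn.functional.normalize(torch.randn(4, 512), dim=-1)   # 4 spatio-temporal descriptors
plan = sinkhorn(1.0 - frames @ descriptors.T)               # cost = 1 - cosine similarity
print(plan.shape, float(plan.sum()))                        # (8, 4), total mass ~= 1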
784
Habitat Synthetic Scenes Dataset (HSSD-200): An Analysis of 3D Scene Scale and Realism Tradeoffs for ObjectGoal Navigation
http://arxiv.org/abs/2306.11290
Mukul Khanna, Yongsen Mao, Hanxiao Jiang, Sanjay Haresh, Brennan Shacklett, Dhruv Batra, Alexander Clegg, Eric Undersander, Angel X. Chang, Manolis Savva
2,306.1129
We contribute the Habitat Synthetic Scene Dataset a dataset of 211 high-quality 3D scenes and use it to test navigation agent generalization to realistic 3D environments. Our dataset represents real interiors and contains a diverse set of 18656 models of real-world objects. We investigate the impact of synthetic 3D scene dataset scale and realism on the task of training embodied agents to find and navigate to objects (ObjectGoal navigation). By comparing to synthetic 3D scene datasets from prior work we find that scale helps in generalization but the benefits quickly saturate making visual fidelity and correlation to real-world scenes more important. Our experiments show that agents trained on our smaller-scale dataset can match or outperform agents trained on much larger datasets. Surprisingly we observe that agents trained on just 122 scenes from our dataset outperform agents trained on 10000 scenes from the ProcTHOR-10K dataset in terms of zero-shot generalization in real-world scanned environments.
[]
[]
[]
[]
784
785
OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation
Bohao Peng, Xiaoyang Wu, Li Jiang, Yukang Chen, Hengshuang Zhao, Zhuotao Tian, Jiaya Jia
null
The boom in 3D recognition in the 2020s began with the introduction of point cloud transformers. They quickly overwhelmed sparse CNNs and became the state-of-the-art models, especially in 3D semantic segmentation. However, sparse CNNs remain valuable networks due to their efficiency and ease of application. In this work, we reexamine the design distinctions and test the limits of what a sparse CNN can achieve. We discover that the key to the performance difference is adaptivity. Specifically, we propose two key components, i.e. adaptive receptive fields (spatially) and adaptive relation, to bridge the gap. This exploration led to the creation of Omni-Adaptive 3D CNNs (OA-CNNs), a family of networks that integrates a lightweight module to greatly enhance the adaptivity of sparse CNNs at minimal computational cost. Without any self-attention modules, OA-CNNs favorably surpass point transformers in terms of accuracy in both indoor and outdoor scenes, with much less latency and memory cost. Notably, they achieve 76.1%, 78.9%, and 70.6% mIoU on the ScanNet v2, nuScenes, and SemanticKITTI validation benchmarks, respectively, while running up to 5x faster than transformer counterparts. This revelation highlights the potential of pure sparse CNNs to outperform transformer-related networks. Our code is built upon Pointcept, which is available at https://github.com/Pointcept/Pointcept.
[]
[]
[]
[]
785
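A toy dense 2D sketch of spatially adaptive receptive fields, the first component named above: parallel convolutions with different kernel sizes are mixed by per-location softmax gates. The paper operates on sparse 3D voxels; the dense layout, branch kernels, and gating form here are illustrative only.

# Toy dense stand-in for spatially adaptive receptive fields (not the sparse OA-CNN module).
import torch
import torch.nn as nn

class AdaptiveRFBlock(nn.Module):
    def __init__(self, ch=32, kernels=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(nn.Conv2d(ch, ch, k, padding=k // 2) for k in kernels)
        self.gate = nn.Conv2d(ch, len(kernels), 1)        # per-location weights over receptive fields

    def forward(self, x):
        w = torch.softmax(self.gate(x), dim=1)            # (B, num_branches, H, W)
        out = sum(w[:, i:i + 1] * branch(x) for i, branch in enumerate(self.branches))
        return out + x                                    # residual connection

x = torch.randn(1, 32, 16, 16)
print(AdaptiveRFBlock()(x).shape)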
786
RELI11D: A Comprehensive Multimodal Human Motion Dataset and Method
http://arxiv.org/abs/2403.19501
Ming Yan, Yan Zhang, Shuqiang Cai, Shuqi Fan, Xincheng Lin, Yudi Dai, Siqi Shen, Chenglu Wen, Lan Xu, Yuexin Ma, Cheng Wang
2,403.19501
Comprehensive capturing of human motions requires both accurate capture of complex poses and precise localization of the human within scenes. Most HPE datasets and methods primarily rely on RGB, LiDAR, or IMU data. However, solely using these modalities or a combination of them may not be adequate for HPE, particularly for complex and fast movements. For holistic human motion understanding, we present RELI11D, a high-quality multimodal human motion dataset that involves a LiDAR, an IMU system, an RGB camera, and an event camera. It records the motions of 10 actors performing 5 sports in 7 scenes, including 3.32 hours of synchronized LiDAR point clouds, IMU measurement data, RGB videos, and event streams. Through extensive experiments, we demonstrate that RELI11D presents considerable challenges and opportunities, as it contains many rapid and complex motions that require precise localization. To address the challenge of integrating different modalities, we propose LEIR, a multimodal baseline that effectively utilizes LiDAR point clouds, event streams, and RGB through our cross-attention fusion strategy. We show that LEIR exhibits promising results for rapid motions and daily motions, and that utilizing the characteristics of multiple modalities can indeed improve HPE performance. Both the dataset and source code will be released publicly to the research community, fostering collaboration and enabling further exploration in this field.
[]
[]
[]
[]
786
787
Generative Image Dynamics
http://arxiv.org/abs/2309.07906
Zhengqi Li, Richard Tucker, Noah Snavely, Aleksander Holynski
2,309.07906
We present an approach to modeling an image-space prior on scene motion. Our prior is learned from a collection of motion trajectories extracted from real video sequences depicting natural oscillatory dynamics of objects such as trees, flowers, candles, and clothes swaying in the wind. We model dense long-term motion in the Fourier domain as spectral volumes, which we find are well-suited to prediction with diffusion models. Given a single image, our trained model uses a frequency-coordinated diffusion sampling process to predict a spectral volume, which can be converted into a motion texture that spans an entire video. Along with an image-based rendering module, the predicted motion representation can be used for a number of downstream applications, such as turning still images into seamlessly looping videos or allowing users to interact with objects in real images, producing realistic simulated dynamics (by interpreting the spectral volumes as image-space modal bases). See our project page for more results: generative-dynamics.github.io
[]
[]
[]
[]
787
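The numpy sketch below illustrates how a (toy) spectral volume can be expanded into a motion texture: per-pixel complex Fourier coefficients at a few temporal frequencies are combined with complex exponentials to produce a per-frame displacement for every pixel. Image size, the number of frequency terms, and the random coefficients are placeholder assumptions, not the paper's representation details.

# Toy spectral volume -> motion texture expansion (placeholder coefficients, illustrative shapes).
import numpy as np

H, W, K, T = 8, 8, 16, 60                  # tiny image, K frequency terms, T output frames
rng = np.random.default_rng(0)
spectral = (rng.normal(size=(H, W, K, 2)) + 1j * rng.normal(size=(H, W, K, 2)))
spectral /= (1 + np.arange(K))[None, None, :, None]           # damp higher frequencies

freqs = np.arange(K)                        # temporal frequency index of each coefficient
t = np.arange(T)[:, None]                   # frame indices
basis = np.exp(2j * np.pi * freqs[None, :] * t / T)           # (T, K) complex exponentials

# motion texture: per-frame, per-pixel (dx, dy) displacement
motion = np.real(np.einsum('tk,hwkc->thwc', basis, spectral))  # (T, H, W, 2)
print(motion.shape, np.ptp(motion[:, 4, 4, 0]))                # x-trajectory range of one pixel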
788
One-Class Face Anti-spoofing via Spoof Cue Map-Guided Feature Learning
Pei-Kai Huang, Cheng-Hsuan Chiang, Tzu-Hsien Chen, Jun-Xiong Chong, Tyng-Luh Liu, Chiou-Ting Hsu
null
Many face anti-spoofing (FAS) methods have focused on learning discriminative features from both live and spoof training data to strengthen the security of face recognition systems. However, since not every possible attack type is available in the training stage, these FAS methods usually fail to detect unseen attacks at inference time. In comparison, one-class FAS, where the training data come only from live faces, aims to detect whether a test face image belongs to the live class or not. In this paper, we propose a novel One-Class Spoof Cue Map estimation Network (OC-SCMNet) to address the one-class FAS detection problem. Our first goal is to learn to extract latent spoof features from live images such that their estimated Spoof Cue Maps (SCMs) have zero responses. To avoid being trapped in a trivial solution, we devise a novel SCM-guided feature learning scheme that combines many SCMs as pseudo ground truths to guide a conditional generator to generate latent spoof features for spoof data. Our second goal is to approximately simulate potential out-of-distribution spoof attacks. To this end, we propose using a memory bank to dynamically preserve a set of sufficiently "independent" latent spoof features, encouraging the generator to probe the latent spoof feature space. Extensive experiments conducted on eight FAS benchmark datasets demonstrate that the proposed OC-SCMNet not only outperforms previous one-class FAS methods but also achieves performance comparable to state-of-the-art two-class FAS methods. The codes are available at https://github.com/Pei-KaiHuang/CVPR24_OC_SCMNet.
[]
[]
[]
[]
788
789
On the Test-Time Zero-Shot Generalization of Vision-Language Models: Do We Really Need Prompt Learning?
http://arxiv.org/abs/2405.02266
Maxime Zanella, Ismail Ben Ayed
2,405.02266
The development of large vision-language models notably CLIP has catalyzed research into effective adaptation techniques with a particular focus on soft prompt tuning. Conjointly test-time augmentation which utilizes multiple augmented views of a single image to enhance zero-shot generalization is emerging as a significant area of interest. This has predominantly directed research efforts towards test-time prompt tuning. In contrast we introduce a robust MeanShift for Test-time Augmentation (MTA) which surpasses prompt-based methods without requiring this intensive training procedure. This positions MTA as an ideal solution for both standalone and API-based applications. Additionally our method does not rely on ad hoc rules (e.g. confidence threshold) used in some previous test-time augmentation techniques to filter the augmented views. Instead MTA incorporates a quality assessment variable for each view directly into its optimization process termed as the inlierness score. This score is jointly optimized with a density mode seeking process leading to an efficient training- and hyperparameter-free approach. We extensively benchmark our method on 15 datasets and demonstrate MTA's superiority and computational efficiency. Deployed easily as plug-and-play module on top of zero-shot models and state-of-the-art few-shot methods MTA shows systematic and consistent improvements.
[]
[]
[]
[]
789
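As a schematic of the MeanShift-style aggregation described above, the sketch below runs a weighted Gaussian mean shift over augmented-view embeddings on the unit sphere, with fixed per-view "inlierness" weights. In the paper those weights are optimized jointly with the mode seeking; here they are random placeholders, and the bandwidth is an assumption.

# Weighted mean-shift mode seeking over augmented-view embeddings (fixed weights, illustrative).
import numpy as np

rng = np.random.default_rng(0)
views = rng.normal(size=(64, 512))
views /= np.linalg.norm(views, axis=1, keepdims=True)       # embeddings of augmented views
inlierness = rng.uniform(0.1, 1.0, size=64)                 # assumed per-view quality scores

def mean_shift(x, w, bandwidth=0.5, iters=20):
    mode = np.average(x, axis=0, weights=w)
    for _ in range(iters):
        d2 = ((x - mode) ** 2).sum(axis=1)
        k = w * np.exp(-d2 / (2 * bandwidth ** 2))          # Gaussian kernel times inlierness
        mode = (k[:, None] * x).sum(axis=0) / k.sum()
        mode /= np.linalg.norm(mode)                        # stay on the unit sphere like the views
    return mode

print(mean_shift(views, inlierness)[:5])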
790
InteractDiffusion: Interaction Control in Text-to-Image Diffusion Models
http://arxiv.org/abs/2312.05849
Jiun Tian Hoe, Xudong Jiang, Chee Seng Chan, Yap-Peng Tan, Weipeng Hu
2,312.05849
Large-scale text-to-image (T2I) diffusion models have showcased incredible capabilities in generating coherent images based on textual descriptions, enabling vast applications in content generation. While recent advancements have introduced control over factors such as object localization, posture, and image contours, a crucial gap remains in our ability to control the interactions between objects in the generated content. Well-controlled interactions in generated images could yield meaningful applications, such as creating realistic scenes with interacting characters. In this work, we study the problem of conditioning T2I diffusion models with Human-Object Interaction (HOI) information, consisting of a triplet label (person, action, object) and corresponding bounding boxes. We propose a pluggable interaction control model, called InteractDiffusion, that extends existing pre-trained T2I diffusion models to enable them to be better conditioned on interactions. Specifically, we tokenize the HOI information and learn their relationships via interaction embeddings. A conditioning self-attention layer is trained to map HOI tokens to visual tokens, thereby better conditioning the visual tokens in existing T2I diffusion models. Our model attains the ability to control the interaction and location in existing T2I diffusion models, and outperforms existing baselines by a large margin in HOI detection score, as well as in fidelity measured by FID and KID. Project page: https://jiuntian.github.io/interactdiffusion.
[]
[]
[]
[]
790
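One plausible way to realize the conditioning layer described above is a gated attention layer that lets visual tokens attend jointly over themselves and the HOI tokens, with a zero-initialized gate so the pretrained model is preserved at the start of training. The sketch below is an assumption-laden illustration, not the released InteractDiffusion layer.

# Hypothetical gated HOI-conditioning layer over concatenated visual and HOI tokens.
import torch
import torch.nn as nn

class HOIConditioningLayer(nn.Module):
    def __init__(self, dim=320, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))     # starts closed so pretrained behavior is preserved

    def forward(self, visual_tokens, hoi_tokens):
        x = torch.cat([visual_tokens, hoi_tokens], dim=1)          # attend jointly over both token sets
        out = self.attn(x, x, x)[0][:, : visual_tokens.shape[1]]   # keep only the visual positions
        return visual_tokens + torch.tanh(self.gate) * out

visual = torch.randn(2, 256, 320)     # 16x16 latent-space visual tokens (kept small for the sketch)
hoi = torch.randn(2, 3, 320)          # embedded (person, action, object) tokens with box information
print(HOIConditioningLayer()(visual, hoi).shape)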
791
NViST: In the Wild New View Synthesis from a Single Image with Transformers
http://arxiv.org/abs/2312.08568
Wonbong Jang, Lourdes Agapito
2,312.08568
We propose NViST a transformer-based model for efficient and generalizable novel-view synthesis from a single image for real-world scenes. In contrast to many methods that are trained on synthetic data object-centred scenarios or in a category-specific manner NViST is trained on MVImgNet a large-scale dataset of casually-captured real-world videos of hundreds of object categories with diverse backgrounds. NViST transforms image inputs directly into a radiance field conditioned on camera parameters via adaptive layer normalisation. In practice NViST exploits fine-tuned masked autoencoder (MAE) features and translates them to 3D output tokens via cross-attention while addressing occlusions with self-attention. To move away from object-centred datasets and enable full scene synthesis NViST adopts a 6-DOF camera pose model and only requires relative pose dropping the need for canonicalization of the training data which removes a substantial barrier to it being used on casually captured datasets. We show results on unseen objects and categories from MVImgNet and even generalization to casual phone captures. We conduct qualitative and quantitative evaluations on MVImgNet and ShapeNet to show that our model represents a step forward towards enabling true in-the-wild generalizable novel-view synthesis from a single image. Project webpage: https://wbjang.github.io/nvist_webpage.
[]
[]
[]
[]
791
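The sketch below shows adaptive layer normalisation conditioned on camera parameters, the conditioning mechanism mentioned in the NViST entry above: a small linear head maps an (assumed) camera encoding to per-channel scale and shift applied after a parameter-free LayerNorm. Feature sizes and the 16-D camera encoding are illustrative assumptions.

# Hypothetical camera-conditioned adaptive layer normalisation.
import torch
import torch.nn as nn

class CameraAdaLN(nn.Module):
    def __init__(self, dim=256, cam_dim=16):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.to_scale_shift = nn.Linear(cam_dim, 2 * dim)

    def forward(self, tokens, cam_params):
        scale, shift = self.to_scale_shift(cam_params).chunk(2, dim=-1)
        return self.norm(tokens) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)

tokens = torch.randn(2, 196, 256)     # MAE feature tokens of the input image
cam = torch.randn(2, 16)              # encoded relative pose / focal length (assumed encoding)
print(CameraAdaLN()(tokens, cam).shape)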
792
Beyond Text: Frozen Large Language Models in Visual Signal Comprehension
http://arxiv.org/abs/2403.07874
Lei Zhu, Fangyun Wei, Yanye Lu
2,403.07874
In this work we investigate the potential of a large language model (LLM) to directly comprehend visual signals without the necessity of fine-tuning on multi-modal datasets. The foundational concept of our method views an image as a linguistic entity and translates it to a set of discrete words derived from the LLM's vocabulary. To achieve this we present the Vision-to-Language Tokenizer abbreviated as V2T Tokenizer which transforms an image into a "foreign language" with the combined aid of an encoder-decoder the LLM vocabulary and a CLIP model. With this innovative image encoding the LLM gains the ability not only for visual comprehension but also for image denoising and restoration in an auto-regressive fashion--crucially without any fine-tuning. We undertake rigorous experiments to validate our method encompassing understanding tasks like image recognition image captioning and visual question answering as well as image denoising tasks like inpainting outpainting deblurring and shift restoration. Code and models are available at https://github.com/zh460045050/V2L-Tokenizer.
[]
[]
[]
[]
792
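To make the "image as foreign language" idea above concrete, the sketch below snaps image patch features to their nearest entries in a (here random) vocabulary embedding table, producing discrete token ids and quantized embeddings. The encoder, vocabulary, and sizes are placeholders rather than the released V2T Tokenizer.

# Illustrative nearest-vocabulary quantization of patch features (placeholder encoder and vocab).
import torch
import torch.nn.functional as F

vocab_size, dim = 32000, 256
vocab_emb = F.normalize(torch.randn(vocab_size, dim), dim=-1)     # stand-in for LLM token embeddings
patch_feat = F.normalize(torch.randn(1, 196, dim), dim=-1)        # stand-in for encoded image patches

sim = patch_feat @ vocab_emb.T                                    # cosine similarity to every vocab entry
token_ids = sim.argmax(dim=-1)                                    # each patch becomes a "foreign word"
quantized = vocab_emb[token_ids]                                  # embeddings passed on to the decoder/LLM
print(token_ids.shape, quantized.shape)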
793
Rotated Multi-Scale Interaction Network for Referring Remote Sensing Image Segmentation
http://arxiv.org/abs/2312.12470
Sihan Liu, Yiwei Ma, Xiaoqing Zhang, Haowei Wang, Jiayi Ji, Xiaoshuai Sun, Rongrong Ji
2,312.1247
Referring Remote Sensing Image Segmentation (RRSIS) is a new challenge that combines computer vision and natural language processing. Traditional Referring Image Segmentation (RIS) approaches have been impeded by the complex spatial scales and orientations found in aerial imagery leading to suboptimal segmentation results. To address these challenges we introduce the Rotated Multi-Scale Interaction Network (RMSIN) an innovative approach designed for the unique demands of RRSIS. RMSIN incorporates an Intra-scale Interaction Module (IIM) to effectively address the fine-grained detail required at multiple scales and a Cross-scale Interaction Module (CIM) for integrating these details coherently across the network. Furthermore RMSIN employs an Adaptive Rotated Convolution (ARC) to account for the diverse orientations of objects a novel contribution that significantly enhances segmentation accuracy. To assess the efficacy of RMSIN we have curated an expansive dataset comprising 17402 image-caption-mask triplets which is unparalleled in terms of scale and variety. This dataset not only presents the model with a wide range of spatial and rotational scenarios but also establishes a stringent benchmark for the RRSIS task ensuring a rigorous evaluation of performance. Experimental evaluations demonstrate the exceptional performance of RMSIN surpassing existing state-of-the-art models by a significant margin. Datasets and code are available at https://github.com/Lsan2401/RMSIN.
[]
[]
[]
[]
793
794
GLACE: Global Local Accelerated Coordinate Encoding
Fangjinhua Wang, Xudong Jiang, Silvano Galliani, Christoph Vogel, Marc Pollefeys
null
Scene coordinate regression (SCR) methods are a family of visual localization methods that directly regress 2D-3D matches for camera pose estimation. They are effective in small-scale scenes but face significant challenges in large-scale scenes that are further amplified in the absence of ground truth 3D point clouds for supervision. Here the model can only rely on reprojection constraints and needs to implicitly triangulate the points. The challenges stem from a fundamental dilemma: The network has to be invariant to observations of the same landmark at different viewpoints and lighting conditions etc. but at the same time discriminate unrelated but similar observations. The latter becomes more relevant and severe in larger scenes. In this work we tackle this problem by introducing the concept of co-visibility to the network. We propose GLACE which integrates pre-trained global and local encodings and enables SCR to scale to large scenes with only a single small-sized network. Specifically we propose a novel feature diffusion technique that implicitly groups the reprojection constraints with co-visibility and avoids overfitting to trivial solutions. Additionally our position decoder parameterizes the output positions for large-scale scenes more effectively. Without using 3D models or depth maps for supervision our method achieves state-of-the-art results on large-scale scenes with a low-map-size model. On Cambridge landmarks with a single model we achieve a 17% lower median position error than Poker the ensemble variant of the state-of-the-art SCR method ACE. Code is available at: https://github.com/cvg/glace.
[]
[]
[]
[]
794
795
Emergent Open-Vocabulary Semantic Segmentation from Off-the-shelf Vision-Language Models
http://arxiv.org/abs/2311.17095
Jiayun Luo, Siddhesh Khandelwal, Leonid Sigal, Boyang Li
2,311.17095
From image-text pairs large-scale vision-language models (VLMs) learn to implicitly associate image regions with words which prove effective for tasks like visual question answering. However leveraging the learned association for open-vocabulary semantic segmentation remains a challenge. In this paper we propose a simple yet extremely effective training-free technique Plug-and-Play Open-Vocabulary Semantic Segmentation (PnP-OVSS) for this task. PnP-OVSS leverages a VLM with direct text-to-image cross-attention and an image-text matching loss. To balance between over-segmentation and under-segmentation we introduce Salience Dropout; by iteratively dropping patches that the model is most attentive to we are able to better resolve the entire extent of the segmentation mask. PnP-OVSS does not require any neural network training and performs hyperparameter tuning without the need for any segmentation annotations even for a validation set. PnP-OVSS demonstrates substantial improvements over comparable baselines (+29.4% mIoU on Pascal VOC +13.2% mIoU on Pascal Context +14.0% mIoU on MS COCO +2.4% mIoU on COCO Stuff) and even outperforms most baselines that conduct additional network training on top of pretrained VLMs. Our codebase is at https://github.com/letitiabanana/PnP-OVSS.
[]
[]
[]
[]
795
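A schematic of the Salience Dropout loop described above: in each round the currently most-attended patches are added to the mask and removed from consideration before attention is read again. In this placeholder the attention map is random and fixed, whereas the method re-queries the VLM after each drop; thresholds and the round count are assumptions.

# Schematic salience-dropout loop over a placeholder attention map (not real VLM attention).
import numpy as np

rng = np.random.default_rng(0)
attn = rng.random((24, 24))                  # cross-attention of one text query over image patches
keep = np.ones_like(attn, dtype=bool)
mask = np.zeros_like(attn, dtype=bool)       # accumulated segmentation proposal

for _ in range(3):
    scores = np.where(keep, attn, -np.inf)
    thresh = np.quantile(scores[keep], 0.9)  # top-10% most attended among remaining patches
    top = scores >= thresh
    mask |= top                              # add them to the mask ...
    keep &= ~top                             # ... and drop them before the next round
print(mask.mean())                           # fraction of patches selected after 3 rounds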
796
Localization Is All You Evaluate: Data Leakage in Online Mapping Datasets and How to Fix It
http://arxiv.org/abs/2312.06420
Adam Lilja, Junsheng Fu, Erik Stenborg, Lars Hammarstrand
2,312.0642
The task of online mapping is to predict a local map using current sensor observations e.g. from lidar and camera without relying on a pre-built map. State-of-the-art methods are based on supervised learning and are trained predominantly using two datasets: nuScenes and Argoverse 2. However these datasets revisit the same geographic locations across training validation and test sets. Specifically over 80% of nuScenes and 40% of Argoverse 2 validation and test samples are less than 5 m from a training sample. At test time the methods are thus evaluated more on how well they localize within a memorized implicit map built from the training data than on extrapolating to unseen locations. Naturally this data leakage causes inflated performance numbers and we propose geographically disjoint data splits to reveal the true performance in unseen environments. Experimental results show that methods perform considerably worse some dropping more than 45 mAP when trained and evaluated on proper data splits. Additionally a reassessment of prior design choices reveals diverging conclusions from those based on the original split. Notably the impact of lifting methods and the support from auxiliary tasks (e.g. depth supervision) on performance appears less substantial or follows a different trajectory than previously perceived.
[]
[]
[]
[]
796
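The leakage statistic quoted above can be illustrated with a few lines of code: measure, for each validation pose, the distance to the nearest training pose and report the fraction under 5 m. The toy 2-D positions below are synthetic, and the sketch assumes scipy is available for the KD-tree.

# Toy geographic-overlap check: fraction of validation samples within 5 m of a training sample.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
train_xy = rng.uniform(0, 1000, size=(5000, 2))                # toy ego positions of training samples (metres)
val_xy = train_xy[:1500] + rng.normal(0, 2, size=(1500, 2))    # revisited locations, mimicking the overlap

dist, _ = cKDTree(train_xy).query(val_xy)                      # distance to the nearest training sample
print("fraction of val samples within 5 m of training:", float((dist < 5.0).mean()))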
797
Alchemist: Parametric Control of Material Properties with Diffusion Models
http://arxiv.org/abs/2312.02970
Prafull Sharma, Varun Jampani, Yuanzhen Li, Xuhui Jia, Dmitry Lagun, Fredo Durand, Bill Freeman, Mark Matthews
2,312.0297
We propose a method to control material attributes of objects like roughness metallic albedo and transparency in real images. Our method capitalizes on the generative prior of text-to-image models known for photorealism employing a scalar value and instructions to alter low-level material properties. Addressing the lack of datasets with controlled material attributes we generated an object-centric synthetic dataset with physically-based materials. Fine-tuning a modified pre-trained text-to-image model on this synthetic dataset enables us to edit material properties in real-world images while preserving all other attributes. We show the potential application of our model to material edited NeRFs.
[]
[]
[]
[]
797
798
Step Differences in Instructional Video
http://arxiv.org/abs/2404.16222
Tushar Nagarajan, Lorenzo Torresani
2,404.16222
Comparing a user video to a reference how-to video is a key requirement for AR/VR technology delivering personalized assistance tailored to the user's progress. However current approaches for language-based assistance can only answer questions about a single video. We propose an approach that first automatically generates large amounts of visual instruction tuning data involving pairs of videos from HowTo100M by leveraging existing step annotations and accompanying narrations and then trains a video-conditioned language model to jointly reason across multiple raw videos. Our model achieves state-of-the-art performance at identifying differences between video pairs and ranking videos based on the severity of these differences and shows promising ability to perform general reasoning over multiple videos.
[]
[]
[]
[]
798
799
Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data
http://arxiv.org/abs/2401.10891
Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao
2,401.10891
This work presents Depth Anything, a highly practical solution for robust monocular depth estimation. Without pursuing novel technical modules, we aim to build a simple yet powerful foundation model dealing with any images under any circumstances. To this end, we scale up the dataset by designing a data engine to collect and automatically annotate large-scale unlabeled data (~62M), which significantly enlarges the data coverage and is thus able to reduce the generalization error. We investigate two simple yet effective strategies that make data scaling-up promising. First, a more challenging optimization target is created by leveraging data augmentation tools. It compels the model to actively seek extra visual knowledge and acquire robust representations. Second, an auxiliary supervision is developed to enforce the model to inherit rich semantic priors from pre-trained encoders. We evaluate its zero-shot capabilities extensively, including on six public datasets and randomly captured photos. It demonstrates impressive generalization ability. Further, through fine-tuning it with metric depth information from NYUv2 and KITTI, new SOTAs are set. Our better depth model also results in a better depth-conditioned ControlNet. Our models are released at https://github.com/LiheYoung/Depth-Anything.
[]
[]
[]
[]
799
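A rough sketch of the auxiliary semantic supervision mentioned above: intermediate features of the depth model are aligned to those of a frozen pre-trained encoder with a cosine loss. Both networks below are tiny stand-ins, and the choice of layer, feature dimension, and loss weighting are assumptions rather than the paper's recipe.

# Illustrative feature-alignment loss to a frozen pre-trained encoder (toy networks).
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 384, 1))
frozen = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 384, 1)).eval()
for p in frozen.parameters():
    p.requires_grad_(False)

x = torch.rand(2, 3, 64, 64)
f_s = F.normalize(student(x), dim=1)
with torch.no_grad():
    f_t = F.normalize(frozen(x), dim=1)
align_loss = (1 - (f_s * f_t).sum(dim=1)).mean()      # 1 - cosine similarity per spatial location
align_loss.backward()
print(float(align_loss))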