Dataset schema (field: dtype, value range):

bibtex_url: null
proceedings: stringlengths (42 to 42)
bibtext: stringlengths (215 to 445)
abstract: stringlengths (820 to 2.37k)
title: stringlengths (24 to 147)
authors: sequencelengths (1 to 13)
id: stringclasses (1 value)
type: stringclasses (2 values)
arxiv_id: stringlengths (0 to 10)
GitHub: sequencelengths (1 to 1)
paper_page: stringclasses (33 values)
n_linked_authors: int64 (-1 to 4)
upvotes: int64 (-1 to 21)
num_comments: int64 (-1 to 4)
n_authors: int64 (-1 to 11)
Models: sequencelengths (0 to 1)
Datasets: sequencelengths (0 to 1)
Spaces: sequencelengths (0 to 4)
old_Models: sequencelengths (0 to 1)
old_Datasets: sequencelengths (0 to 1)
old_Spaces: sequencelengths (0 to 4)
paper_page_exists_pre_conf: int64 (0 to 1)
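The fields above are the per-record schema of this dump; each record below repeats them in the same order (fields with empty values, such as a missing arxiv_id, simply have no line). As a minimal sketch, assuming the records are also hosted as a standard Hugging Face `datasets` repository (the repository id below is a hypothetical placeholder), they could be loaded and inspected like this:

```python
from datasets import load_dataset

# Hypothetical placeholder -- substitute the actual dataset repository id.
REPO_ID = "your-namespace/acmmm-2024-accepted-papers"

ds = load_dataset(REPO_ID, split="train")
print(ds.column_names)        # ['bibtex_url', 'proceedings', 'bibtext', 'abstract', ...]

record = ds[0]
print(record["title"])        # paper title
print(record["proceedings"])  # OpenReview forum URL
print(record["type"])         # 'poster' or 'oral'
print(record["authors"])      # list of author names
```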
null
https://openreview.net/forum?id=ufL0BQdR9L
@inproceedings{ li2024crossmodal, title={Cross-Modal Meta Consensus for Heterogeneous Federated Learning}, author={Shuai Li and Fan Qi and Zixin Zhang and Changsheng Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ufL0BQdR9L} }
In the evolving landscape of federated learning (FL), the integration of multimodal data presents both unprecedented opportunities and significant challenges. Existing work falls short of meeting the growing demand for systems that can efficiently handle diverse tasks and modalities in rapidly changing environments. We propose a meta-learning strategy tailored for Multimodal Federated Learning (MFL) in a multitask setting, which harmonizes intra-modal and inter-modal feature spaces through the Cross-Modal Meta Consensus. This innovative approach enables seamless integration and transfer of knowledge across different data types, enhancing task personalization within modalities and facilitating effective cross-modality knowledge sharing. Additionally, we introduce Gradient Consistency-based Clustering for multimodal convergence, specifically designed to resolve conflicts at meta-initialization points arising from diverse modality distributions, supported by theoretical guarantees. Our approach, evaluated as $M^{3}Fed$ on five federated datasets, with at most four modalities and four downstream tasks, demonstrates strong performance across diverse data distributions, affirming its effectiveness in multimodal federated learning. The code is available at https://anonymous.4open.science/r/M3Fed-44DB.
Cross-Modal Meta Consensus for Heterogeneous Federated Learning
[ "Shuai Li", "Fan Qi", "Zixin Zhang", "Changsheng Xu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=uf9EEOkcYR
@inproceedings{ peng2024group, title={Group Vision Transformer}, author={Yaopeng Peng and Milan Sonka and Danny Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=uf9EEOkcYR} }
The Vision Transformer has attained remarkable success in various computer vision applications. However, the large computational costs and complex design limit its ability to handle large feature maps. Existing research predominantly focuses on constraining attention to small local regions, which reduces the number of tokens involved in the attention computation while overlooking the computational demands caused by the feed-forward layer in the Vision Transformer block. In this paper, we introduce Group Vision Transformer (GVT), a relatively simple and efficient variant of Vision Transformer, aiming to improve attention computation. The core idea of our model is to divide and group the entire Transformer layer, instead of only the attention part, into multiple independent branches. This approach offers two advantages: (1) It helps reduce parameters and computational complexity; (2) it enhances the diversity of the learned features. We conduct a comprehensive analysis of the impact of different numbers of groups on model performance, as well as their influence on parameters and computational complexity. Our proposed GVT demonstrates competitive performance in several common vision tasks. For example, our GVT-Tiny model achieves 84.8% top-1 accuracy on ImageNet-1K, 51.4% box mAP and 45.2% mask mAP on MS COCO object detection and instance segmentation, and 50.1% mIoU on ADE20K semantic segmentation, outperforming the CAFormer-S36 model by 0.3% in ImageNet-1K top-1 accuracy, 1.2% in box mAP, 1.0% in mask mAP on MS COCO object detection and instance segmentation, and 1.2% in mIoU on ADE20K semantic segmentation, with similar model parameters and computational complexity. Code is accessible at https://github.com/AnonymousAccount6688/GVT.
Group Vision Transformer
[ "Yaopeng Peng", "Milan Sonka", "Danny Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
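The GVT abstract above hinges on one concrete idea: splitting the entire Transformer layer, attention and feed-forward alike, into independent channel groups. Below is a minimal sketch of that grouping, not the authors' implementation; the use of `nn.MultiheadAttention`, the pre-norm layout, and the MLP expansion ratio are assumptions.

```python
import torch
import torch.nn as nn

class GroupedTransformerLayer(nn.Module):
    """Split channels into independent branches; each branch runs its own
    attention *and* feed-forward sub-layer, then outputs are concatenated."""
    def __init__(self, dim: int, groups: int, heads_per_group: int = 2):
        super().__init__()
        assert dim % groups == 0
        self.groups = groups
        gdim = dim // groups
        self.norm1 = nn.ModuleList(nn.LayerNorm(gdim) for _ in range(groups))
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(gdim, heads_per_group, batch_first=True)
            for _ in range(groups))
        self.norm2 = nn.ModuleList(nn.LayerNorm(gdim) for _ in range(groups))
        self.mlp = nn.ModuleList(
            nn.Sequential(nn.Linear(gdim, 4 * gdim), nn.GELU(), nn.Linear(4 * gdim, gdim))
            for _ in range(groups))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, tokens, dim)
        outs = []
        for i, chunk in enumerate(x.chunk(self.groups, dim=-1)):
            n = self.norm1[i](chunk)
            h = chunk + self.attn[i](n, n, n, need_weights=False)[0]
            outs.append(h + self.mlp[i](self.norm2[i](h)))
        return torch.cat(outs, dim=-1)

tokens = torch.randn(2, 196, 192)
print(GroupedTransformerLayer(192, groups=4)(tokens).shape)  # torch.Size([2, 196, 192])
```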
null
https://openreview.net/forum?id=ue6UUvoL8B
@inproceedings{ he2024cacenet, title={{CACE}-Net: Co-guidance Attention and Contrastive Enhancement for Effective Audio-Visual Event Localization}, author={Xiang He and Liuxiangxi and Yang Li and Dongcheng Zhao and Guobin Shen and Qingqun Kong and Xin Yang and Yi Zeng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ue6UUvoL8B} }
The audio-visual event localization task requires identifying concurrent visual and auditory events from unconstrained videos within a model, locating them, and classifying their category. The efficient extraction and integration of audio and visual modal information have always been challenging in this field. In this paper, we introduce CACE-Net, which differs from most existing methods that solely use audio signals to guide visual information. We propose an audio-visual co-guidance attention mechanism that allows for adaptive bi-directional cross-modal attentional guidance between audio and visual clues, thus reducing inconsistencies between modalities. Moreover, we have observed that existing methods have difficulty distinguishing between similar background and event segments and lack fine-grained features for event classification. Consequently, we employ background-event contrast enhancement to increase the discrimination of fused features and fine-tune the pre-trained model to extract more discernible features from complex multimodal inputs. Experiments on the AVE dataset demonstrate that CACE-Net sets a new benchmark in the audio-visual event localization task, proving the effectiveness of our proposed methods in handling complex multimodal learning and event localization in unconstrained videos. Code is available at https://github.com/Brain-Cog-Lab/CACE-Net.
CACE-Net: Co-guidance Attention and Contrastive Enhancement for Effective Audio-Visual Event Localization
[ "Xiang He", "Liuxiangxi", "Yang Li", "Dongcheng Zhao", "Guobin Shen", "Qingqun Kong", "Xin Yang", "Yi Zeng" ]
Conference
poster
2408.01952
[ "https://github.com/brain-cog-lab/cace-net" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
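The CACE-Net abstract above centers on bi-directional audio-visual co-guidance attention. The sketch below shows one way such mutual guidance with learnable gates could be wired; it is an illustration under assumptions (sigmoid scalar gating, `nn.MultiheadAttention` as the cross-attention primitive), not the paper's actual module.

```python
import torch
import torch.nn as nn

class CoGuidanceAttention(nn.Module):
    """Audio attends to visual features and vice versa; learnable gates control
    how strongly each modality lets the other one guide it."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.audio_to_visual = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.visual_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate_v = nn.Parameter(torch.zeros(1))  # how much audio guides vision
        self.gate_a = nn.Parameter(torch.zeros(1))  # how much vision guides audio

    def forward(self, audio: torch.Tensor, visual: torch.Tensor):
        # audio: (B, Ta, dim), visual: (B, Tv, dim)
        v_guided, _ = self.audio_to_visual(query=visual, key=audio, value=audio)
        a_guided, _ = self.visual_to_audio(query=audio, key=visual, value=visual)
        visual_out = visual + torch.sigmoid(self.gate_v) * v_guided
        audio_out = audio + torch.sigmoid(self.gate_a) * a_guided
        return audio_out, visual_out

audio, visual = torch.randn(2, 10, 256), torch.randn(2, 10, 256)
a_out, v_out = CoGuidanceAttention(256)(audio, visual)
print(a_out.shape, v_out.shape)  # torch.Size([2, 10, 256]) torch.Size([2, 10, 256])
```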
null
https://openreview.net/forum?id=ucj1H91vzj
@inproceedings{ li2024imagebindd, title={ImageBind3D: Image as Binding Step for Controllable 3D Generation}, author={Zhenqiang Li and Jie LI and Yangjie Cao and Jiayi Wang and Runfeng Lv}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ucj1H91vzj} }
Recent advancements in 3D generation have garnered considerable interest due to their potential applications. Despite these advancements, the field faces persistent challenges in multi-conditional control, primarily due to the lack of paired datasets and the inherent complexity of 3D structures. To address these challenges, we introduce ImageBind3D, a novel framework for controllable 3D generation that integrates text, hand-drawn sketches, and depth maps to enhance user controllability. Our innovative contribution is the adoption of an inversion-align strategy, facilitating controllable 3D generation without requiring paired datasets. Firstly, utilizing GET3D as a baseline, our method introduces a 3D inversion technique that synchronizes 2D images with 3D shapes within the latent space of 3D GAN. Subsequently, we leverage images as intermediaries to facilitate pseudo-pairing between the shapes and various modalities. Moreover, our multi-modal diffusion model design strategically aligns external control signals with the generative model's latent knowledge, enabling precise and controllable 3D generation. Extensive experiments validate that ImageBind3D surpasses existing state-of-the-art methods in both fidelity and controllability. Additionally, our approach can offer composable guidance for any feed-forward 3D generative models, significantly enhancing their controllability.
ImageBind3D: Image as Binding Step for Controllable 3D Generation
[ "Zhenqiang Li", "Jie LI", "Yangjie Cao", "Jiayi Wang", "Runfeng Lv" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ucDj0MfF7m
@inproceedings{ guo2024generalizable, title={Generalizable Face Anti-spoofing via Style-conditional Prompt Token Learning}, author={Jiabao Guo and Huan Liu and Yizhi Luo and Xueli Hu and Hang Zou and Yuan Zhang and Hui Liu and Bo Zhao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ucDj0MfF7m} }
Face anti-spoofing (FAS) based on domain generalization (DG) has garnered increasing attention from researchers. The poor generalization is attributed to the model being overfitted to salient liveness-irrelevant signals. Previous methods addressed this issue by either mapping images from multiple domains into a common feature space or promoting the separation of image features from domain-specific and task-related features. However, direct manipulation of image features inevitably disrupts semantic structure. Utilizing the text features of vision-language pre-trained (VLP) models, such as CLIP, to dynamically adjust image features offers the potential for better generalization, exploring a broader feature space while preserving semantic information. Specifically, we propose a FAS method called style-conditional prompt token learning (S-CPTL), which aims to generate generalized text features by training the introduced prompt tokens to encode visual styles. These tokens are then utilized as weights for classifiers, enhancing the model's generalization. Unlike inherently static prompt tokens, our dynamic prompt tokens adaptively capture liveness-irrelevant signals from instance-specific styles, increasing their diversity through mixed feature statistics to further mitigate model overfitting. Thorough experimental analysis demonstrates that S-CPTL outperforms current top-performing methods across four distinct cross-dataset benchmarks.
Style-conditional Prompt Token Learning for Generalizable Face Anti-spoofing
[ "Jiabao Guo", "Huan Liu", "Yizhi Luo", "Xueli Hu", "Hang Zou", "Yuan Zhang", "Hui Liu", "Bo Zhao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=uaKuvzR74K
@inproceedings{ chen2024ssatadapter, title={{SSAT}-Adapter: Enhancing Vision-Language Model Few-shot Learning with Auxiliary Tasks}, author={Bowen Chen and Yun Sing Koh and Gillian Dobbie}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=uaKuvzR74K} }
Traditional deep learning models often struggle in few-shot learning scenarios, where limited labeled data is available. While the Contrastive Language-Image Pre-training (CLIP) model demonstrates impressive zero-shot capabilities, its performance in few-shot scenarios remains limited. Existing methods primarily aim to leverage the limited labeled dataset, but this offers limited potential for improvement. To overcome the limitations of small datasets in few-shot learning, we introduce a novel framework, SSAT-Adapter, that leverages CLIP's language understanding to generate informative auxiliary tasks and improve CLIP's performance and adaptability in few-shot settings. We utilize CLIP's language understanding to create decision-boundary-focused image latents. These latents form auxiliary tasks, including inter-class instances to bridge CLIP's pre-trained knowledge with the provided examples, and intra-class instances to subtly expand the representation of target classes. A self-paced training regime, progressing from easier to more complex tasks, further promotes robust learning. Experiments show our framework outperforms the state-of-the-art online few-shot learning method by an average of 2.2\% on eleven image classification datasets. Further ablation studies on various tasks demonstrate the effectiveness of our approach to enhance CLIP's adaptability in few-shot image classification.
SSAT-Adapter: Enhancing Vision-Language Model Few-shot Learning with Auxiliary Tasks
[ "Bowen Chen", "Yun Sing Koh", "Gillian Dobbie" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=uPMUof2NeG
@inproceedings{ yang2024semanticsaware, title={Semantics-Aware Image Aesthetics Assessment using Tag Matching and Contrastive Ranking}, author={Zhichao Yang and Leida Li and Pengfei Chen and Jinjian Wu and Weisheng Dong}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=uPMUof2NeG} }
The perception of image aesthetics is built upon the understanding of semantic content. However, how to evaluate the aesthetic quality of images with diversified semantic backgrounds remains challenging in image aesthetics assessment (IAA). To address the dilemma, this paper presents a semantics-aware image aesthetics assessment approach, which first analyzes the semantic content of images and then models the aesthetic distinctions among images from two perspectives, i.e., aesthetic attribute and aesthetic level. Concretely, we propose two strategies, dubbed tag matching and contrastive ranking, to extract knowledge pertaining to image aesthetics. The tag matching identifies the semantic category and the dominant aesthetic attributes based on predefined tag libraries. The contrastive ranking is designed to uncover the comparative relationships among images with different aesthetic levels but similar semantic backgrounds. In the process of contrastive ranking, the impact of the long-tailed distribution of aesthetic data is also considered by balanced sampling and traversal contrastive learning. Extensive experiments and comparisons on three benchmark IAA databases demonstrate the superior performance of the proposed model in terms of both prediction accuracy and alleviation of the long-tailed effect. The code of the proposed method will be made public.
Semantics-Aware Image Aesthetics Assessment using Tag Matching and Contrastive Ranking
[ "Zhichao Yang", "Leida Li", "Pengfei Chen", "Jinjian Wu", "Weisheng Dong" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=uEiLOyT0QK
@inproceedings{ wu2024toward, title={Toward Timeliness-Enhanced Loss Recovery for Large-Scale Live Streaming}, author={Bo Wu and Tong Li and cheng luo and Xu Yan and FuYu Wang and Xinle Du and Ke Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=uEiLOyT0QK} }
Due to the limited permissions for upgrading dual-side (i.e., server-side and client-side) loss tolerance schemes from the perspective of CDN vendors in a multi-supplier market, modern large-scale live streaming services are still using the automatic-repeat-request (ARQ) based paradigm for loss recovery, which only requires server-side modifications. In this paper, we first conduct a large-scale measurement study with a collection of up to 50 million live streams. We find that loss patterns are highly dynamic and that live streaming involves frequent on-off mode switching in the wild. We further find that the recovery latency, enlarged by the ubiquitous retransmission loss, is a critical factor affecting client-side QoE (e.g., video freezing) of live streaming. We then propose an enhanced recovery mechanism called AutoRec, which can transform the disadvantages of on-off mode switching into an advantage for reducing loss recovery latency without any modifications on the client side. AutoRec also adopts an online learning-based scheduler to fit the dynamics of loss, balancing the tradeoff between the recovery latency and the incurred overhead. We implement AutoRec upon QUIC and evaluate it via both testbed and real-world deployments of commercial services. The experimental results demonstrate the practicability and profitability of AutoRec: the 95th-percentile count and duration of client-side video freezing events are lowered by 34.1\% and 16.0\%, respectively.
Toward Timeliness-Enhanced Loss Recovery for Large-Scale Live Streaming
[ "Bo Wu", "Tong Li", "cheng luo", "Xu Yan", "FuYu Wang", "Xinle Du", "Ke Xu" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=uCvsmnTMJg
@inproceedings{ zhang2024prompting, title={Prompting Continual Person Search}, author={Pengcheng Zhang and Xiaohan Yu and Xiao Bai and Jin Zheng and Xin Ning}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=uCvsmnTMJg} }
The development of person search techniques has been greatly promoted in recent years for its superior practicality and challenging goals. Despite their significant progress, existing person search models still lack the ability to continually learn from increasing real-world data and adaptively process input from different domains. To this end, this work introduces the continual person search task that sequentially learns on multiple domains and then performs person search on all seen domains. This requires balancing the stability and plasticity of the model to continually learn new knowledge without catastrophic forgetting. For this, we propose a \textbf{P}rompt-based C\textbf{o}ntinual \textbf{P}erson \textbf{S}earch (PoPS) model in this paper. First, we design a compositional person search transformer to construct an effective pre-trained transformer without exhaustive pre-training from scratch on large-scale person search data. This serves as the foundation for prompt-based continual learning. On top of that, we design a domain incremental prompt pool with a diverse attribute matching module. For each domain, we independently learn a set of prompts to encode the domain-oriented knowledge. Meanwhile, we jointly learn a group of diverse attribute projection and prototype embeddings to capture discriminative domain attributes. By matching an input image with the learned attributes across domains, the learned prompts can be properly selected for model inference. Extensive experiments are conducted to validate the proposed method for continual person search. The source code will be made available upon publication.
Prompting Continual Person Search
[ "Pengcheng Zhang", "Xiaohan Yu", "Xiao Bai", "Jin Zheng", "Xin Ning" ]
Conference
poster
2410.19239
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=uBoKLl4ZS1
@inproceedings{ tong2024balancing, title={Balancing Generalization and Robustness in Adversarial Training via Steering through Clean and Adversarial Gradient Directions}, author={Haoyu Tong and Xiaoyu Zhang and Jin Yulin and Jian Lou and Kai Wu and Xiaofeng Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=uBoKLl4ZS1} }
Adversarial training (AT) is a fundamental method to enhance the robustness of Deep Neural Networks (DNNs) against adversarial examples. While AT achieves improved robustness on adversarial examples, it often leads to reduced accuracy on clean examples. Considerable effort has been devoted to handling the trade-off from the perspective of \textit{input space}. However, we demonstrate that the trade-off can also be illustrated from the perspective of the \textit{gradient space}. In this paper, we propose Adversarial Training with Adaptive Gradient Reconstruction (\textit{AGR}), a novel approach that balances generalization (accuracy on clean examples) and robustness (accuracy on adversarial examples) in adversarial training via steering through clean and adversarial gradient directions. We first introduce an ingenious technique named Gradient Orthogonal Projection for the case of negatively correlated gradients, which adjusts the adversarial gradient direction to reduce the degradation of generalization. Then we present a gradient interpolation scheme for the case of positively correlated gradients, which efficiently increases generalization without compromising the robustness of the final model. Rigorous theoretical analysis proves that our \textit{AGR} has a lower generalization error upper bound, indicating its effectiveness. Comprehensive experiments empirically demonstrate that \textit{AGR} achieves excellent capability of balancing generalization and robustness, and is compatible with various adversarial training methods to achieve superior performance. Our codes are available at: \url{https://github.com/RUIYUN-ML/AGR}.
Balancing Generalization and Robustness in Adversarial Training via Steering through Clean and Adversarial Gradient Directions
[ "Haoyu Tong", "Xiaoyu Zhang", "Jin Yulin", "Jian Lou", "Kai Wu", "Xiaofeng Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
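The AGR abstract above names two regimes: Gradient Orthogonal Projection when the clean and adversarial gradients are negatively correlated, and gradient interpolation when they are positively correlated. The sketch below applies that rule to two flattened gradient vectors; the exact projection formula and the fixed interpolation weight are assumptions read off the abstract, not the paper's update.

```python
import torch

def steer_gradients(g_clean: torch.Tensor, g_adv: torch.Tensor,
                    alpha: float = 0.5) -> torch.Tensor:
    """Combine clean and adversarial gradients (both flattened 1-D vectors).

    Negatively correlated: project the adversarial gradient onto the subspace
    orthogonal to the clean gradient so it no longer cancels the clean update.
    Positively correlated: simply interpolate the two directions.
    """
    dot = torch.dot(g_clean, g_adv)
    if dot < 0:
        # remove the component of g_adv that opposes g_clean
        g_adv = g_adv - dot / (g_clean.norm() ** 2 + 1e-12) * g_clean
    return alpha * g_clean + (1.0 - alpha) * g_adv

g_c = torch.tensor([1.0, 0.0])
g_a = torch.tensor([-1.0, 1.0])   # conflicts with g_c along the first axis
print(steer_gradients(g_c, g_a))  # tensor([0.5000, 0.5000])
```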
null
https://openreview.net/forum?id=u9d8wZmBmL
@inproceedings{ zhao2024maskbev, title={Mask{BEV}: Towards A Unified Framework for {BEV} Detection and Map Segmentation}, author={Xiao Zhao and XUKUN ZHANG and Dingkang Yang and Mingyang Sun and Mingcheng Li and Shunli Wang and Lihua Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=u9d8wZmBmL} }
Accurate and robust multimodal multi-task perception is crucial for modern autonomous driving systems. However, current multimodal perception research follows independent paradigms designed for specific perception tasks, leading to a lack of complementary learning among tasks and decreased performance in multi-task learning (MTL) due to joint training. In this paper, we propose MaskBEV, a masked attention-based MTL paradigm that unifies 3D object detection and bird's eye view (BEV) map segmentation. MaskBEV introduces a task-agnostic Transformer decoder to process these diverse tasks, enabling MTL to be completed in a unified decoder without requiring additional design of specific task heads. To fully exploit the complementary information between BEV map segmentation and 3D object detection tasks in BEV space, we propose spatial modulation and scene-level context aggregation strategies. These strategies consider the inherent dependencies between BEV segmentation and 3D detection, naturally boosting MTL performance. Extensive experiments on the nuScenes dataset show that compared with previous state-of-the-art MTL methods, MaskBEV achieves 1.3 NDS improvement in 3D object detection and 2.7 mIoU improvement in BEV map segmentation, while also demonstrating a slight advantage in inference speed.
MaskBEV: Towards A Unified Framework for BEV Detection and Map Segmentation
[ "Xiao Zhao", "XUKUN ZHANG", "Dingkang Yang", "Mingyang Sun", "Mingcheng Li", "Shunli Wang", "Lihua Zhang" ]
Conference
poster
2408.09122
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=u7E4OzqBAo
@inproceedings{ zheng2024saliencyguided, title={Saliency-Guided Fine-Grained Temporal Mask Learning for Few-Shot Action Recognition}, author={Shuo Zheng and Yuanjie Dang and Peng Chen and Ruohong Huan and Dongdong Zhao and Ronghua Liang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=u7E4OzqBAo} }
Temporal relation modeling is one of the core aspects of few-shot action recognition. Most previous works mainly focus on temporal relation modeling based on coarse-level actions, without considering the atomic action details and fine-grained temporal information. This oversight represents a significant limitation in this task. Specifically, coarse-level temporal relation modeling can make few-shot models overfit to high-discrepancy temporal context and ignore the low-discrepancy but highly semantically relevant action details in the video. To address these issues, we propose a saliency-guided fine-grained temporal mask learning method that models the temporal atomic action relation for few-shot action recognition in a finer manner. First, to model the comprehensive temporal relations of video instances, we design a temporal mask learning architecture to automatically search for the best matching of each atomic action snippet. Next, to exploit the low-discrepancy atomic action features, we introduce a saliency-guided temporal mask module to adaptively locate and excavate the atomic action information. After that, the few-shot predictions can be obtained by feeding the embedded rich temporal-relation features to a common feature matcher. Extensive experimental results on standard datasets demonstrate our method’s superior performance compared to existing state-of-the-art methods.
Saliency-Guided Fine-Grained Temporal Mask Learning for Few-Shot Action Recognition
[ "Shuo Zheng", "Yuanjie Dang", "Peng Chen", "Ruohong Huan", "Dongdong Zhao", "Ronghua Liang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=u5auljUmBk
@inproceedings{ liu2024unsupervised, title={Unsupervised Multi-view Pedestrian Detection}, author={Mengyin Liu and Chao Zhu and Shiqi Ren and Xu-Cheng Yin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=u5auljUmBk} }
With the prosperity of the intelligent surveillance, multiple cameras have been applied to localize pedestrians more accurately. However, previous methods rely on laborious annotations of pedestrians in every frame and camera view. Therefore, we propose in this paper an Unsupervised Multi-view Pedestrian Detection approach (UMPD) to learn an annotation-free detector via vision-language models and 2D-3D cross-modal mapping: 1) Firstly, Semantic-aware Iterative Segmentation (SIS) is proposed to extract unsupervised representations of multi-view images, which are converted into 2D masks as pseudo labels, via our proposed iterative PCA and zero-shot semantic classes from vision-language models; 2) Secondly, we propose Geometry-aware Volume-based Detector (GVD) to end-to-end encode multi-view 2D images into a 3D volume to predict voxel-wise density and color via 2D-to-3D geometric projection, trained by 3D-to-2D rendering losses with SIS pseudo labels; 3) Thirdly, for better detection results, i.e., the 3D density projected on Birds-Eye-View, we propose Vertical-aware BEV Regularization (VBR) to constrain pedestrians to be vertical like the natural poses. Extensive experiments on popular multi-view pedestrian detection benchmarks Wildtrack, Terrace, and MultiviewX, show that our proposed UMPD, as the first fully-unsupervised method to our best knowledge, performs competitively to the previous state-of-the-art supervised methods. Code is available at https://github.com/lmy98129/UMPD.
Unsupervised Multi-view Pedestrian Detection
[ "Mengyin Liu", "Chao Zhu", "Shiqi Ren", "Xu-Cheng Yin" ]
Conference
poster
2305.12457
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=u0geEr7X2O
@inproceedings{ huang2024motionaware, title={Motion-aware Latent Diffusion Models for Video Frame Interpolation}, author={Zhilin Huang and Yijie Yu and Ling Yang and Chujun Qin and Bing Zheng and Xiawu Zheng and Zikun Zhou and Yaowei Wang and Wenming Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=u0geEr7X2O} }
With the advancement of AIGC, video frame interpolation (VFI) has become a crucial component in existing video generation frameworks, attracting widespread research interest. For the VFI task, the motion estimation between neighboring frames plays a crucial role in avoiding motion ambiguity. However, existing VFI methods often struggle to accurately predict the motion information between consecutive frames, and this imprecise estimation leads to blurred and visually incoherent interpolated frames. In this paper, we propose a novel diffusion framework, motion-aware latent diffusion models (MADiff), which is specifically designed for the VFI task. By incorporating motion priors between the conditional neighboring frames with the target interpolated frame predicted throughout the diffusion sampling procedure, MADiff progressively refines the intermediate outcomes, culminating in generating both visually smooth and realistic results. Extensive experiments conducted on benchmark datasets demonstrate that our method achieves state-of-the-art performance, significantly outperforming existing approaches, especially under challenging scenarios involving dynamic textures with complex motion.
Motion-aware Latent Diffusion Models for Video Frame Interpolation
[ "Zhilin Huang", "Yijie Yu", "Ling Yang", "Chujun Qin", "Bing Zheng", "Xiawu Zheng", "Zikun Zhou", "Yaowei Wang", "Wenming Yang" ]
Conference
poster
2404.13534
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tyXNCEprRp
@inproceedings{ yin2024robust, title={Robust Pseudo-label Learning with Neighbor Relation for Unsupervised Visible-Infrared Person Re-Identification}, author={Xiangbo Yin and Jiangming Shi and Yachao Zhang and Yang Lu and zhizhong zhang and Yuan Xie and Yanyun Qu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tyXNCEprRp} }
Unsupervised Visible-Infrared Person Re-identification (USVI-ReID) presents a formidable challenge, which aims to match pedestrian images across visible and infrared modalities without any annotations. Recently, clustered pseudo-label methods have become predominant in USVI-ReID, although the inherent noise in pseudo-labels presents a significant obstacle. Most existing works primarily focus on shielding the model from the harmful effects of noise, neglecting to calibrate noisy pseudo-labels usually associated with hard samples, which will compromise the robustness of the model. To address this issue, we design a Robust Pseudo-label Learning with Neighbor Relation (RPNR) framework for USVI-ReID. To be specific, we first introduce a straightforward yet potent Noisy Pseudo-label Calibration module to correct noisy pseudo-labels. Due to the high intra-class variations, noisy pseudo-labels are difficult to calibrate completely. Therefore, we introduce a Neighbor Relation Learning module to reduce high intra-class variations by modeling potential interactions between all samples. Subsequently, we devise an Optimal Transport Prototype Matching module to establish reliable cross-modality correspondences. On that basis, we design a Memory Hybrid Learning module to jointly learn modality-specific and modality-invariant information. Comprehensive experiments conducted on two widely recognized benchmarks, SYSU-MM01 and RegDB, demonstrate that RPNR outperforms the current state-of-the-art GUR with an average Rank-1 improvement of 10.3%. The source codes will be released soon.
Robust Pseudo-label Learning with Neighbor Relation for Unsupervised Visible-Infrared Person Re-Identification
[ "Xiangbo Yin", "Jiangming Shi", "Yachao Zhang", "Yang Lu", "zhizhong zhang", "Yuan Xie", "Yanyun Qu" ]
Conference
poster
2405.05613
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tuch82hlmx
@inproceedings{ ye2024absgs, title={Abs{GS}: Recovering fine details in 3D Gaussian Splatting}, author={Zongxin Ye and Wenyu Li and Sidun Liu and Peng Qiao and Yong Dou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tuch82hlmx} }
Recent advances in neural rendering have shown photo-realistic results in novel view synthesis. As one of the most promising methods, 3D Gaussian Splatting (3D-GS) couples 3D Gaussian primitives with differentiable rasterization to obtain high-fidelity 3D scene reconstruction and achieve real-time rendering. The exceptional performance of 3D-GS is attributed to its carefully designed adaptive density control strategy, which progressively populates empty areas by splitting/cloning more Gaussians throughout the optimization process. While 3D-GS offers significant advantages, it frequently suffers from the over-reconstruction issue in intricate scenes containing high-frequency details, consequently leading to blur. The underlying causes of this issue have remained under-explored. In this work, we present a comprehensive analysis of the cause of the aforementioned artifacts, which we call gradient collision: it prevents large Gaussians that cover small-scale geometry from splitting. To address this issue, we further propose a novel homodirectional gradient as the guidance for densification. Our strategy efficiently identifies large Gaussians in over-reconstructed regions and recovers fine details by splitting them. We evaluate our proposed method on various challenging datasets; our approach achieves the best rendering quality with reduced memory consumption and yields better distributions of 3D Gaussians in world space. Our method is also easy to implement with just a few lines of code and can be incorporated into a wide variety of other Gaussian Splatting-based methods. We will open source our code upon formal publication.
AbsGS: Recovering fine details in 3D Gaussian Splatting
[ "Zongxin Ye", "Wenyu Li", "Sidun Liu", "Peng Qiao", "Yong Dou" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
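The AbsGS abstract above attributes missed splits to gradient collision: per-pixel gradients that pull a large Gaussian in opposite directions cancel out in the accumulated densification signal. One common reading of the proposed homodirectional gradient is to accumulate absolute component values instead, which the toy example below illustrates; the absolute-value interpretation, the numbers, and the threshold are all assumptions, not values from the paper.

```python
import torch

# Per-pixel view-space positional gradients hitting one large Gaussian that
# covers two small structures pulling it in opposite directions.
pixel_grads = torch.tensor([[ 0.9,  0.0],
                            [-0.8,  0.1],
                            [ 0.7, -0.1],
                            [-0.9,  0.0]])

signed = pixel_grads.sum(dim=0).norm().item()                  # opposing terms cancel
homodirectional = pixel_grads.abs().sum(dim=0).norm().item()   # magnitudes add up

split_threshold = 1.0  # made-up densification threshold
print(f"signed signal:          {signed:.3f} -> split: {signed > split_threshold}")
print(f"homodirectional signal: {homodirectional:.3f} -> split: {homodirectional > split_threshold}")
```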
null
https://openreview.net/forum?id=tsYPhmnDuN
@inproceedings{ cai2024auto, title={Auto Drag{GAN}: Editing the Generative Image Manifold in an Autoregressive Manner}, author={Pengxiang Cai and Zhiwei Liu and Guibo Zhu and Yunfang Niu and Jinqiao Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tsYPhmnDuN} }
Pixel-level fine-grained image editing remains an open challenge. Previous works fail to achieve an ideal trade-off between control granularity and inference speed. They either fail to achieve pixel-level fine-grained control, or their inference speed requires optimization. To address this, this paper for the first time employs a regression-based network to learn the variation patterns of StyleGAN latent codes during the image dragging process. This method enables pixel-level precision in drag editing with little time cost. Users can specify handle points and target points on any GAN-generated images, and our method will move each handle point to its corresponding target point. To achieve this, we decompose the entire movement process into multiple sub-processes. Specifically, we develop an encoder-decoder based network named 'Latent Predictor' to predict the latent code motion trajectories from handle points to target points in an autoregressive manner. Moreover, to enhance the prediction stability, we introduce a component named 'Latent Regularizer', aimed at constraining the latent code motion within the distribution of natural images. Extensive experiments demonstrate that our method achieves state-of-the-art (SOTA) inference speed and image editing performance at the pixel-level granularity.
Auto DragGAN: Editing the Generative Image Manifold in an Autoregressive Manner
[ "Pengxiang Cai", "Zhiwei Liu", "Guibo Zhu", "Yunfang Niu", "Jinqiao Wang" ]
Conference
poster
2407.18656
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=trrzWroF6z
@inproceedings{ wang2024multifineness, title={Multi-fineness Boundaries and the Shifted Ensemble-aware Encoding for Point Cloud Semantic Segmentation}, author={Ziming Wang and Boxiang Zhang and Ming Ma and Yue Wang and Taoli Du and Wenhui Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=trrzWroF6z} }
Point cloud segmentation forms the foundation of 3D scene understanding. Boundaries, the intersections of regions, are prone to mis-segmentation. Current point cloud segmentation models exhibit unsatisfactory performance on boundaries. There is limited focus on explicitly addressing semantic segmentation of point cloud boundaries. We introduce a method called Multi-fineness Boundary Constraint (MBC) to tackle this challenge. By querying boundaries at various degrees of fineness and imposing feature constraints within these boundary areas, we enhance the discrimination between boundaries and non-boundaries, improving point cloud boundary segmentation. However, solely emphasizing boundaries may compromise the segmentation accuracy in broader non-boundary regions. To mitigate this, we introduce a new concept of point cloud space termed ensemble and a Shifted Ensemble-aware Perception (SEP) module. This module establishes information interactions between points with minimal computational cost, effectively capturing direct point-to-point long-range correlations within ensembles. It enhances segmentation performance for both boundaries and non-boundaries. We conduct experiments on multiple benchmarks. The experimental results demonstrate that our method achieves performance surpassing or comparable to state-of-the-art methods, validating the effectiveness and superiority of our approach.
Multi-fineness Boundaries and the Shifted Ensemble-aware Encoding for Point Cloud Semantic Segmentation
[ "Ziming Wang", "Boxiang Zhang", "Ming Ma", "Yue Wang", "Taoli Du", "Wenhui Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tocfToCGF1
@inproceedings{ wang2024break, title={Break the Visual Perception: Adversarial Attacks Targeting Encoded Visual Tokens of Large Vision-Language Models}, author={Yubo Wang and Chaohu Liu and yanqiuqu and Haoyu Cao and Deqiang Jiang and Linli Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tocfToCGF1} }
Large vision-language models (LVLMs) integrate visual information into large language models, showcasing remarkable multi-modal conversational capabilities. However, the visual module introduces new challenges in terms of robustness for LVLMs, as attackers can craft adversarial images that are visually clean but may mislead the model to generate incorrect answers. In general, LVLMs rely on vision encoders to transform images into visual tokens, which are crucial for the language models to perceive image contents effectively. Therefore, we are curious about one question: Can LVLMs still generate correct responses when the encoded visual tokens are attacked and the visual information is disrupted? To this end, we propose a non-targeted attack method referred to as VT-Attack (Visual Tokens Attack), which constructs adversarial examples from multiple perspectives, with the goal of comprehensively disrupting feature representations and inherent relationships as well as the semantic properties of visual tokens output by image encoders. Using only access to the image encoder in the proposed attack, the generated adversarial examples exhibit transferability across diverse LVLMs utilizing the same image encoder and generality across different tasks. Extensive experiments validate the superior attack performance of VT-Attack over baseline methods, demonstrating its effectiveness in attacking LVLMs with image encoders, which in turn can provide guidance on the robustness of LVLMs, particularly in terms of the stability of the visual feature space.
Break the Visual Perception: Adversarial Attacks Targeting Encoded Visual Tokens of Large Vision-Language Models
[ "Yubo Wang", "Chaohu Liu", "yanqiuqu", "Haoyu Cao", "Deqiang Jiang", "Linli Xu" ]
Conference
poster
2410.06699
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=thxe3AAnji
@inproceedings{ li2024knn, title={{KNN} Transformer with Pyramid Prompts for Few-Shot Learning}, author={Wenhao Li and Qiangchang Wang and peng zhao and Yilong Yin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=thxe3AAnji} }
Few-Shot Learning (FSL) aims to recognize new classes with limited labeled data. Recent studies have attempted to address the challenge of rare samples with textual prompts to modulate visual features. However, they usually struggle to capture complex semantic relationships between textual and visual features. Moreover, vanilla self-attention is heavily affected by useless information in images, severely constraining the potential of semantic priors in FSL due to the confusion of numerous irrelevant tokens during interaction. To address these aforementioned issues, a K-NN Transformer with Pyramid Prompts (KTPP) is proposed to select discriminative information with K-NN Context Attention (KCA) and adaptively modulate visual features with Pyramid Cross-modal Prompts (PCP). First, for each token, the KCA only selects the K most relevant tokens to compute the self-attention matrix and incorporates the mean of all tokens as the context prompt to provide the global context in three cascaded stages. As a result, irrelevant tokens can be progressively suppressed. Secondly, pyramid prompts are introduced in the PCP to emphasize visual features via interactions between text-based class-aware prompts and multi-scale visual features. This allows the ViT to dynamically adjust the importance weights of visual features based on rich semantic information at different scales, making models robust to spatial variations. Finally, augmented visual features and class-aware prompts are interacted via the KCA to extract class-specific features. Consequently, our model further enhances noise-free visual representations via deep cross-modal interactions, extracting generalized visual representation in scenarios with few labeled samples. Extensive experiments on four benchmark datasets demonstrate significant gains over the state-of-the-art methods, especially for the 1-shot task with 2.28% improvement on average due to semantically enhanced visual representations.
KNN Transformer with Pyramid Prompts for Few-Shot Learning
[ "Wenhao Li", "Qiangchang Wang", "peng zhao", "Yilong Yin" ]
Conference
poster
2410.10227
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
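The KTPP abstract above describes K-NN Context Attention: each query token attends only to its K most relevant tokens, with the mean of all tokens added as a global context prompt. A minimal single-head sketch of that top-k masking idea follows; the scaling, the single head, and the way the context token is kept visible are assumptions.

```python
import torch
import torch.nn.functional as F

def knn_context_attention(x: torch.Tensor, k: int) -> torch.Tensor:
    """x: (B, N, D). Each query keeps only its top-k attention scores; the mean
    of all tokens is appended as a context prompt that is never masked out."""
    B, N, D = x.shape
    context = x.mean(dim=1, keepdim=True)               # (B, 1, D) global context prompt
    kv = torch.cat([x, context], dim=1)                 # (B, N + 1, D)

    scores = x @ kv.transpose(1, 2) / D ** 0.5          # (B, N, N + 1)
    kth_best = scores.topk(k, dim=-1).values[..., -1:]  # k-th largest score per query
    mask = scores < kth_best
    mask[..., -1] = False                               # keep the context prompt visible
    scores = scores.masked_fill(mask, float("-inf"))

    return F.softmax(scores, dim=-1) @ kv               # (B, N, D)

tokens = torch.randn(2, 16, 64)
print(knn_context_attention(tokens, k=4).shape)         # torch.Size([2, 16, 64])
```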
null
https://openreview.net/forum?id=tglYgAIReM
@inproceedings{ yang2024scpsn, title={{SCPSN}: Spectral Clustering-based Pyramid Super-resolution Network for Hyperspectral Images}, author={Yong Yang and Aoqi Zhao and Shuying Huang and Xiaozheng Wang and Yajing Fan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tglYgAIReM} }
Single hyperspectral image super-resolution (HSSR) aims to reconstruct a high-resolution hyperspectral image (HRHSI) from an observed low-resolution hyperspectral image (LRHSI). Most current methods combine CNN and Transformer structures to directly extract features of all channels in LRHSI for image reconstruction, but they do not consider the interference of redundant information in adjacent bands, resulting in spectral and spatial distortions in the reconstruction results, as well as an increase in model computational complexity. To address this issue, this paper proposes a spectral clustering-based pyramid super-resolution network (SCPSN) to progressively reconstruct HRHSI at different scales. In each layer of the pyramid network, a clustering super-resolution block consisting of a spectral clustering block (SCB), a patch non-local attention block (PNAB), and a dynamic fusion block (DFB) is designed to achieve the reconstruction of detail features for LRHSI. Specifically, given the high correlation between adjacent spectral bands in LRHSI, an SCB is first constructed to achieve clustering of spectral channels and filtering of hyperchannels. This can reduce the interference of redundant spectral information and the computational complexity of the model. Then, by utilizing the non-local similarity of features within the channel, a PNAB is constructed to enhance the features in the hyperchannels. Next, a DFB is designed to reconstruct the features of all channels in LRHSI by establishing correlations between enhanced hyperchannels and other channels. Finally, the reconstructed channels are upsampled and added with the upsampled LRHSI to obtain the reconstructed HRHSI. Extensive experiments validate that the performance of SCPSN is superior to that of some state-of-the-art methods in terms of visual effects and quantitative metrics. In addition, our model does not require training on large-scale datasets compared to other methods. The dataset and code will be released on GitHub.
SCPSN: Spectral Clustering-based Pyramid Super-resolution Network for Hyperspectral Images
[ "Yong Yang", "Aoqi Zhao", "Shuying Huang", "Xiaozheng Wang", "Yajing Fan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tgSl2DkJCQ
@inproceedings{ chen2024learning, title={Learning A Low-Level Vision Generalist via Visual Task Prompt}, author={Xiangyu Chen and Yihao Liu and Yuandong Pu and Wenlong Zhang and Jiantao Zhou and Yu Qiao and Chao Dong}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tgSl2DkJCQ} }
Building a unified model for general low-level vision tasks has important research and practical value. However, existing methods still face challenges when dealing with diverse low-level vision problems. Multi-task restoration approaches can simultaneously address various degradation-to-clean restoration tasks, while their applicability to tasks with different target domains (e.g., image stylization) remains limited. Existing methods like PromptGIP that can handle tasks with multiple input-target domains mainly rely on the Masked Autoencoder (MAE) training paradigm. Unfortunately, these approaches are restricted by coupling to the ViT architecture, resulting in suboptimal image reconstruction quality. In addition, they tend to be sensitive to prompt content and often fail when handling more tasks that involve low-frequency information processing, such as color and style. In this paper, we present a Visual task Prompt-based Image Processing (VPIP) framework to address the above challenges. This framework employs the visual task prompt to process tasks with different input-target domains. Besides, it provides the flexibility to select a backbone network suitable for various low-level vision tasks. A prompt cross-attention mechanism is introduced to deal with the information interaction between the input and prompt information. Based on the VPIP framework, we train a low-level vision generalist model, namely GenLV, on 30 diverse tasks. Experimental results show that GenLV can successfully address a variety of low-level tasks, and it significantly outperforms existing methods both quantitatively and qualitatively.
Learning A Low-Level Vision Generalist via Visual Task Prompt
[ "Xiangyu Chen", "Yihao Liu", "Yuandong Pu", "Wenlong Zhang", "Jiantao Zhou", "Yu Qiao", "Chao Dong" ]
Conference
poster
2408.08601
[ "https://github.com/chxy95/genlv" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=td6ndgRL6l
@inproceedings{ zhang2024alignclip, title={Align{CLIP}: Align Multi Domains of Texts Input for {CLIP} models with Object-IoU Loss}, author={Lu Zhang and Ke Yan and Shouhong Ding}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=td6ndgRL6l} }
Since the release of the CLIP model by OpenAI, it has received widespread attention. However, categories in the real world often exhibit a long-tail distribution, and existing CLIP models struggle to effectively recognize rare, tail-end classes, such as an endangered African bird. An intuitive idea is to generate visual descriptions for these tail-end classes and use the descriptions to create category prototypes for classification. However, experiments reveal that visual descriptions, image captions, and test prompt templates belong to three distinct domains, leading to distribution shifts. In this paper, we propose the use of caption object parsing to identify the set of objects contained within captions. During training, the object sets are used to generate visual descriptions and test prompts, aligning these three domains and enabling the text encoder to generate category prototypes based on visual descriptions. Thanks to the acquired object sets, our approach can construct many-to-many relationships at a lower cost and derive soft labels, addressing the noise issues associated with traditional one-to-one matching. Extensive experimental results demonstrate that our method significantly surpasses the CLIP baseline and exceeds existing methods, achieving a new state-of-the-art (SOTA).
AlignCLIP: Align Multi Domains of Texts Input for CLIP models with Object-IoU Loss
[ "Lu Zhang", "Ke Yan", "Shouhong Ding" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
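The AlignCLIP abstract above derives soft labels from many-to-many matches between parsed object sets. The snippet below sketches the set-IoU soft-labelling step in isolation; how objects are actually parsed from captions and how the scores enter the loss are not specified by the abstract, so the normalization here is an assumption.

```python
def object_iou(a: set[str], b: set[str]) -> float:
    """Intersection-over-union of two parsed object sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def soft_labels(caption_objects: set[str], class_objects: dict[str, set[str]]) -> dict[str, float]:
    """Turn per-class object-set IoU scores into a soft target distribution."""
    scores = {c: object_iou(caption_objects, objs) for c, objs in class_objects.items()}
    total = sum(scores.values()) or 1.0
    return {c: s / total for c, s in scores.items()}

caption = {"bird", "branch", "forest"}
classes = {
    "african grey parrot": {"bird", "branch"},
    "sports car": {"car", "road"},
}
print(soft_labels(caption, classes))  # all the mass goes to the parrot class
```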
null
https://openreview.net/forum?id=tcCOWNOyg7
@inproceedings{ zhang2024pdrefiner, title={{PD}-Refiner: An Underlying Surface Inheritance Refiner with Adaptive Edge-Aware Supervision for Point Cloud Denoising}, author={Chengwei Zhang and Xueyi Zhang and Xianghu Yue and Mingrui Lao and Tao Jiang and Jiawei Wang and Fubo Zhang and Longyong Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tcCOWNOyg7} }
Point clouds from real-world scenarios inevitably contain complex noise, significantly impairing the accuracy of downstream tasks. To tackle this challenge, the cascaded encoder-decoder architecture has become a conventional technical route to iterative denoising. However, circularly feeding the output of the denoiser back as its input involves re-extraction of the underlying surface, leading to an unstable denoising process and over-smoothed geometric details. To address these issues, we propose a novel denoising paradigm dubbed PD-Refiner that employs a single encoder to model the underlying surface. Then, we leverage several lightweight hierarchical Underlying Surface Inheritance Refiners (USIRs) to inherit and strengthen it, thereby avoiding re-extraction from the intermediate point cloud. Furthermore, we design adaptive edge-aware supervision to improve the edge awareness of the USIRs, allowing for the adjustment of the denoising preferences from global structure to local details. The results demonstrate that our method not only achieves state-of-the-art performance in terms of denoising stability and efficacy, but also enhances edge clarity and point cloud uniformity.
PD-Refiner: An Underlying Surface Inheritance Refiner with Adaptive Edge-Aware Supervision for Point Cloud Denoising
[ "Chengwei Zhang", "Xueyi Zhang", "Xianghu Yue", "Mingrui Lao", "Tao Jiang", "Jiawei Wang", "Fubo Zhang", "Longyong Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tWCOxa9lWs
@inproceedings{ shi2024alleviating, title={Alleviating the Equilibrium Challenge with Sample Virtual Labeling for Adversarial Domain Adaptation}, author={Wenxu Shi and Bochuan Zheng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tWCOxa9lWs} }
Numerous domain adaptive object detection (DAOD) methods leverage domain adversarial training to align the features to mitigate the domain gap, where a feature extractor is trained to fool a domain classifier in order to have aligned feature distributions. The discrimination capability of the domain classifier easily falls into a local optimum due to the equilibrium challenge, and thus cannot effectively drive further training of the feature extractor. In this work, we propose an efficient optimization strategy called \underline{V}irtual-label \underline{F}ooled \underline{D}omain \underline{D}iscrimination (VFDD), which revitalizes the domain classifier during training using \emph{virtual} sample labels. Such virtual sample labels make the separable distributions less separable, and thus lead to a more easily confused domain classifier, which in turn further drives feature alignment. Particularly, we introduce a novel concept of \emph{virtual} label for the unaligned samples and propose the \emph{Virtual}-$\mathcal{H}$-divergence to overcome the problem of falling into a local optimum due to the equilibrium challenge. The proposed VFDD is orthogonal to most existing DAOD methods and can be used as a plug-and-play module to facilitate existing DAOD models. Theoretical insights and experimental analyses demonstrate that VFDD improves many popular baselines and also outperforms recent unsupervised domain adaptive object detection models.
Alleviating the Equilibrium Challenge with Sample Virtual Labeling for Adversarial Domain Adaptation
[ "Wenxu Shi", "Bochuan Zheng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tVrwpFjsBv
@inproceedings{ yue2024adaptive, title={Adaptive Selection based Referring Image Segmentation}, author={Pengfei Yue and Jianghang Lin and Shengchuan Zhang and Jie Hu and Yilin Lu and Hongwei Niu and Haixin Ding and Yan Zhang and GUANNAN JIANG and Liujuan Cao and Rongrong Ji}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tVrwpFjsBv} }
Referring image segmentation (RIS) aims to segment a particular region based on a specific expression. Existing one-stage methods have explored various fusion strategies, yet they encounter two significant issues. Primarily, most methods rely on manually selected visual features from the visual encoder layers, lacking the flexibility to selectively focus on language-preferred visual features. Moreover, the direct fusion of word-level features into coarse aligned features disrupts the established vision-language alignment, resulting in suboptimal performance. In this paper, we introduce an innovative framework for RIS that seeks to overcome these challenges with adaptive alignment of vision and language features, termed Adaptive Selection with Dual Alignment (ASDA). ASDA innovates in two aspects. Firstly, we design an Adaptive Feature Selection and Fusion (AFSF) module to dynamically select visual features focusing on different regions related to various descriptions. AFSF is equipped with a scale-wise feature aggregator to provide hierarchically coarse features that preserve crucial low-level details and offer robust features for the subsequent dual alignment. Secondly, a Word Guided Dual-Branch Aligner (WGDA) is leveraged to integrate coarse features with linguistic cues by word-guided attention, which effectively addresses the common issue of vision-language misalignment by ensuring that linguistic descriptors directly interact with mask prediction. This guides the model to focus on relevant image regions and make robust predictions. Extensive experimental results demonstrate that our ASDA framework surpasses state-of-the-art methods on the RefCOCO, RefCOCO+ and G-Ref benchmarks. The improvement underscores not only the superiority of ASDA in capturing fine-grained visual details but also its robustness and adaptability to diverse descriptions.
Adaptive Selection based Referring Image Segmentation
[ "Pengfei Yue", "Jianghang Lin", "Shengchuan Zhang", "Jie Hu", "Yilin Lu", "Hongwei Niu", "Haixin Ding", "Yan Zhang", "GUANNAN JIANG", "Liujuan Cao", "Rongrong Ji" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tVVE2Nq9uj
@inproceedings{ wang2024dualstream, title={Dual-stream Feature Augmentation for Domain Generalization}, author={Shanshan Wang and ALuSi and Xun Yang and Ke Xu and Huibin Tan and Xingyi Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tVVE2Nq9uj} }
The domain generalization (DG) task aims to learn a robust model from source domains that can handle the out-of-distribution (OOD) issue. In order to improve the generalization ability of the model in unseen domains, increasing the diversity of training samples is an effective solution. However, existing augmentation approaches have some limitations. On the one hand, the augmentation in most DG methods is insufficient, as the model may not see perturbed features that approximate the worst case due to randomness; thus the transferability of features cannot be fully explored. On the other hand, the causality in discriminative features is not considered in these methods, which harms the generalization of the model due to spurious correlations. To address these issues, we propose a Dual-stream Feature Augmentation (DFA) method that constructs hard features from two perspectives. Firstly, to improve transferability, we construct targeted features with a domain-related augmentation strategy. Through the guidance of uncertainty, some hard cross-domain fictitious features are generated to simulate domain shift. Secondly, to take causality into consideration, the spuriously correlated non-causal information is disentangled by an adversarial mask; then more discriminative features can be extracted from this hard causally related information. Different from previous fixed synthesizing strategies, the two augmentations are integrated into a unified learnable model with a disentangled feature strategy. Based on these hard features, contrastive learning is employed to keep the semantics consistent and improve the robustness of the model. Extensive experiments on several datasets demonstrate that our approach achieves state-of-the-art performance for domain generalization.
Dual-stream Feature Augmentation for Domain Generalization
[ "Shanshan Wang", "ALuSi", "Xun Yang", "Ke Xu", "Huibin Tan", "Xingyi Zhang" ]
Conference
poster
2409.04699
[ "https://github.com/alusi123/dfa" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tJUFsCP1dg
@inproceedings{ liu2024animatable, title={Animatable 3D Gaussian: Fast and High-Quality Reconstruction of Multiple Human Avatars}, author={Yang Liu and Xiang.Huang and Minghan Qin and Qinwei Lin and Haoqian Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tJUFsCP1dg} }
Neural radiance fields are capable of reconstructing high-quality drivable human avatars but are expensive to train and render and not suitable for multi-human scenes with complex shadows. To reduce consumption, we propose Animatable 3D Gaussian, which learns human avatars from input images and poses. We extend 3D Gaussians to dynamic human scenes by modeling a set of skinned 3D Gaussians and a corresponding skeleton in canonical space and deforming 3D Gaussians to posed space according to the input poses. We introduce a multi-head hash encoder for pose-dependent shape and appearance and a time-dependent ambient occlusion module to achieve high-quality reconstructions in scenes containing complex motions and dynamic shadows. On both novel view synthesis and novel pose synthesis tasks, our method achieves higher reconstruction quality than InstantAvatar with less training time (1/60), less GPU memory (1/4), and faster rendering speed ($7\times$). Our method can be easily extended to multi-human scenes and achieve comparable novel view synthesis results on a scene with ten people in only 25 seconds of training. We will release the code and dataset.
Animatable 3D Gaussian: Fast and High-Quality Reconstruction of Multiple Human Avatars
[ "Yang Liu", "Xiang.Huang", "Minghan Qin", "Qinwei Lin", "Haoqian Wang" ]
Conference
poster
2311.16482
[ "https://github.com/jimmyYliu/Animatable-3D-Gaussian" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tIpOYtxerl
@inproceedings{ wang2024tiva, title={Ti{VA}: Time-Aligned Video-to-Audio Generation}, author={Xihua Wang and Yuyue Wang and Yihan Wu and Ruihua Song and Xu Tan and Zehua Chen and Hongteng Xu and Guodong Sui}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tIpOYtxerl} }
Video-to-audio generation is crucial for autonomous video editing and post-processing, which aims to generate high-quality audio for silent videos with semantic similarity and temporal synchronization. However, most existing methods mainly focus on matching the semantics of the visual and acoustic modalities while merely considering their temporal alignment in a coarse granularity, thus failing to achieve precise synchronization. In this study, we propose a novel time-aligned video-to-audio framework, called TiVA, to achieve semantic matching and temporal synchronization jointly when generating audio. Given a silent video, our method encodes its visual semantics and predicts an audio layout separately. Then, leveraging the semantic latent embeddings and the predicted audio layout as condition, it learns a latent diffusion-based audio generator. Comprehensive objective and subjective experiments demonstrate that our method consistently outperforms state-of-the-art methods on semantic matching and temporal synchronization.
TiVA: Time-Aligned Video-to-Audio Generation
[ "Xihua Wang", "Yuyue Wang", "Yihan Wu", "Ruihua Song", "Xu Tan", "Zehua Chen", "Hongteng Xu", "Guodong Sui" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tI3ZlbEhOM
@inproceedings{ li2024disentangling, title={Disentangling Identity Features from Interference Factors for Cloth-Changing Person Re-identification}, author={Yubo Li and De Cheng and Chaowei Fang and Changzhe Jiao and Nannan Wang and Xinbo Gao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tI3ZlbEhOM} }
Cloth-Changing Person Re-Identification (CC-ReID) aims to accurately identify a target person in the more realistic surveillance scenario where clothes of the pedestrian may change drastically, which is critical in public security systems for tracking down disguised criminal suspects. Existing methods mainly transform the CC-ReID problem into cross-modality feature alignment from the data-driven perspective, without modelling the interference factors such as clothes and camera view changes meticulously. This may lead to over-consideration or under-consideration of the influence of these factors on the extraction of robust and discriminative identity features. This paper proposes a novel algorithm for thoroughly disentangling identity features from interference factors brought by clothes and camera view changes while ensuring the robustness and discriminativeness. It adopts a dual-stream identity feature learning framework consisting of a raw image stream and a cloth-erasing stream, to explore discriminative and cloth-irrelevant identity feature representations. Specifically, an adaptive cloth-irrelevant contrastive objective is introduced to contrast features extracted by the two streams, aiming to suppress the fluctuation caused by clothes textures in the identity feature space. Moreover, we innovatively mitigate the influence of the interference factors through a generative adversarial interference factor decoupling network. This network is targeted at capturing identity-related information residing in the interference factors and disentangling the identity features from such information. Extensive experimental results demonstrate the effectiveness of the proposed method, achieving superior performances to state-of-the-art methods. Our source code is available in the supplementary materials.
Disentangling Identity Features from Interference Factors for Cloth-Changing Person Re-identification
[ "Yubo Li", "De Cheng", "Chaowei Fang", "Changzhe Jiao", "Nannan Wang", "Xinbo Gao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tH3skJSNfH
@inproceedings{ feng2024multiview, title={Multi-view Clustering Based on Deep Non-negative Tensor Factorization}, author={Wei Feng and Dongyuan Wei and Qianqian Wang and Bo Dong and Quanxue Gao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tH3skJSNfH} }
Multi-view clustering (MVC) methods based on non-negative matrix factorization (NMF) have gained popularity owing to their ability to provide interpretable clustering results. However, these NMF-based MVC methods generally process each view independently and thus ignore the potential relationship between views. Besides, they are limited in their ability to capture nonlinear data structures. To overcome these weaknesses and inspired by deep learning, we propose a multi-view clustering method based on deep non-negative tensor factorization (MVC-DNTF). With deep tensor factorization, our method can well exploit the spatial structure of the original data and is capable of extracting deeper, nonlinear features embedded in different views. To further extract the complementary information of different views, we adopt the weighted tensor Schatten $p$-norm regularization term. An optimization algorithm is developed to effectively solve the MVC-DNTF objective. Extensive experiments are performed to demonstrate the effectiveness and superiority of our method.
Multi-view Clustering Based on Deep Non-negative Tensor Factorization
[ "Wei Feng", "Dongyuan Wei", "Qianqian Wang", "Bo Dong", "Quanxue Gao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tCzAHdbLRn
@inproceedings{ espositi2024the, title={The Room: design and embodiment of spaces as social beings}, author={Federico Espositi and Andrea Bonarini}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=tCzAHdbLRn} }
This paper delves into the exploration of spaces as non-anthropomorphic avatars. We are investigating interaction with entities showing features different from humans’, to understand how they can be embodied as avatars and perceived as living, social beings. To push this investigation to its limit, we have designed as an avatar an interactive space (the Room), that challenges both the anthropomorphic structure, and most of the social interaction mechanisms we are used to. We introduce a pilot framework for the Room design, addressing challenges related to its body, perception, and interaction process. We present an implementation of the framework as an interactive installation, namely a real-time, two-player, VR experience, featuring the Room avatar, with a focus on haptic feedback as the main means of perception for the subject embodying the Room. By radically challenging anthropomorphism, we seek to investigate the most basic aspects of embodiment and social cognition.
The Room: design and embodiment of spaces as social beings
[ "Federico Espositi", "Andrea Bonarini" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=t9iptO7Y9p
@inproceedings{ niu2024multimodal, title={Multimodal Multi-turn Conversation Stance Detection: A Challenge Dataset and Effective Model}, author={Fuqiang Niu and Zebang Cheng and Xianghua Fu and Xiaojiang Peng and Genan Dai and Yin Chen and Hu Huang and Bowen Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=t9iptO7Y9p} }
Stance detection, which aims to identify public opinion towards specific targets using social media data, is an important yet challenging task. With the proliferation of diverse multimodal social media content, including text and images, multimodal stance detection (MSD) has become a crucial research area. However, existing MSD studies have focused on modeling stance within individual text-image pairs, overlooking the multi-party conversational contexts that naturally occur on social media. This limitation stems from a lack of datasets that authentically capture such conversational scenarios, hindering progress in conversational MSD. To address this, we introduce a new multimodal multi-turn conversational stance detection dataset (called MmMtCSD). To derive stances from this challenging dataset, we propose a novel multimodal large language model stance detection framework (MLLM-SD) that learns joint stance representations from textual and visual modalities. Experiments on MmMtCSD show state-of-the-art performance of our proposed MLLM-SD approach for multimodal stance detection. We believe that MmMtCSD will contribute to advancing real-world applications of stance detection research.
Multimodal Multi-turn Conversation Stance Detection: A Challenge Dataset and Effective Model
[ "Fuqiang Niu", "Zebang Cheng", "Xianghua Fu", "Xiaojiang Peng", "Genan Dai", "Yin Chen", "Hu Huang", "Bowen Zhang" ]
Conference
oral
2409.00597
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=t8kqeE1Pkq
@inproceedings{ ye2024dualpath, title={Dual-path Collaborative Generation Network for Emotional Video Captioning}, author={Cheng Ye and Weidong Chen and Jingyu Li and Lei Zhang and Zhendong Mao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=t8kqeE1Pkq} }
Emotional Video Captioning (EVC) is an emerging task that aims to describe factual content with the intrinsic emotions expressed in videos. The essence of the EVC task is to effectively perceive subtle and ambiguous visual emotional cues during caption generation, which is neglected by traditional video captioning. Existing emotional video captioning methods first perceive global visual emotional cues and then combine them with the video features to guide emotional caption generation, which neglects two characteristics of the EVC task. Firstly, these methods neglect the dynamic subtle changes in the intrinsic emotions of the video, which makes it difficult to meet the needs of common scenes with diverse and changeable emotions. Secondly, as these methods incorporate emotional cues into each step, the guidance role of emotion is overemphasized, so that factual content is more or less ignored during generation. To this end, we propose a dual-path collaborative generation network, which dynamically perceives the evolution of visual emotional cues while generating emotional captions through collaborative learning. The two paths promote each other and significantly improve the generation performance. Specifically, in the dynamic emotion perception path, we propose a dynamic emotion evolution module, which first aggregates visual features and historical caption features to summarize the global visual emotional cues, and then dynamically selects the emotional cues to be re-composed at each stage and re-composes them to achieve emotion evolution by dynamically enhancing or suppressing the semantics of subspaces at different granularities. Besides, in the adaptive caption generation path, to balance the description of factual content and emotional cues, we propose an emotion adaptive decoder, which first estimates emotion intensity via the alignment of emotional features and historical caption features at each generation step, and then adaptively incorporates emotional guidance into caption generation based on the emotion intensity. Thus, our method can generate emotion-related words at the necessary time steps, and our caption generation balances the guidance of factual content and emotional cues well. Extensive experiments on three challenging datasets demonstrate the superiority of our approach and each proposed module.
Dual-path Collaborative Generation Network for Emotional Video Captioning
[ "Cheng Ye", "Weidong Chen", "Jingyu Li", "Lei Zhang", "Zhendong Mao" ]
Conference
oral
2408.03006
[ "https://github.com/kyrieye/MM-2024" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=t87LMw4CpY
@inproceedings{ li2024aerialgait, title={AerialGait: Bridging Aerial and Ground Views for Gait Recognition}, author={Aoqi Li and Saihui Hou and Chenye Wang and Qingyuan Cai and Yongzhen Huang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=t87LMw4CpY} }
In this work, we present AerialGait, a comprehensive dataset for aerial-ground gait recognition. This dataset comprises 82,454 sequences totaling over 10 million frames from 533 subjects, captured from both aerial and ground perspectives. To align with real-life scenarios of aerial and ground surveillance, we utilize a drone and a ground surveillance camera for data acquisition. The drone is operated at various speeds, directions, and altitudes. Meanwhile, we conduct data collection across five diverse surveillance sites to ensure a comprehensive simulation of real-world settings. AerialGait has several unique features: 1) The gait sequences exhibit significant variations in views, resolutions, and illumination across five distinct scenes. 2) It incorporates challenges of motion blur and frame discontinuity due to drone mobility. 3) The dataset reflects the domain gap caused by the view disparity between aerial and ground views, presenting a realistic challenge for drone-based gait recognition. Moreover, we perform a comprehensive analysis of existing gait recognition methods on AerialGait dataset and propose the Aerial-Ground Gait Network (AGG-Net). AGG-Net effectively learns discriminative features from aerial views by uncertainty learning and clusters features across aerial and ground views through prototype learning. Our model achieves state-of-the-art performance on both AerialGait and DroneGait datasets. The dataset and code will be made available upon acceptance.
AerialGait: Bridging Aerial and Ground Views for Gait Recognition
[ "Aoqi Li", "Saihui Hou", "Chenye Wang", "Qingyuan Cai", "Yongzhen Huang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=t6Tvv5SEVs
@inproceedings{ wang2024harmfully, title={Harmfully Manipulated Images Matter in Multimodal Misinformation Detection}, author={Bing Wang and Shengsheng Wang and Changchun Li and Renchu Guan and Ximing Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=t6Tvv5SEVs} }
Nowadays, misinformation is widely spreading over various social media platforms and causes extremely negative impacts on society. To combat this issue, automatically identifying misinformation, especially that containing multimodal content, has attracted growing attention from the academic and industrial communities, and induced an active research topic named Multimodal Misinformation Detection (MMD). Typically, existing MMD methods capture the semantic correlation and inconsistency between multiple modalities, but neglect some potential clues in multimodal content. Recent studies suggest that manipulated traces of the images in articles are non-trivial clues for detecting misinformation. Meanwhile, we find that the underlying intentions behind the manipulation, e.g., harmful and harmless, also matter in MMD. Accordingly, in this work, we propose to detect misinformation by learning manipulation features that indicate whether the image has been manipulated, as well as intention features regarding the harmful and harmless intentions of the manipulation. Unfortunately, the manipulation and intention labels that make these features discriminative are unknown. To overcome the problem, we propose two weakly supervised signals as alternatives by introducing additional datasets on image manipulation detection and formulating two classification tasks as positive and unlabeled learning problems. Based on these ideas, we propose a novel MMD method, namely Harmfully Manipulated Images Matter in MMD (MANI-M$^3$D). Extensive experiments across three benchmark datasets demonstrate that MANI-M$^3$D can consistently improve the performance of any MMD baseline.
Harmfully Manipulated Images Matter in Multimodal Misinformation Detection
[ "Bing Wang", "Shengsheng Wang", "Changchun Li", "Renchu Guan", "Ximing Li" ]
Conference
poster
2407.19192
[ "https://github.com/wangbing1416/hami-m3d" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=t3HHfWsPqo
@inproceedings{ ma2024a, title={A Coarse to Fine Detection Method for Prohibited Object in X-ray Images Based on Progressive Transformer Decoder}, author={Chunjie Ma and Lina Du and Zan Gao and Li Zhuo and Meng Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=t3HHfWsPqo} }
Currently, Transformer-based prohibited object detection methods for X-ray images appear constantly, but there are still shortcomings such as poor performance and high computational complexity for prohibited object detection under heavy occlusion. Therefore, a coarse to fine detection method for prohibited object in X-ray images based on a progressive Transformer decoder is proposed in this paper. Firstly, a coarse to fine framework is proposed, which includes two stages: coarse detection and fine detection. Through adaptive inference in stages, the computational efficiency of the model is effectively improved. Then, a position and class object queries method is proposed, which improves the convergence speed and detection accuracy of the model by fusing the position and class information of prohibited objects with object queries. Finally, a progressive Transformer decoder is proposed, which distinguishes high- and low-score queries by increasing confidence thresholds, so that high-score queries are not affected by low-score queries in the decoding stage, and the model can focus more on decoding low-score queries, which usually correspond to prohibited objects with severe occlusion. Experimental results on three public benchmark datasets (SIXray, OPIXray, HiXray) demonstrate that, compared with the baseline DETR, the proposed method achieves state-of-the-art detection accuracy with a 21.6% reduction in model computational complexity. In particular, prohibited objects with heavy occlusion can be detected accurately.
A Coarse to Fine Detection Method for Prohibited Object in X-ray Images Based on Progressive Transformer Decoder
[ "Chunjie Ma", "Lina Du", "Zan Gao", "Li Zhuo", "Meng Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=svsmKvkzFk
@inproceedings{ li2024minerva, title={Minerva: Enhancing Quantum Network Performance for High-Fidelity Multimedia Transmission}, author={Tingting Li and Ziming Zhao and Jianwei Yin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=svsmKvkzFk} }
Quantum networks have the potential to transmit multimedia data with high security and efficiency. However, ensuring high-fidelity transmission links remains a significant challenge. This study proposes a novel framework to enhance quantum network performance via link selection and transport strategy. Specifically, we formalize the quantum fidelity estimation and link selection as a best-arm identification problem, leverage median elimination to estimate fidelity and select the quantum link for each multimedia chunk transmission. To optimize the transmission of multimedia chunks in a quantum network, we can employ the scheduling strategy to maximize the cumulative benefit of chunk transmissions while considering the fidelity of the links and the overall network utilization. Through extensive experiments, our proposal demonstrates significant advantages. Compared to the randomized method, Minerva reduces bounce number and execution time by 12% ∼ 28% and 8% ∼ 32%, respectively, while improving average fidelity by 15%. Compared with the uniformly distributed method, our approach decreases bounce number by 24% ∼ 30% and execution time by 8% ∼ 32% and enhances average fidelity by 11% ∼ 21%.
Minerva: Enhancing Quantum Network Performance for High-Fidelity Multimedia Transmission
[ "Tingting Li", "Ziming Zhao", "Jianwei Yin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ssfU4xDtkQ
@inproceedings{ zhang2024captionaware, title={Caption-Aware Multimodal Relation Extraction with Mutual Information Maximization}, author={zefan zhang and Weiqi Zhang and yanhui li and bai tian}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=ssfU4xDtkQ} }
Multimodal Relation Extraction (MRE) has achieved great progress. However, modern MRE models are easily affected by irrelevant objects during multimodal alignment, a problem known as the error sensitivity issue. The main reason is that visual features are not fully aligned with textual features, and the reasoning process may suppress redundant and noisy information at the risk of losing critical information. In light of this, we propose a Caption-Aware Multimodal Relation Extraction Network with Mutual Information Maximization (CAMIM). Specifically, we first generate detailed image captions through a Large Language Model (LLM). Then, the Caption-Aware Module (CAM) hierarchically aligns the fine-grained visual entities and textual entities for reasoning. In addition, to preserve crucial information within different modalities, we leverage a Mutual Information Maximization method to regulate the multimodal reasoning module. Experiments show that our model outperforms state-of-the-art MRE models on the benchmark dataset MNRE. Further ablation studies demonstrate the pluggability and effectiveness of our Caption-Aware Module and mutual information maximization method. Our code will be made public soon.
Caption-Aware Multimodal Relation Extraction with Mutual Information Maximization
[ "zefan zhang", "Weiqi Zhang", "yanhui li", "bai tian" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=srGZPS1rsN
@inproceedings{ xie2024qptv, title={{QPT}-V2: Masked Image Modeling Advances Visual Scoring}, author={Qizhi Xie and Kun Yuan and Yunpeng Qu and Mingda Wu and Ming Sun and Chao Zhou and Jihong Zhu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=srGZPS1rsN} }
Quality assessment and aesthetics assessment aim to evaluate the perceived quality and aesthetics of visual content. Current learning-based methods suffer greatly from the scarcity of labeled data and usually perform sub-optimally in terms of generalization. Although masked image modeling (MIM) has achieved noteworthy advancements across various high-level tasks (e.g., classification and detection), in this work we take on a novel perspective to investigate its capabilities in terms of quality- and aesthetics-awareness. To this end, we propose Quality- and aesthetics-aware PreTraining (QPT V2), the first pretraining framework based on MIM that offers a unified solution for quality and aesthetics assessment. Specifically, QPT V2 incorporates the following key designs: to perceive high-level semantics and fine-grained details, pretraining data is curated; to comprehensively encompass quality- and aesthetics-related factors, degradation is introduced; to capture multi-scale quality and aesthetics information, the model structure is modified. Extensive experimental results on 11 downstream benchmarks clearly show the superior performance of QPT V2 in comparison with current state-of-the-art approaches and other pretraining paradigms.
QPT-V2: Masked Image Modeling Advances Visual Scoring
[ "Qizhi Xie", "Kun Yuan", "Yunpeng Qu", "Mingda Wu", "Ming Sun", "Chao Zhou", "Jihong Zhu" ]
Conference
poster
[ "https://github.com/keichitse/qpt-v2" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=slWlr4WYGt
@inproceedings{ xiaochen2024tsilmclass, title={{TS}-{ILM}: Class Incremental Learning for Online Action Detection}, author={Li Xiaochen and Jian Cheng and Ziying Xia and Zichong Chen and Junhao Shi and Zhicheng Dong and Nyima Tashi}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=slWlr4WYGt} }
Online action detection aims to identify ongoing actions within untrimmed video streams, with extensive applications in real-life scenarios. However, in practical applications, video frames are received sequentially over time and new action categories continually emerge, giving rise to the challenge of catastrophic forgetting - a problem that remains inadequately explored. Generally, in the field of video understanding, researchers address catastrophic forgetting through class-incremental learning. Nevertheless, online action detection is based solely on historical observations, thus demanding higher temporal modeling capabilities for class-incremental learning methods. In this paper, we conceptualize this task as Class-Incremental Online Action Detection (CIOAD) and propose a novel framework, TS-ILM, to address it. Specifically, TS-ILM consists of two components: task-level temporal pattern extractor and temporal-sensitive exemplar selector. The former extracts the temporal patterns of actions in different tasks and saves them, allowing the data to be comprehensively observed on a temporal level before it is input into the backbone. The latter selects a set of frames with the highest causal relevance and minimum information redundancy for subsequent replay, enabling the model to learn the temporal information of previous tasks more effectively. We benchmark our approach against SoTA class-incremental learning methods applied in the image and video domains on THUMOS'14 and TVSeries datasets. Our method outperforms the previous approaches.
TS-ILM: Class Incremental Learning for Online Action Detection
[ "Li Xiaochen", "Jian Cheng", "Ziying Xia", "Zichong Chen", "Junhao Shi", "Zhicheng Dong", "Nyima Tashi" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sk7BjjLIBK
@inproceedings{ wu2024knowledgeaware, title={Knowledge-Aware Artifact Image Synthesis with {LLM}-Enhanced Prompting and Multi-Source Supervision}, author={Shengguang Wu and Zhenglun Chen and Qi Su}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=sk7BjjLIBK} }
Ancient artifacts are an important medium for cultural preservation and restoration. However, many physical copies of artifacts are either damaged or lost, leaving a blank space in archaeological and historical studies that calls for artifact image generation techniques. Despite the significant advancements in open-domain text-to-image synthesis, existing approaches fail to capture the important domain knowledge presented in the textual description, resulting in errors in recreated images such as incorrect shapes and patterns. In this paper, we propose a novel knowledge-aware artifact image synthesis approach that brings lost historical objects accurately into their visual forms. We use a pretrained diffusion model as backbone and introduce three key techniques to enhance the text-to-image generation framework: 1) we construct prompt with explicit archeological knowledge elicited from large language models (LLMs); 2) we incorporate additional textual guidance to correlated historical expertise in a contrastive manner; 3) we introduce further visual-semantic constraints on edge and perceptual features that enable our model to learn more intricate visual details of the artifacts. Compared to existing approaches, our proposed model produces higher-quality artifact images that align better with the implicit details and historical knowledge contained within written literature.
Knowledge-Aware Artifact Image Synthesis with LLM-Enhanced Prompting and Multi-Source Supervision
[ "Shengguang Wu", "Zhenglun Chen", "Qi Su" ]
Conference
poster
2312.08056
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=shcoBmaF3s
@inproceedings{ hu2024finegrained, title={Fine-Grained Prompt Learning for Face Anti-Spoofing}, author={Xueli Hu and Huan Liu and Haocheng Yuan and Zhiyang Fu and Yizhi Luo and Ning Zhang and Hang Zou and Gan Jianwen and Yuan Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=shcoBmaF3s} }
There has been an increasing focus from researchers on Domain-Generalized (DG) Face Anti-Spoofing (FAS). However, existing methods aim to project a shared visual space through adversarial training, making it difficult to explore the space without losing semantic information. We investigate the inadequacies of DG that result from classifier overfitting to a significantly different domain distribution. To address this issue, we propose a novel Fine-Grained Prompt Learning (FGPL) based on Vision-Language Models (VLMs), such as CLIP, which can adaptively adjust weights for classifiers with text features to mitigate overfitting. Specifically, FGPL first motivates the prompts to learn content and domain semantic information by capturing Domain-Agnostic and Domain-Specific features. Furthermore, our prompts are designed to be category-generalized by diversifying the Domain-Specific prompts. Additionally, we design an Adaptive Convolutional Adapter (AC-adapter), which is implemented through an adaptive combination of Vanilla Convolution and Central Difference Convolution, to be inserted into the image encoder for quickly bridging the gap between general image recognition and FAS task. Extensive experiments demonstrate that the proposed FGPL is effective and outperforms state-of-the-art methods on several cross-domain datasets.
Fine-Grained Prompt Learning for Face Anti-Spoofing
[ "Xueli Hu", "Huan Liu", "Haocheng Yuan", "Zhiyang Fu", "Yizhi Luo", "Ning Zhang", "Hang Zou", "Gan Jianwen", "Yuan Zhang" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=shDRfGVRHP
@inproceedings{ cai2024multidan, title={Multi{DAN}: Unsupervised, Multistage, Multisource and Multitarget Domain Adaptation for Semantic Segmentation of Remote Sensing Images}, author={Yuxiang Cai and Yongheng Shang and Jianwei Yin}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=shDRfGVRHP} }
Unsupervised domain adaptation (UDA) has been a crucial approach to cross-domain semantic segmentation of remote sensing images and has achieved notable advances. However, most existing efforts focus on single-source, single-target domain adaptation, which does not explicitly consider the serious domain shift between multiple source and target domains in real applications, especially the inter-domain shift between various target domains and the intra-domain shift within each target domain. In this paper, to address simultaneous inter-domain shift and intra-domain shift for multiple target domains, we propose a novel unsupervised, multistage, multisource and multitarget domain adaptation network (MultiDAN), which involves multisource and multitarget domain adaptation (MSMTDA), entropy-based clustering (EC) and multistage domain adaptation (MDA). Specifically, MSMTDA learns multiple feature-level adversarial strategies to alleviate the complex domain shift between multiple target and source domains. Then, EC clusters the various target domains into multiple subdomains based on the entropy of the target predictions of MSMTDA. Besides, we propose a new pseudo label update strategy (PLUS) to dynamically produce more accurate pseudo labels for MDA. Finally, MDA aligns the clean subdomains, including pseudo labels generated by PLUS, with other noisy subdomains in the output space via the proposed multistage adaptation algorithm (MAA). Extensive experiments on benchmark remote sensing datasets highlight the superiority of our MultiDAN over recent state-of-the-art UDA methods.
MultiDAN: Unsupervised, Multistage, Multisource and Multitarget Domain Adaptation for Semantic Segmentation of Remote Sensing Images
[ "Yuxiang Cai", "Yongheng Shang", "Jianwei Yin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sdF3MuyHtz
@inproceedings{ tong2024mmdfnd, title={{MMDFND}: Multi-modal Multi-Domain Fake News Detection}, author={Yu Tong and Weihai Lu and Zhe Zhao and Song Lai and Tong Shi}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=sdF3MuyHtz} }
Recently, automatic multi-domain fake news detection has attracted widespread attention. Many methods achieve domain adaptation by modeling domain category gate networks and domain-invariant features. However, existing multi-domain fake news detection faces three main challenges: (1) Inter-domain modal semantic deviation, where similar texts and images carry different meanings across various domains. (2) Inter-domain modal dependency deviation, where the dependence on different modalities varies across domains. (3) Inter-domain knowledge dependency deviation, where the reliance on cross-domain knowledge and domain-specific knowledge differs across domains. To address these issues, we propose a Multi-modal Multi-Domain Fake News Detection Model (MMDFND). MMDFND incorporates domain embeddings and attention mechanisms into a progressive hierarchical extraction network to achieve domain-adaptive domain-related knowledge extraction. Furthermore, MMDFND utilizes Stepwise Pivot Transformer networks and adaptive instance normalization to effectively utilize information from different modalities and domains. We validate the effectiveness of MMDFND through comprehensive comparative experiments on two real-world datasets and conduct ablation experiments to verify the effectiveness of each module, achieving state-of-the-art results on both datasets. The source code is available at https://github.com/yutchina/MMDFND.
MMDFND: Multi-modal Multi-Domain Fake News Detection
[ "Yu Tong", "Weihai Lu", "Zhe Zhao", "Song Lai", "Tong Shi" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sdCebYAz3L
@inproceedings{ zheng2024resvg, title={Res{VG}: Enhancing Relation and Semantic Understanding in Multiple Instances for Visual Grounding}, author={Minghang Zheng and Jiahua Zhang and Qingchao Chen and Yuxin Peng and Yang Liu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=sdCebYAz3L} }
Visual grounding aims to localize the object referred to in an image based on a natural language query. Although progress has been made recently, accurately localizing target objects within multiple-instance distractions (multiple objects of the same category as the target) remains a significant challenge. Existing methods demonstrate a significant performance drop when there are multiple distractions in an image, indicating an insufficient understanding of the fine-grained semantics and spatial relationships between objects. In this paper, we propose a novel approach, the Relation and Semantic-sensitive Visual Grounding (ResVG) model, to address this issue. Firstly, we enhance the model's understanding of fine-grained semantics by injecting semantic prior information derived from text queries into the model. This is achieved by leveraging text-to-image generation models to produce images representing the semantic attributes of target objects described in queries. Secondly, we tackle the lack of training samples with multiple distractions by introducing a relation-sensitive data augmentation method. This method generates additional training data by synthesizing images containing multiple objects of the same category and pseudo queries based on their spatial relationships. The proposed ResVG model significantly improves the model's ability to comprehend both object semantics and spatial relations, leading to enhanced performance in visual grounding tasks, particularly in scenarios with multiple-instance distractions. We conduct extensive experiments to validate the effectiveness of our methods on five datasets.
ResVG: Enhancing Relation and Semantic Understanding in Multiple Instances for Visual Grounding
[ "Minghang Zheng", "Jiahua Zhang", "Qingchao Chen", "Yuxin Peng", "Yang Liu" ]
Conference
poster
2408.16314
[ "https://github.com/minghangz/resvg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sZn9Hq0mIQ
@inproceedings{ feng2024cpprompt, title={{CP}-Prompt: Composition-Based Cross-modal Prompting for Domain-Incremental Continual Learning}, author={Yu Feng and Zhen Tian and Yifan Zhu and Zongfu Han and Haoran Luo and Guangwei Zhang and Meina Song}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=sZn9Hq0mIQ} }
The key challenge of cross-modal domain-incremental learning (DIL) is to enable the learning model to continuously learn from novel data with different feature distributions under the same task without forgetting old ones. However, existing top-performing methods still suffer from high forgetting rates due to the lack of intra-domain knowledge extraction and an inter-domain common prompting strategy. In this paper, we propose a simple yet effective framework, CP-Prompt, which trains limited parameters to instruct a pre-trained model to learn new domains and avoid forgetting existing feature distributions. CP-Prompt captures intra-domain knowledge by compositionally inserting personalized prompts on multi-head self-attention layers and then learns the inter-domain knowledge with a common prompting strategy. CP-Prompt shows superiority compared with state-of-the-art baselines on three widely evaluated DIL tasks. The source code is available at https://anonymous.4open.science/r/CP_Prompt-C126.
CP-Prompt: Composition-Based Cross-modal Prompting for Domain-Incremental Continual Learning
[ "Yu Feng", "Zhen Tian", "Yifan Zhu", "Zongfu Han", "Haoran Luo", "Guangwei Zhang", "Meina Song" ]
Conference
poster
2407.21043
[ "https://github.com/dannis97500/cp_prompt" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sU16011qnm
@inproceedings{ yu2024semanticaware, title={Semantic-aware Next-Best-View for Multi-DoFs Mobile System in Search-and-Acquisition based Visual Perception}, author={Xiaotong Yu and Chang Wen Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=sU16011qnm} }
Efficient visual perception using mobile systems is crucial, particularly in unknown environments such as search and rescue operations, where swift and comprehensive perception of objects of interest is essential. In such real-world applications, objects of interest are often situated in complex environments, making the selection of the 'Next Best' view based solely on maximizing visibility gain suboptimal. Semantics, providing a higher-level interpretation of perception, should significantly contribute to the selection of the next viewpoint for various perception tasks. In this study, we formulate a novel information gain that integrates both visibility gain and semantic gain in a unified form to select the semantic-aware Next-Best-View. Additionally, we design an adaptive strategy with a termination criterion to support a two-stage search-and-acquisition manoeuvre on multiple objects of interest aided by a multi-degree-of-freedom (Multi-DoFs) mobile system. Several semantically relevant reconstruction metrics, including perspective directivity and region of interest (ROI)-to-full reconstruction volume ratio, are introduced to evaluate the performance of the proposed approach. Simulation experiments demonstrate the advantages of the proposed approach over existing methods, achieving improvements of up to 27.13\% in the ROI-to-full reconstruction volume ratio and an average perspective directivity of 0.88234. Furthermore, the planned motion trajectory exhibits better perception coverage of the target.
Semantic-aware Next-Best-View for Multi-DoFs Mobile System in Search-and-Acquisition based Visual Perception
[ "Xiaotong Yu", "Chang Wen Chen" ]
Conference
poster
2404.16507
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sR7BptgSaw
@inproceedings{ huang2024correlationdriven, title={Correlation-Driven Multi-Modality Graph Decomposition for Cross-Subject Emotion Recognition}, author={Wuliang Huang and Yiqiang Chen and Xinlong Jiang and Chenlong Gao and Qian Chen and Teng Zhang and Bingjie Yan and Yifan Wang and Jianrong Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=sR7BptgSaw} }
Multi-modality physiological signal-based emotion recognition has attracted increasing attention as its capacity to capture human affective states comprehensively. Due to multi-modality heterogeneity and cross-subject divergence, practical applications struggle with generalizing models across individuals. Effectively addressing both issues requires mitigating the gap between multi-modality signals while acquiring generalizable representations across subjects. However, existing approaches often handle these dual challenges separately, resulting in suboptimal generalization. This study introduces a novel framework, termed Correlation-Driven Multi-Modality Graph Decomposition (CMMGD). The proposed CMMGD initially captures adaptive cross-modal correlations to connect each unimodal graph to a multi-modality mixed graph. To simultaneously address the dual challenges, it incorporates a correlation-driven graph decomposition module that decomposes the mixed graph into concordant and discrepant subgraphs based on the correlations. The decomposed concordant subgraph encompasses consistently activated features across modalities and subjects during emotion elicitation, unveiling a generalizable subspace. Additionally, we design a Multi-Modality Graph Regularized Transformer (MGRT) backbone specifically tailored for multimodal physiological signals. The MGRT can alleviate the over-smoothing issue and mitigate over-reliance on any single modality. Extensive experiments demonstrate that CMMGD outperforms the state-of-the-art methods by 1.79% and 2.65% on DEAP and MAHNOB-HCI datasets, respectively, under the leave-one-subject-out cross-validation strategy.
Correlation-Driven Multi-Modality Graph Decomposition for Cross-Subject Emotion Recognition
[ "Wuliang Huang", "Yiqiang Chen", "Xinlong Jiang", "Chenlong Gao", "Qian Chen", "Teng Zhang", "Bingjie Yan", "Yifan Wang", "Jianrong Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sPE1CQUzCE
@inproceedings{ wen2024depthcloak, title={DepthCloak: Projecting Optical Camouflage Patches for Erroneous Monocular Depth Estimation of Vehicles}, author={Huixiang Wen and Shan Chang and Shizong Yan and Jie Xu and Hongzi Zhu and Yanting Zhang and Bo Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=sPE1CQUzCE} }
Adhesive adversarial patches have been commonly used in attacks against the computer vision task of monocular depth estimation (MDE). Compared to physical patches permanently attached to target objects, optical projection patches show great flexibility and have gained wide research attention. However, applying digital patches for direct projection may lead to partial blurring or omission of details in the captured patches, attributed to high information density, surface depth discrepancies, and non-uniform pixel distribution. To address these challenges, in this work we introduce DepthCloak, an adversarial optical patch designed to interfere with the MDE of vehicles. To this end, we first simplify the patch to a gray pattern because the projected ``black-and-white light'' has strong robustness to ambient light. We propose a GAN-based approach to simulate projections and deduce a projectable list. Then, we employ neighborhood averaging to fill sparse depth values, compress all depth values into a reduced dynamic range via nonlinear mapping, and use these values to adjust the Gaussian blur radius as weight parameters, thereby simulating depth variation effects. Finally, by integrating Moiré patterns and applying style transfer techniques, we customize adversarial patches featuring regularly arranged characteristics. We deploy DepthCloak in real driving scenarios, and extensive experiments demonstrate that DepthCloak can achieve depth errors of over 9 meters in both bright and night-time conditions while achieving an attack success rate of over 80\% in the physical world.
DepthCloak: Projecting Optical Camouflage Patches for Erroneous Monocular Depth Estimation of Vehicles
[ "Huixiang Wen", "Shan Chang", "Shizong Yan", "Jie Xu", "Hongzi Zhu", "Yanting Zhang", "Bo Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sOxLJfQYI0
@inproceedings{ li2024grefine, title={G-Refine: A General Refiner for Text-to-Image Generation}, author={Chunyi Li and Haoning Wu and Hongkun Hao and Zicheng Zhang and Tengchuan Kou and Chaofeng Chen and Xiaohong Liu and LEI BAI and Weisi Lin and Guangtao Zhai}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=sOxLJfQYI0} }
With the evolution of Text-to-Image (T2I) models, the quality defects of AI-Generated Images (AIGIs) pose a significant barrier to their widespread adoption. In terms of both perception and alignment, existing models cannot always guarantee high-quality results. To mitigate this limitation, we introduce G-Refine, a general image quality refiner designed to enhance low-quality images without compromising the integrity of high-quality ones. The model is composed of three interconnected modules: a perception quality indicator, an alignment quality indicator, and a general quality enhancement module. Based on the mechanisms of the Human Visual System (HVS) and syntax trees, the first two indicators can respectively identify the perception and alignment deficiencies, and the last module can apply targeted quality enhancement accordingly. Extensive experimentation reveals that when compared to alternative optimization methods, AIGIs after G-Refine outperform in 10+ quality metrics across 4 datasets. This improvement significantly contributes to the practical application of contemporary T2I models, paving the way for their broader adoption.
G-Refine: A General Refiner for Text-to-Image Generation
[ "Chunyi Li", "Haoning Wu", "Hongkun Hao", "Zicheng Zhang", "Tengchuan Kou", "Chaofeng Chen", "Xiaohong Liu", "LEI BAI", "Weisi Lin", "Guangtao Zhai" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sMYO4S5tZo
@inproceedings{ jia2024purified, title={Purified Distillation: Bridging Domain Shift and Category Gap in Incremental Object Detection}, author={Shilong Jia and Tingting WU and Yingying Fang and Tieyong Zeng and Guixu Zhang and Zhi Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=sMYO4S5tZo} }
Incremental Object Detection (IOD) simulates the dynamic data flow in real-world applications, which require detectors to learn new classes or adapt to domain shifts while retaining knowledge from previous tasks. Most existing IOD methods focus only on class incremental learning, assuming all data comes from the same domain. However, this is hardly achievable in practical applications, as images collected under different conditions often exhibit completely different characteristics, such as lighting, weather, style, etc. Class IOD methods suffer from severe performance degradation in these scenarios with domain shifts. To bridge domain shifts and category gaps in IOD, we propose Purified Distillation (PD), where we use a set of trainable queries to transfer the teacher's attention on old tasks to the student and adopt the gradient reversal layer to guide the student to learn the teacher's feature space structure from a micro perspective. This strategy further explores the features extracted by the teacher during incremental learning, which has not been extensively studied in previous works. Meanwhile, PD combines classification confidence with localization confidence to purify the most meaningful output nodes, so that the student model inherits a more comprehensive teacher knowledge. Extensive experiments across various IOD settings on six widely used datasets show that PD significantly outperforms state-of-the-art methods. Even after five steps of incremental learning, our method can preserve 60.6\% mAP on the first task, while compared methods can only maintain up to 55.9\%.
Purified Distillation: Bridging Domain Shift and Category Gap in Incremental Object Detection
[ "Shilong Jia", "Tingting WU", "Yingying Fang", "Tieyong Zeng", "Guixu Zhang", "Zhi Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sJUNSQbYpV
@inproceedings{ wang2024wisdom, title={WisdoM: Improving Multimodal Sentiment Analysis by Fusing Contextual World Knowledge}, author={Wenbin Wang and Liang Ding and Li Shen and Yong Luo and Han Hu and Dacheng Tao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=sJUNSQbYpV} }
Multimodal Sentiment Analysis (MSA) focuses on leveraging multimodal signals for understanding human sentiment. Most of the existing works rely on superficial information, neglecting the incorporation of contextual world knowledge (e.g., background information derived from but beyond the given image and text pairs), thereby restricting their ability to achieve better multimodal sentiment analysis (MSA). In this paper, we propose a plug-in framework named WisdoM, to leverage the contextual world knowledge induced from the large vision-language models (LVLMs) for enhanced MSA. WisdoM utilizes LVLMs to comprehensively analyze both images and corresponding texts, simultaneously generating pertinent context. Besides, to reduce the noise in the context, we design a training-free contextual fusion mechanism. We evaluate our WisdoM in both the aspect-level and sentence-level MSA tasks on the Twitter2015, Twitter2017, and MSED datasets. Experiments on three MSA benchmarks upon several advanced LVLMs, show that our approach brings consistent and significant improvements (up to +6.3% F1 score).
WisdoM: Improving Multimodal Sentiment Analysis by Fusing Contextual World Knowledge
[ "Wenbin Wang", "Liang Ding", "Li Shen", "Yong Luo", "Han Hu", "Dacheng Tao" ]
Conference
poster
2401.06659
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sIwZ6TIn0P
@inproceedings{ zhang2024mpt, title={{MPT}: Multi-grained Prompt Tuning for Text-Video Retrieval}, author={Haonan Zhang and Pengpeng Zeng and Lianli Gao and Jingkuan Song and Heng Tao Shen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=sIwZ6TIn0P} }
Recently, significant advancements have been made in supporting text-video retrieval by transferring large-scale image-text pre-training models through model adaptation, i.e., full fine-tuning, or prompt tuning, a parameter-efficient fine-tuning strategy. While full fine-tuning involves high computational costs, particularly with increasing model size, prompt tuning offers greater flexibility and efficiency by adjusting only a few learnable parameters. However, current prompt tuning methods rely on coarse visual and textual cues for text-video retrieval task, neglecting the domain-specific features when performing the adaptation. This approach may lead to sub-optimal performance due to the incorporation of irrelevant and indiscriminate knowledge. To address such an issue, we present a Multi-grained Prompt Tuning (MPT) for text-video retrieval, that designs a variety of specific prompts to effectively explore semantic interaction across different modalities with diverse granularity. Specifically, we devise a multi-grained video encoder that employs spatial, temporal, and global prompts to transfer the base-generic knowledge from the image-text pre-trained model while comprehensively excavating determinative video-specific characteristics. Meanwhile, we introduce a novel multi-grained text encoder aimed at capturing various levels of textual clues through the utilization of word and phrase prompts. Extensive experiments on four benchmark datasets, i.e., MSR-VTT, ActivityNet, DiDeMo, and LSMDC, demonstrate that MPT achieves outstanding performance, surpassing state-of-the-art methods with negligible computational cost. The codebase is publicly available at: https://github.com/zchoi/MPT.
MPT: Multi-grained Prompt Tuning for Text-Video Retrieval
[ "Haonan Zhang", "Pengpeng Zeng", "Lianli Gao", "Jingkuan Song", "Heng Tao Shen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sA2a5a5O4g
@inproceedings{ zheng2024rethinking, title={Rethinking the Architecture Design for Efficient Generic Event Boundary Detection}, author={Ziwei Zheng and Zechuan Zhang and Yulin Wang and Shiji Song and Gao Huang and Le Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=sA2a5a5O4g} }
Generic event boundary detection (GEBD), inspired by human visual cognitive behaviors of consistently segmenting videos into meaningful temporal chunks, finds utility in various applications such as video editing and summarization. In this paper, we demonstrate that state-of-the-art GEBD models often prioritize final performance over model complexity, resulting in low inference speed and hindering efficient deployment in real-world scenarios. We contribute to addressing this challenge by experimentally reexamining the architecture of GEBD models and uncovering several surprising findings. Firstly, we reveal that a concise GEBD baseline model already achieves promising performance without any sophisticated design. Secondly, we find that the common design of GEBD models using image-domain backbones can contain plenty of architectural redundancy, motivating us to gradually “modernize” each component to enhance efficiency. Thirdly, we show that GEBD models using image-domain backbones that conduct spatiotemporal learning in a spatial-then-temporal greedy manner can suffer from a distraction issue, which might be the main source of inefficiency in GEBD. Using a video-domain backbone to jointly conduct spatiotemporal modeling for GEBD is an effective solution to this issue. The outcome of our exploration is a family of GEBD models, named EfficientGEBD, which significantly outperforms the previous SOTA methods by up to 1.7% in performance and a 280% practical speedup under the same backbone choice. Our research prompts the community to design modern GEBD methods with consideration of model complexity, particularly in resource-aware applications. The code is available at https://github.com/anonymous.
Rethinking the Architecture Design for Efficient Generic Event Boundary Detection
[ "Ziwei Zheng", "Zechuan Zhang", "Yulin Wang", "Shiji Song", "Gao Huang", "Le Yang" ]
Conference
poster
2407.12622
[ "https://github.com/ziwei-zheng/efficientgebd" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
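The EfficientGEBD abstract argues that a concise baseline already performs well on generic event boundary detection. As a hedged illustration of what the simplest possible boundary scorer looks like (a generic baseline, not the paper's EfficientGEBD model), the sketch below scores each frame transition by the cosine dissimilarity of adjacent frame features and keeps local maxima above an arbitrary threshold.

```python
import torch
import torch.nn.functional as F

def boundary_scores(frame_feats: torch.Tensor) -> torch.Tensor:
    """Score each frame transition by 1 - cosine similarity of adjacent
    frame features; peaks suggest generic event boundaries.
    frame_feats: (T, D) features from any per-frame backbone."""
    a, b = frame_feats[:-1], frame_feats[1:]
    return 1.0 - F.cosine_similarity(a, b, dim=-1)     # (T-1,)

def pick_boundaries(scores: torch.Tensor, threshold: float = 0.3) -> torch.Tensor:
    # keep local maxima above an (illustrative) threshold
    left = torch.cat([scores.new_tensor([-1.0]), scores[:-1]])
    right = torch.cat([scores[1:], scores.new_tensor([-1.0])])
    return torch.nonzero((scores > threshold) & (scores >= left) & (scores >= right)).squeeze(-1)

feats = torch.randn(64, 512)                            # stand-in per-frame features
print(pick_boundaries(boundary_scores(feats)))
```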
null
https://openreview.net/forum?id=s9WLhE2c75
@inproceedings{ li2024tagood, title={Tag{OOD}: A Novel Approach to Out-of-Distribution Detection via Vision-Language Representations and Class Center Learning}, author={Jinglun Li and Xinyu Zhou and Kaixun Jiang and Lingyi Hong and Pinxue Guo and Zhaoyu Chen and Weifeng Ge and Wenqiang Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=s9WLhE2c75} }
Multimodal fusion, leveraging data like vision and language, is rapidly gaining traction. This enriched data representation improves performance across various tasks. Existing methods for out-of-distribution (OOD) detection, a critical area where AI models encounter unseen data in real-world scenarios, rely heavily on whole-image features. These image-level features can include irrelevant information that hinders the detection of OOD samples, ultimately limiting overall performance. In this paper, we propose \textbf{TagOOD}, a novel approach for OOD detection that leverages vision-language representations to achieve label-free object feature decoupling from whole images. This decomposition enables a more focused analysis of object semantics, enhancing OOD detection performance. Subsequently, TagOOD trains a lightweight network on the extracted object features to learn representative class centers. These centers capture the central tendencies of IND object classes, minimizing the influence of irrelevant image features during OOD detection. Finally, our approach efficiently detects OOD samples by calculating distance-based metrics as OOD scores between learned centers and test samples. We conduct extensive experiments to evaluate TagOOD on several benchmark datasets and demonstrate its superior performance compared to existing OOD detection methods. This work presents a novel perspective for further exploration of multimodal information utilization in OOD detection, with potential applications across various tasks. Code will be available.
TagOOD: A Novel Approach to Out-of-Distribution Detection via Vision-Language Representations and Class Center Learning
[ "Jinglun Li", "Xinyu Zhou", "Kaixun Jiang", "Lingyi Hong", "Pinxue Guo", "Zhaoyu Chen", "Weifeng Ge", "Wenqiang Zhang" ]
Conference
poster
2408.15566
[ "https://github.com/Jarvisgivemeasuit/tagood" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
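TagOOD's final scoring step — comparing a test feature against learned in-distribution class centers — is simple enough to illustrate in isolation. The NumPy sketch below assumes the object features have already been extracted; the mean centers and negative maximum cosine similarity are a plausible simplification of the paper's learned centers and distance metric, not its exact recipe.

```python
import numpy as np

def class_centers(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Mean feature vector per in-distribution class (one row per class)."""
    classes = np.unique(labels)
    return np.stack([features[labels == c].mean(axis=0) for c in classes])

def ood_score(x: np.ndarray, centers: np.ndarray) -> float:
    """Distance-based OOD score: larger = farther from every IND center.
    Here: negative maximum cosine similarity to any center."""
    x = x / np.linalg.norm(x)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    return float(-np.max(c @ x))

# toy usage with random stand-in features
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 64))
labels = rng.integers(0, 5, size=100)
centers = class_centers(feats, labels)
print(ood_score(rng.normal(size=64), centers))
```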
null
https://openreview.net/forum?id=s18OUFV9vc
@inproceedings{ cao2024a, title={A Novel State Space Model with Local Enhancement and State Sharing for Image Fusion}, author={Zihan Cao and Xiao Wu and Liang-Jian Deng and Yu Zhong}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=s18OUFV9vc} }
In image fusion tasks, images from different sources possess distinct characteristics. This has driven the development of numerous methods to explore better ways of fusing them while preserving their respective characteristics. Mamba, as a state space model, has emerged in the field of natural language processing. Recently, many studies have attempted to extend Mamba to vision tasks. However, because the nature of images differs from that of causal language sequences, the limited state capacity of Mamba weakens its ability to model image information. Additionally, the sequence modeling ability of Mamba only covers spatial information and cannot effectively capture the rich spectral information in images. Motivated by these challenges, we customize and improve the vision Mamba network designed for the image fusion task. Specifically, we propose the local-enhanced vision Mamba block, dubbed LEVM. The LEVM block can improve local information perception of the network and simultaneously learn local and global spatial information. Furthermore, we propose the state sharing technique to enhance spatial details and integrate spatial and spectral information. Finally, the overall network is a multi-scale structure based on vision Mamba, called LE-Mamba. Extensive experiments show the proposed methods achieve state-of-the-art results on multispectral pansharpening and multispectral and hyperspectral image fusion datasets, and demonstrate the effectiveness of the proposed approach. Code will be made available.
A Novel State Space Model with Local Enhancement and State Sharing for Image Fusion
[ "Zihan Cao", "Xiao Wu", "Liang-Jian Deng", "Yu Zhong" ]
Conference
poster
2404.09293
[ "https://github.com/294coder/efficient-mif" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=rtjZHEOcHx
@inproceedings{ jiang2024mitigating, title={Mitigating Social Biases in Text-to-Image Diffusion Models via Linguistic-Aligned Attention Guidance}, author={Yue Jiang and Yueming Lyu and Ziwen He and Bo Peng and Jing Dong}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rtjZHEOcHx} }
Recent advancements in text-to-image generative models have showcased remarkable capabilities across various tasks. However, these powerful models have revealed the inherent risks of social biases related to gender, race, and their intersections. Such biases can propagate distorted real-world perspectives and spread unforeseen prejudice and discrimination. Current debiasing methods are primarily designed for scenarios with a single individual in the image and exhibit homogenous race or gender when multiple individuals are involved, harming the diversity of social groups within the image. To address this problem, we consider the semantic consistency between text prompts and generated images in text-to-image diffusion models to identify how biases are generated. We propose a novel method to locate where the biases are based on different tokens and then mitigate them for each individual. Specifically, we introduce a Linguistic-aligned Attention Guidance module consisting of Block Voting and Linguistic Alignment, to effectively locate the semantic regions related to biases. Additionally, we employ Fair Inference in these regions to generate fair attributes across arbitrary distributions while preserving the original structural and semantic information. Extensive experiments and analyses demonstrate our method outperforms existing methods for debiasing with multiple individuals across various scenarios.
Mitigating Social Biases in Text-to-Image Diffusion Models via Linguistic-Aligned Attention Guidance
[ "Yue Jiang", "Yueming Lyu", "Ziwen He", "Bo Peng", "Jing Dong" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=rqkS5wUizX
@inproceedings{ zhou2024editd, title={Edit3D: Elevating 3D Scene Editing with Attention-Driven Multi-Turn Interactivity}, author={Peng Zhou and Dunbo Cai and Yujian Du and Runqing Zhang and Bingbing Ni and Jie Qin and Ling Qian}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rqkS5wUizX} }
With the rise of new 3D representations like NeRF and 3D Gaussian splatting, creating realistic 3D scenes is easier than ever before. However, the incompatibility of these 3D representations with existing editing software has also introduced unprecedented challenges to 3D editing tasks. Although recent advances in text-to-image generative models have made some progress in 3D editing, these methods either lack precision or require users to manually specify the editing areas in 3D space, complicating the editing process. To overcome these issues, we propose Edit3D, an innovative 3D editing method designed to enhance editing quality. Specifically, we propose a multi-turn editing framework and introduce an attention-driven open-set segmentation (ADSS) technique within this framework. ADSS allows for more precise segmentation of parts, which enhances the editing precision and minimizes interference with pixels in areas that are not being edited. Additionally, we propose a fine-tuning phase, intended to further improve the overall editing quality without compromising the training efficiency. Experiments demonstrate that Edit3D effectively adjusts 3D scenes based on textual instructions. Through continuous and multiple turns of editing, it achieves more intricate combinations, enhancing the diversity of 3D editing effects.
Edit3D: Elevating 3D Scene Editing with Attention-Driven Multi-Turn Interactivity
[ "Peng Zhou", "Dunbo Cai", "Yujian Du", "Runqing Zhang", "Bingbing Ni", "Jie Qin", "Ling Qian" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=rkCYgXfj9P
@inproceedings{ yang2024semantic, title={Semantic Editing Increment Benefits Zero-Shot Composed Image Retrieval}, author={Zhenyu Yang and Shengsheng Qian and Dizhan Xue and Jiahong Wu and Fan Yang and Weiming Dong and Changsheng Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rkCYgXfj9P} }
Zero-Shot Composed Image Retrieval (ZS-CIR) has attracted more attention in recent years, focusing on retrieving a specific image based on a query composed of a reference image and a relative text without training samples. Specifically, the relative text describes the differences between the two images. Prevailing ZS-CIR methods employ image-to-text (I2T) models to convert the query image into a single caption, which is further merged with the relative text by text-fusion approaches to form a composed text for retrieval. However, these methods neglect the fact that ZS-CIR entails considering not only the final similarity between the composed text and retrieved images but also the semantic increment during the compositional editing process. To address this limitation, this paper proposes a training-free method called Semantic Editing Increment for ZS-CIR (SEIZE) to retrieve the target image based on the query image and text without training. Firstly, we employ a pre-trained captioning model to generate diverse captions for the reference image and prompt Large Language Models (LLMs) to perform breadth compositional reasoning based on these captions and relative text, thereby covering the potential semantics of the target image. Then, we design a semantic editing search to incorporate the semantic editing increment contributed by the relative text into the retrieval process. Concretely, we comprehensively consider relative semantic increment and absolute similarity as the final retrieval score, which is subsequently utilized to retrieve the target image in the CLIP feature space. Extensive experiments on three public datasets demonstrate that our proposed SEIZE achieves the new state-of-the-art performance. The code is publicly available at https://anonymous.4open.science/r/SEIZE-11BC.
Semantic Editing Increment Benefits Zero-Shot Composed Image Retrieval
[ "Zhenyu Yang", "Shengsheng Qian", "Dizhan Xue", "Jiahong Wu", "Fan Yang", "Weiming Dong", "Changsheng Xu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
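The SEIZE abstract combines a "relative semantic increment" with absolute similarity when ranking gallery images in CLIP space. The sketch below is one hedged reading of that combination; the weight `alpha` and the exact form of the increment term are assumptions, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def composed_retrieval_scores(ref_text_emb, composed_text_emb, gallery_embs, alpha=0.5):
    """Score gallery images by a weighted sum of (i) absolute similarity to the
    composed caption and (ii) the increment in similarity contributed by the
    relative text, all in a shared CLIP-style embedding space.
    alpha is an illustrative trade-off weight."""
    ref = F.normalize(ref_text_emb, dim=-1)
    comp = F.normalize(composed_text_emb, dim=-1)
    gal = F.normalize(gallery_embs, dim=-1)
    absolute = gal @ comp              # (N,) similarity to the composed description
    increment = gal @ (comp - ref)     # (N,) gain relative to the reference caption
    return alpha * absolute + (1 - alpha) * increment

scores = composed_retrieval_scores(torch.randn(512), torch.randn(512), torch.randn(1000, 512))
top5 = scores.topk(5).indices          # indices of the highest-ranked gallery images
```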
null
https://openreview.net/forum?id=rjAY1DGUWC
@inproceedings{ jin2024speechcraft, title={SpeechCraft: A Fine-Grained Expressive Speech Dataset with Natural Language Description}, author={Zeyu Jin and Jia Jia and Qixin Wang and Kehan Li and Shuoyi Zhou and Songtao Zhou and Xiaoyu Qin and Zhiyong Wu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rjAY1DGUWC} }
Speech-language multi-modal learning presents a significant challenge due to the finely nuanced information inherent in speech styles. Therefore, a large-scale dataset providing elaborate comprehension of speech style is urgently needed to facilitate insightful interplay between speech audio and natural language. However, constructing such datasets presents a major trade-off between large-scale data collection and high-quality annotation. To tackle this challenge, we propose an automatic speech annotation system for expressiveness interpretation that annotates in-the-wild speech clips with expressive and vivid human language descriptions. Initially, speech audios are processed by a series of expert classifiers and captioning models to capture diverse speech characteristics, followed by a fine-tuned LLaMA for customized annotation generation. Unlike previous tag/template-based annotation frameworks with limited information and diversity, our system provides in-depth understandings of speech style through tailored natural language descriptions, thereby enabling accurate and voluminous data generation for large model training. With this system, we create SpeechCraft, a fine-grained bilingual expressive speech dataset. It is distinguished by highly descriptive natural language style prompts, containing approximately 2,000 hours of audio data and encompassing over two million speech clips. Extensive experiments demonstrate that the proposed dataset significantly boosts speech-language task performance in both stylistic speech synthesis and speech style understanding.
SpeechCraft: A Fine-Grained Expressive Speech Dataset with Natural Language Description
[ "Zeyu Jin", "Jia Jia", "Qixin Wang", "Kehan Li", "Shuoyi Zhou", "Songtao Zhou", "Xiaoyu Qin", "Zhiyong Wu" ]
Conference
poster
2408.13608
[ "https://github.com/thuhcsi/speechcraft" ]
https://huggingface.co/papers/2408.13608
0
0
0
8
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=rdN6HJo3hD
@inproceedings{ yao2024fdtalk, title={{FD}2Talk: Towards Generalized Talking Head Generation with Facial Decoupled Diffusion Model}, author={Ziyu Yao and Xuxin Cheng and Zhiqi Huang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rdN6HJo3hD} }
Talking head generation is a significant research topic that still faces numerous challenges. Previous works often adopt generative adversarial networks or regression models, which are plagued by limited generation quality and the average facial shape problem. Although diffusion models show impressive generative ability, their exploration in talking head generation remains unsatisfactory. This is because they either solely use the diffusion model to obtain an intermediate representation and then employ another pre-trained renderer, or they overlook the feature decoupling of complex facial details, such as expressions, head poses and appearance textures. Therefore, we propose a Facial Decoupled Diffusion model for Talking head generation called FD2Talk, which fully leverages the advantages of diffusion models and decouples the complex facial details through multiple stages. Specifically, we separate facial details into motion and appearance. In the initial phase, we design the Diffusion Transformer to accurately predict motion coefficients from raw audio. These motions are highly decoupled from appearance, making them easier for the network to learn compared to high-dimensional RGB images. Subsequently, in the second phase, we encode the reference image to capture appearance textures. The predicted facial and head motions and encoded appearance then serve as the conditions for the Diffusion UNet, guiding the frame generation. Benefiting from decoupling facial details and fully leveraging diffusion models, extensive experiments substantiate that our approach excels in enhancing image quality and generating more accurate and diverse results compared to previous state-of-the-art methods.
FD2Talk: Towards Generalized Talking Head Generation with Facial Decoupled Diffusion Model
[ "Ziyu Yao", "Xuxin Cheng", "Zhiqi Huang" ]
Conference
poster
2408.09384
[ "" ]
https://huggingface.co/papers/2408.09384
0
0
0
3
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=rY031lCoQd
@inproceedings{ liu2024trafficmot, title={Traffic{MOT}: A Challenging Dataset for Multi-Object Tracking in Complex Traffic Scenarios}, author={Lihao Liu and Yanqi Cheng and Zhongying Deng and Shujun Wang and Dongdong Chen and Xiaowei Hu and Pietro Lio and Carola-Bibiane Sch{\"o}nlieb and Angelica I Aviles-Rivero}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rY031lCoQd} }
Multi-object tracking in traffic videos is a crucial research area, offering immense potential for enhancing traffic monitoring accuracy and promoting road safety measures through the utilisation of advanced machine learning algorithms. However, existing datasets for multi-object tracking in traffic videos often feature limited instances or focus on single classes, which cannot well simulate the challenges encountered in complex traffic scenarios. To address this gap, we introduce TrafficMOT, an extensive dataset designed to encompass diverse traffic situations with complex scenarios. To validate the complexity and challenges presented by TrafficMOT, we conducted comprehensive empirical studies using three different settings: fully-supervised, semi-supervised, and a recent powerful zero-shot foundation model Tracking Anything Model (TAM). The experimental results highlight the inherent complexity of this dataset, emphasising its value in driving advancements in the field of traffic monitoring and multi-object tracking.
TrafficMOT: A Challenging Dataset for Multi-Object Tracking in Complex Traffic Scenarios
[ "Lihao Liu", "Yanqi Cheng", "Zhongying Deng", "Shujun Wang", "Dongdong Chen", "Xiaowei Hu", "Pietro Lio", "Carola-Bibiane Schönlieb", "Angelica I Aviles-Rivero" ]
Conference
poster
2311.18839
[ "" ]
https://huggingface.co/papers/2311.18839
0
0
0
9
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=rLw5583hMb
@inproceedings{ li2024motrans, title={MoTrans: Customized Motion Transfer with Text-driven Video Diffusion Models}, author={Xiaomin Li and Xu Jia and Qinghe Wang and Haiwen Diao and mengmeng Ge and Pengxiang Li and You He and Huchuan Lu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rLw5583hMb} }
Existing pretrained text-to-video (T2V) models have demonstrated impressive abilities in generating realistic videos with basic motion or camera movement. However, these models exhibit significant limitations when generating intricate, human-centric motions. Current efforts primarily focus on fine-tuning models on a small set of videos containing a specific motion. They often fail to effectively decouple motion and appearance in the limited reference videos, thereby weakening the modeling capability of motion patterns. To this end, we propose MoTrans, a customized motion transfer method enabling video generation of similar motion in new contexts. Specifically, we introduce a multimodal large language model (MLLM)-based recaptioner to expand the initial prompt to focus more on appearance, and an appearance injection module to adapt appearance priors from video frames to the motion modeling process. These complementary multimodal representations from the recaptioned prompt and video frames promote the modeling of appearance and facilitate the decoupling of appearance and motion. In addition, we devise a motion-specific embedding for further enhancing the modeling of the specific motion. Experimental results demonstrate that our method effectively learns specific motion patterns from single or multiple reference videos, performing favorably against existing methods in customized video generation.
MoTrans: Customized Motion Transfer with Text-driven Video Diffusion Models
[ "Xiaomin Li", "Xu Jia", "Qinghe Wang", "Haiwen Diao", "mengmeng Ge", "Pengxiang Li", "You He", "Huchuan Lu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=rLG6KQMs42
@inproceedings{ wu2024rscsnn, title={{RSC}-{SNN}: Exploring the Trade-off Between Adversarial Robustness and Accuracy in Spiking Neural Networks via Randomized Smoothing Coding}, author={Keming Wu and Man Yao and Yuhong Chou and Xuerui Qiu and Rui Yang and Bo XU and Guoqi Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rLG6KQMs42} }
Spiking Neural Networks (SNNs) have received widespread attention due to their unique neuronal dynamics and low-power nature. Previous research empirically shows that SNNs with Poisson coding are more robust than Artificial Neural Networks (ANNs) on small-scale datasets. However, it is still unclear in theory how the adversarial robustness of SNNs is derived, and whether SNNs can still maintain its adversarial robustness advantage on large-scale dataset tasks. This work theoretically demonstrates that SNN's inherent adversarial robustness stems from its Poisson coding. We reveal the conceptual equivalence of Poisson coding and randomized smoothing in defense strategies, and analyze in depth the trade-off between accuracy and adversarial robustness in SNNs via the proposed Randomized Smoothing Coding (RSC) method. Experiments demonstrate that the proposed RSC-SNNs show remarkable adversarial robustness, surpassing ANNs and achieving state-of-the-art robustness results on large-scale dataset ImageNet. Our open-source implementation code is available at ~\href{https://github.com/KemingWu/RSC-SNN}{\textit{https://github.com/KemingWu/RSC-SNN}}.
RSC-SNN: Exploring the Trade-off Between Adversarial Robustness and Accuracy in Spiking Neural Networks via Randomized Smoothing Coding
[ "Keming Wu", "Man Yao", "Yuhong Chou", "Xuerui Qiu", "Rui Yang", "Bo XU", "Guoqi Li" ]
Conference
poster
2407.20099
[ "https://github.com/KemingWu/RSC-SNN" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
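The RSC-SNN abstract ties the robustness of SNNs to Poisson coding. A minimal sketch of Poisson-style rate coding — the standard way pixel intensities are converted into stochastic spike trains — is shown below; the randomized-smoothing analysis itself is beyond this snippet, and the timestep count is an illustrative choice.

```python
import torch

def poisson_encode(images: torch.Tensor, timesteps: int = 8) -> torch.Tensor:
    """Poisson-style rate coding: each pixel intensity in [0, 1] is used as the
    per-timestep firing probability, yielding a binary spike tensor of shape
    (timesteps, *images.shape). The stochasticity of this encoding is what the
    abstract relates to randomized smoothing."""
    images = images.clamp(0.0, 1.0)
    return (torch.rand(timesteps, *images.shape) < images).float()

spikes = poisson_encode(torch.rand(4, 3, 32, 32), timesteps=8)  # (T, B, C, H, W)
print(spikes.shape, spikes.unique())                             # binary spikes only
```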
null
https://openreview.net/forum?id=rIu1efxaLg
@inproceedings{ yang2024generalize, title={Generalize to Fully Unseen Graphs: Learn Transferable Hyper-Relation Structures for Inductive Link Prediction}, author={Jing Yang and XiaowenJiang and Yuan Gao and Laurence Tianruo Yang and JieMing Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rIu1efxaLg} }
Inductive link prediction aims to infer missing triples on unseen graphs, which contain unseen entities and relations during training. The performances of existing inductive inference methods were hindered by the limited generalization capability in fully unseen graphs, which is rooted in the neglect of the intrinsic graph structure. In this paper, we aim to enhance the model's generalization ability to unseen graphs and thus propose a novel Hyper-Relation aware multi-views model HyRel for learning the global transferable structure of graphs. Distinct from existing studies, we introduce a novel perspective focused on learning the inherent hyper-relation structure consisting of the relation positions and affinity. The hyper-relation structure is independent of specific entities, relations, or features, thus allowing for transferring the learned knowledge to any unseen graphs. We adopt a multi-view approach to model the hyper-relation structure. HyRel incorporates neighborhood learning on each view, capturing nuanced semantics of relative relation position. Meanwhile, dual views contrastive constraints are designed to enforce the robustness of transferable structural knowledge. To the best of our knowledge, our work makes one of the first attempts to generalize the learning of hyper-relation structures, offering high flexibility and ease of use without reliance on any external resources. HyRel demonstrates SOTA performance compared to existing methods under extensive inductive settings, particularly on fully unseen graphs, and validates the efficacy of learning hyper-relation structures for improving generalization. The code is available online at https://github.com/hncps6/HyRel.
Generalize to Fully Unseen Graphs: Learn Transferable Hyper-Relation Structures for Inductive Link Prediction
[ "Jing Yang", "XiaowenJiang", "Yuan Gao", "Laurence Tianruo Yang", "JieMing Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=rI5I07ePUq
@inproceedings{ liu2024mlp, title={{MLP} Embedded Inverse Tone Mapping}, author={Panjun Liu and Jiacheng Li and Lizhi Wang and Zheng-Jun Zha and Zhiwei Xiong}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rI5I07ePUq} }
The advent of High Dynamic Range/Wide Color Gamut (HDR/WCG) display technology has made significant progress in providing exceptional richness and vibrancy for the human visual experience. However, the widespread adoption of HDR/WCG images is hindered by their substantial storage requirements, imposing significant bandwidth challenges during distribution. Besides, HDR/WCG images are often tone-mapped into Standard Dynamic Range (SDR) versions for compatibility, necessitating the usage of inverse Tone Mapping (iTM) techniques to reconstruct their original representation. In this work, we propose a meta-transfer learning framework for practical HDR/WCG media transmission by embedding image-wise metadata into their SDR counterparts for later iTM reconstruction. Specifically, we devise a meta-learning strategy to pre-train a lightweight multilayer perceptron (MLP) model that maps SDR pixels to HDR/WCG ones on an external dataset, resulting in a domain-wise iTM model. Subsequently, for the transfer learning process of each HDR/WCG image, we present a spatial-aware online mining mechanism to select challenging training pairs to adapt the meta-trained model to an image-wise iTM model. Finally, the adapted MLP, embedded as metadata, is transmitted alongside the SDR image, facilitating the reconstruction of the original image on HDR/WCG displays. We conduct extensive experiments and evaluate the proposed framework with diverse metrics. Compared with existing solutions, our framework shows superior performance in fidelity (up to 3dB gain in perceptual-uniform PSNR), minimal latency (1.2s for adaptation and 2ms for reconstruction of a 4K image), and negligible overhead (40KB).
MLP Embedded Inverse Tone Mapping
[ "Panjun Liu", "Jiacheng Li", "Lizhi Wang", "Zheng-Jun Zha", "Zhiwei Xiong" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
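The core idea of the inverse tone mapping paper — embedding a tiny per-image MLP that maps SDR pixels back to HDR/WCG values — can be sketched as an ordinary regression fit. The layer sizes, the gamma-curve stand-in for a tone-mapped pair, and the plain training loop below are illustrative assumptions rather than the paper's meta-transfer pipeline.

```python
import torch
import torch.nn as nn

class PixelMLP(nn.Module):
    """Tiny per-image MLP mapping an SDR RGB pixel to an HDR/WCG RGB pixel.
    Layer sizes are illustrative; the paper's exact architecture may differ."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, sdr_pixels: torch.Tensor) -> torch.Tensor:
        return self.net(sdr_pixels)

# Fit the MLP on (SDR, HDR) pixel pairs sampled from a single image; the few
# kilobytes of resulting weights could then travel with the SDR image as metadata.
sdr = torch.rand(4096, 3)
hdr = sdr ** 2.2                     # stand-in for a real tone-mapped/original pair
mlp = PixelMLP()
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(mlp(sdr), hdr)
    loss.backward()
    opt.step()
```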
null
https://openreview.net/forum?id=rFiB1aTeqs
@inproceedings{ xueying2024from, title={From Covert Hiding To Visual Editing: Robust Generative Video Steganography}, author={Mao Xueying and Xiaoxiao Hu and Wanli Peng and Zhenliang Gan and Zhenxing Qian and Xinpeng Zhang and Sheng Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rFiB1aTeqs} }
Traditional video steganography methods are based on modifying the covert space for embedding, whereas we propose an innovative approach that embeds secret messages within semantic features for steganography during the video editing process. Although existing traditional video steganography methods excel in balancing security and capacity, they lack adequate robustness against common distortions in online social networks (OSNs). In this paper, we propose an end-to-end robust generative video steganography network (RoGVSN), which achieves visual editing by modifying the semantic features of videos to embed secret messages. We take the face-swapping scenario as an illustration to demonstrate the visual editing effects. Specifically, we devise an adaptive scheme to seamlessly embed secret messages into the semantic features of videos through fusion blocks. Extensive experiments demonstrate the superiority of our method in terms of robustness, extraction accuracy, visual quality, and capacity.
From Covert Hiding To Visual Editing: Robust Generative Video Steganography
[ "Mao Xueying", "Xiaoxiao Hu", "Wanli Peng", "Zhenliang Gan", "Zhenxing Qian", "Xinpeng Zhang", "Sheng Li" ]
Conference
poster
2401.00652
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=rFRerAPdwI
@inproceedings{ lin2024scalable, title={Scalable Multi-Source Pre-training for Graph Neural Networks}, author={Mingkai Lin and Wenzhong Li and Xiaobin Hong and Sanglu Lu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rFRerAPdwI} }
Graph Neural Networks (GNNs) have been shown as powerful tools in various scenarios, such as multimodal and multimedia. A fundamental approach, pre-training on available graphs and subsequently transferring the acquired knowledge to optimize downstream tasks with limited labels, was widely exploited to mitigate the demand for extensive labeled training data. However, previous works commonly assumed that pre-training and fine-tuning occur in the same or closely related domains that share similar feature/label spaces and graph distributions. A limitation is that for each individual graph without accessible pre-training data, a GNN must be trained from scratch, imposing high training overhead and hindering the ability of generalization. In this paper, we address the \emph{GNN multi-domain pre-training problem}, which intends to pre-train a transferable GNN model from heterogeneous multi-source graph domains and then apply it in an unseen one with minor fine-tuning costs. To this end, we propose a sca\underline{LA}ble \underline{M}ulti-source \underline{P}re-training (LAMP) method. For pre-training, LAMP presents a graph dual-distillation approach to distill massive knowledge from various graph domains to form synthetic homogeneous graphs. Simultaneously, high-level meta-knowledge from the synthetic graphs is extracted to train the GNN model, whose capability can be adjusted according to target graph contexts through a co-training modulation architecture. For fine-tuning, LAMP respectively aligns the target graph distribution, graph context, and graph task with the pretext so that the downstream task in the unseen domain can be reshaped to leverage the transferable knowledge efficiently. Extensive experiments on four real-world graph domain datasets demonstrate the superiority of LAMP, showcasing notable improvements in various downstream graph learning tasks. Our codes are publicly available on GitHub.
Scalable Multi-Source Pre-training for Graph Neural Networks
[ "Mingkai Lin", "Wenzhong Li", "Xiaobin Hong", "Sanglu Lu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=rEh8KTz9Ba
@inproceedings{ ran2024rainmer, title={Rainmer: Learning Multi-view Representations for Comprehensive Image Deraining and Beyond}, author={Wu Ran and Peirong Ma and Zhiquan He and Hong Lu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rEh8KTz9Ba} }
We address image deraining under complex backgrounds, diverse rain scenarios, and varying illumination conditions, representing a highly practical and challenging problem. Our approach utilizes synthetic, real-world, and nighttime datasets, wherein rich backgrounds, multiple degradation types, and diverse illumination conditions coexist. The primary challenge in training models on these datasets arises from the discrepancies among them, potentially leading to conflicts or competition during the training period. To address this issue, we first align the distribution of synthetic, real-world and nighttime datasets. Then we propose a novel contrastive learning strategy to extract multi-view (multiple) representations that effectively capture image details, degradations, and illuminations, thereby facilitating training across all datasets. Regarding multiple representations as profitable prompts for deraining, we devise a prompting strategy to integrate them into the decoding process. This contributes to a potent deraining model, dubbed Rainmer. Additionally, a spatial-channel interaction module is introduced to fully exploit cues when extracting multi-view representations. Extensive experiments on synthetic, real-world, and nighttime datasets demonstrate that Rainmer outperforms current representative methods. Moreover, Rainmer achieves superior performance on the All-in-One image restoration dataset, underscoring its effectiveness. Furthermore, quantitative results reveal that Rainmer significantly improves object detection performance on both daytime and nighttime rainy datasets. These observations substantiate the potential of Rainmer for practical applications.
Rainmer: Learning Multi-view Representations for Comprehensive Image Deraining and Beyond
[ "Wu Ran", "Peirong Ma", "Zhiquan He", "Hong Lu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=rE9i5h4xPl
@inproceedings{ zhao2024efficient, title={Efficient Single Image Super-Resolution with Entropy Attention and Receptive Field Augmentation}, author={Xiaole Zhao and Linze Li and Chengxing Xie and XIAOMING ZHANG and Ting Jiang and Wenjie Lin and Shuaicheng Liu and Tianrui Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rE9i5h4xPl} }
Transformer-based deep models for single image super-resolution (SISR) have greatly improved the performance of lightweight SISR tasks in recent years. However, they often suffer from a heavy computational burden and slow inference due to the complex calculation of multi-head self-attention (MSA), seriously hindering their practical application and deployment. In this work, we present an efficient SR model, dubbed Entropy Attention and Receptive Field Augmentation network (EARFA), to mitigate the dilemma between model efficiency and SR performance; it is composed of a novel entropy attention (EA) and a shifting large kernel attention (SLKA). From the perspective of information theory, EA increases the entropy of intermediate features conditioned on a Gaussian distribution, providing more informative input for subsequent reasoning. On the other hand, SLKA extends the receptive field of SR models with the assistance of channel shifting, which also helps boost the diversity of hierarchical features. Since the implementation of EA and SLKA does not involve complex computations (such as extensive matrix multiplications), the proposed method can achieve faster nonlinear inference than Transformer-based SR models while maintaining better SR performance. Extensive experiments show that the proposed model can significantly reduce inference latency while achieving SR performance comparable with other advanced models.
Efficient Single Image Super-Resolution with Entropy Attention and Receptive Field Augmentation
[ "Xiaole Zhao", "Linze Li", "Chengxing Xie", "XIAOMING ZHANG", "Ting Jiang", "Wenjie Lin", "Shuaicheng Liu", "Tianrui Li" ]
Conference
poster
2408.04158
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=rD7guYi6jZ
@inproceedings{ kim2024efficient, title={Efficient Training for Multilingual Visual Speech Recognition: Pre-training with Discretized Visual Speech Representation}, author={Minsu Kim and Jeonghun Yeo and Se Jin Park and Hyeongseop Rha and Yong Man Ro}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=rD7guYi6jZ} }
This paper explores sentence-level multilingual Visual Speech Recognition (VSR) that can recognize different languages with a single trained model. As the massive multilingual modeling of visual data requires huge computational costs, we propose a novel training strategy, processing with visual speech units. Motivated by the recent success of the audio speech unit, we propose to use a visual speech unit that can be obtained by discretizing the visual speech features extracted from the self-supervised visual speech model. Through analysis, we verify that the visual speech units mainly contain viseme information while suppressing non-linguistic information. By using the visual speech units as the inputs of our system, we propose to pre-train a VSR model to predict corresponding text outputs on multilingual data constructed by merging several VSR databases. As both the inputs (i.e., visual speech units) and outputs (i.e., text) are discrete, we can greatly improve the training efficiency compared to the standard VSR training. Specifically, the input data size is reduced to 0.016% of the original video inputs. In order to complement the insufficient visual information in speech recognition, we apply curriculum learning where the inputs of the system begin with audio-visual speech units and gradually change to visual speech units. After pre-training, the model is finetuned on continuous features. We set new state-of-the-art multilingual VSR performances by achieving comparable performances to the previous language-specific VSR models, with a single trained model.
Efficient Training for Multilingual Visual Speech Recognition: Pre-training with Discretized Visual Speech Representation
[ "Minsu Kim", "Jeonghun Yeo", "Se Jin Park", "Hyeongseop Rha", "Yong Man Ro" ]
Conference
poster
2401.09802
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
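The multilingual VSR abstract builds on discretizing self-supervised visual speech features into "visual speech units". The sketch below shows the generic unit-extraction recipe (a k-means codebook plus collapsing consecutive repeats); the feature source, codebook size, and deduplication step are assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_unit_codebook(features: np.ndarray, n_units: int = 200) -> KMeans:
    """Cluster frame-level self-supervised features into a discrete codebook;
    each cluster index then serves as a 'visual speech unit'."""
    return KMeans(n_clusters=n_units, n_init=10, random_state=0).fit(features)

def encode_units(km: KMeans, features: np.ndarray) -> np.ndarray:
    units = km.predict(features)                       # one unit id per frame
    # collapse consecutive repeats, as is common for unit sequences
    keep = np.concatenate(([True], units[1:] != units[:-1]))
    return units[keep]

rng = np.random.default_rng(0)
frame_feats = rng.normal(size=(500, 256))              # stand-in visual speech features
km = learn_unit_codebook(frame_feats, n_units=50)
print(encode_units(km, frame_feats[:40]))
```

Replacing long float feature sequences with short integer unit sequences is what yields the large reduction in input size reported in the abstract.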
null
https://openreview.net/forum?id=r9X3P6qvAj
@inproceedings{ xu2024reversing, title={Reversing Structural Pattern Learning with Biologically Inspired Knowledge Distillation for Spiking Neural Networks}, author={Qi Xu and Yaxin Li and Xuanye Fang and Jiangrong Shen and Qiang Zhang and Gang Pan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=r9X3P6qvAj} }
Spiking neural networks (SNNs) have superb characteristics in sensory information recognition tasks due to their biological plausibility. However, the performance of some current spiking-based models is limited by their structures: either fully connected or overly deep structures introduce too much redundancy. This redundancy, from both connections and neurons, is one of the key factors hindering the practical application of SNNs. Although some pruning methods have been proposed to tackle this problem, they normally ignore the fact that the neural topology in the human brain can be adjusted dynamically. Inspired by this, this paper proposes an evolutionary structure construction method for building more reasonable SNNs. By integrating knowledge distillation and connection pruning, the synaptic connections in SNNs can be optimized dynamically to reach an optimal state. As a result, the structure of SNNs can not only absorb knowledge from the teacher model but also search for a deep yet sparse network topology. Experimental results on CIFAR100, Tiny-ImageNet and DVS-Gesture show that the proposed structure learning method achieves strong performance while reducing connection redundancy. The proposed method explores a novel dynamical way of structure learning from scratch in SNNs, which could build a bridge to close the gap between deep learning and bio-inspired neural dynamics.
Reversing Structural Pattern Learning with Biologically Inspired Knowledge Distillation for Spiking Neural Networks
[ "Qi Xu", "Yaxin Li", "Xuanye Fang", "Jiangrong Shen", "Qiang Zhang", "Gang Pan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=r8QjbxK6TW
@inproceedings{ luo2024ldcnet, title={{LDCN}et: Long-Distance Context Modeling for Large-Scale 3D Point Cloud Scene Semantic Segmentation}, author={Shoutong Luo and Zhengxing Sun and Yi Wang and Yunhan Sun and Chendi Zhu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=r8QjbxK6TW} }
Large-scale point cloud semantic segmentation is a challenging task in 3D computer vision. A key challenge is how to resolve ambiguities arising from locally high inter-class similarity. In this study, we introduce a solution by modeling long-distance contextual information to understand the scene's overall layout. The context sensitivity of previous methods is typically constrained to small blocks(e.g. $2m \times 2m$) and cannot be directly extended to the entire scene. For this reason, we propose \textbf{L}ong-\textbf{D}istance \textbf{C}ontext Modeling Network(LDCNet). Our key insight is that keypoints are enough for inferring the layout of a scene. Therefore, we represent the entire scene using keypoints along with local descriptors and model long-distance context on these keypoints. Finally, we propagate the long-distance context information from keypoints back to non-keypoints. This allows our method to model long-distance context effectively. We conducted experiments on six datasets, demonstrating that our approach can effectively mitigate ambiguities. Our method performs well on large, irregular objects and exhibits good generalization for typical scenarios.
LDCNet: Long-Distance Context Modeling for Large-Scale 3D Point Cloud Scene Semantic Segmentation
[ "Shoutong Luo", "Zhengxing Sun", "Yi Wang", "Yunhan Sun", "Chendi Zhu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=r83zGoj7qb
@inproceedings{ li2024mmforecast, title={{MM}-Forecast: A Multimodal Approach to Temporal Event Forecasting with Large Language Models}, author={Haoxuan Li and Zhengmao Yang and Yunshan Ma and Yi Bin and Yang Yang and Tat-Seng Chua}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=r83zGoj7qb} }
We study an emerging and intriguing problem of multimodal temporal event forecasting with large language models. Compared to using text or graph modalities, the investigation of utilizing images for temporal event forecasting has received less attention, particularly in the era of large language models (LLMs). To bridge this gap, we are particularly interested in two key questions: 1) why images will help in temporal event forecasting, and 2) how to integrate images into the LLM-based forecasting framework. To answer these research questions, we propose to identify two essential functions that images play in the scenario of temporal event forecasting, i.e., highlighting and complementary. Then, we develop a novel framework, named MM-Forecast. It employs an Image Function Identification module to recognize these functions as verbal descriptions using multimodal large language models (MLLMs), and subsequently incorporates these function descriptions into LLM-based forecasting models. To evaluate our approach, we construct a new multimodal dataset, MidEast-TE-mm, by extending an existing event dataset MidEast-TE with images. Empirical studies demonstrate that our MM-Forecast can correctly identify the image functions, and, furthermore, incorporating these verbal function descriptions significantly improves the forecasting performance. The dataset, code, and prompt will be released upon acceptance.
MM-Forecast: A Multimodal Approach to Temporal Event Forecasting with Large Language Models
[ "Haoxuan Li", "Zhengmao Yang", "Yunshan Ma", "Yi Bin", "Yang Yang", "Tat-Seng Chua" ]
Conference
poster
2408.04388
[ "https://github.com/luminosityx/mm-forecast" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=r2UcotB5B0
@inproceedings{ cui2024stochastic, title={Stochastic Context Consistency Reasoning for Domain Adaptive Object Detection}, author={Yiming Cui and Liang Li and Jiehua Zhang and Chenggang Yan and Hongkui Wang and Shuai Wang and Jin Heng and Wu Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=r2UcotB5B0} }
Domain Adaptive Object Detection (DAOD) aims to improve the adaptation of the detector for the unlabeled target domain by the labeled source domain. Recent advances leverage a self-training framework to enable a student model to learn the target domain knowledge using pseudo labels generated by a teacher model. Despite great successes, such category-level consistency supervision suffers from poor quality of pseudo labels. To mitigate the problem, we propose a stochastic context consistency reasoning (SOCCER) network with the self-training framework. Firstly, we introduce a stochastic complementary masking module (SCM) to generate complementary masked images thus preventing the network from over-relying on specific visual clues. Secondly, we design an inter-changeable context consistency reasoning module (Inter-CCR), which constructs an inter-context consistency paradigm to capture the texture and contour details in the target domain by aligning the predictions of the student model for complementary masked images. Meanwhile, we develop an intra-changeable context consistency reasoning module (Intra-CCR), which constructs an intra-context consistency paradigm to strengthen the utilization of context relations by utilizing pseudo labels to supervise the predictions of the student model. Experimental results on three DAOD benchmarks demonstrate our method outperforms current state-of-the-art methods by a large margin. Code is released in supplementary materials.
Stochastic Context Consistency Reasoning for Domain Adaptive Object Detection
[ "Yiming Cui", "Liang Li", "Jiehua Zhang", "Chenggang Yan", "Hongkui Wang", "Shuai Wang", "Jin Heng", "Wu Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qwKSLSr8dj
@inproceedings{ wen2024cdea, title={{CDEA}: Context- and Detail-Enhanced Unsupervised Learning for Domain Adaptive Semantic Segmentation}, author={Shuyuan Wen and Bingrui Hu and Wenchao Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qwKSLSr8dj} }
Unsupervised domain adaptation (UDA) aims to adapt a model trained on the source domain (e.g. synthetic data) to the target domain (e.g. real-world data) without requiring further annotations on the target domain. Most previous UDA methods for semantic segmentation focus on minimizing the domain discrepancy at various levels, e.g., pixels and features, for extracting domain-invariant knowledge. However, the primary domain knowledge, such as context and detail correlation, remains underexplored. To address this problem, we propose a context- and detail-enhanced unsupervised learning framework, called CDEA, for domain adaptive semantic segmentation that facilitates image detail correlation and contextual semantic consistency. Firstly, we propose an adaptive masked image consistency module to enhance UDA by learning spatial context relations of the target domain, which enforces the consistency between predictions and masked target images. Secondly, we propose a detail extraction module to enhance UDA by integrating the learning of spatial information into low-level layers, which fuses the low-level detail features with deep semantic features. Extensive experiments verify the effectiveness of the proposed method and demonstrate the superiority of our approach over state-of-the-art methods.
CDEA: Context- and Detail-Enhanced Unsupervised Learning for Domain Adaptive Semantic Segmentation
[ "Shuyuan Wen", "Bingrui Hu", "Wenchao Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qvEhiiayOr
@inproceedings{ li2024fewvs, title={Few{VS}: A Vision-Semantics Integration Framework for Few-Shot Image Classification}, author={Zhuoling Li and Yong Wang and Kaitong Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qvEhiiayOr} }
Some recent methods address few-shot image classification by extracting semantic information from class names and devising mechanisms for aligning vision and semantics to integrate information from both modalities. However, class names provide only abstract information, which is insufficient to capture the visual details present in images. As a result, this vision-semantics alignment is inherently biased, leading to sub-optimal integration outcomes. In this paper, we avoid this biased vision-semantics alignment by introducing CLIP, a natural bridge between vision and semantics, and enforcing unbiased vision-vision alignment as a proxy task. Specifically, we align features encoded from the same image by both the few-shot encoder and CLIP's vision encoder. This alignment is accomplished through a linear layer, with a training objective formulated using optimal transport-based assignment prediction. Thanks to the inherent alignment between CLIP's vision and text encoders, the few-shot encoder is indirectly aligned to CLIP's text encoder, which serves as the foundation for better vision-semantics integration. In addition, to further improve vision-semantics integration at the testing stage, we mine potential fine-grained semantic attributes of class names from large language models. Correspondingly, an online optimization module is designed to adaptively integrate the semantic attributes and visual information extracted from images. Extensive results on four datasets demonstrate that our method outperforms state-of-the-art methods.
FewVS: A Vision-Semantics Integration Framework for Few-Shot Image Classification
[ "Zhuoling Li", "Yong Wang", "Kaitong Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
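FewVS aligns a few-shot encoder to CLIP's vision encoder through a linear layer. The sketch below shows that alignment with a plain cosine objective used in place of the paper's optimal-transport-based assignment prediction; the feature dimensions and the loss form are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisionAligner(nn.Module):
    """Linear projection from the few-shot encoder's feature space to CLIP's
    visual feature space, trained so both encoders agree on the same image."""
    def __init__(self, in_dim: int = 640, clip_dim: int = 512):
        super().__init__()
        self.proj = nn.Linear(in_dim, clip_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(feats), dim=-1)

def alignment_loss(fs_feats, clip_feats, aligner):
    """Simplified vision-vision alignment: maximize cosine similarity between
    projected few-shot features and (frozen) CLIP features of the same images.
    (The paper instead uses an optimal-transport assignment objective.)"""
    z = aligner(fs_feats)
    c = F.normalize(clip_feats.detach(), dim=-1)
    return (1.0 - (z * c).sum(dim=-1)).mean()

aligner = VisionAligner()
loss = alignment_loss(torch.randn(8, 640), torch.randn(8, 512), aligner)
loss.backward()   # gradients flow only into the linear aligner
```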
null
https://openreview.net/forum?id=qtcWto9h1H
@inproceedings{ wang2024learning, title={Learning to Transfer Heterogeneous Translucent Materials from a 2D Image to 3D Models}, author={Xiaogang Wang and Yuhang Cheng and Ziyang Fan and Kai Xu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qtcWto9h1H} }
Great progress has been made in rendering translucent materials in recent years, but automatically estimating parameters for heterogeneous materials such as jade and human skin remains a challenging task, often requiring specialized and expensive physical measurement devices. In this paper, we present a novel approach for estimating and transferring the parameters of heterogeneous translucent materials from a single 2D image to 3D models. Our method consists of four key steps: (1) An efficient viewpoint selection algorithm to minimize redundancy and ensure comprehensive coverage of the model. (2) Initializing a homogeneous translucent material to render initial images for translucent dataset. (3) Edit the rendered translucent images to update the translucent dataset. (4) Optimize the edited translucent results onto material parameters using inverse rendering techniques. Our approach offers a practical and accessible solution that overcomes the limitations of existing methods, which often rely on complex and costly specialized devices. We demonstrate the effectiveness and superiority of our proposed method through extensive experiments, showcasing its ability to transfer and edit high-quality heterogeneous translucent materials on 3D models, surpassing the results achieved by previous techniques in 3D scene editing.
Learning to Transfer Heterogeneous Translucent Materials from a 2D Image to 3D Models
[ "Xiaogang Wang", "Yuhang Cheng", "Ziyang Fan", "Kai Xu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qqlD6XAjlk
@inproceedings{ bu2024fakingrecipe, title={FakingRecipe: Detecting Fake News on Short Video Platforms from the Perspective of Creative Process}, author={Yuyan Bu and Qiang Sheng and Juan Cao and Peng Qi and Danding Wang and Jintao Li}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qqlD6XAjlk} }
As short-form video-sharing platforms become a significant channel for news consumption, fake news in short videos has emerged as a serious threat in the online information ecosystem, making developing detection methods for this new scenario an urgent need. Compared with that in text and image formats, fake news on short video platforms contains rich but heterogeneous information in various modalities, posing a challenge to effective feature utilization. Unlike existing works mostly focusing on analyzing what is presented, we introduce a novel perspective that considers how it might be created. Through the lens of the creative process behind news video production, our empirical analysis uncovers the unique characteristics of fake news videos in material selection and editing. Based on the obtained insights, we design FakingRecipe, a creative process-aware model for detecting fake news short videos. It captures the fake news preferences in material selection from sentimental and semantic aspects and considers the traits of material editing from spatial and temporal aspects. To improve evaluation comprehensiveness, we first construct FakeTT, an English dataset for this task, and conduct experiments on both FakeTT and the existing Chinese FakeSV dataset. The results show FakingRecipe's superiority in detecting fake news on short video platforms.
FakingRecipe: Detecting Fake News on Short Video Platforms from the Perspective of Creative Process
[ "Yuyan Bu", "Qiang Sheng", "Juan Cao", "Peng Qi", "Danding Wang", "Jintao Li" ]
Conference
poster
2407.16670
[ "https://github.com/ICTMCG/FakingRecipe" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qnW0LQXY5L
@inproceedings{ khanal2024psm, title={{PSM}: Learning Probabilistic Embeddings for Multi-scale Zero-shot Soundscape Mapping}, author={Subash Khanal and Eric Xing and Srikumar Sastry and Aayush Dhakal and Zhexiao Xiong and Adeel Ahmad and Nathan Jacobs}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qnW0LQXY5L} }
A soundscape is defined by the acoustic environment a person perceives at a location. In this work, we propose a framework for mapping soundscapes across the Earth. Since soundscapes involve sound distributions that span varying spatial scales, we represent locations with multi-scale satellite imagery and learn a joint representation among this imagery, audio, and text. To capture the inherent uncertainty in the soundscape of a location, we additionally design the representation space to be probabilistic. We also fuse ubiquitous metadata (including geolocation, time, and data source) to enable learning of spatially and temporally dynamic representations of soundscapes. We demonstrate the utility of our framework by creating large-scale soundscape maps integrating both audio and text with temporal control. To facilitate future research on this task, we also introduce a large-scale dataset, GeoSound, containing over 300k geotagged audio samples paired with both low- and high-resolution satellite imagery. We demonstrate that our method outperforms the existing state-of-the-art on both GeoSound and the existing SoundingEarth dataset. Our dataset and code will be made available at TBD.
PSM: Learning Probabilistic Embeddings for Multi-scale Zero-shot Soundscape Mapping
[ "Subash Khanal", "Eric Xing", "Srikumar Sastry", "Aayush Dhakal", "Zhexiao Xiong", "Adeel Ahmad", "Nathan Jacobs" ]
Conference
poster
2408.07050
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qmyPQ3XbBZ
@inproceedings{ wu2024d, title={3D Question Answering with Scene Graph Reasoning}, author={Zizhao Wu and Haohan Li and Gongyi Chen and Zhou Yu and Xiaoling Gu and Yigang Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qmyPQ3XbBZ} }
3DQA has gained considerable attention due to its enhanced spatial understanding capabilities compared to image-based VQA. However, existing 3DQA methods have explicitly focused on integrating text and color-coded point cloud features, thereby overlooking the rich high-level semantic relationships among objects. In this paper, we propose a novel graph-based 3DQA method termed 3DGraphQA, which leverages scene graph reasoning to enhance the ability to handle complex reasoning tasks in 3DQA and offers stronger interpretability. Specifically, our method first adaptively constructs dynamic scene graphs for the 3DQA task. Then we inject both the situation and the question inputs into the scene graph, forming the situation-graph and the question-graph, respectively. Based on the constructed graphs, we finally perform intra- and inter-graph feature propagation for efficient graph inference: intra-graph feature propagation is performed based on Graph Transformer in each graph to realize single-modal contextual interaction and high-order contextual interaction; inter-graph feature propagation is performed among graphs based on bilinear graph networks to realize the interaction between different contexts of situations and questions. Drawing on these intra- and inter-graph feature propagation, our approach is poised to better grasp the intricate semantic and spatial relationship issues among objects within the scene and their relations to the questions, thereby facilitating reasoning complex and compositional questions. We validate the effectiveness of our approach on ScanQA and SQA3D datasets, and expand the SQA3D dataset to SQA3D Pro with multi-view information, making it more suitable for our approach. Experimental results demonstrate that our 3DGraphQA outperforms existing methods.
3D Question Answering with Scene Graph Reasoning
[ "Zizhao Wu", "Haohan Li", "Gongyi Chen", "Zhou Yu", "Xiaoling Gu", "Yigang Wang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qmJCo8dMBD
@inproceedings{ chen2024aspects, title={Aspects are Anchors: Towards Multimodal Aspect-based Sentiment Analysis via Aspect-driven Alignment and Refinement}, author={Zhanpeng Chen and Zhihong Zhu and Wanshi Xu and Yunyan Zhang and Xian Wu and Yefeng Zheng}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qmJCo8dMBD} }
Given coupled sentence-image pairs, Multimodal Aspect-based Sentiment Analysis (MABSA) aims to detect aspect terms and predict their sentiment polarity. While existing methods have made great efforts in aligning images and text for improved MABSA performance, they still struggle to effectively mitigate the challenge of the noisy correspondence problem (NCP): the text description is often not well-aligned with the visual content. To alleviate NCP, in this paper, we introduce Aspect-driven Alignment and Refinement (ADAR), which is a two-stage coarse-to-fine alignment framework. In the first stage, ADAR devises a novel Coarse-to-fine Aspect-driven Alignment Module, which introduces Optimal Transport (OT) to learn the coarse-grained alignment between visual and textual features. Then the adaptive filter bin is applied to remove the irrelevant image regions at a fine-grained level. In the second stage, ADAR introduces an Aspect-driven Refinement Module to further refine the cross-modality feature representation. Extensive experiments on two benchmark datasets demonstrate the superiority of our model over state-of-the-art methods in the MABSA task.
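The coarse-grained alignment described in this abstract relies on Optimal Transport between visual and textual features. As a rough illustration of how entropic OT (Sinkhorn) can produce a soft alignment of that kind, consider the sketch below; the cost definition, uniform marginals, and iteration count are assumptions for illustration, not ADAR's actual module.

```python
import numpy as np

def sinkhorn_alignment(text_feats, image_feats, eps=0.05, n_iters=100):
    """Entropic OT between L2-normalized text tokens and image regions.

    Illustrative sketch only: cost = 1 - cosine similarity, uniform marginals.
    """
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    v = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    cost = 1.0 - t @ v.T                              # (n_text, n_regions)
    K = np.exp(-cost / eps)                           # Gibbs kernel
    a = np.full(cost.shape[0], 1.0 / cost.shape[0])   # uniform text marginal
    b = np.full(cost.shape[1], 1.0 / cost.shape[1])   # uniform region marginal
    u = np.ones_like(a)
    for _ in range(n_iters):                          # Sinkhorn-Knopp iterations
        v_scale = b / (K.T @ u)
        u = a / (K @ v_scale)
    plan = np.diag(u) @ K @ np.diag(v_scale)          # transport plan ~ soft alignment
    return plan

# Regions receiving little transported mass for an aspect's tokens could then be
# filtered out at a finer granularity, in the spirit of the "adaptive filter bin".
```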
Aspects are Anchors: Towards Multimodal Aspect-based Sentiment Analysis via Aspect-driven Alignment and Refinement
[ "Zhanpeng Chen", "Zhihong Zhu", "Wanshi Xu", "Yunyan Zhang", "Xian Wu", "Yefeng Zheng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qjUTgApgAN
@inproceedings{ he2024focus, title={Focus \& Gating: A Multimodal Approach for Unveiling Relations in Noisy Social Media}, author={Liang He and Hongke Wang and Zhen Wu and Jianbing Zhang and Xinyu Dai and Jiajun Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qjUTgApgAN} }
With the rise of multimedia-driven content on the internet, multimodal relation extraction has gained significant importance in various domains, such as intelligent search and multimodal knowledge graph construction. Social media, as a rich source of image-text data, plays a crucial role in populating knowledge bases. However, the noisy information present in social media data poses a challenge in multimodal relation extraction. Current methods focus on extracting relevant information from images to improve model performance but often overlook the importance of global image information. In this paper, we propose a novel multimodal relation extraction method, named FocalMRE, which leverages image focal augmentation, focal attention, and gating mechanisms. FocalMRE enables the model to concentrate on the image's focal regions while effectively utilizing the global information in the image. Through gating mechanisms, FocalMRE optimizes the multimodal fusion strategy, allowing the model to select the most relevant augmented regions for overcoming noise interference in relation extraction. The experimental results on the public MNRE dataset reveal that our proposed method exhibits robust and significant performance advantages in the multimodal relation extraction task, especially in scenarios with high noise, long-tail distributions, and limited resources.
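The gating mechanism mentioned in this abstract is not specified in detail here; a generic gated fusion of a textual feature with several visual (e.g., focal-augmented) features, of the kind commonly used in multimodal relation extraction, might look like the following sketch. The layer sizes, inputs, and wiring are assumptions, not FocalMRE's actual design.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Generic gated fusion of a text feature with R candidate visual features.

    Illustrative only: the real FocalMRE gating over focal-augmented regions
    may differ in structure and inputs.
    """
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 1)

    def forward(self, text_feat, visual_feats):
        # text_feat: (B, D); visual_feats: (B, R, D) for R augmented regions
        t = text_feat.unsqueeze(1).expand_as(visual_feats)
        g = torch.sigmoid(self.gate(torch.cat([t, visual_feats], dim=-1)))  # (B, R, 1)
        # Weight each region by its gate, then normalize; low gates suppress noisy regions.
        fused_visual = (g * visual_feats).sum(dim=1) / g.sum(dim=1).clamp(min=1e-6)
        return torch.cat([text_feat, fused_visual], dim=-1)  # input to the relation classifier
```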
Focus & Gating: A Multimodal Approach for Unveiling Relations in Noisy Social Media
[ "Liang He", "Hongke Wang", "Zhen Wu", "Jianbing Zhang", "Xinyu Dai", "Jiajun Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qipYQAcvVG
@inproceedings{ wu2024dino, title={{DINO} is Also a Semantic Guider: Exploiting Class-aware Affinity for Weakly Supervised Semantic Segmentation}, author={Yuanchen Wu and Xiaoqiang Li and Jide Li and KequanYang and Pinpin Zhu and Shaohua Zhang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qipYQAcvVG} }
Weakly supervised semantic segmentation (WSSS) using image-level labels is a challenging task, typically relying on the Class Activation Map (CAM) to derive segmentation supervision. Although many efficient single-stage solutions have been proposed, their performance is hindered by the inherent ambiguity of CAM. This paper introduces a new approach, dubbed ECA, to Exploit the self-supervised Vision Transformer, DINO, inducing the Class-aware semantic Affinity to overcome this limitation. Specifically, we introduce a Semantic Affinity Exploitation module (SAE). It establishes the class-agnostic affinity graph through the self-attention of DINO. Using the highly activated patches on CAMs as “seeds”, we propagate them across the affinity graph and yield the Class-aware Affinity Region Map (CARM) as supplementary semantic guidance. Moreover, the selection of reliable “seeds” is crucial to the CARM generation. Inspired by the observed CAM inconsistency between the global and local views, we develop a CAM Correspondence Enhancement module (CCE) to encourage dense local-to-global CAM correspondences, advancing high-fidelity CAM for seed selection in SAE. Our experimental results demonstrate that ECA effectively improves the model's object pattern understanding. Remarkably, it outperforms state-of-the-art alternatives on the PASCAL VOC 2012 and MS COCO 2014 datasets, achieving 90.1% of the upper-bound performance of its fully supervised counterpart.
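As a rough picture of how highly activated CAM patches can be propagated over a class-agnostic affinity graph derived from self-attention, here is a minimal sketch; the seed threshold, normalization, and number of propagation steps are assumptions and this is not the paper's SAE module itself.

```python
import numpy as np

def propagate_seeds(attn, cam, seed_thresh=0.7, n_steps=2):
    """Propagate high-confidence CAM 'seeds' over an attention-derived affinity graph.

    attn: (N, N) patch-to-patch self-attention (e.g., averaged over heads).
    cam:  (N,)   class activation scores for one class, scaled to [0, 1].
    Returns a refined (N,) class-aware affinity region map.
    """
    # Symmetrize and row-normalize the class-agnostic affinity graph.
    affinity = (attn + attn.T) / 2.0
    affinity = affinity / affinity.sum(axis=1, keepdims=True)

    seeds = (cam >= seed_thresh).astype(np.float32)   # reliable seed patches
    carm = seeds.copy()
    for _ in range(n_steps):                          # random-walk style propagation
        carm = affinity @ carm
    return carm / (carm.max() + 1e-8)
```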
DINO is Also a Semantic Guider: Exploiting Class-aware Affinity for Weakly Supervised Semantic Segmentation
[ "Yuanchen Wu", "Xiaoqiang Li", "Jide Li", "KequanYang", "Pinpin Zhu", "Shaohua Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qhWY8GJ7n7
@inproceedings{ yin2024parameterefficient, title={Parameter-efficient is not Sufficient: Exploring Parameter, Memory, and Time Efficient Adapter Tuning for Dense Predictions}, author={Dongshuo Yin and Xueting Han and Bin Li and Hao Feng and Jing Bai}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qhWY8GJ7n7} }
Pre-training \& fine-tuning is a prevalent paradigm in computer vision (CV). Recently, parameter-efficient transfer learning (PETL) methods have shown promising performance in adapting to downstream tasks with only a few trainable parameters. Despite their success, the existing PETL methods in CV can be computationally expensive and require large amounts of memory and time cost during training, which limits low-resource users from conducting research and applications on large models. In this work, we propose Parameter, Memory, and Time Efficient Visual Adapter ($\mathrm{E^3VA}$) tuning to address this issue. We provide a gradient backpropagation highway for low-rank adapters which eliminates the need for expensive backpropagation through the frozen pre-trained model, resulting in substantial savings of training memory and training time. Furthermore, we optimise the $\mathrm{E^3VA}$ structure for CV tasks to promote model performance. Extensive experiments on COCO, ADE20K, and Pascal VOC benchmarks show that $\mathrm{E^3VA}$ can save up to 62.2\% training memory and 26.2\% training time on average, while achieving comparable performance to full fine-tuning and better performance than most PETL methods. Note that we can even train the Swin-Large-based Cascade Mask RCNN on GTX 1080Ti GPUs with less than 1.5\% trainable parameters.
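The abstract describes low-rank adapters whose gradients bypass the frozen pre-trained backbone. The exact E^3VA wiring is not given here; the following is a minimal, generic low-rank adapter attached in parallel to a frozen linear layer, purely to illustrate the parameter-efficiency idea (rank, initialization, and placement are assumptions, and this sketch does not implement the gradient-highway itself).

```python
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Minimal parallel low-rank adapter around a frozen linear layer.

    Only `down` and `up` are trainable; the frozen layer's weights receive no
    gradients. Generic PETL sketch, not the E^3VA design.
    """
    def __init__(self, frozen_layer: nn.Linear, rank: int = 8):
        super().__init__()
        self.frozen = frozen_layer
        for p in self.frozen.parameters():
            p.requires_grad = False
        d_in, d_out = frozen_layer.in_features, frozen_layer.out_features
        self.down = nn.Linear(d_in, rank, bias=False)
        self.up = nn.Linear(rank, d_out, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts at zero, leaving the frozen output unchanged

    def forward(self, x):
        return self.frozen(x) + self.up(self.down(x))
```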
Parameter-efficient is not Sufficient: Exploring Parameter, Memory, and Time Efficient Adapter Tuning for Dense Predictions
[ "Dongshuo Yin", "Xueting Han", "Bin Li", "Hao Feng", "Jing Bai" ]
Conference
poster
2306.09729
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qcNVusTr48
@inproceedings{ li2024onestage, title={One-Stage Fair Multi-View Spectral Clustering}, author={Rongwen Li and Haiyang Hu and Liang Du and Jiarong Chen and Bingbing Jiang and Peng Zhou}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qcNVusTr48} }
Multi-view clustering is an important task in multimedia and machine learning. Among multi-view clustering approaches, multi-view spectral clustering is one of the most popular and effective. However, existing multi-view spectral clustering ignores fairness in the clustering result, which may cause discrimination. To tackle this problem, in this paper, we propose an innovative Fair Multi-view Spectral Clustering (FMSC) method. Firstly, we provide a new perspective of fairness from the graph theory viewpoint, which constructs a relation between fairness and the average degree in graph theory. Secondly, based on this relation, we design a novel fairness-aware regularization term, which has the same form as the ratio cut in spectral clustering. Thirdly, we seamlessly plug this fairness-aware regularization term into multi-view spectral clustering, leading to our one-stage FMSC, which can directly obtain the final clustering result without any post-processing. We also conduct extensive experiments comparing against state-of-the-art fair clustering and multi-view clustering methods, which show that our method achieves better fairness.
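For context, the ratio cut that the fairness-aware term is said to mirror has the standard form below, together with its usual spectral relaxation; the paper's specific fairness regularizer is not reproduced here.

```latex
% Standard ratio cut over a partition A_1, ..., A_k of a graph G = (V, E):
\mathrm{RatioCut}(A_1,\dots,A_k) \;=\; \sum_{i=1}^{k} \frac{\mathrm{cut}(A_i,\bar{A}_i)}{|A_i|},
% with the usual spectral relaxation (L is the graph Laplacian):
\min_{F \in \mathbb{R}^{n \times k}} \; \mathrm{Tr}\!\left(F^{\top} L F\right)
\quad \text{s.t.} \quad F^{\top} F = I .
```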
One-Stage Fair Multi-View Spectral Clustering
[ "Rongwen Li", "Haiyang Hu", "Liang Du", "Jiarong Chen", "Bingbing Jiang", "Peng Zhou" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qaIS3nvAem
@inproceedings{ tan2024blind, title={Blind Face Video Restoration with Temporal Consistent Generative Prior and Degradation-Aware Prompt}, author={Jingfan Tan and Hyunhee Park and Ying Zhang and Tao Wang and Kaihao Zhang and Xiangyu Kong and Pengwen Dai and Zikun Liu and Wenhan Luo}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qaIS3nvAem} }
Within the domain of blind face restoration (BFR), approaches lacking facial priors frequently result in excessively smoothed visual outputs. Existing BFR methods predominantly utilize generative facial priors to achieve realistic and authentic details. However, these methods, primarily designed for images, encounter challenges in maintaining temporal consistency when applied to face video restoration. To tackle this issue, we introduce StableBFVR, an innovative Blind Face Video Restoration method based on Stable Diffusion that incorporates temporal information into the generative prior. This is achieved through the introduction of temporal layers in the diffusion process. These temporal layers consider both long-term and short-term information aggregation. Moreover, to improve generalizability, BFR methods employ complex, large-scale degradation during training, but this often sacrifices accuracy. Addressing this, StableBFVR features a novel mixed-degradation-aware prompt module, capable of encoding specific degradation information to dynamically steer the restoration process. Comprehensive experiments demonstrate that our proposed StableBFVR outperforms state-of-the-art methods.
Blind Face Video Restoration with Temporal Consistent Generative Prior and Degradation-Aware Prompt
[ "Jingfan Tan", "Hyunhee Park", "Ying Zhang", "Tao Wang", "Kaihao Zhang", "Xiangyu Kong", "Pengwen Dai", "Zikun Liu", "Wenhan Luo" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qZwp41zYjF
@inproceedings{ chen2024partial, title={Partial Multi-label Learning Based On Near-Far Neighborhood Label Enhancement And Nonlinear Guidance}, author={Yu Chen and Yanan Wu and Na Han and Xiaozhao Fang and Bingzhi Chen and Jie Wen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qZwp41zYjF} }
Partial multi-label learning (PML) deals with the problem of accurately predicting the correct multi-label class for each instance in multi-label data containing noise. Compared with traditional multi-label learning, partial multi-label learning requires learning and completing multi-label classification tasks in an imperfect environment. The existing PML methods have the following problems: (1) the correlation between samples and labels is not fully utilized; (2) the nonlinear nature of the model is not taken into account. To solve these problems, we propose a new PML method based on label enhancement with near- and far-neighbor information and nonlinear guidance (PML-LENFN). Specifically, the original binary label information is reconstructed by using information from each sample's near and far neighbors to eliminate the influence of noise. Then we construct a linear multi-label classifier that can explore label correlation. In order to learn the nonlinear relationship between features and labels, we use a nonlinear mapping to constrain this classifier, so as to obtain prediction results that are more consistent with the realistic label distribution.
Partial Multi-label Learning Based On Near-Far Neighborhood Label Enhancement And Nonlinear Guidance
[ "Yu Chen", "Yanan Wu", "Na Han", "Xiaozhao Fang", "Bingzhi Chen", "Jie Wen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qZgL52USrA
@inproceedings{ zhou2024enhanced, title={Enhanced Screen Content Image Compression: A Synergistic Approach for Structural Fidelity and Text Integrity Preservation}, author={Fangtao Zhou and xiaofeng huang and Peng Zhang and Meng Wang and Zhao Wang and Yang Zhou and Haibing YIN}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qZgL52USrA} }
With the rapid development of video conferencing and online education applications, screen content image (SCI) compression has become increasingly crucial. Recently, deep learning techniques have made significant strides in compressing natural images, surpassing the performance of traditional standards like versatile video coding. However, directly applying these methods to SCIs is challenging due to the unique characteristics of SCIs. In this paper, we propose a synergistic approach to preserve structural fidelity and text integrity for SCIs. Firstly, external prior guidance is proposed to enhance structural fidelity and text integrity by providing global spatial attention. Then, a structural enhancement module is proposed to improve the preservation of structural information by enhanced spatial feature transform. Finally, the loss function is optimized for better compression efficiency in text regions by weighted mean square error. Experimental results show that the proposed method achieves 13.3\% BD-Rate saving compared to the baseline window attention convolutional neural networks (WACNN) on the JPEGAI, SIQAD, SCID, and MLSCID datasets on average.
Enhanced Screen Content Image Compression: A Synergistic Approach for Structural Fidelity and Text Integrity Preservation
[ "Fangtao Zhou", "xiaofeng huang", "Peng Zhang", "Meng Wang", "Zhao Wang", "Yang Zhou", "Haibing YIN" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qQph6GscZZ
@inproceedings{ sun2024improved, title={Improved Weighted Tensor Schatten \ensuremath{\mathit{p}}-Norm for Fast Multi-view Graph Clustering}, author={Yinghui Sun and Xingfeng Li and Sun Quansen and Min-Ling Zhang and Zhenwen Ren}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qQph6GscZZ} }
Recently, the tensor Schatten $p$-norm has achieved impressive performance for fast multi-view clustering \cite{xia2023tensorized}. This is primarily ascribed to the superiority of the tensor Schatten $p$-norm in exploring high-order structure information among views. However, 1) the tensor Schatten $p$-norm treats different singular values equally, so that the larger singular values corresponding to certain significant feature information (i.e., prior information) are not fully utilized; 2) the tensor Schatten $p$-norm also ignores the ranking of the entries of the core tensor, which may contain noise information; 3) existing methods select fixed anchors or update anchors by simple averaging to construct the neighbor bipartite graphs, greatly limiting the flexibility and expressiveness of the anchors. To break these limitations, we propose a novel \textbf{Improved Weighted Tensor Schatten $p$-Norm for Fast Multi-view Graph Clustering (IWTSN-FMGC)}. Specifically, to eliminate the interference of the first two limitations, we propose an improved weighted tensor Schatten $p$-norm to dynamically rank the core tensor and automatically shrink singular values. As a result, the improved weighted tensor Schatten $p$-norm has the potential to more effectively leverage low-rank structures and prior information, thereby enhancing robustness compared to current tensor Schatten $p$-norm methods. Further, the designed adaptive neighbor bipartite graph learning encodes local manifold structure information more flexibly and expressively than existing anchor selection and averaged anchor updating. Extensive experiments validate the effectiveness and superiority of our method across multiple benchmark datasets.
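For reference, a commonly used definition of the tensor Schatten $p$-norm (built on the t-SVD, with frontal slices taken after a DFT along the third mode) and a generic weighted variant are given below; normalization conventions vary, and the paper's improved weighting and core-tensor ranking are not reproduced here.

```latex
% Tensor Schatten p-norm of \mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3},
% where \bar{\mathcal{X}}^{(i)} is the i-th frontal slice in the Fourier domain:
\|\mathcal{X}\|_{S_p} \;=\; \Bigg( \sum_{i=1}^{n_3} \sum_{j=1}^{\min(n_1,n_2)}
      \sigma_j\!\big(\bar{\mathcal{X}}^{(i)}\big)^{p} \Bigg)^{1/p},
\qquad 0 < p \le 1,

% Generic weighted variant with weights w_j on the singular values:
\|\mathcal{X}\|_{w,S_p}^{p} \;=\; \sum_{i=1}^{n_3} \sum_{j=1}^{\min(n_1,n_2)}
      w_j \, \sigma_j\!\big(\bar{\mathcal{X}}^{(i)}\big)^{p} .
```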
Improved Weighted Tensor Schatten p-Norm for Fast Multi-view Graph Clustering
[ "Yinghui Sun", "Xingfeng Li", "Sun Quansen", "Min-Ling Zhang", "Zhenwen Ren" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qQTr32f832
@inproceedings{ ling2024agent, title={Agent Aggregator with Mask Denoise Mechanism for Histopathology Whole Slide Image Analysis}, author={Xitong Ling and Minxi Ouyang and Yizhi Wang and Xinrui Chen and Renao Yan and Hongbochu and Junru Cheng and Tian Guan and Xiaoping Liu and Sufang Tian and Yonghong He}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qQTr32f832} }
Histopathology analysis is the gold standard for medical diagnosis. Accurate classification of whole slide images (WSIs) and region-of-interest (ROI)-level localization will assist pathologists in clinical diagnosis. With a gigapixel resolution and a scarcity of fine-grained annotations, WSI is difficult to classify directly. In the field of weakly supervised learning, multiple instance learning (MIL) serves as a promising approach to solving WSI classification tasks. Currently, a prevailing aggregation strategy is to apply an attention mechanism as a measure of the importance of each instance for further classification. However, the attention mechanism fails to capture inter-instance information, and the self-attention mechanism incurs quadratic computational complexity. To address these challenges, we propose an agent aggregator with a mask denoise mechanism for multiple instance learning, termed AMD-MIL. The agent token represents an intermediate variable between the query and key for implicit computation of the instance importance. The mask and denoising terms are learnable matrices mapped from the agent-aggregated value, which first dynamically mask out some low-contribution instance representations and then eliminate the relative noise introduced during the masking process. AMD-MIL can indirectly achieve more reasonable attention allocation by adjusting feature representations, thereby sensitively capturing micro-metastases in cancer and achieving better interpretability. Our extensive experiments on the CAMELYON-16, CAMELYON-17, TCGA-KIDNEY, and TCGA-LUNG datasets show our method’s superiority over existing state-of-the-art approaches. The code will be available upon acceptance.
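The agent token idea sketched in this abstract resembles generic agent attention, where a small set of agent tokens stands between queries and keys to avoid the quadratic cost of full self-attention. Below is a minimal sketch of that generic construct; the shapes and wiring are assumptions, and AMD-MIL additionally applies learnable mask and denoising matrices that are not shown here.

```python
import torch
import torch.nn.functional as F

def agent_attention(q, k, v, agents):
    """Generic agent attention with M agent tokens as a low-rank proxy.

    q: (B, Nq, D), k/v: (B, N, D), agents: (B, M, D) with M << N.
    Cost is O(N*M) rather than O(N^2) for full self-attention.
    """
    d = q.shape[-1]
    # Agents aggregate the N instances: (B, M, N) @ (B, N, D) -> (B, M, D)
    agent_val = F.softmax(agents @ k.transpose(-2, -1) / d**0.5, dim=-1) @ v
    # Queries read from the few agents: (B, Nq, M) @ (B, M, D) -> (B, Nq, D)
    out = F.softmax(q @ agents.transpose(-2, -1) / d**0.5, dim=-1) @ agent_val
    return out
```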
Agent Aggregator with Mask Denoise Mechanism for Histopathology Whole Slide Image Analysis
[ "Xitong Ling", "Minxi Ouyang", "Yizhi Wang", "Xinrui Chen", "Renao Yan", "Hongbochu", "Junru Cheng", "Tian Guan", "Xiaoping Liu", "Sufang Tian", "Yonghong He" ]
Conference
poster
2409.11664
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qO17TGceT6
@inproceedings{ zheng2024hpc, title={{HPC}: Hierarchical Progressive Coding Framework for Volumetric Video}, author={Zihan Zheng and Houqiang Zhong and Qiang Hu and Xiaoyun Zhang and Li Song and Ya Zhang and Yanfeng Wang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=qO17TGceT6} }
Volumetric video based on Neural Radiance Field (NeRF) holds vast potential for various 3D applications, but its substantial data volume poses significant challenges for compression and transmission. Current NeRF compression lacks the flexibility to adjust video quality and bitrate within a single model for various network and device capacities. To address these issues, we propose HPC, a novel hierarchical progressive volumetric video coding framework achieving variable bitrate using a single model. Specifically, HPC introduces a hierarchical representation with a multi-resolution residual radiance field to reduce temporal redundancy in long-duration sequences while simultaneously generating various levels of detail. Then, we propose an end-to-end progressive learning approach with a multi-rate-distortion loss function to jointly optimize both hierarchical representation and compression. Our HPC trained only once can realize multiple compression levels, while the current methods need to train multiple fixed-bitrate models for different rate-distortion (RD) tradeoffs. Extensive experiments demonstrate that HPC achieves flexible quality levels with variable bitrate by a single model and exhibits competitive RD performance, even outperforming fixed-bitrate models across various datasets.
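The multi-rate-distortion loss is not spelled out in this abstract; a generic form for jointly optimizing L hierarchical quality levels in a single model would be the sketch below, where the per-level weights and the exact rate and distortion estimators are assumptions rather than HPC's actual objective.

```latex
% Generic multi-rate-distortion objective over L hierarchical levels:
\mathcal{L} \;=\; \sum_{\ell=1}^{L} \big( R_\ell + \lambda_\ell \, D_\ell \big),
% R_l: estimated bitrate of level l,  D_l: its reconstruction distortion,
% \lambda_l: per-level rate-distortion trade-off weight.
```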
HPC: Hierarchical Progressive Coding Framework for Volumetric Video
[ "Zihan Zheng", "Houqiang Zhong", "Qiang Hu", "Xiaoyun Zhang", "Li Song", "Ya Zhang", "Yanfeng Wang" ]
Conference
oral
2407.09026
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=q73FfT7yfp
@inproceedings{ hung2024timenerf, title={TimeNe{RF}: Building Generalizable Neural Radiance Fields across Time from Few-Shot Input Views}, author={Hsiang-Hui Hung and Huu-Phu Do and Yung-Hui Li and Ching-Chun Huang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=q73FfT7yfp} }
We present TimeNeRF, a generalizable neural rendering approach for rendering novel views at arbitrary viewpoints and at arbitrary times, even with few input views. For real-world applications, it is expensive to collect multiple views and inefficient to re-optimize for unseen scenes. Moreover, as the digital realm, particularly the metaverse, strives for increasingly immersive experiences, the ability to model 3D environments that naturally transition between day and night becomes paramount. While current techniques based on Neural Radiance Fields (NeRF) have shown remarkable proficiency in synthesizing novel views, the exploration of NeRF's potential for temporal 3D scene modeling remains limited, with no dedicated datasets available for this purpose. To this end, our approach harnesses the strengths of multi-view stereo, neural radiance fields, and disentanglement strategies across diverse datasets. This equips our model with the capability for generalizability in a few-shot setting, allows us to construct an implicit content radiance field for scene representation, and further enables the building of neural radiance fields at any arbitrary time. Finally, we synthesize novel views of that time via volume rendering. Experiments show that TimeNeRF can render novel views in a few-shot setting without per-scene optimization. Most notably, it excels in creating realistic novel views that transition smoothly across different times, adeptly capturing intricate natural scene changes from dawn to dusk.
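The final synthesis step mentioned above uses standard NeRF volume rendering; for reference, the usual discretized form (not specific to TimeNeRF) is:

```latex
% Discretized volume rendering along a ray r with N samples:
\hat{C}(\mathbf{r}) \;=\; \sum_{i=1}^{N} T_i \,\big(1 - e^{-\sigma_i \delta_i}\big)\, \mathbf{c}_i,
\qquad
T_i \;=\; \exp\!\Big(-\sum_{j<i} \sigma_j \delta_j\Big),
% \sigma_i, \mathbf{c}_i: predicted density and color at sample i,
% \delta_i: spacing between adjacent samples along the ray.
```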
TimeNeRF: Building Generalizable Neural Radiance Fields across Time from Few-Shot Input Views
[ "Hsiang-Hui Hung", "Huu-Phu Do", "Yung-Hui Li", "Ching-Chun Huang" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=q1AQmI4VQx
@inproceedings{ xu2024an, title={An End-to-End Real-World Camera Imaging Pipeline}, author={Kepeng Xu and Zijia Ma and Li Xu and Gang He and Yunsong Li and Wenxin Yu and Taichu Han and Cheng Yang}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=q1AQmI4VQx} }
Recent advances in neural camera imaging pipelines have demonstrated notable progress. Nevertheless, the real-world imaging pipeline still faces challenges including the lack of joint optimization in system components, computational redundancies, and optical distortions such as lens shading. In light of this, we propose an end-to-end camera imaging pipeline (RealCamNet) to enhance real-world camera imaging performance. Our methodology diverges from conventional, fragmented multi-stage image signal processing towards end-to-end architecture. This architecture facilitates joint optimization across the full pipeline and the restoration of coordinate-biased distortions. RealCamNet is designed for high-quality conversion from RAW to RGB and compact image compression. Specifically, we deeply analyze coordinate-dependent optical distortions, e.g., vignetting and dark shading, and design a novel Coordinate-Aware Distortion Restoration (CADR) module to restore coordinate-biased distortions. Furthermore, we propose a Coordinate-Independent Mapping Compression (CIMC) module to implement tone mapping and redundant information compression. Existing datasets suffer from misalignment and overly idealized conditions, making them inadequate for training real-world imaging pipelines. Therefore, we collected a real-world imaging dataset. Experiment results show that RealCamNet achieves the best rate-distortion performance with lower inference latency.
An End-to-End Real-World Camera Imaging Pipeline
[ "Kepeng Xu", "Zijia Ma", "Li Xu", "Gang He", "Yunsong Li", "Wenxin Yu", "Taichu Han", "Cheng Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=q07MWYYEtz
@inproceedings{ yang2024shiftmorph, title={ShiftMorph: A Fast and Robust Convolutional Neural Network for 3D Deformable Medical Image Registration}, author={Lijian Yang and Weisheng Li and Yucheng Shu and Jianxun Mi and Yuping Huang and Bin Xiao}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=q07MWYYEtz} }
Deformable image registration (DIR) is crucial for many medical image applications. In recent years, learning-based methods utilizing the convolutional neural network (CNN) or the Transformer have demonstrated their superiority in image registration, ushering in a new era for DIR. However, very few of these methods can satisfy the demands of real-time applications due to the high spatial resolution of 3D volumes and the high complexity of 3D operators. To tackle this, we propose lossless downsampling by shifting the strided convolution. A grouping strategy is then used to reduce redundant computations and support self-consistency learning. As an inherent regularizer of the network design, self-consistency learning improves the deformation quality and enables halving the proposed network after training. Furthermore, the proposed shifted connection converts the decoding operations into a lower-dimensional space, significantly reducing decoding overhead. Extensive experimental results on medical image registration demonstrate that our method is competitive with state-of-the-art methods in terms of registration performance, and additionally, it achieves over $3\times$ the speed of most of them.
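One way to read "lossless downsampling by shifting the strided convolution" is that a stride-2 sampling grid in 3D keeps only one of eight positions, so enumerating all eight shifts preserves every voxel (a 3D space-to-depth rearrangement). The sketch below illustrates that interpretation only; the actual ShiftMorph operator folds this into its convolution and grouping design.

```python
import torch

def shifted_strided_downsample(x):
    """Lossless 2x downsampling of a 3D volume via the 8 shifts of a stride-2 grid.

    x: (B, C, D, H, W) with even D, H, W.  Returns (B, 8*C, D//2, H//2, W//2).
    Illustrative interpretation, not the ShiftMorph layer itself.
    """
    shifts = []
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                shifts.append(x[:, :, dz::2, dy::2, dx::2])
    return torch.cat(shifts, dim=1)  # no spatial information is discarded
```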
ShiftMorph: A Fast and Robust Convolutional Neural Network for 3D Deformable Medical Image Registration
[ "Lijian Yang", "Weisheng Li", "Yucheng Shu", "Jianxun Mi", "Yuping Huang", "Bin Xiao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=pjdvHKo2M0
@inproceedings{ jiang2024vrdone, title={Vrd{ONE}: One-stage Video Visual Relation Detection}, author={Xinjie Jiang and Chenxi Zheng and Xuemiao Xu and Bangzhen Liu and Weiying Zheng and Huaidong Zhang and Shengfeng He}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=pjdvHKo2M0} }
Video Visual Relation Detection (VidVRD) focuses on understanding how entities interact over time and space in videos, a key step for getting a deeper insight into video scenes beyond basic visual tasks. Traditional methods for VidVRD, challenged by its complexity, usually split the task into two parts: one for identifying what categories are present and another for figuring out their temporal boundaries. This split overlooks the natural connection between these elements. Addressing the need for recognizing entity independence and their interactions across a range of durations, we propose VrdONE, a streamlined yet efficacious one-stage model. VrdONE combines the features of subjects and objects, turning predicate detection into 1D instance segmentation on their combined representations. This setup allows for both category identification and binary mask generation in one go, eliminating the need for extra steps like proposal generation or post-processing. VrdONE facilitates the interaction of features across various frames, adeptly capturing both short-lived and enduring relations. Additionally, we introduce the Subject-Object Synergy (SOS) Module, enhancing how subjects and objects perceive each other before combining. VrdONE achieves state-of-the-art performances on both the VidOR benchmark and ImageNet-VidVRD, showcasing its superior capability in discerning relations across different temporal scales.
VrdONE: One-stage Video Visual Relation Detection
[ "Xinjie Jiang", "Chenxi Zheng", "Xuemiao Xu", "Bangzhen Liu", "Weiying Zheng", "Huaidong Zhang", "Shengfeng He" ]
Conference
poster
2408.09408
[ "https://github.com/lucaspk512/vrdone" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=pguhpIXhXh
@inproceedings{ sun2024unveiling, title={Unveiling and Mitigating Bias in Audio Visual Segmentation}, author={Peiwen Sun and Honggang Zhang and Di Hu}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=pguhpIXhXh} }
Community researchers have developed a range of advanced audio-visual segmentation models aimed at improving the quality of sounding objects' masks. While masks created by these models may initially appear plausible, they occasionally exhibit anomalies with incorrect grounding logic. We attribute this to inherent real-world preferences and distributions acting as a simpler learning signal than the complex audio-visual grounding, which leads to the disregard of important modality information. Generally, the anomalous phenomena are complex and cannot be directly observed systematically. In this study, we make a pioneering effort, using suitable synthetic data, to categorize and analyze these phenomena into two types, “audio priming bias” and “visual prior”, according to the source of the anomalies. For audio priming bias, to enhance audio sensitivity to different intensities and semantics, a perception module specifically for audio perceives the latent semantic information and incorporates it into a limited set of queries, namely active queries. Moreover, the interaction mechanism of these active queries in the transformer decoder is customized to the need for regulating interaction among audio semantics. For visual prior, multiple contrastive training strategies are explored to optimize the model by incorporating a biased branch, without even changing the structure of the model. During experiments, our observations demonstrate the presence and impact of these biases in existing models. Finally, through experimental evaluation on AVS benchmarks, we demonstrate the effectiveness of our methods in handling both types of biases, achieving competitive performance across all three subsets.
Unveiling and Mitigating Bias in Audio Visual Segmentation
[ "Peiwen Sun", "Honggang Zhang", "Di Hu" ]
Conference
oral
2407.16638
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=pg9ghw6vu5
@inproceedings{ lyu2024frame, title={Frame Interpolation with Consecutive Brownian Bridge Diffusion}, author={Zonglin Lyu and Ming Li and Jianbo Jiao and Chen Chen}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=pg9ghw6vu5} }
Recent work in Video Frame Interpolation (VFI) tries to formulate VFI as a diffusion-based conditional image generation problem, synthesizing the intermediate frame given a random noise and neighboring frames. Due to the relatively high resolution of videos, Latent Diffusion Models (LDMs) are employed as the conditional generation model, where the autoencoder compresses images into latent representations for diffusion and then reconstructs images from these latent representations. Such a formulation poses a crucial challenge: VFI expects that the output is $\textit{deterministically}$ equal to the ground truth intermediate frame, but LDMs $\textit{randomly}$ generate a diverse set of different images when the model runs multiple times. The reason for the diverse generation is that the cumulative variance (variance accumulated at each step of generation) of generated latent representations in LDMs is large. This makes the sampling trajectory random, resulting in diverse rather than deterministic generations. To address this problem, we propose our unique solution: Frame Interpolation with Consecutive Brownian Bridge Diffusion. Specifically, we propose consecutive Brownian Bridge diffusion that takes a deterministic initial value as input, resulting in a much smaller cumulative variance of generated latent representations. Our experiments suggest that our method can improve together with the improvement of the autoencoder and achieve state-of-the-art performance in VFI, leaving strong potential for further enhancement.
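For reference, a standard Brownian bridge between endpoints $x_0$ (at $t=0$) and $x_T$ (at $t=T$) has the marginal below; its variance vanishes at both endpoints, which is the property the abstract leverages to shrink cumulative variance. The paper's consecutive, conditional formulation for VFI is not reproduced here.

```latex
% Marginal of a Brownian bridge pinned at x_0 (t = 0) and x_T (t = T):
x_t \;\sim\; \mathcal{N}\!\left( \Big(1 - \tfrac{t}{T}\Big) x_0 + \tfrac{t}{T}\, x_T,\;
      \tfrac{t\,(T - t)}{T}\, \mathbf{I} \right),
\qquad 0 \le t \le T .
```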
Frame Interpolation with Consecutive Brownian Bridge Diffusion
[ "Zonglin Lyu", "Ming Li", "Jianbo Jiao", "Chen Chen" ]
Conference
poster
2405.05953
[ "https://github.com/zonglinl/consecutivebrownianbridge" ]
https://huggingface.co/papers/2405.05953
0
0
0
4
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=pffPeV7GNR
@inproceedings{ ma2024learning, title={Learning Cross-Spectral Prior for Image Super-Resolution}, author={Chenxi Ma and Weimin Tan and Shili Zhou and Bo Yan}, booktitle={ACM Multimedia 2024}, year={2024}, url={https://openreview.net/forum?id=pffPeV7GNR} }
With the rising interest in multi-camera cross-spectral systems, cross-spectral images have been widely used in computer vision and image processing. Therefore, an effective super-resolution (SR) method is significant in providing high-resolution (HR) cross-spectral images for different research and applications. However, existing SR methods rarely consider utilizing cross-spectral information to assist the SR of visible images and cannot handle the complex degradation (noise, high brightness, low light) and misalignment problem in low-resolution (LR) cross-spectral images. Here, we first explore the potential of using near-infrared (NIR) image guidance for better SR, based on the observation that NIR images can preserve valuable information for recovering adequate image details. To take full advantage of the cross-spectral prior, we propose a novel $\textbf{C}$ross-$\textbf{S}$pectral $\textbf{P}$rior guided image $\textbf{SR}$ approach ($\textbf{CSPSR}$). Concretely, we design a cross-view matching (CVM) module and a dynamic multi-modal fusion (DMF) module to enhance the spatial correlation between cross-spectral images and to bridge the multi-modal feature gap, respectively. The DMF module facilitates adaptive feature adaptation and effective information transmission through a dynamic convolution and a cross-spectral feature transfer (CSFT) unit. Extensive experiments demonstrate the effectiveness of our CSPSR, which can exploit the prominent cross-spectral information to produce state-of-the-art results.
Learning Cross-Spectral Prior for Image Super-Resolution
[ "Chenxi Ma", "Weimin Tan", "Shili Zhou", "Bo Yan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0